Mercury
Saylor Says Strategy Will Not Issue Preferred Equity in Japan, Giving Metaplanet a 12-Month Head Start
Yahoo Finance· 2025-12-09 09:38
Core Insights
- The main question regarding Strategy (MSTR) is whether it will list a perpetual preferred equity or "digital credit" in Japan; executive chairman Michael Saylor responded that it will not happen in the next twelve months [1]

Group 1: Metaplanet's Digital Credit Instruments
- Metaplanet plans to introduce its own digital credit instruments, "Mercury" and "Mars," into Japan's perpetual preferred market, which currently has only five listed equities [2]
- Mercury, described as Metaplanet's version of Strategy's STRK, offers a 4.9% yield in yen and includes convertibility, significantly higher than Japanese bank deposits and money market funds [2]
- Mars is designed to mirror Strategy's STRC, a short-duration, high-yield credit product, and arrives as Strategy has expanded its own perpetual preferred program [3]

Group 2: Market Mechanisms and Strategies
- Japan does not permit at-the-market (ATM) share sales like those used by Strategy, leading Metaplanet to use a moving-strike warrant (MSW) for its perpetual preferred offerings [4]
- Saylor advocates broad participation in issuing digital credit, expecting around a dozen issuers, while Gerovich emphasizes balance sheet strength and plans to focus issuance primarily on Japan and Asia [5]
'Women are afraid to get pregnant': Indigenous people fight mercury poisoning from illegal gold mining
Sky News· 2025-11-29 04:22
Core Viewpoint
- The indigenous Munduruku people in the Brazilian Amazon are suffering from severe health issues linked to mercury poisoning, primarily due to illegal gold mining activities that contaminate their environment and food sources [2][4][21].

Group 1: Health Impacts
- Symptoms observed in the Munduruku community include miscarriages, muscle tremors, memory loss, and vision problems, which are attributed to mercury exposure [2][11].
- Mercury accumulates in fish consumed by the community, with studies indicating that one in five fish in northern Brazil contains dangerous levels of mercury [11][20].
- The toxic metal affects reproductive health, accumulating in placentas and breast milk, often exceeding safe thresholds for pregnant women [15].

Group 2: Illegal Gold Mining
- Illegal gold mining is prevalent in indigenous territories, exacerbated by rising global gold prices, which incentivize miners despite the legal prohibitions [6][21].
- The mining operations are often linked to organized crime, using the gold to launder drug money and contributing to environmental degradation [8][21].
- The Brazilian government has initiated crackdowns on illegal mining, resulting in a reported 94% reduction in active illegal mining areas in some regions, although challenges remain due to the high demand for gold [16][17].

Group 3: Community Response
- The Munduruku have been actively resisting mining on their land since the 1960s and recently leveraged international attention during climate talks to secure legal rights to additional territory [12][22].
- Community leaders emphasize the need for land demarcation to strengthen their ability to protect their environment and health from illegal mining activities [24].
- The ongoing struggle against illegal mining is compounded by rising gold prices, which attract more invaders to their land [24].
"Taming" masked diffusion language models with more consistent trajectories and fewer decoding steps: a major boost to the reasoning performance and efficiency of diffusion language models
机器之心· 2025-11-05 04:15
Core Insights
- The article discusses the rapid advancements in diffusion large language models (LLMs), highlighting their potential as strong competitors to traditional autoregressive LLMs [2][7]
- A recent paper from a collaborative research team proposes an efficient decoding strategy combined with reinforcement learning for masked diffusion large language models (MDLMs), significantly improving their reasoning performance and efficiency [2][21]

Group 1: Problem Identification
- Masked diffusion large language models like LLaDA exhibit capabilities comparable to autoregressive models but struggle with full diffusion-style decoding, which is less effective than block-wise decoding [7][9]
- MDLM decoding often generates <EOS> tokens too early, which terminates generation prematurely and degrades performance, creating a decoding trap [14][15]

Group 2: Proposed Solutions
- The research team introduces an early rejection mechanism for <EOS> tokens (EOSER) that suppresses their confidence during early decoding steps, preventing premature termination of generation [15]
- A power-increasing decoding step scheduler (ASS) is designed to optimize the decoding process, reducing the number of inference steps from O(L) to O(log L) and thereby accelerating reasoning [15][16]

Group 3: Consistency Trajectory Optimization
- The team proposes a consistency trajectory grouping strategy (CJ-GRPO) to address inconsistencies between rollout and optimization trajectories, enhancing training stability and effectiveness [16]
- By combining the early rejection mechanism, the increasing step scheduler, and CJ-GRPO, the model maintains performance comparable to baseline methods while significantly reducing decoding steps [16][24]

Group 4: Experimental Results
- Extensive experiments demonstrate that the proposed methods outperform baseline models on mathematical reasoning and planning tasks, with performance improvements of up to 2-4x on certain benchmarks [23][24]
- CJ-GRPO combined with EOSER and ASS maintains competitive performance in low-step inference scenarios, achieving a balance of speed and quality [24]

Group 5: Future Directions
- The article suggests exploring hybrid reasoning modes that combine the strengths of diffusion and autoregressive models to meet diverse task requirements [26]
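The two decoding-side ideas summarized above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it assumes a power-of-two unmasking schedule (so the step count grows logarithmically in sequence length) and represents per-token confidences as a plain dictionary. The names `power_step_schedule` and `eos_early_rejection` are hypothetical.

```python
def power_step_schedule(seq_len: int, base: int = 2) -> list[int]:
    """Number of tokens to unmask at each decoding step.

    Counts grow geometrically (1, 2, 4, 8, ...), so the total number
    of decoding steps is O(log L) rather than the O(L) needed when
    unmasking a fixed number of tokens per step.
    """
    schedule, remaining, k = [], seq_len, 0
    while remaining > 0:
        n = min(base ** k, remaining)
        schedule.append(n)
        remaining -= n
        k += 1
    return schedule


def eos_early_rejection(confidences: dict[int, float], step: int,
                        total_steps: int, eos_id: int,
                        ratio: float = 0.5) -> dict[int, float]:
    """Suppress the <EOS> token's confidence during the first `ratio`
    fraction of decoding steps, so generation cannot terminate early."""
    if step < ratio * total_steps:
        confidences = dict(confidences)  # avoid mutating the caller's dict
        confidences[eos_id] = float("-inf")
    return confidences
```

For a 64-token sequence the schedule is `[1, 2, 4, 8, 16, 32, 1]`: seven steps instead of sixty-four, matching the O(L) to O(log L) reduction described above.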
X @IcoBeast.eth🦇🔊
IcoBeast.eth🦇🔊· 2025-08-29 14:11
Integrations & Partnerships
- Mercury App now features full native integration of Project X, enabling users to LP (provide liquidity) and potentially access other features on the go [1][2][3]
- Mercury has integrated Hypercore and HyperEVM into a single application [2]
- Mercury is pulling in many partner integrations, suggesting a wide range of features and options for users [2]

User Experience & Features
- Mercury offers a casual-user-friendly experience with simple higher/lower trading choices [2]
- Deposits are easily managed via card or crypto [2]
- Perps (perpetual futures) integration is described as "slick" [1]
X @Forbes
Forbes· 2025-08-24 07:05
Astronomical Event
- A six-planet "parade" is occurring, offering a final opportunity to observe Mercury [1]

Media Information
- The event is being publicized via social media platform X (formerly Twitter) with a link to further information [1]
X @Forbes
Forbes· 2025-08-23 07:05
Astronomical Events
- A "Planet Parade" featuring Mercury, Venus, Jupiter, and Saturn is scheduled for Sunday [1]
A diffusion language model writes code, 10x faster than autoregressive models
量子位· 2025-07-10 03:19
Core Viewpoint
- The article discusses the launch of Mercury, a new commercial-grade large language model based on diffusion technology, which generates code at a significantly faster rate than traditional models.

Group 1: Model Innovation
- Mercury breaks the limitations of autoregressive models by predicting all tokens at once, enhancing generation speed [2]
- The model allows dynamic error correction during the generation process, providing greater flexibility than traditional models [4][20]
- Despite using diffusion technology, Mercury retains the Transformer architecture, enabling the reuse of efficient training and inference optimization techniques [6][7]

Group 2: Performance Metrics
- Mercury's code generation can be up to 10 times faster than traditional tools, significantly reducing development cycles [8]
- On H100 GPUs, Mercury achieves a throughput of 1,109 tokens per second, showcasing its efficient use of hardware [9][13]
- In benchmark tests, Mercury Coder Mini and Small achieved response times of 0.25 seconds and 0.31 seconds respectively, outperforming many competitors [16]

Group 3: Error Correction and Flexibility
- The model incorporates a real-time error correction module that detects and corrects logical flaws in code during the denoising steps [21]
- Mercury integrates abstract syntax trees (ASTs) from programming languages such as Python and Java to minimize syntax errors [22]

Group 4: Development Team
- Inception Labs, the developer of Mercury, comprises experts from prestigious institutions, including Stanford and UCLA, focused on improving model performance using diffusion technology [29][34]
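The "predict all tokens at once, then refine" idea behind diffusion-style code generation can be illustrated with a toy decoding loop. This is a hedged sketch of generic confidence-based parallel denoising, not Mercury's actual algorithm; the `toy_denoise` helper and the `predict` stub are hypothetical stand-ins for the model.

```python
MASK = "<mask>"


def toy_denoise(predict, length: int, steps: int = 4) -> list[str]:
    """Illustrative diffusion-style decoding.

    Each step, `predict` proposes a (token, confidence) pair for every
    position in parallel; the most confident proposals at still-masked
    positions are frozen. Later steps re-predict the remaining masks
    with the frozen tokens as bidirectional context, unlike strictly
    left-to-right autoregressive decoding.
    """
    seq = [MASK] * length
    per_step = max(1, length // steps)  # tokens to commit per step
    for _ in range(steps):
        preds = predict(seq)  # one parallel forward pass over all positions
        masked = [(conf, i, tok) for i, (tok, conf) in enumerate(preds)
                  if seq[i] == MASK]
        for conf, i, tok in sorted(masked, reverse=True)[:per_step]:
            seq[i] = tok  # freeze the highest-confidence predictions
        if MASK not in seq:
            break
    return seq
```

With `length=8` and `steps=4`, the loop commits two tokens per step, so the whole sequence is produced in four model calls rather than eight, mirroring (in miniature) how parallel prediction cuts the number of sequential decoding steps.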
Multimodal diffusion models are taking off; this time it's LaViDa, fast, controllable, and capable of learning to reason
机器之心· 2025-05-30 04:16
Core Viewpoint
- The article introduces LaViDa, a large vision-language diffusion model that combines the advantages of diffusion models with the ability to process both visual and textual information effectively [1][5].

Group 1: Model Overview
- LaViDa inherits the high speed and controllability of diffusion language models, achieving impressive performance in experiments [1][5].
- Unlike autoregressive large language models (LLMs), diffusion models treat text generation as a diffusion process over discrete tokens, allowing for better handling of tasks requiring bidirectional context [2][3][4].

Group 2: Technical Architecture
- LaViDa consists of a visual encoder and a diffusion language model, connected through a multi-layer perceptron (MLP) projection network [10].
- The visual encoder processes multiple views of an input image, generating a total of 3,645 embeddings, which are then reduced to 980 through average pooling for training efficiency [12][13].

Group 3: Training Methodology
- Training follows a two-stage approach: pre-training to align visual embeddings with the diffusion language model's latent space, followed by end-to-end fine-tuning for instruction adherence [19].
- A third training phase on distilled samples was conducted to enhance reasoning capabilities, resulting in a variant named LaViDa-Reason [25].

Group 4: Experimental Performance
- LaViDa demonstrates competitive performance across various visual-language tasks, achieving the highest score of 43.3 on the MMMU benchmark and excelling in reasoning tasks [20][22].
- On scientific tasks, LaViDa scored 81.4 and 80.2 on ScienceQA, showcasing its strong capabilities in complex reasoning [23].

Group 5: Text Completion and Flexibility
- LaViDa provides strong controllability for text generation, particularly in text completion tasks, allowing flexible token replacement based on masked inputs [28][30].
- The model can dynamically adjust the number of tokens generated, successfully completing tasks with specific constraints that autoregressive models cannot guarantee [31][32].

Group 6: Speed and Quality Trade-offs
- LaViDa allows users to balance speed and quality by adjusting the number of diffusion steps, demonstrating flexibility based on application needs [33][35].
- Performance evaluations indicate that LaViDa can outperform autoregressive baselines in both speed and quality under certain configurations, highlighting its adaptability [35].
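The constrained-completion property described in Group 5 can be sketched as mask-slot infilling: fixed tokens in a template stay untouched, and exactly the masked slots are filled, so output length and fixed words are guaranteed by construction. This is a minimal illustration of the general idea, not LaViDa's implementation; the `constrained_complete` helper and the `fill` stub are hypothetical.

```python
MASK = "[M]"


def constrained_complete(fill, template: list[str]) -> list[str]:
    """Fill exactly the [M] slots of `template`, leaving fixed tokens intact.

    `fill` stands in for the model: given the full template (bidirectional
    context) and the masked indices, it proposes one token per slot. Because
    only masked positions are rewritten, the output length and every fixed
    word are guaranteed, a constraint that left-to-right autoregressive
    decoding cannot enforce directly.
    """
    slots = [i for i, tok in enumerate(template) if tok == MASK]
    proposals = fill(template, slots)
    out = list(template)
    for i, tok in zip(slots, proposals):
        out[i] = tok
    return out
```

For example, filling `["The", "[M]", "sat", "on", "the", "[M]"]` touches only positions 1 and 5, whereas an autoregressive model asked to "keep the other words" can only be prompted toward, not guaranteed, that behavior.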