Inherent Modality Preference
Cracking Multimodal Large Models' "Choice Paralysis"! Internal Decision-Making Mechanism Revealed for the First Time: Wild "Oscillation" Between Conflicting Information
量子位· 2025-11-14 05:38
Core Argument
- The article argues that modality following in multi-modal large language models (MLLMs) is a dynamic process shaped by relative reasoning uncertainty and an inherent modality preference, rather than a static attribute [1][4][37].

Group 1: Research Contributions
- A new toy dataset was constructed that systematically and independently varies the reasoning difficulty of the visual and textual inputs, enabling arbitrary difficulty combinations for multi-modal inputs [4].
- The study decomposes the observed behavior of modality following into two core components: case-specific relative reasoning uncertainty and the model's stable inherent modality preference [4][5].
- The central empirical finding is that the probability of a model following a given modality decreases monotonically as that modality's relative reasoning uncertainty increases [5].

Group 2: Framework Design
- A controlled dataset was created to validate the hypotheses, allowing visual and textual reasoning complexity to be varied independently [9][10].
- Uncertainty was measured with output entropy, which reflects the model's own perceived uncertainty: low entropy indicates a confident prediction, while high entropy indicates the model is weighing alternative options [11] (a minimal sketch of this computation appears after this summary).
- Relative uncertainty was then defined to quantify the confidence gap between the text and visual modalities, providing the core metric for the subsequent analysis [12].

Group 3: Limitations of Traditional Metrics
- Traditional macro metrics such as Text Following Rate (TFR) and Visual Following Rate (VFR) were evaluated on the constructed dataset and produced confusing patterns that expose their limitations [14].
- In particular, models that perceive text as easier on average can still exhibit opposite macro preferences, which raises the question of what actually drives these discrepancies [15][16].

Group 4: Experimental Paradigm
- A new experimental paradigm was designed to decouple model capability from preference, giving a clearer view of the model's decision-making process [18].
- Data points were grouped by relative uncertainty to trace out a complete preference curve, showing how the model's preference shifts dynamically with relative difficulty [18] (see the curve-and-balance-point sketch after this summary).

Group 5: Key Experimental Findings
- All tested models showed the same trend: the probability of following text decreases smoothly as the text becomes relatively more difficult [19][21].
- The "balance point" was defined as the point where the preference curve crosses the 50% line, giving a quantifiable measure of inherent modality preference [22].
- The framework resolved the earlier puzzles by revealing differences in inherent preference that macro metrics cannot distinguish [23][24].

Group 6: Internal Mechanisms
- The study also examined the models' internal decision-making, in particular the oscillation between conflicting answers that appears near the balance point [29][30].
- Oscillation counts are markedly higher in the ambiguous region than in clear regions, offering a mechanistic explanation for the indecision observed in external behavior [34][36] (an illustrative counting sketch follows this summary).

Conclusion
- The research presents a new framework for understanding modality following in MLLMs, emphasizing the need to separate model capability from inherent preference and establishing a robust rule: the likelihood of following a modality decreases as its relative uncertainty increases [37].
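As a concrete illustration of the entropy-based uncertainty and the relative-uncertainty gap described in Group 2, here is a minimal sketch. It is not the authors' code; the three-option answer set, the way per-option probabilities are obtained, and the sign convention (text-branch entropy minus vision-branch entropy) are assumptions made for illustration.

```python
import math

def answer_entropy(option_probs):
    """Shannon entropy (in bits) of the model's distribution over answer options.

    Low entropy -> the model is confident in one option;
    high entropy -> probability mass is spread over alternatives.
    """
    return -sum(p * math.log2(p) for p in option_probs if p > 0)

def relative_uncertainty(text_only_probs, vision_only_probs):
    """Confidence gap between the two modalities for one conflict case.

    Positive values mean the text-only reasoning branch is *more* uncertain
    than the vision-only branch (sign convention assumed here).
    """
    return answer_entropy(text_only_probs) - answer_entropy(vision_only_probs)

# Toy example: the model is fairly sure when reading the text alone,
# but torn between two options when looking at the image alone.
text_probs   = [0.90, 0.05, 0.05]
vision_probs = [0.50, 0.45, 0.05]

print(answer_entropy(text_probs))                       # ~0.57 bits
print(answer_entropy(vision_probs))                     # ~1.23 bits
print(relative_uncertainty(text_probs, vision_probs))   # negative: text is easier here
```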
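The preference curve and balance point from Groups 4 and 5 can be sketched in the same spirit: bin conflict cases by relative uncertainty, compute the text-following rate per bin, and locate where that rate crosses 50%. The bin count, the 0/1 encoding of "followed text", and the linear interpolation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def preference_curve(rel_uncertainty, followed_text, num_bins=20):
    """Bin conflict cases by relative uncertainty and compute, for each bin,
    the fraction of cases in which the model followed the text modality."""
    x = np.asarray(rel_uncertainty, dtype=float)
    y = np.asarray(followed_text, dtype=float)   # 1.0 = followed text, 0.0 = followed vision
    edges = np.linspace(x.min(), x.max(), num_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, num_bins - 1)
    centers, rates = [], []
    for b in range(num_bins):
        mask = idx == b
        if mask.any():
            centers.append((edges[b] + edges[b + 1]) / 2)
            rates.append(y[mask].mean())
    return np.array(centers), np.array(rates)

def balance_point(centers, rates):
    """Relative-uncertainty value where the preference curve crosses 50%,
    located by linear interpolation between the two straddling bins."""
    for i in range(len(rates) - 1):
        if (rates[i] - 0.5) * (rates[i + 1] - 0.5) <= 0:
            if rates[i + 1] == rates[i]:          # flat segment sitting on 50%
                return centers[i]
            t = (0.5 - rates[i]) / (rates[i + 1] - rates[i])
            return centers[i] + t * (centers[i + 1] - centers[i])
    return None  # the curve never crosses 50% in the observed range
```

Under the sign convention from the entropy sketch (positive means text is relatively harder), a balance point to the right of zero suggests a textual bias, since the model keeps following text even when text is already the harder modality, while a balance point to the left of zero suggests a visual bias.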
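The oscillation analysis in Group 6 can be operationalized roughly as follows. This summary does not say how the paper reads out intermediate predictions, so the sketch assumes some logit-lens-style probe that yields a tentative top answer per layer and simply counts flips between consecutive layers.

```python
def count_oscillations(layerwise_top_answers):
    """Count how many times the tentative top answer changes from one
    layer to the next, e.g. ['A', 'A', 'B', 'A', 'A'] -> 2 flips."""
    return sum(
        prev != curr
        for prev, curr in zip(layerwise_top_answers, layerwise_top_answers[1:])
    )

# Near the balance point the per-layer answer tends to flip back and forth
# between the text-supported and vision-supported options (illustrative only).
ambiguous_case = ["text", "text", "vision", "text", "vision", "vision"]
clear_case     = ["text", "text", "text", "text", "text", "text"]

print(count_oscillations(ambiguous_case))  # 3
print(count_oscillations(clear_case))      # 0
```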