TeleAI Unveils Breakthrough Metric to Quantify AI "Talent" in Large Language Models

Core Insights
- The Institute of Artificial Intelligence of China Telecom (TeleAI) has introduced a new metric called Information Capacity, which redefines the evaluation of large language models (LLMs) beyond traditional size-based comparisons [1][4]
- Information Capacity measures the ratio of model intelligence to inference complexity, indicating the knowledge density of a model [3][4]
- The metric allows fair efficiency comparisons across different model series and accurate performance predictions within a model series [3][5]

Group 1
- Information Capacity reflects a model's efficiency in compressing and processing knowledge relative to its computational cost, analogous to a sponge's water-absorption efficiency [3][4]
- The research team, led by Professor Xuelong Li, quantitatively measures an LLM's efficiency from its compression performance, revealing intelligence density per unit of computational cost [4][5]
- The metric facilitates optimal allocation of computing and communication resources under the AI Flow framework, addressing the growing computational demands of large models [4][6]

Group 2
- Information Capacity provides a quantitative benchmark for greener development of large models and supports dynamic routing among models for efficient task handling [6]
- The AI Flow framework, with its Device-Edge-Cloud hierarchical network, is expected to replace the mainstream cloud-centric computing paradigm [6]
- All relevant code and data from this research have been open-sourced on GitHub and Hugging Face, promoting community collaboration in standardizing large-model efficiency evaluation [7]
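The definition above (intelligence measured via compression performance, divided by inference complexity) can be illustrated with a minimal sketch. This is not TeleAI's exact formulation, which is given in the paper and open-sourced code; the function name, the raw-bits baseline, and the use of inference FLOPs as the complexity denominator are all illustrative assumptions.

```python
def information_capacity(compressed_bits: float, raw_bits: float,
                         inference_flops: float) -> float:
    """Illustrative sketch (not the paper's exact formula): knowledge
    density as compression performance per unit of inference compute.

    compressed_bits: bits the model needs to encode a text corpus
                     (e.g. the sum of negative log2 token probabilities)
    raw_bits:        size of the same corpus under a naive encoding
    inference_flops: compute cost of running the model over the corpus
    """
    # Higher ratio = stronger compression, used here as a proxy for "intelligence".
    compression_ratio = raw_bits / compressed_bits
    # Normalize by inference cost to get intelligence per unit compute.
    return compression_ratio / inference_flops

# Toy comparison of two hypothetical models on the same corpus: the larger
# model compresses better in absolute terms, but per FLOP the smaller one
# can still be "denser".
small = information_capacity(compressed_bits=2.0e6, raw_bits=8.0e6,
                             inference_flops=1.0e12)
large = information_capacity(compressed_bits=1.6e6, raw_bits=8.0e6,
                             inference_flops=4.0e12)
```

Under these toy numbers the smaller model attains the higher Information Capacity, matching the article's point that the metric rewards intelligence density rather than raw size.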