Citi(C) - 2025 Q3 - Earnings Call Presentation
2025-10-14 15:00
Financial Performance
- Citigroup's Q3 2025 revenues reached $22.1 billion, a 9% increase year-over-year[5]
- Net income for Q3 2025 was $3.8 billion, up 16% year-over-year, or $4.5 billion excluding notable items, a 38% increase year-over-year[5]
- Earnings per share (EPS) for Q3 2025 were $1.86, a 23% increase year-over-year, or $2.24 excluding notable items, a 48% increase year-over-year[5]
- The company returned approximately $6.1 billion to common shareholders through share repurchases and dividends in Q3 2025, including $5.0 billion in share repurchases[5]
Business Segment Performance
- Services revenues increased by 7% year-over-year to $5.4 billion in Q3 2025[7]
- Markets revenues increased by 15% year-over-year to $5.6 billion in Q3 2025[7]
- Banking revenues increased by 34% year-over-year to $2.1 billion in Q3 2025[7]
- U.S. Personal Banking revenues increased by 7% year-over-year to $5.3 billion in Q3 2025[7]
Capital and Credit Quality
- Citigroup's CET1 Capital Ratio was 13.2%, approximately 110 bps above the regulatory requirement[5]
- U.S. credit card loans reached $168 billion in Q3 2025[19]
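The growth figures above can be sanity-checked with a minimal Python sketch. The function and the implied prior-year values are illustrative derivations from the reported numbers, not figures stated in the presentation:

```python
def yoy_growth(current: float, prior: float) -> float:
    """Percent change year-over-year."""
    return (current - prior) / prior * 100

# Reported Q3 2025 figures (USD billions) and their stated YoY growth rates;
# the prior-year values below are implied (derived), not reported.
revenue_q3_2025 = 22.1                         # up 9% YoY
prior_revenue = revenue_q3_2025 / 1.09         # implied Q3 2024 revenue, ~$20.3B

net_income_q3_2025 = 3.8                       # up 16% YoY
prior_net_income = net_income_q3_2025 / 1.16   # implied Q3 2024 net income, ~$3.3B

print(round(yoy_growth(revenue_q3_2025, prior_revenue)))        # 9
print(round(yoy_growth(net_income_q3_2025, prior_net_income)))  # 16
```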
AI Stefanie Sun covers are everywhere, so why do the ChatGPTs go off-key the moment they sing?
36Kr · 2025-05-29 03:35
Is ChatGPT's once-"shelved" singer persona finally breaking loose? In recent days, X user Tibor Blaho excitedly discovered that ChatGPT can sing again in Advanced Voice Mode, and what it sang was the classic Christmas oldie "Last Christmas", with a recognizable tune and melody. Compared with the original by Wham!, ChatGPT's rendition gets the lyrics word-for-word, and the tune is roughly on pitch. Rhythmically, though, the GPT-4o version of ChatGPT still falls short: it noticeably rushes the beat. It is not just pop songs, either; ChatGPT seems able to manage a few lines of opera. And if you cannot decide what to hear, simply tell ChatGPT "Sing me a song", and the catchy "AI song" that follows may be stuck in your head for the rest of the day. In fact, when OpenAI first launched its GPT-4o flagship model in May last year, it also set off a wave of the AI chat assistant ChatGPT singing. A year on, when ChatGPT serenades you with a birthday song again, both the melody and the vocal delivery sound more natural and fluid, and more human, as if an old friend were standing beside you holding a cake and singing along to celebrate. AI Stefanie Sun has been popular for two years, so why do the ChatGPTs still not know how to sing? You may wonder why, on social media ...
OpenAI gives every model an "ID card": one page covering capability, speed, and pricing metrics
量子位 · 2025-03-10 03:29
Core Viewpoint
- OpenAI has introduced a set of "identity cards" for its various models to clarify their capabilities, speeds, supported modalities, and pricing, addressing the confusion surrounding its numerous models [2][3][7].
Group 1: Model Identity Cards
- Each model's identity card includes key information such as capabilities, speed, supported input/output modalities, and pricing, presented in a clear and concise format [3][11].
- A comparison feature allows users to compare up to three models at once, making it easier to understand the differences in their specifications [4][17].
- The identity cards categorize models into different series, including reasoning models, drawing models (DALL·E), speech synthesis models (TTS), speech recognition models (Whisper), and fine-tuned models for safety detection [8][9][10].
Group 2: Pricing and Performance
- For example, the pricing for the o1 model series is structured as follows: Input at $15.00 per million tokens, Cached Input at $7.50, and Output at $60.00 [13].
- The performance metrics for models like GPT-4o include reasoning capabilities, speed ratings, and a maximum context window of over 200,000 tokens [11][12].
- The tiered pricing structure for API access includes various limits on requests per minute (RPM) and requests per day (RPD), with higher tiers offering significantly increased capacities [15].
Group 3: User Guidance and Community Contributions
- Community members have created guides to help individual users navigate the complexities of model selection, summarizing the functionalities of different models [20][21].
- A notable contribution from AI blogger Peter Gostev provides a detailed comparison of ChatGPT models, making it easier for users to understand their options [20][22].
- Despite the usefulness of such summaries, there are concerns that they may quickly become outdated as models evolve [23][24].
Group 4: Future Developments
- OpenAI is working toward consolidating its models into a unified version by the time GPT-5 is released, which aims to simplify the model selection process for users [28][29].
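The per-million-token rates quoted above for the o1 series translate directly into a per-request cost. A minimal sketch of that arithmetic follows; the helper function, rate table, and token counts are illustrative assumptions, not part of any OpenAI SDK:

```python
# Per-million-token rates quoted for the o1 series (USD):
# $15.00 input, $7.50 cached input, $60.00 output.
O1_RATES = {"input": 15.00, "cached_input": 7.50, "output": 60.00}

def request_cost(rates: dict, input_toks: int, output_toks: int,
                 cached_toks: int = 0) -> float:
    """USD cost of one API call at per-million-token rates."""
    return (input_toks * rates["input"]
            + cached_toks * rates["cached_input"]
            + output_toks * rates["output"]) / 1_000_000

# Example: 10k input tokens ($0.15) + 2k output tokens ($0.12) = $0.27
cost = request_cost(O1_RATES, input_toks=10_000, output_toks=2_000)
print(f"${cost:.2f}")  # $0.27
```

Note how the 8x gap between input and cached-input rates makes prompt caching the dominant lever for repeated large prompts.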