RAG
Elastic(ESTC) - 2026 Q3 - Earnings Call Transcript
2026-02-26 23:00
Elastic (NYSE:ESTC) Q3 2026 Earnings call, February 26, 2026, 05:00 PM ET
Speaker 10: Good afternoon, welcome to the Elastic third quarter fiscal 2026 earnings results conference call. All participants will be in listen-only mode. Should you need assistance, please signal a conference specialist by pressing the star key followed by zero. After today's presentation, there will be an opportunity to ask questions. To ask a question, you may press star then one on your telephone keypad. To withdraw your question, pl ...
X @Avi Chawla
Avi Chawla· 2026-02-13 09:05
If you found it insightful, reshare it with your network. Find me → @_avichawla. Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs.
Avi Chawla (@_avichawla): A graph-powered all-in-one RAG system! RAG-Anything is a graph-driven, all-in-one multimodal document processing RAG system built on LightRAG. It supports all content modalities within a single integrated framework. 100% open-source. https://t.co/1Kw21DDcA7 ...
Absurd! These layoffs have taken things to a whole new level...
菜鸟教程· 2026-02-10 03:29
Core Insights
- The article highlights the rapid decline in demand for traditional CRUD development roles due to the swift advancement of AI technology, positioning these roles as potentially obsolete in the near future [1]
- It emphasizes that 63% of companies are transitioning to AI product development, making AI application development skills a necessity in the current job market [2]
- The article points out that positions such as large model application development engineers are in high demand, with a significant talent shortage leading to salary increases of 40-60% for qualified candidates [2]

Summary by Sections

AI Technology and Job Market
- Traditional skills in business coding, API integration, and bug fixing are rapidly depreciating in value in the AI era [2]
- Companies are increasingly seeking developers who are proficient in AI technologies, particularly in fine-tuning, Agents, and Retrieval-Augmented Generation (RAG) [2][7]

Course Offering
- The article promotes a course titled "Large Model Application Development Practice," designed to help developers build complete application development paths from scratch [3][4]
- The course includes two live sessions focusing on theoretical knowledge, practical development skills, and demonstrable projects [3]

Career Advancement Opportunities
- The course offers additional benefits such as internal referral opportunities and direct hiring rights upon completion [5][16]
- Participants will receive a collection of large model application case studies and a white paper on AI commercial implementation [5][14]

Learning Outcomes
- The curriculum covers essential technologies like fine-tuning for specific tasks, RAG for efficient knowledge retrieval and generation, and the development of AI Agents for multi-task collaboration and complex problem-solving [7][14]
- The course aims to equip participants with the skills to navigate the evolving job market, particularly in high-demand sectors such as finance, healthcare, and legal [7][14]

Market Demand and Job Security
- The article stresses the urgency for developers to acquire AI skills to avoid job insecurity, especially for those approaching the age of 35 [11][18]
- It notes that many past participants have successfully secured high-paying job offers after completing the course [9][16]
Storage price hikes are back: Q1 NAND flash prices expected to rise more than 40%
21 Shi Ji Jing Ji Bao Dao· 2026-02-09 10:55
Core Viewpoint
- The price increase of NAND flash memory is driven by the rising demand from AI applications, particularly in large-scale inference processes, leading to significant upward revisions in price forecasts by market research firms [1][2][3].

Group 1: Price Forecasts and Market Trends
- Samsung Electronics raised NAND flash contract prices by over 100% in January, prompting multiple market research firms to revise their price forecasts upward [1].
- TrendForce increased its first-quarter NAND flash price growth forecast from 33-38% to 55-60%, indicating potential for further upward adjustments [1].
- Counterpoint predicts NAND flash prices will rise by over 40% in the current quarter [1].

Group 2: AI Demand and Storage Architecture
- The surge in NAND flash demand is primarily attributed to AI applications, particularly retrieval-augmented generation (RAG), which enhances the accuracy of large language models [1][2].
- The transition from training to large-scale inference in generative AI has led to increased demand for NAND flash, as systems require high-speed access to vast amounts of data [2][3].
- The need for high-frequency access to context data during inference has resulted in a shift towards a storage architecture that includes HBM, DRAM, and NAND [3].

Group 3: Supply Constraints and Future Outlook
- The global NAND flash production capacity is concentrated among a few major players, including Samsung, SK Hynix, and Micron, with investments in NAND lagging behind HBM and advanced DRAM [4][5].
- Morgan Stanley forecasts a 40% year-over-year increase in average NAND sales prices by 2026, with only a slight decline expected in 2027 [5].
- The introduction of High Bandwidth Flash (HBF) aims to address the limitations of traditional NAND SSDs, providing higher bandwidth and capacity suitable for AI inference applications [5][6].

Group 4: Technological Advancements
- HBF combines 3D NAND flash with high-bandwidth interface technology, offering 8 to 16 times the capacity of traditional HBM, making it a competitive solution for AI applications [5][6].
- The industry is moving towards a multi-layer architecture of "DRAM cache + HBF acceleration + NAND mass storage," which is expected to alleviate supply-demand imbalances and drive growth [6].
X @Avi Chawla
Avi Chawla· 2026-02-03 06:30
The ultimate Full-stack AI Engineering roadmap to go from 0 to 100. This is the exact mapped-out path on what it actually takes to go from Beginner → Full-Stack AI Engineer.
> Start with Coding Fundamentals.
> Learn Python, Bash, Git, and testing.
> Every strong AI engineer starts with fundamentals.
> Learn how to interact with models by understanding LLM APIs.
> This will teach you structured outputs, caching, system prompts, etc.
> APIs are great, but raw LLMs still need the latest info to be effective.
> Learn h ...
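The roadmap's step on LLM APIs (structured outputs, caching, system prompts) can be illustrated without committing to any particular vendor. Below is a minimal sketch in which `call_llm` is a hypothetical stub standing in for a real API client; the names `call_llm` and `ask` are illustrative, not from any library:

```python
import json
from functools import lru_cache

def call_llm(system_prompt: str, user_prompt: str) -> str:
    # Hypothetical stub standing in for a real LLM API call; a real client
    # would send both prompts over the network and return the model's text.
    return json.dumps({"reply": user_prompt.upper(), "confidence": 0.9})

@lru_cache(maxsize=128)  # caching: repeated identical prompts skip the call
def ask(system_prompt: str, user_prompt: str) -> str:
    return call_llm(system_prompt, user_prompt)

# Structured output: the model is asked for JSON, which we parse into a dict.
result = json.loads(ask("You are a terse assistant. Reply in JSON.", "hello"))
print(result["reply"])
```

The same three ideas carry over to a real client: a system prompt fixes behavior, a cache keyed on the full prompt avoids paying twice for identical requests, and parsing a JSON reply turns free text into a structure your code can use.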
Officially laying off 30,000 people, with N+4 severance!
猿大侠· 2026-02-02 04:11
Remember that earlier this year a certain major company released its 2024 annual report. The data showed a total headcount of 194,320 as of December 31, 2024, down from 219,260 as of December 31, 2023. That means the workforce shrank by nearly 24,940 over the past year. On one side, traditional roles are being phased out at an accelerating pace; on the other, large-model talent is nearly impossible to hire. This contrast of ice and fire is playing out brutally across the tech industry.

| Date | Headcount | Change from prior period |
| --- | --- | --- |
| As of 2021-12-31 | 259,316 | / |
| As of 2022-03-31 | 254,941 | -4,375 |
| As of 2022-06-30 | 245,700 | -9,241 |
| As of 2022-09-30 | 243,903 | -1,797 |
| As of 2022-12-31 | 239,740 | -4,163 |
| As of 2023-03-31 | 235,216 | -4,524 |
| As of 2023-06-30 | 228,675 | -6,541 |
| As of 2023-0 ...
X @Avi Chawla
Avi Chawla· 2026-02-01 12:43
If you found it insightful, reshare it with your network. Find me → @_avichawla. Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs. https://t.co/AZVktAeFEh
Avi Chawla (@_avichawla): Here's a common misconception about RAG! When we talk about RAG, it's usually thought: index the doc → retrieve the same doc. But indexing ≠ retrieval. So the data you index doesn't have to be the data you feed the LLM during generation. Here are 4 smart ways to index data: https://t.co/0nKUuBeJ70 ...
X @Avi Chawla
Avi Chawla· 2026-02-01 06:30
Here's a common misconception about RAG! When we talk about RAG, it's usually thought: index the doc → retrieve the same doc. But indexing ≠ retrieval. So the data you index doesn't have to be the data you feed the LLM during generation. Here are 4 smart ways to index data:
1) Chunk Indexing
- The most common approach.
- Split the doc into chunks, embed, and store them in a vector DB.
- At query time, the closest chunks are retrieved directly.
This is simple and effective, but large or noisy chunks can reduce precisi ...
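The chunk-indexing recipe above (split → embed → store → retrieve nearest) can be sketched end to end. This is a toy sketch: `embed` here is a letter-frequency vector standing in for a real embedding model, and an in-memory list stands in for the vector DB:

```python
import math

def embed(text):
    # Toy letter-frequency embedding; a real pipeline would call an
    # embedding model here.
    v = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            v[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v] if norm else v

def build_index(doc, size=60):
    # Split the doc into fixed-size chunks and store (chunk, vector) pairs.
    chunks = [doc[i:i + size] for i in range(0, len(doc), size)]
    return [(c, embed(c)) for c in chunks]

def retrieve(index, query, k=2):
    # At query time, rank stored chunks by cosine similarity to the query.
    q = embed(query)
    ranked = sorted(index,
                    key=lambda pair: sum(a * b for a, b in zip(pair[1], q)),
                    reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

doc = ("Chunk indexing splits a document into pieces. "
       "Each piece is embedded and stored in a vector database. "
       "At query time the closest chunks are retrieved and passed to the LLM.")
index = build_index(doc)
print(retrieve(index, "closest chunks retrieved", k=1))
```

The tweet's point about indexing ≠ retrieval maps directly onto this structure: the first element of each stored pair (what you return to the LLM) need not be the same text whose embedding you stored, e.g. you could index a summary but return the full chunk.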
X @Avi Chawla
Avi Chawla· 2026-01-27 19:33
RT Avi Chawla (@_avichawla)
RAG was never the end goal. Memory in AI agents is where everything is heading. Let me break down this evolution in the simplest way possible.
RAG (2020-2023):
- Retrieve info once, generate response
- No decision-making, just fetch and answer
- Problem: Often retrieves irrelevant context
Agentic RAG:
- Agent decides if retrieval is needed
- Agent picks which source to query
- Agent validates if results are useful
- Problem: Still read-only, can't learn from interactions
AI Memory:
- Read AND ...
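The stages above differ mainly in who decides to retrieve. The agentic-RAG stage (decide → pick a source → validate → generate) can be sketched as a loop; `stub_llm` below is a hypothetical rule-based stand-in for a real model, used only so the sketch runs:

```python
def agentic_rag(question, sources, llm):
    # Minimal agentic-RAG loop: the agent decides whether to retrieve,
    # which source to query, and whether the results are useful.
    if "no" in llm(f"NEED_RETRIEVAL? {question}"):
        return llm(f"ANSWER {question}")               # no retrieval needed
    for name, search in sources.items():               # agent picks a source
        hits = search(question)
        if hits and "yes" in llm(f"USEFUL? {hits}"):   # agent validates hits
            return llm(f"ANSWER {question} USING {hits}")
    return llm(f"ANSWER {question}")                   # fall back to bare model

def stub_llm(prompt):
    # Hypothetical rule-based stand-in for a real LLM call.
    if prompt.startswith(("NEED_RETRIEVAL?", "USEFUL?")):
        return "yes"
    return prompt  # echo the final prompt so we can inspect it

docs = {"kb": lambda q: ["RAG retrieves before generating."]}
answer = agentic_rag("What is RAG?", docs, stub_llm)
print(answer)
```

Note that the loop only ever reads from `sources`; the "AI Memory" stage in the tweet adds a write path (storing what was learned back into a store), which this sketch deliberately omits.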