Metrics
100M views… Zero Impact?
20VC with Harry Stebbings · 2025-10-31 16:27
I've had clips that have exceeded 100 million views and the branding of the show is there. It's clearly an interview on my podcast. There's a link to the full episode, and the impact on the long-form episode has been imperceptible. You look at the downloads and there is sweet all: zero impact. So I think it's also just worth noting that these teams and the platforms, you know, the teams behind the platforms, are very smart. They're very well funded. They have data scientists and teams of many others whose sole obje ...
X @Starknet (BTCFi arc)
Starknet 🐺🐱 · 2025-10-02 13:44
RT Brother Lyskey (@0xLyskey): 1/ Starknet metrics since BTCFi launch. No blabla, just metrics 🧵 https://t.co/d1FHJZ6Uuk ...
Meta's miss: Audio Rooms
20VC with Harry Stebbings · 2025-09-07 14:01
Goal Setting
- The North Star is the goal, not a metric [1]
- The company's goal should be clearly defined [1]
- A metric is used to describe the goal, but it is never a perfect representation [1]
Metric Definition
- The most important thing is to have absolute clarity on what your goal is, and then do the best you possibly can to describe that goal with a metric [1]
- A metric is always broken [1]
Example
- When joining Meta, the goal was to connect the world online [1]
The truth about North Star metrics
20VC with Harry Stebbings · 2025-09-06 14:00
Core Goal & Metrics
- The North Star is the company's goal, not a metric [1]
- The company's goal should be clearly defined [1]
- Metrics are used to describe the goal, but never perfectly [1]
- Metrics are inherently flawed [2]
Practical tactics to build reliable AI apps — Dmitry Kuchin, Multinear
AI Engineer · 2025-08-03 04:34
Core Problem & Solution
- The traditional software development lifecycle is insufficient for AI applications because models are non-deterministic, requiring a data-science approach and continuous experimentation [3]
- The key is to reverse-engineer metrics from real-world scenarios, focusing on product experience and business outcomes rather than abstract data-science metrics [6]
- Build evaluations (evals) at the beginning of the process, not at the end, to surface failures and areas for improvement early [14]
- Continuous improvement of evals and solutions is necessary to reach a baseline benchmark for optimization [19]
Evaluation Methodology
- Evaluations should mimic specific user questions and criteria relevant to the solution's end goal [7]
- Use large language models (LLMs) to generate evaluations, covering different user personas and expected answers [9][11]
- Dig into the details of each evaluation failure to find the root cause: the test definition or the solution's performance [15]
- Experimentation means changing models, logic, prompts, or data, then re-running evaluations continuously to catch regressions [16][18]
Industry-Specific Examples
- For customer-support bots, measure the rate of escalation to human support as a key metric [5]
- For text-to-SQL or text-to-graph-database applications, create a mock database with known data to validate expected results [22] (see the first sketch below)
- For call-center conversation classifiers, use simple matching to determine whether the correct rubric was applied [23] (see the second sketch below)
Key Takeaways
- Evaluate AI applications the way users actually use them, avoiding abstract metrics [24]
- Frequent evaluations enable rapid progress and reduce regressions [25]
- Well-defined evaluations lead to explainable AI, providing insight into how the solution works and where it breaks [26]
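A minimal sketch of the mock-database idea for text-to-SQL evals [22]: seed a throwaway SQLite database with known rows so the expected answer is known in advance, run the generated SQL, and compare. Here generate_sql and the schema are illustrative assumptions, not an API from the talk:

```python
import sqlite3

def generate_sql(question: str) -> str:
    # Hypothetical stand-in for the LLM call; hard-coded so the
    # sketch is self-contained and deterministic.
    return "SELECT COUNT(*) FROM orders WHERE status = 'shipped'"

def eval_text_to_sql() -> bool:
    # In-memory mock database with known data: 2 shipped, 1 pending.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(1, "shipped"), (2, "pending"), (3, "shipped")])
    # Run the model's SQL against the mock data and compare with the
    # answer we can compute by hand from the seeded rows.
    sql = generate_sql("How many orders have shipped?")
    got = conn.execute(sql).fetchone()[0]
    expected = 2
    print(f"expected={expected} got={got}")
    return got == expected

if __name__ == "__main__":
    assert eval_text_to_sql()
```

Because the seeded data is fixed, any mismatch is attributable to the generated SQL itself rather than to drift in the underlying data.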
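And a sketch of the simple-matching eval loop for a call-center conversation classifier [23]; EvalCase, classify_conversation, and run_evals are illustrative names, not Multinear's actual API. Each case mimics one real user scenario per persona [7][11], and failures are listed individually so the root cause, a bad test definition versus a bad solution, can be inspected [15]:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    # One eval mimics one real user scenario: the input a user would
    # send and the outcome we expect, per persona if needed.
    persona: str
    conversation: str
    expected_rubric: str  # for a classifier, simple matching suffices

def classify_conversation(conversation: str) -> str:
    """Stand-in for the solution under test (model + prompt + logic)."""
    # ... call your model here; hard-coded for the sketch
    return "billing_dispute"

def run_evals(cases: list[EvalCase],
              solution: Callable[[str], str]) -> None:
    """Run every case, report pass/fail, and surface each failure."""
    failures = []
    for case in cases:
        got = solution(case.conversation)
        if got != case.expected_rubric:
            failures.append((case, got))
    print(f"{len(cases) - len(failures)}/{len(cases)} passed")
    for case, got in failures:
        # Inspect failures one by one: is the test definition wrong,
        # or did the solution regress?
        print(f"FAIL [{case.persona}] "
              f"expected={case.expected_rubric} got={got}")

if __name__ == "__main__":
    cases = [
        EvalCase("frustrated_customer",
                 "I was charged twice for my subscription!",
                 "billing_dispute"),
        EvalCase("new_user",
                 "How do I reset my password?",
                 "account_access"),
    ]
    run_evals(cases, classify_conversation)
```

Re-running this loop after every change to the model, prompt, or logic is what catches regressions early [16][18].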
X @Token Terminal 📊
Token Terminal 📊 · 2025-07-07 23:07
RT Token Terminal 📊 (@tokenterminal): BIG BEAUTIFUL METRICS PAGES & WHERE TO FIND THEM: 📂 Market sectors 📂 Lending 📂 Metrics 📂 Active loans. Thank you for your attention to this matter! https://t.co/cpeKGGIESA ...