Running LLMs locally: Practical LLM Performance on DGX Spark — Mozhgan Kabiri Chimeh, NVIDIA
AI Engineer · 2026-04-10 00:20
Hello everyone. I'm Mozhgan Kabiri Chimeh, developer relations manager at NVIDIA, where I work closely with developers building and deploying AI systems. Today, we're looking at running LLMs locally: practical LLM performance on the DGX Spark.

This isn't a theoretical talk. It's a data-backed journey through the trade-offs of modern AI infrastructure. The findings are based on hands-on experiments, with the goal of understanding what's actually practical on a single system.

The evolution in AI puts greater demand on ...