Inference at Scale: How DeepL Built an AI Infrastructure for Real-Time Language AI
NVIDIA· 2025-12-02 23:24
Core Business & Technology
- DeepL leverages AI and research to enhance business communication across borders through translation-focused language AI [1]
- The company's AI-powered translation aims for accuracy, fluency, and nuance, which requires sophisticated AI models that understand language in depth and in context [1]
- DeepL operates large data centers and trains its models on billions of sentences, words, and characters to extract linguistic insight [2]
- NVIDIA's high-end infrastructure, including Blackwell GPUs, is being deployed in DeepL's data centers [2]

Performance & Efficiency
- The new clusters will enable DeepL to translate the entire internet in approximately two weeks [3]
- TensorRT-LLM is used to make the inference process more efficient and reduce latency while maintaining translation quality and accuracy [4]
- TensorRT-LLM lets the company achieve these business outcomes with the least possible infrastructure investment [4]
- The Grace Blackwell stack is optimized for efficiency, using NVIDIA's liquid cooling and green power through a partnership with EcoDataCenter [5]

Collaboration & Future
- DeepL collaborates with NVIDIA on software advancements to improve AI-driven communication solutions [5]
- The company focuses on enabling businesses to communicate better internally and externally, fostering dialogue across borders [5]
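The "translate the entire internet in approximately two weeks" claim can be sanity-checked with a back-of-envelope throughput calculation. The sketch below uses purely illustrative input figures (the word count of the indexable web is an assumption, not a number from DeepL or NVIDIA); it only shows how such a claim maps to a required words-per-second rate.

```python
# Back-of-envelope check of a "translate the web in ~2 weeks" claim.
# All input figures are illustrative assumptions, not DeepL's numbers.

SECONDS_PER_DAY = 86_400

def required_throughput(total_words: float, days: float) -> float:
    """Words per second needed to process `total_words` within `days`."""
    return total_words / (days * SECONDS_PER_DAY)

# Assumption: the indexable web holds on the order of 10^14 words.
total_words = 1e14
wps = required_throughput(total_words, days=14)
print(f"{wps:,.0f} words/second")  # ≈ 82.7 million words/second
```

Under these assumed inputs, the cluster would need to sustain tens of millions of translated words per second, which is the scale of figure that batched, low-latency inference stacks such as TensorRT-LLM are built to reach.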