  • Published 1 week ago by The TWIML AI Podcast with Sam Charrington

Scaling Agentic Inference Across Heterogeneous Compute [Zain Asgar] - 757

In this episode, Zain Asgar, co-founder and CEO of Gimlet Labs, joins us to discuss heterogeneous AI inference across diverse hardware. Zain argues that the current industry standard of running all AI workloads on high-end GPUs is unsustainable for agents, which consume significantly more tokens than traditional LLM applications. We explore Gimlet's approach to heterogeneous inference, which involves disaggregating workloads across a mix of hardware (from H100s to older GPUs and CPUs) to optimize unit economics without sacrificing performance. We dive into their "three-layer cake" architecture: workload disaggregation, a compilation layer that maps models to specific hardware targets, and a novel system that uses LLMs to autonomously rewrite and optimize compute kernels. Finally, we discuss the complexities of networking in heterogeneous environments, the trade-offs between numerical precision and application accuracy, and the future of hardware-aware scheduling.

🗒️ For the full list of resources for this episode, visit the show notes page:

🔔 Subscribe to our channel for more great content just like this:

🗣️ CONNECT WITH US!
===============================
Subscribe to the TWIML AI Podcast:
Follow us on Twitter:
Follow us on LinkedIn:
Join our Slack Community:
Subscribe to our newsletter:
Want to get in touch?
Send us a message:

📖 CHAPTERS
===============================
00:00 - Introduction
01:51 - Gimlet Labs
03:37 - Compute for agentic AI workloads
04:43 - Optimization for agentic workloads
05:50 - Heterogeneity
09:44 - Challenges
12:24 - Gimlet cloud vs. customer-hosted environments
13:21 - Licensing and pricing model
13:54 - "Three-layer cake" stack
17:04 - Layer 1: Workload disaggregation
18:11 - Layer 2: Compilation system
19:29 - Layer 3: Kernel optimization
27:27 - TCO and performance gains
32:09 - Heterogeneity in training
34:39 - Use cases
39:08 - Sovereign clouds
42:10 - Physical rack constraints
45:43 - Future directions and reasons for developers choosing Gimlet cloud

🔗 LINKS & RESOURCES
===============================
Gimlet Labs Emerges from Stealth with 8-Figure Revenues, Fundamentally Shifting the Paradigm in How Agentic AI Workloads Are Run and Opening Up New Compute Capacity - Gimlet Labs
Benchmarking AI-generated CUDA kernels on an H100
Speeding up PyTorch inference on Apple devices with AI-generated Metal kernels
Qualcomm's AI250 Attacks the AI Inference Memory Bottleneck | Durga Malladi Interview
Closing the Loop Between AI Training and Inference with Lin Qiao - 742

📸 Camera:
🎙️ Microphone:
🚦 Lights:
🎛️ Audio Interface:
🎚️ Stream Deck:
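To make the hardware-aware scheduling idea from the episode concrete, here is a purely illustrative sketch (not Gimlet's actual system; all names, numbers, and the cost model are hypothetical): route each agent step to the cheapest device in a heterogeneous fleet that still meets its latency budget.

```python
# Hypothetical sketch: cost-aware routing across a heterogeneous fleet.
# Latency-tolerant agent steps can land on cheap hardware; interactive
# steps fall back to GPUs that meet the latency budget.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    tokens_per_sec: float   # sustained decode throughput
    cost_per_hour: float    # $/hour to run this device

@dataclass
class Workload:
    name: str
    tokens: int             # tokens this agent step will generate
    max_latency_s: float    # latency budget for the step

def schedule(workload: Workload, fleet: list[Device]) -> Device:
    """Pick the cheapest device that fits the workload's latency budget."""
    feasible = [d for d in fleet
                if workload.tokens / d.tokens_per_sec <= workload.max_latency_s]
    if not feasible:
        raise RuntimeError(f"no device can serve {workload.name} within budget")
    return min(feasible, key=lambda d: d.cost_per_hour)

fleet = [
    Device("H100", tokens_per_sec=3000, cost_per_hour=8.0),
    Device("A100", tokens_per_sec=1500, cost_per_hour=3.0),
    Device("CPU",  tokens_per_sec=60,   cost_per_hour=0.4),
]

# A background summarization step tolerates 30 s, so the CPU suffices;
# an interactive step with a 0.5 s budget needs a GPU.
print(schedule(Workload("batch-summarize", 600, 30.0), fleet).name)   # CPU
print(schedule(Workload("interactive-step", 600, 0.5), fleet).name)   # A100
```

The real system described in the episode layers this kind of decision on top of compilation and LLM-driven kernel optimization; this toy version only captures the unit-economics intuition that not every token needs an H100.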