Alberto Alvarez takes us inside NXP's groundbreaking work bringing private, secure, and multimodal AI experiences to edge devices. Explore how NXP's eIQ GenAI Flow enables on-device inference, fine-tuning, and optimization of LLMs, powered by NXP's SoC family and the Neutron NPU.

Highlights:
- Private conversational AI with wake word detection, RAG, and natural speech synthesis
- Multimodal inference using LLAMA3 + CLIP without cloud connectivity
- Real-time, low-power image + language processing using Kinara's accelerator
- 4-bit and 8-bit quantization for running massive models at the edge

Whether you're building smart industrial systems or AI-powered embedded interfaces, this talk showcases the future of scalable, secure, on-device intelligence.

#EdgeAI #GenerativeAI #NXP #AIonDevice #LLM #EmbeddedAI #TinyML #AICompanion #MultimodalAI #RAG #Kinara #AlbertoAlvarez #Milan2025 #TechForGood











