Please see the pinned comment for a solution to the failed retrieval at the very end of the video!

RAG systems can be pretty useful: they allow you to dynamically augment LLM prompts with data and information that would otherwise not be available to the model. Let's build such a system - one that runs 100% locally on our machine via Ollama, Gemma 3 & Qdrant (a vector database).

Want to learn more? Explore my courses!
👉 Complete Generative AI Course:
👉 Running LLMs Locally via LM Studio & Ollama:

Website:
Code:

Socials:
Twitch:
Main YT Channel: @maximilian-schwarzmueller
X:
Udemy Courses:
LinkedIn:
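
For reference, here is a rough Python sketch of the core retrieve-and-augment flow such a system uses. It is a minimal illustration, not the exact code from the video: it assumes the ollama and qdrant-client Python packages, a Qdrant instance on localhost:6333, nomic-embed-text as the embedding model (768 dimensions), and a collection called "docs" - all illustrative choices.

import ollama
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

client = QdrantClient(url="http://localhost:6333")  # locally running Qdrant

# Create a collection sized for nomic-embed-text vectors (768 dimensions).
client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
)

# Index: embed each document locally via Ollama and store it with its text.
documents = ["Qdrant is a vector database.", "Gemma 3 runs locally via Ollama."]
for i, text in enumerate(documents):
    vector = ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]
    client.upsert(
        collection_name="docs",
        points=[PointStruct(id=i, vector=vector, payload={"text": text})],
    )

# Retrieve: embed the question and fetch the most similar stored documents.
question = "Which vector database are we using?"
query_vector = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]
hits = client.search(collection_name="docs", query_vector=query_vector, limit=2)
context = "\n".join(hit.payload["text"] for hit in hits)

# Augment: prepend the retrieved context to the prompt and ask Gemma 3.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
response = ollama.chat(model="gemma3", messages=[{"role": "user", "content": prompt}])
print(response["message"]["content"])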











