In this video, I show you how to build a full speech-to-speech pipeline using Python – great for your next tinkering or maker project! We start with Porcupine from Picovoice to detect a custom wake word. Once triggered, OpenAI's Whisper transcribes your speech to text, which is then passed to Ollama (running a local LLM like LLaMA or Mistral) as a prompt. The response from the AI is spoken back using a text-to-speech (TTS) engine in Python. If you're a hobbyist, maker, or just curious about voice interfaces with AI, this one's for you.

🛠️ Tools Used:
- Picovoice Porcupine (wake word)
- OpenAI Whisper (speech recognition)
- Ollama (local LLM)
- Python TTS (pyttsx3 / edge-tts / your choice)

Link to code:

👨‍🔧 I’m a hobbyist maker sharing cool projects – subscribe for more voice/AI/electronics experiments!
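
If you want a feel for how the pieces could fit together before watching, here is a minimal sketch of the loop, not the exact code from the video. It assumes a Picovoice access key in the PICOVOICE_ACCESS_KEY environment variable, the built-in "porcupine" keyword rather than a custom wake word, Whisper's "base" model, a local Ollama server on its default port running "mistral", and pyttsx3 for playback. Packages assumed: pvporcupine, pvrecorder, openai-whisper (plus ffmpeg), requests, pyttsx3.

```python
import os
import struct
import wave

import pvporcupine
import pyttsx3
import requests
import whisper
from pvrecorder import PvRecorder

RECORD_SECONDS = 5  # fixed-length capture after the wake word (simplification)
OLLAMA_URL = "http://localhost:11434/api/generate"
OLLAMA_MODEL = "mistral"  # any model you have pulled with `ollama pull`


def record_utterance(recorder: PvRecorder, sample_rate: int, path: str = "utterance.wav") -> str:
    """Record a short fixed-length utterance to a 16-bit mono WAV file."""
    frames = []
    num_reads = int(RECORD_SECONDS * sample_rate / recorder.frame_length)
    for _ in range(num_reads):
        frames.extend(recorder.read())
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)  # 16-bit samples
        wav.setframerate(sample_rate)
        wav.writeframes(struct.pack("<" + "h" * len(frames), *frames))
    return path


def ask_ollama(prompt: str) -> str:
    """Send the transcript to the local Ollama server and return its reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": OLLAMA_MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]


def main() -> None:
    porcupine = pvporcupine.create(
        access_key=os.environ["PICOVOICE_ACCESS_KEY"],
        keywords=["porcupine"],  # swap for keyword_paths=[...] with a custom wake word
    )
    recorder = PvRecorder(frame_length=porcupine.frame_length)
    stt = whisper.load_model("base")
    tts = pyttsx3.init()

    recorder.start()
    print("Listening for the wake word...")
    try:
        while True:
            pcm = recorder.read()
            if porcupine.process(pcm) >= 0:  # wake word detected
                print("Wake word detected, recording...")
                wav_path = record_utterance(recorder, porcupine.sample_rate)
                text = stt.transcribe(wav_path)["text"].strip()
                print(f"You said: {text}")
                reply = ask_ollama(text)
                print(f"Assistant: {reply}")
                tts.say(reply)
                tts.runAndWait()
                print("Listening for the wake word...")
    finally:
        recorder.stop()
        recorder.delete()
        porcupine.delete()


if __name__ == "__main__":
    main()
```

The fixed five-second recording window keeps the sketch short; in practice you would likely add silence detection to know when the speaker has finished, and you could swap pyttsx3 for edge-tts if you prefer more natural-sounding voices.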











