Are you looking to identify specific classes that your network hasn't been trained on? Before embarking on training your own network, consider exploring our CLIP zero-shot classification application. This tool lets you classify new objects using a simple text prompt, eliminating the need for additional network training.

CLIP is a versatile multi-modal framework that processes both image and text inputs, aligning them within a shared embedding space. This allows direct comparison between images and text, so our CLIP application can perform zero-shot classification: identifying objects it has never been explicitly trained on.

Links:
- Hailo CLIP repo: Hailo Community forum
- Video example of CLIP running on the Network Optix Video Management System: Link to VMS webinar (requires registration)
- Related: #raspberrypi AI Kit - Unboxing and Installation Guide

Video contents:
- Introduction: 00:00-01:19
- CLIP Zero-Shot Classification Demo: 01:20-01:41
- GUI Walk-Through: 01:42-04:48
- Prompt Tips: 04:49-07:46
- Person Detection: 07:47-11:15
- Conclusion and Community Engagement: 11:16-12:05
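To make the shared-embedding-space idea concrete, here is a minimal sketch of how zero-shot classification works once the embeddings exist: normalize the image embedding and each text-prompt embedding, take cosine similarities, and pick the closest prompt. The vectors and labels below are stand-ins for illustration only; a real pipeline would obtain them from CLIP's image and text encoders.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels):
    """Pick the text prompt whose embedding is closest to the image embedding."""
    # L2-normalize so that a dot product equals cosine similarity
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # one cosine similarity per prompt
    # Temperature-scaled softmax turns similarities into a probability-like score
    logits = sims * 100.0
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return labels[int(np.argmax(sims))], probs

# Stand-in embeddings (assumed values, not real CLIP outputs)
image_emb = np.array([0.9, 0.1, 0.2])
text_embs = np.array([
    [0.8, 0.2, 0.1],   # embedding for "a photo of a dog"
    [0.1, 0.9, 0.3],   # embedding for "a photo of a cat"
])
labels = ["a photo of a dog", "a photo of a cat"]

label, probs = zero_shot_classify(image_emb, text_embs, labels)
```

Because classification reduces to comparing vectors, adding a new class is just a matter of writing a new text prompt; no retraining is involved.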











