Google’s Secure AI Framework (SAIF) →
Best practices for securing AI deployments →
Build AI securely on Google Cloud (Full report) →

Large language models can speed up coding, operations tasks, pattern matching, and more in developer workflows. However, the risk of exposing sensitive data through these models is top of mind for organizational decision-makers and developers alike. Watch along as Luis Urena, Developer Advocate at Google Cloud, discusses best practices for securing AI deployments, shows how to build an input and output parser using Sensitive Data Protection, and interviews Cloud Security Architect Jim Miller about his experience helping Google Cloud customers deploy AI.

Chapters:
0:00 - Intro
1:04 - Google’s commitment to AI security
2:26 - What is Sensitive Data Protection?
4:32 - Demo: Large Language Models in action
8:44 - Interview with Jim Miller
11:19 - Wrap up

Build your own input and output parser using Sensitive Data Protection (GitHub) →
Watch more Making with AI →
Subscribe to Google Cloud Tech →

#MakingwithAI #GoogleCloud

Speakers: Luis Urena, Jim Miller
Products Mentioned: Cloud - AI and Machine Learning - Vertex AI
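The input/output parser pattern from the demo can be sketched in a few lines. This is a minimal local stand-in, not the demo's actual code: the real implementation calls the Sensitive Data Protection (Cloud DLP) service, whereas here simple regexes play the role of infoType detectors, and `guarded_llm_call` and `redact` are hypothetical names chosen for illustration.

```python
import re

# Hypothetical stand-in detectors. In the real demo, Sensitive Data
# Protection's managed infoType detectors (EMAIL_ADDRESS, PHONE_NUMBER, ...)
# would do this matching server-side via a de-identify request.
PATTERNS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE_NUMBER": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected sensitive value with its infoType name."""
    for info_type, pattern in PATTERNS.items():
        text = pattern.sub(f"[{info_type}]", text)
    return text

def guarded_llm_call(prompt: str, model) -> str:
    """Parse the input before it reaches the model, and the output after."""
    safe_prompt = redact(prompt)   # input parser: scrub before sending
    response = model(safe_prompt)  # e.g. a Vertex AI model call
    return redact(response)        # output parser: scrub before returning

# Example with a fake model that just echoes its prompt:
echo = lambda p: f"You said: {p}"
print(guarded_llm_call("Contact jane@example.com or 555-123-4567", echo))
# → You said: Contact [EMAIL_ADDRESS] or [PHONE_NUMBER]
```

The key design point from the video is that redaction happens on both sides of the model call, so sensitive data never reaches the model and never leaks back out in its response.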