This time, we talk about AI risks beyond our usual focus on rogue AIs: malicious use and accidents. In particular, we look at how AI could undermine democracy, enable automated cyberattacks on critical infrastructure, lower the barriers to biological and chemical misuse, concentrate power in a few governments or corporations, and cause large-scale accidents through bugs, design flaws, and weak safety culture.

▀▀▀▀▀▀▀▀▀SOURCES & READINGS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

An Overview of Catastrophic AI Risks:

For more about AI risk from Dan Hendrycks:
Introduction to ML Safety:

Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust:

Sources about the spread of a deepfake audio during Slovakia's 2023 election:
-
-
-
-

Language Models Can Teach Themselves to Program Better:

DeepObfusCode: Source Code Obfuscation Through Sequence-to-Sequence Networks:

Artificial intelligence and the offense–defense balance in cyber security:

Publications on AI in drug discovery, by use case family and by year (page 5):

DeepFace:

The growing influence of industry in AI research:

Building a Culture of Safety for AI: Perspectives and Challenges:

▀▀▀▀▀▀▀▀▀PATREON, MEMBERSHIP, MERCH▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

🟠 Patreon:
🔵 Channel membership:
🟢 Merch:
🟤 Ko-fi, for one-time and recurring donations:

▀▀▀▀▀▀▀▀▀SOCIAL & DISCORD▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

Rational Animations Discord:
Reddit:
X/Twitter:
Instagram:

▀▀▀▀▀▀▀▀▀PATRONS & MEMBERS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

Our sweet patrons and channel members!

▀▀▀▀▀▀▀CREDITS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

The team behind the video:











