LLMs don’t actually call tools. Here’s what really happens.

In this episode of Code to Care, we break down how Agentic AI enables large language models to go beyond generating responses: they now make decisions and request tools to complete tasks. But here’s the twist: LLMs never execute tools directly. Instead, they signal what should happen and rely on you or your system to do the work. 🔐 Why? Security firewalls + human oversight.

What you’ll learn:
• The 3 core functions of Agentic AI
• How LLMs "call" tools through structured requests
• Why tool execution is delegated for security + compliance
• Why human-in-the-loop workflows are essential for sensitive actions
(A minimal code sketch of the request + execution flow is included at the end of this description.)

TIMESTAMPS
00:00 – Intro: What Is Agentic AI?
00:26 – Core Functions: Write, Decide, Call Tools
01:31 – Tool Calling = Real-World Action
01:40 – How LLMs Actually Call Tools
02:10 – HR Use Case Example
03:41 – System Setup: LLM + Tools + App
05:02 – LLMs Don’t Call Tools Directly
05:20 – Commercial Plug
05:36 – Request + Tool Execution Flow
06:36 – LLM Processes Tool Results
07:11 – Why LLMs Can’t Call Tools Themselves
09:06 – Flow Recap: Agentic Execution
09:30 – Wrap-Up + What’s Next

#agenticai #ai #aiagents #llms #techtalk #codetocare #aitools
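
BONUS: For anyone who wants to see the idea in code, here is a minimal Python sketch of the flow discussed in the episode: the LLM only emits a structured request, and the application decides, executes the tool, and hands the result back. All names here (llm_request_tool_call, get_leave_balance, the approval gate, the HR example data) are hypothetical illustrations, not code from the video.

# Minimal sketch of the delegated tool-execution loop (hypothetical names throughout).
import json

# 1. Tools live in the application, not in the LLM. HR example is illustrative.
def get_leave_balance(employee_id: str) -> dict:
    # Stand-in for a real HR system lookup.
    return {"employee_id": employee_id, "leave_days_remaining": 12}

TOOLS = {"get_leave_balance": get_leave_balance}
SENSITIVE_TOOLS = {"get_leave_balance"}  # anything touching personal data

# 2. The LLM never executes anything: it only returns a structured request.
#    A stub stands in for the real model call here.
def llm_request_tool_call(user_message: str) -> dict:
    return {
        "type": "tool_call",
        "tool": "get_leave_balance",
        "arguments": {"employee_id": "E-1042"},
    }

# 3. Human-in-the-loop gate for sensitive actions (auto-approves in this sketch).
def human_approves(call: dict) -> bool:
    print(f"Approve tool call? {json.dumps(call)}")
    return True  # a real workflow would block on a reviewer

# Stub for the follow-up model call that turns raw tool output into a reply.
def llm_summarize(user_message: str, tool_result: dict) -> str:
    return f"You have {tool_result['leave_days_remaining']} leave days remaining."

def run_agent(user_message: str) -> str:
    call = llm_request_tool_call(user_message)
    if call["tool"] in SENSITIVE_TOOLS and not human_approves(call):
        return "Request denied by reviewer."
    result = TOOLS[call["tool"]](**call["arguments"])  # 4. The app executes the tool...
    return llm_summarize(user_message, result)         # 5. ...and returns the result to the LLM.

if __name__ == "__main__":
    print(run_agent("How many leave days do I have left?"))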











