Business teams often rely on SharePoint Excel files for live data such as budgets, inventory, or project status, updated by multiple users in real time. Traditionally, ingesting this data into analytics platforms required complex workflows built with tools like Azure Logic Apps or Power Automate, involving multiple steps, services, and additional resources. In this video, you’ll learn how the new Databricks native SharePoint connector simplifies this process by connecting SharePoint Excel files directly to Databricks Delta Lake with minimal setup. This reduces latency, eliminates middleware overhead, and makes live business data instantly available for advanced analytics and reporting inside Databricks.

You’ll see a complete walkthrough:
- How to find your SharePoint Site ID via the Microsoft Graph API
- Creating and configuring the SharePoint connection and ingestion pipeline in Databricks
- Running pipeline management code using the Databricks APIs
- Handling ingestion of base64-encoded Excel file content and transforming it into clean Delta tables for downstream analytics
- Discussion of current connector limitations, such as token refresh errors, the inability to target specific folders or files, and the need for custom decoding code

Rough code sketches for these steps are included below.

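For the Site ID lookup, here is a minimal Python sketch against the Microsoft Graph API, assuming an Entra ID app registration with the Sites.Read.All application permission; the tenant, client, and site values are placeholders, not the ones used in the video:

```python
import requests

# Placeholder values -- substitute your own tenant, app registration, and site.
TENANT_ID = "<your-tenant-id>"
CLIENT_ID = "<your-app-client-id>"
CLIENT_SECRET = "<your-app-client-secret>"
HOSTNAME = "contoso.sharepoint.com"
SITE_PATH = "sites/FinanceTeam"

# Acquire an app-only token via the OAuth2 client-credentials flow.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://graph.microsoft.com/.default",
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Look up the site by hostname and server-relative path; the "id" field in
# the response is the Site ID the Databricks connector asks for.
site_resp = requests.get(
    f"https://graph.microsoft.com/v1.0/sites/{HOSTNAME}:/{SITE_PATH}",
    headers={"Authorization": f"Bearer {access_token}"},
)
site_resp.raise_for_status()
print(site_resp.json()["id"])
```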

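Creating the SharePoint connection can be sketched with the Unity Catalog connections REST API. The `connection_type` value and the option keys below are assumptions, so confirm them against the Databricks SharePoint ingestion documentation linked at the end:

```python
import requests

# Placeholders -- use your own workspace URL and a personal access token.
DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
DATABRICKS_TOKEN = "<your-pat>"

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/unity-catalog/connections",
    headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
    json={
        "name": "sharepoint_conn",
        "connection_type": "SHAREPOINT",  # assumed type name for this connector
        "options": {
            # Assumed option keys for the Entra ID app backing the connector.
            "tenant_id": "<tenant-id>",
            "client_id": "<client-id>",
            "client_secret": "<client-secret>",
        },
    },
)
resp.raise_for_status()
print(resp.json())
```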

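The ingestion pipeline itself can be created through the Pipelines REST API. The `ingestion_definition` shape below mirrors other Databricks managed connectors; the SharePoint-specific source fields (the Site ID as the source schema, for example) are assumptions for illustration:

```python
import requests

DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
DATABRICKS_TOKEN = "<your-pat>"

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/pipelines",
    headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
    json={
        "name": "sharepoint_excel_ingest",
        "ingestion_definition": {
            # References the connection created in the previous sketch.
            "connection_name": "sharepoint_conn",
            "objects": [
                {
                    "table": {
                        "source_schema": "<sharepoint-site-id>",  # assumed mapping
                        "source_table": "drive",                   # assumed mapping
                        "destination_catalog": "main",
                        "destination_schema": "sharepoint_raw",
                    }
                }
            ],
        },
    },
)
resp.raise_for_status()
print(resp.json()["pipeline_id"])
```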

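For pipeline management, a short sketch with the databricks-sdk Python package (pip install databricks-sdk), which reads workspace credentials from the environment or a configured profile; the pipeline ID is a placeholder:

```python
from databricks.sdk import WorkspaceClient

# Picks up DATABRICKS_HOST / DATABRICKS_TOKEN from the environment.
w = WorkspaceClient()

pipeline_id = "<your-pipeline-id>"

# Trigger a refresh of the ingestion pipeline.
update = w.pipelines.start_update(pipeline_id=pipeline_id)
print(f"Started update {update.update_id}")

# Poll the pipeline's current state.
info = w.pipelines.get(pipeline_id=pipeline_id)
print(info.state)
```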

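Finally, because the connector currently lands raw file content rather than parsed rows, here is a sketch of decoding the base64 Excel payload into a clean Delta table. The table and column names are assumptions, and this is meant to run in a Databricks notebook where `spark` is predefined:

```python
import base64
import io

import pandas as pd

# Assumed raw table from the ingestion pipeline: one row per file, with the
# file's bytes base64-encoded in a `content` column.
raw = spark.table("main.sharepoint_raw.files").select("file_name", "content")

frames = []
for row in raw.collect():  # fine for a handful of workbooks; avoid for large volumes
    xlsx_bytes = base64.b64decode(row["content"])
    df = pd.read_excel(io.BytesIO(xlsx_bytes))  # requires openpyxl on the cluster
    df["source_file"] = row["file_name"]
    frames.append(df)

# Write the combined, decoded rows out as a Delta table for downstream analytics.
clean = spark.createDataFrame(pd.concat(frames, ignore_index=True))
clean.write.format("delta").mode("overwrite").saveAsTable("main.sharepoint_clean.budgets")
```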

This demo highlights how native integration empowers teams to accelerate insights from business-critical SharePoint data with a clean, automated pipeline. Thanks for watching!

References and Resources:
Databricks SharePoint Ingestion Documentation:
How to Find your SharePoint Site ID:
GitHub Sample Notebook: