Modern Data Warehousing in Fabric: Cost, Scale, and Control
Description
Want to learn how to get the best performance while keeping costs under control? In this session we'll dig into the details of controlling the underlying compute resource with custom SQL pools and understanding the impact they have on billing.
Key Takeaways
- Fabric Data Warehouse was built from scratch in the 2020s — not a Synapse lift-and-shift — delivering best-of-breed price-performance for analytical workloads
- Cost control strategies: capacity unit (CU) monitoring, workload management, query optimization, pausing unused capacities — Fabric exposes more levers than traditional warehouses
- Scaling patterns: Fabric auto-scales within purchased capacity; understanding CU consumption per workload type (Spark vs SQL vs Pipeline) is critical for cost management
- T-SQL compatibility: Fabric Warehouse supports a rich T-SQL surface area — migrations from Synapse Analytics or on-prem SQL Server are more straightforward than moving to a non-T-SQL platform
- Data Warehouse vs Lakehouse in Fabric: Warehouse = SQL-first, strong ACID, familiar DBA tooling; Lakehouse = Spark-first, open format, more flexible — use both in the same workspace
- Security model: Row-Level Security, Column-Level Security, Object-Level Security all available in Fabric Warehouse using familiar SQL GRANT/DENY syntax
- Microsoft blog 'A turning point for enterprise data warehousing' (Feb 2026) provides the platform-direction and competitive-positioning context for this session
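
The CU-monitoring takeaway above can be made concrete with the warehouse's built-in query insights views. A minimal sketch, assuming the Fabric Warehouse `queryinsights.exec_requests_history` view (the specific column names used here are assumptions and should be checked against the view's actual schema):

```sql
-- Sketch: find the most expensive recent statements as a starting point
-- for CU cost attribution. View and column names are assumptions based on
-- Fabric Warehouse query insights; verify against your warehouse.
SELECT TOP 10
    command,                 -- the statement text
    total_elapsed_time_ms,   -- wall-clock duration, a rough cost proxy
    start_time,
    end_time
FROM queryinsights.exec_requests_history
WHERE start_time >= DATEADD(DAY, -7, SYSDATETIME())
ORDER BY total_elapsed_time_ms DESC;
```

Elapsed time is only a proxy; for authoritative CU consumption per workload type, pair a query like this with the Fabric Capacity Metrics app.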
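
The security-model takeaway can be illustrated with standard T-SQL syntax. A minimal sketch of Row-Level Security plus GRANT/DENY, using a hypothetical `Sales.Orders` table with `SalesRep` and `CustomerEmail` columns (all object, column, and user names here are invented for illustration):

```sql
-- RLS predicate: a row is visible only when its SalesRep matches the caller.
CREATE FUNCTION dbo.fn_RlsSalesRep(@SalesRep AS nvarchar(128))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS AccessGranted
    WHERE @SalesRep = USER_NAME();
GO

-- Bind the predicate to the table as a filter policy.
CREATE SECURITY POLICY dbo.SalesRepFilter
    ADD FILTER PREDICATE dbo.fn_RlsSalesRep(SalesRep)
    ON Sales.Orders
WITH (STATE = ON);
GO

-- Object-level grant, then column-level deny on a sensitive column.
GRANT SELECT ON Sales.Orders TO [analyst@contoso.com];
DENY  SELECT ON Sales.Orders(CustomerEmail) TO [analyst@contoso.com];
```

The same GRANT/DENY vocabulary a SQL Server DBA already knows carries over, which is the point of the takeaway.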
My Notes
Action Items
- [ ]