Client Challenge:
A client operating on Azure SQL Managed Instance – General Purpose tier was experiencing significant performance limitations.
Their workload required higher IOPS and throughput, but the General Purpose tier relies on standard Azure managed disks, which scale performance solely based on disk size.
The result?
To get more IOPS and throughput, the client had to provision one very large database file, even though only a fraction of the space was actively used. They were paying for storage capacity they didn’t need—just to unlock more performance.
This approach severely capped their database throughput and limited future scalability.
The Problem With a Single Large Database File
General Purpose MI performance is tied to the number of 32 GB storage chunks assigned to a database file.
This creates a bottleneck:
- A single 512 GB file (16 × 32 GB chunks) delivers only ~5,000 IOPS and ~200 MiB/s of throughput, even if the database stores just 50 GB of real data.
The client was effectively forced into over-provisioning storage to compensate for insufficient performance.
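The gap between allocated and used space is easy to see from inside the database. A diagnostic sketch (run in the context of the database in question; the division assumes SQL Server's standard 8 KB page size):

```sql
-- Compare allocated vs. actually-used space per data file.
-- size and FILEPROPERTY return 8 KB pages: /128 = MB, /131072 = GB.
SELECT name                                       AS logical_file,
       size / 131072.0                            AS allocated_gb,
       FILEPROPERTY(name, 'SpaceUsed') / 131072.0 AS used_gb
FROM sys.database_files
WHERE type_desc = 'ROWS';   -- data files only (skip the log)
```

A large spread between `allocated_gb` and `used_gb` on a single file is the signature of storage provisioned purely to buy performance.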
Our Investigation & Strategy
SQL Mechanics analyzed the file layout and storage allocation behavior inside Managed Instance.
We identified the key issue:
The database was using only one large data file, preventing Azure from leveraging storage parallelism.
Azure MI scales I/O performance per file, not per database.
Therefore, the fastest and most cost-effective way to unlock additional throughput is to add multiple equal-sized data files.
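On Managed Instance this is a handful of `ALTER DATABASE` statements. A sketch with hypothetical database and logical file names; MI manages physical file placement itself, so only a logical name and size are specified:

```sql
-- Add equal-sized data files so the per-file I/O limits stack up.
-- Database and logical file names are illustrative.
ALTER DATABASE [SalesDb] ADD FILE (NAME = N'SalesDb_data_02', SIZE = 32GB);
ALTER DATABASE [SalesDb] ADD FILE (NAME = N'SalesDb_data_03', SIZE = 32GB);
ALTER DATABASE [SalesDb] ADD FILE (NAME = N'SalesDb_data_04', SIZE = 32GB);
```

Keeping every file the same size matters: SQL Server's proportional-fill algorithm allocates new pages in proportion to each file's free space, so equal files fill evenly.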
Two Scenarios
1. New Databases
The optimal path is to create multiple files before loading data.
This ensures even distribution across all files.
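For a new database, the files can be put in place before any data arrives. A hedged sketch (names and the target file count are illustrative):

```sql
-- Create the database, then add files up front so the very first
-- load is striped across all of them by proportional fill.
CREATE DATABASE [NewDb];
GO
ALTER DATABASE [NewDb] ADD FILE (NAME = N'NewDb_data_02', SIZE = 32GB);
ALTER DATABASE [NewDb] ADD FILE (NAME = N'NewDb_data_03', SIZE = 32GB);
-- ...repeat up to the desired file count, then load the data.
```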
2. Existing Databases
If the database already contains data, files can still be added later, but SQL Server won't automatically rebalance existing data across them.
To fix this, we:
- Rebuilt indexes
- Rebuilt heaps (a heap must be rebuilt, not reorganized, to move its pages)
- Redistributed objects evenly across all files
This allowed the workload to fully utilize parallel file I/O.
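Redistribution boils down to rewriting each object so proportional fill can place its pages across the enlarged filegroup. A sketch with hypothetical object names:

```sql
-- Rebuilding an index rewrites its pages, spreading them across
-- all files in the filegroup via proportional fill.
ALTER INDEX ALL ON dbo.Orders REBUILD;

-- Heaps have no clustered index to rebuild; rebuild the table itself.
ALTER TABLE dbo.StagingEvents REBUILD;
```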
The Results
Scenario A: 50 GB Database Originally Using a 512 GB File
| Configuration | Total Billed Size | IOPS | Throughput |
|---|---|---|---|
| Before – 1 × 512 GB file | 512 GB | ~5,000 | ~200 MiB/s |
| After – 16 × 32 GB files | 512 GB | ~8,000 | ~1.6 GiB/s |
| Price Change | Same | — | — |
Outcome:
✔ 60% IOPS increase
✔ 8× throughput increase
✔ No increase in cost
Scenario B: 1.5 TB Database Using a 2 TB File
The client’s database required ~2 TB of allocated storage.
| Configuration | Files | IOPS | Throughput |
|---|---|---|---|
| Before – 1 × 2 TB file | 1 | ~7,500 | ~250 MiB/s |
| After – 16 × 129 GB files | 16 | ~36,800 | ~2.4 GiB/s |
Outcome:
✔ Nearly 5× increase in IOPS
✔ More than 9× increase in throughput
✔ Same billed storage
✔ No changes to compute tier or vCores
✔ No move to Business Critical required
This performance uplift was achieved entirely through file-level parallelism inside Azure Managed Instance.
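Whether reads and writes really spread across the new files can be confirmed from SQL Server's file-level I/O statistics DMV. A monitoring sketch:

```sql
-- Cumulative I/O per file since startup; after redistribution,
-- reads and writes should be roughly even across the data files.
SELECT f.name                           AS logical_file,
       s.num_of_reads,
       s.num_of_writes,
       s.num_of_bytes_read / 1048576    AS mb_read,
       s.num_of_bytes_written / 1048576 AS mb_written
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS s
JOIN sys.database_files AS f ON f.file_id = s.file_id
WHERE f.type_desc = 'ROWS';
```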
Business Impact
- Massive performance increase without upgrading service tier
- No additional storage cost
- Faster query execution and index operations
- Improved overall system responsiveness
- A scalable model for future database growth
Conclusion
This case study shows that even in the General Purpose tier—where Azure MI relies on standard managed disks—you can unlock dramatic performance gains simply by redesigning your database file layout.
With the right number of files and proper data redistribution, SQL Mechanics helped the client achieve:
🚀 Up to 9× throughput
🚀 Up to 5× IOPS
💰 At no additional cost
A simple architectural adjustment delivered enterprise-grade performance without premium hardware or higher-tier pricing.