Microsoft Fabric: The New Method for Organizing Data in the Azure Cloud
by Ginger Grant | First published: November 1, 2025 - Last updated: November 3, 2025
Setting up Azure Cloud resources has traditionally meant selecting a combination of tools. Need to move data? In the Azure Cloud, you could use Azure Data Factory or Azure Databricks. Need data storage? Azure Data Lake Storage Gen2 will handle that. Need to stream data? Set up an IoT Hub or an Event Hub. Need a database? You could set up an Azure SQL Database server. And of course, all of these components need authorization, cost monitoring, user setup, and separate development, test, and production versions. That means lots of administrative tasks before you ever get to use the tools. Setting up the right access alone is a lot of effort, and the cost accounting isn't easy to figure out either. Surely there needed to be something less complicated.
As organizations look for the right combination of tools, they realize how important it is to bring data together from different systems. Data may start out in multiple places, but it needs to be evaluated centrally to provide a holistic view of what is happening across the organization. Ideally, the solution also needs to support an environment that is increasingly multicloud, and it needs to work for sophisticated users as well as for people just getting started. That's asking a lot.
After studying the problem (and trying an earlier approach that they decided didn't work well enough), Microsoft came up with a solution that lets organizations centralize their data while configuring only one resource: Microsoft Fabric.
Surprisingly, Microsoft is all in on working with other providers these days. What about data you have in other clouds like AWS and GCP? No problem: you can create a shortcut that references the data where it lives, so you don't have to import it into Fabric at all. What if the organization has a data warehouse in Snowflake? No problem: you can mirror the data into Fabric, which means that when the source data is updated, Fabric has it too. Does it work with Databricks? Sure. Microsoft worked with Databricks to ensure that data being modified in Databricks, or sitting in object storage, can be used in Fabric.
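To make the linking idea concrete, here is a minimal sketch of creating a shortcut to an S3 bucket through the OneLake shortcuts REST API. The workspace, lakehouse, and connection IDs, the bucket URL, and the shortcut name are all placeholders for illustration; in practice the bearer token would come from Microsoft Entra ID authentication, and the exact payload shape should be checked against the current Fabric API documentation.

```python
# Sketch: create a OneLake shortcut pointing at an Amazon S3 bucket via the
# Fabric REST API. Every ID and URL below is a placeholder.
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"
workspace_id = "<workspace-guid>"   # Fabric workspace (placeholder)
lakehouse_id = "<lakehouse-guid>"   # lakehouse item that hosts the shortcut (placeholder)
token = "<entra-id-bearer-token>"   # acquired via MSAL or azure-identity (placeholder)

payload = {
    "path": "Files",          # folder in the lakehouse where the shortcut appears
    "name": "sales_raw",      # shortcut name, chosen for this example
    "target": {
        "amazonS3": {
            "location": "https://my-bucket.s3.us-west-2.amazonaws.com",
            "subpath": "/sales",                     # folder inside the bucket
            "connectionId": "<s3-connection-guid>",  # S3 connection defined in Fabric
        }
    },
}

resp = requests.post(
    f"{FABRIC_API}/workspaces/{workspace_id}/items/{lakehouse_id}/shortcuts",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()
print("Shortcut created:", resp.json())
```

Once the shortcut exists, the S3 data shows up under the lakehouse's Files area and can be queried without ever being copied into OneLake.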
What about tools? A lot of people have built up skills in Microsoft's tools over the years, whether that is Azure Data Factory for moving and transforming data or Power Query for shaping it. Those tools are inside of Fabric. What if you want to use PySpark? Sure, do that; Fabric provides notebooks for it. Do you want to use T-SQL for data transformation? Sure, use a T-SQL notebook. What if you don't want to use Microsoft's pipelines and would rather write your data transformations in Python? That is also supported. Fabric's toolset is genuinely full-featured.
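As a small illustration of the notebook experience, here is a hedged PySpark sketch of the kind of transformation you might run in a Fabric notebook. The input path, column names, and table name are assumptions made up for the example; in a Fabric notebook the `spark` session is already defined for you.

```python
# Sketch of a transformation in a PySpark notebook. In Fabric, `spark` is
# preconfigured; the getOrCreate() line just makes the sketch runnable elsewhere.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw CSV files from the lakehouse's Files area (hypothetical path).
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("Files/raw/orders/*.csv")
)

# A simple cleanup-and-aggregate step: total sales per customer per day.
daily_totals = (
    orders
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Persist the result as a Delta table, the lakehouse's native table format.
daily_totals.write.format("delta").mode("overwrite").saveAsTable("daily_order_totals")
```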
Microsoft also realized that people are tired of trying to figure out all the different pricing models, so Fabric is billed through a single capacity at a fixed hourly price. Configuring the individual tools is generally limited to picking a name and a location. This makes both setup and cost accounting much easier than before.
What about data centralization? How does Fabric handle that? Well, Fabric puts all the data for your organization in one place: a centralized storage location called OneLake. And, as you might have guessed, you get it when you use Fabric, without having to configure a thing. Having listened to the marketplace, Microsoft appears to have designed a product that really addresses these concerns, and it is investing resources to make it a success.
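Because OneLake exposes the same APIs as Azure Data Lake Storage Gen2, existing tooling can point at it directly. Here is a minimal sketch using the azure-storage-file-datalake Python package; the workspace and lakehouse names are placeholders.

```python
# Sketch: list lakehouse files in OneLake with the ADLS Gen2 SDK, which works
# because OneLake speaks the same API. Names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# OneLake exposes a single endpoint for the whole tenant.
service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)

# The "file system" is the Fabric workspace; items live underneath it.
workspace = service.get_file_system_client("MyWorkspace")

# List files under a lakehouse's Files folder (hypothetical names).
for path in workspace.get_paths(path="MyLakehouse.Lakehouse/Files"):
    print(path.name)
```

The design choice here is the point: because the storage layer looks like ADLS Gen2, teams don't have to rewrite existing data access code to adopt Fabric.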
