Microsoft Power BI

Microsoft Power BI community sharing our Power BI experience, tutorials, use cases, tips and tricks. Learn together. Grow together. Follow our Power BI Masterclass: https://linktr.ee/powerbi.masterclass or me: https://linktr.ee/tomas.kutac

Mastering SQL Database in Microsoft Fabric: A Comprehensive Guide

Amit Chandak
Published in Microsoft Power BI
23 min read · Jan 10, 2025

Microsoft Fabric is a revolutionary unified data and analytics platform that empowers technical users, business users, and citizen developers to seamlessly perform data ingestion, transformation, real-time analytics, visualization, and AI-driven app development, all within an integrated and versatile ecosystem.

Source: Microsoft Learn

What is SQL Database in Microsoft Fabric?

The SQL Database in Microsoft Fabric is a developer-friendly transactional database built on the Azure SQL Database Engine, specifically designed to integrate seamlessly with Microsoft Fabric. It serves as the home for OLTP (Online Transaction Processing) workloads within Fabric and provides capabilities for both operational and analytical use cases.

The SQL Database in Microsoft Fabric allows businesses to streamline operations, build AI applications, and leverage analytics in a unified and integrated environment, all while simplifying database management and ensuring data security.

What are the key features of SQL Database in Microsoft Fabric?

  1. Transactional and Analytical Integration: SQL Database in Microsoft Fabric offers robust transactional capabilities, seamlessly replicating data in real-time into OneLake in the open-source Delta Parquet format. This ensures that operational data is instantly available for advanced analytics and AI-driven insights without the need for complex data movement or transformation.
  • The database automatically replicates data into OneLake in near real-time, converting it into Parquet format for analytics.
  • Supports both operational queries and analytical processing without the need to move data.
  • The SQL analytics endpoint, automatically created for your SQL Database in Microsoft Fabric, enables you to build semantic models, create shortcuts, and seamlessly analyze data alongside other Fabric data assets stored in the Delta format. This integration simplifies unified data exploration and insights generation across the Fabric ecosystem.

2. Ease of Use:

  • Simplified setup: Deploy databases with minimal configuration.
  • Integrated with tools like SQL Server Management Studio (SSMS), Visual Studio Code, and GitHub Copilot.
  • Built-in web-based Query Editor in the Fabric portal.

3. Seamless Connectivity:

  • Supports cross-database queries and integration with other Fabric items such as Lakehouse, Data Warehouses, and Notebooks.

4. Developer-Centric Features:

  • Integration with source control (GitHub and Azure DevOps).
  • Enables the creation of GraphQL APIs directly from the Fabric portal.

5. Built for AI and Advanced Analytics:

  • Optimized for AI workloads with vector support and integration with tools like LangChain and Semantic Kernel.
  • Allows embedding predictive insights from data science workflows directly into BI reports.

6. Unified Billing and Capacity Management:

  • Part of the Fabric Capacity Model, with compute billed only during usage and storage billed separately.

7. Portability:

  • Enables database import/export with SqlPackage, supporting migration and incremental deployments.

8. Security and Governance:

  • Secured with Microsoft Entra Authentication, row-level security (RLS), object-level security (OLS), and TLS encryption.
  • Central governance via Microsoft Purview.

As of now, SQL Database in Microsoft Fabric is in public preview, and usage is free during the preview period. Compute and data storage charges will commence on January 1, 2025, with backup billing starting on February 1, 2025.

How to enable SQL Database in Microsoft Fabric tenant settings?

  • Login/Sign in to Microsoft Fabric (app.fabric.microsoft.com) or Power BI (app.powerbi.com)
  • Access Admin Portal: From the Settings icon at the top right, go to the Admin Portal.
  • Navigate to Tenant Settings: Expand SQL Database (preview) in Tenant Settings.
  • Enable SQL Database: Toggle the switch to On.
  • Set Permissions (Optional): Allow all users or specific security groups, or exclude groups as needed.
  • Save Changes: Click Apply to activate the feature.

Ensure you have a workspace that uses Microsoft Fabric or trial capacity.

Open the workspace from the left pane. In my case, the workspace is 01-GA-Fabric.

Click on New Item at the top left. The New Item menu will open on the right. Scroll down and click on SQL Database under the Store data section.

As of now, there is a limit on how many databases you can create. So, if you already have a couple, it might give an error.

Give the database a name and click on Create.

You will see a confirmation message, and an empty SQL Database will be created and ready for use.

You can load data into the SQL Database in Microsoft Fabric using the following methods:

  • Dataflow Gen 2: Streamline data preparation and transformation.
  • Data Pipeline: Automate data ingestion and movement workflows.
  • T-SQL: Execute SQL scripts for direct data loading and management.
  • Copy Job: Perform efficient data copying tasks between sources.

Along with the SQL Database, a SQL analytics endpoint and a default semantic model are automatically created. These components enable seamless integration with other Fabric assets, allowing efficient analytics and reporting capabilities across the platform.

The following script demonstrates how SQL Database in Microsoft Fabric supports transactional T-SQL capabilities and ensures near-real-time data replication to the SQL analytics endpoint. Here’s a step-by-step breakdown:

Key Features:

  1. Real-Time Replication: After running each query (Create, Insert, Update, Delete, Alter), you can verify that the changes are replicated on the SQL analytics endpoint within a few seconds.
  2. Transactional T-SQL Support: The SQL Database in Microsoft Fabric is fully based on Azure SQL Database, supporting T-SQL for transactions, including identity columns and primary key enforcement, which are currently unavailable in the Warehouse.

Queries and Tests:

Step 1: Create a Table with an Identity Column

Click on “New Query” in the SQL Database UI, paste the query below, and click on “Run.”

CREATE TABLE Employees (
EmployeeID INT IDENTITY(1,1) PRIMARY KEY,
FirstName VARCHAR(50),
LastName VARCHAR(50),
HireDate DATETIME2(6),
Salary DECIMAL(10, 2)
);
  • A table with an identity column and a primary key is created.
  • Verify if the table is reflected in the SQL analytics endpoint.

Switch to the SQL Analytics endpoint from the top-right corner to check the data there.

Refresh the schemas in the SQL Analytics endpoint, if needed, and you will be able to see the table replicated in the schema.
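If you prefer a query over browsing the object tree, a standard metadata check such as the sketch below (run against the SQL analytics endpoint) can confirm that the table has been replicated; this is plain INFORMATION_SCHEMA T-SQL, nothing Fabric-specific.

-- Confirm the replicated table is visible on the SQL analytics endpoint
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'Employees';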

Step 2: Insert Sample Data

INSERT INTO Employees (FirstName, LastName, HireDate, Salary)
VALUES
('John', 'Doe', '2023-01-15', 50000),
('Jane', 'Smith', '2023-03-10', 60000),
('Alice', 'Johnson', '2023-02-05', 70000),
('Bob', 'Brown', '2023-06-20', 80000),
('Chris', 'Taylor', '2023-08-25', 90000),
('Emily', 'Davis', '2023-05-30', 55000),
('Michael', 'Wilson', '2023-04-15', 62000),
('Laura', 'Garcia', '2023-07-10', 58000),
('David', 'Martinez', '2023-01-05', 49000),
('Sarah', 'Clark', '2023-09-01', 51000),
('Daniel', 'Hall', '2023-11-15', 67000),
('Sophia', 'Allen', '2023-12-20', 73000),
('James', 'Young', '2023-10-10', 82000),
('Olivia', 'Scott', '2023-09-25', 77000),
('Ethan', 'Moore', '2023-08-05', 68000),
('Charlotte', 'Perez', '2023-07-15', 56000),
('Liam', 'White', '2023-06-05', 60000),
('Emma', 'Green', '2023-05-10', 62000),
('Lucas', 'Adams', '2023-03-25', 75000),
('Isabella', 'Nelson', '2023-02-15', 66000);

Insert multiple rows and confirm the data is available in the SQL analytics endpoint.

You can run SELECT COUNT(*) FROM Employees to compare the number of rows between the SQL Database and the SQL analytics endpoint.
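For reference, that check is just:

-- Run on both the SQL Database and the SQL analytics endpoint and compare;
-- after the insert above, both should return 20 rows.
SELECT COUNT(*) AS EmployeeCount FROM Employees;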

Inserted data is also replicated in the SQL Analytics endpoint.

Step 3: Update Data

UPDATE Employees
SET Salary = 55000
WHERE EmployeeID = 1;

Update Data in SQL Database

SQL analytics endpoint — Data updated

Step 4: Delete Data

DELETE FROM Employees
WHERE EmployeeID = 2;
  • Delete a row and confirm the deletion is reflected in the SQL analytics endpoint.

Delete data in SQL Database.

Run a query in the SQL Analytics endpoint; data deletions are also replicated.

Step 5: Alter Table and Add New Data

ALTER TABLE Employees
ADD Department VARCHAR(50);

INSERT INTO Employees (FirstName, LastName, HireDate, Salary, Department)
VALUES
('Joe', 'Davis', '2024-02-01', 65000, 'Sales'),
('James', 'Taylor', '2024-02-15', 58000, 'HR');

SQL Database: Alter and add Data

SQL analytics endpoint: Alter the table, and the newly added rows with the department are also replicated.
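A quick check like the sketch below, run on the SQL analytics endpoint, should return only the two rows inserted after the ALTER, since the earlier rows have no Department value.

-- Only the rows inserted after the ALTER carry a Department value
SELECT EmployeeID, FirstName, LastName, Department
FROM Employees
WHERE Department IS NOT NULL;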

You can connect to an SQL database in Microsoft Fabric using SQL Server Management Studio (SSMS) by leveraging the “Open In” option on the ribbon. Here’s a breakdown:

Key Differences Between SQL Database and SQL Analytics Endpoint:

  1. URL Structure:
  • The SQL Database URL includes a port number, making it distinct from the SQL Analytics Endpoint URL, which does not include a port. The rest of the URL remains identical.

2. Database Name:

  • The SQL Database requires a specific database name, which differs from its display name in Fabric. This name is mandatory for establishing a connection.
  • The SQL Analytics Endpoint retains the same name as the SQL Database for easier identification.

3. Access Scope:

  • When connected to the SQL Analytics Endpoint, you can access other Lakehouse SQL endpoints and Warehouses within the workspace, provided you have the necessary permissions.
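As an illustration, a cross-database query through the SQL analytics endpoint can reference another Fabric item in the same workspace by its three-part name. The sketch below is hypothetical; SalesLakehouse, Orders, and DimProduct are placeholder names used only for this example.

-- Hypothetical sketch: join a table from this database's SQL analytics endpoint
-- with a table exposed by a Lakehouse SQL endpoint in the same workspace.
-- All object names here are placeholders.
SELECT o.OrderId, p.ProductName
FROM dbo.Orders AS o
JOIN SalesLakehouse.dbo.DimProduct AS p
ON p.ProductId = o.ProductId;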

Here’s how to load data into a SQL Database in Microsoft Fabric using Dataflow Gen 2:

Steps:

1. Create a Dataflow Gen 2:

Navigate to New Data:

  • In your workspace, click New Data from the top-left menu.
  • In the New Data dropdown menu, select Get Data -> Dataflow Gen 2.

Rename the Dataflow:

  • Once the Dataflow opens, rename it by clicking on the default name (e.g., “Dataflow (Load to SQL DB)”) at the top.
  • Enter a new name (e.g., Load to SQL DB) and press Enter to save the change.

2. Load Data from GitHub (Excel File):

Get Data via Web API:

  • On the Home tab of the Dataflow ribbon, click Get Data -> More.
  • Search for Web API and select it.

Enter GitHub Raw URL:

  • Paste the GitHub Raw URL for your Excel file. Keep everything else the same and press “Next.”
  • After connecting, the preview will display the available sheets.

Select Sheets:

  • Choose the Item and Geography sheets.
  • Click Create to load the data into Dataflow Gen 2.

3. Set the Destination to SQL Database:

Load Data into SQL Database:

  • In the Dataflow settings, choose the Destination option.
  • Set the destination to your SQL Database within Microsoft Fabric.

Set the destination for each table one by one. The option is in the Home tab or at the bottom-right corner.

Connect to the SQL Database using your organization account on the first screen, then press “Next.”

Choose the workspace, SQL Database name, new or existing table, and table name, then press “Next.”

If needed, uncheck “Use Automatic Settings” and adjust the mapping. Click “Save Settings,” and repeat the same for other tables.

Once all the table destinations are finalized in Dataflow Gen 2, click the Publish button to publish the Dataflow. Publishing automatically triggers the process of loading the data into the designated SQL Database, ensuring it is ingested and ready for analytical or operational workloads.

The data is now successfully loaded into the SQL Database and is also available in the SQL Analytics Endpoint, enabling seamless access for both transactional and analytical workloads.

Next, here is how to load data using a Data Pipeline.

1. Create a New Data Pipeline:

  1. Navigate to Workspace:
  • Go to your workspace and click on the New Item option at the top-left corner.

2. Select Data Pipeline:

  • A New Item menu will open on the right-hand side.
  • Scroll down to the Get Data section and select Data Pipeline to create a new pipeline.

You can also initiate the Data Pipeline directly from the SQL Database UI in Microsoft Fabric. This approach simplifies the process by starting the copy activity immediately, with the destination automatically set to the SQL Database.

Here’s how you can do it:

  1. Access SQL Database UI:
  • Open the SQL Database in your Microsoft Fabric workspace.

2. Start Data Pipeline:

  • From the UI, select the option Get Data -> Data Pipeline to start a new Data Pipeline.
  • This will launch a copy activity (Copy Data Assistant) workflow after you provide the pipeline name.

In both cases (whether starting a pipeline from the Workspace or the SQL Database UI), the first step is to provide a pipeline name:

  1. When initiating the pipeline, a popup will appear asking you to input a name.
  2. Enter the desired name for your pipeline (e.g., “Load to SQL DB”).
  3. Confirm by clicking Create to proceed.

On the Data Pipeline interface, find the Copy Data Assistant option in the middle of the screen. Click on the Copy Data Assistant to begin configuring your data copy process.

In the Copy Data Assistant UI, follow these steps to configure the source and retrieve GitHub data. This same Copy Data Assistant UI will appear regardless of whether you start from the Workspace Data Pipeline or the SQL Database Data Pipeline step.

  1. Choose Source:
  • You will reach the Choose Source step in the Copy Data Assistant UI.

2. Search for HTTP:

  • In the list of available data sources, search for HTTP to configure the connection for GitHub data.

3. Input GitHub Raw URL:

  • Provide the GitHub Raw URL for the data file and configure any additional connection settings if required, then press Next.

Keep all the settings on the next screen as default. Click on “Next”

In the Next UI of the Copy Data Assistant, configure the settings as follows:

  1. File Format:
  • Choose Excel as the file format.

2. Compression Type:

  • Leave the Compression Type as is (default).

3. Worksheet Mode:

  • Select Index as the worksheet mode.

4. Sheet Index:

  • Set the Sheet Index to 1 to load the customer data.

5. Use First Row as Header:

  • Check the option Use First Row as Header.

  6. Leave the rest as default and click on “Next”.

Notes:

  • This configuration loads the Customer Data from the first sheet (Sheet Index 1).
  • To load additional sheets, repeat the process:
  • For the Sales data in Sheet 2, however, I will instead use a Copy Job with Azure SQL as the source, which may also be an option in your case.

Scroll down in the Choose data destination section. From there, select the SQL Database from the OneLake Catalog within the required workspace. Ensure the correct SQL Database is chosen as the destination. Click on “Next”

On the “Connect to Data Destination” screen, choose Load Setting -> Load to New Table. In the Table field, provide the schema and table name. Since I am using the dbo schema, I selected that, and the table name is Customer. I have edited all column mappings to use the correct “Type”, meaning the appropriate data types. Click on “Next”.

On the “Review + Save” screen, I am leaving the “Start Data Transfer Immediately” checkbox selected. This means the pipeline will run and load the data as soon as I click Save and Run.

The pipeline job will execute, and once it completes, the data will be successfully loaded into the Customer table in the SQL Database.

You will be able to observe the Customer table in both the SQL Database and its associated SQL Analytics Endpoint, ensuring the data is available for transactional operations and analytical queries.

To add the Sales Fact, you can use a Copy Job for data loading. While the process we used for the Customer table can be repeated with a pipeline and the same GitHub source, this example demonstrates using a Copy Job. The data source will be an Azure SQL Database, where the Sales data has been preloaded over time for testing purposes, including additional rows for comprehensive validation. In this demonstration, incremental loading is not covered to keep the setup straightforward. Once the Copy Job is configured, connect to the Azure SQL source, map the Sales data to the destination table in the SQL Database within Microsoft Fabric, and proceed with the data loading. Additionally, if you are using test data, it is recommended to disable the automatically created job scheduler to avoid incurring unnecessary costs, as Azure’s pricing is based on query usage. This approach ensures efficient data loading while managing expenses effectively.

To create a Copy Job in Microsoft Fabric, start by opening your workspace and navigating to the New Item menu located at the top-left corner. Once the “New Item” menu opens, Navigate to the Get Data section and click on Copy Job (Preview) to begin the setup. This will open the Copy Job pop-up, where you can provide the Name and click on “Create” to start the Copy Job flow.

Select Azure SQL from the “Choose data source” UI.

When setting up a connection to Azure SQL in the Copy Job, ensure you have the following details ready:

  1. Server Name:
  • The fully qualified domain name (FQDN) of your Azure SQL server (e.g., yourserver.database.windows.net).

2. Database Name:

  • The name of the database in Azure SQL that contains the source data.

3. Authentication Kind:

  • Choose the authentication method:
  • Basic (SQL Authentication): Provide the username and password for the Azure SQL account.
  • Organization: Use AAD/SSO credentials for secure authentication.
  • Service Principal: Use a service principal, if applicable.

4. Depending on the above selection, provide the other required details.

Click on “Next”

In the “Choose Data” UI, multiple tables are displayed, and you can select one or more of them. In my case, I will only select the “sales1” table, which will be used as the Sales Fact.

Click on “Next”

The next step is “Choose Data Destination,” but there is no SQL Database destination available by default in OneLake connectors.

Since there is no SQL Database connector, we will use the Azure SQL connector. Click on it, and the connection details UI will open.

You can retrieve the details from the SQL Database UI, as explained earlier, using “Open in” -> SQL Server Management Studio (SSMS).

Provide the server URL, database details, and rename the connection name, if needed. In Authentication Kind, select Organization and Sign In. After signing in, click on “Next”.

Provide the schema and table name, and click on “Edit Mapping”.

The mapping looks correct to me, so no changes are needed. Just click on “Next”.

In this blog, I will load the data once, but I will also showcase the Incremental Setting. I will use the “Incremental Copy” option in the “Settings” UI.

For each table, I need to select an incremental column. The ideal column for incremental load is the Update Date, which should be a Datetime column. However, in my case, I do not have an Update Date or Create Date (the second best option). So, I will use the SalesDate column instead. Click on “Next”.
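As an aside, if your own source tables lack an update or create timestamp, one option is to add one on the source side. The sketch below is illustrative only (it targets the sales1 source table in Azure SQL and is not part of this walkthrough); note that a default only stamps new inserts, so keeping the column current on updates would additionally require application logic or a trigger.

-- Illustrative only: add a timestamp column on the source table so it can serve
-- as the incremental column; existing rows are stamped with the current time.
ALTER TABLE dbo.sales1
ADD UpdateDate DATETIME2(6) NOT NULL
CONSTRAINT DF_sales1_UpdateDate DEFAULT SYSUTCDATETIME();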

In the “Review + Save” UI, simply click on “Save + Run”.

It will run the Copy Data job and load the initial data for the first time. Additionally, it will auto-schedule the load to run every 15 minutes.

Once the data is loaded, it provides details about how many records are read and written. I have taken the UI screenshot from “View Run History”, but the same information is available at the end of the run.

If you want to set up upsert for incremental data load, click on “Advanced Settings” in the “Copy Job” UI.

And then, choose the key column(s) for the required tables and click on “Apply”.

It’s time to disable the incremental load that is scheduled every 15 minutes. Click on “Schedule” in the Copy Job UI, and a scheduler pop-up will open on the right side. Disable the scheduler and click on “Apply”.

Go back to the SQL Database or the SQL Analytics endpoint of the SQL Database, and you will be able to see the Sales table. If needed, check the data by clicking on the table name or by running a SQL query.
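For example, a quick sanity check along these lines can be run from either side (the dbo schema is assumed here):

-- Preview the loaded Sales table and confirm how many rows arrived
SELECT TOP (10) * FROM dbo.Sales;
SELECT COUNT(*) AS SalesRows FROM dbo.Sales;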

I need a date table, which I will create in the SQL Database using the SQL script available in my blog. However, this time I will not use “Direct Query” mode in Power BI.

Power BI — Direct Query: Date Table in SQL Server

You need to create a date table first

Create Table date_table(
[Date] datetime2(6),
month_start_date datetime2(6),
month_end_date datetime2(6),
month_year varchar(255),
year_month int,
month_number int,
qtr_start_date datetime2(6),
qtr_end_date datetime2(6),
qtr varchar(255),
qtr_year varchar(255),
year_qtr int,
qtr_no int,
year_start_date datetime2(6),
year_end_date datetime2(6),
year int,
weekday int,
weekday_name varchar(255),
week_start_date datetime2(6),
week_end_date datetime2(6),
weeknum int,
year_week int,
year_week_rank int
);

Using this anonymous T-SQL batch, load the data. I noticed that GENERATE_SERIES does not work with the SQL Database version we have.

DECLARE @Counter INT, @StartDate Date, @EndDate Date,@Days int
SET @StartDate='2018-01-01'
SET @EndDate='2021-12-01'
SET @Counter=0
SET @Days= datediff(day,@StartDate,@EndDate)
Begin
truncate table date_table;
WHILE ( @Counter <= @Days)
BEGIN
insert into date_table([Date] ,
month_start_date ,
month_end_date ,
month_year ,
year_month ,
month_number ,
qtr_start_date ,
qtr_end_date ,
qtr ,
qtr_year ,
year_qtr ,
qtr_no ,
year_start_date ,
year_end_date ,
year ,
weekday ,
weekday_name ,
week_start_date ,
week_end_date ,
weeknum ,
year_week ,
year_week_rank)

select [Date]
, DATEADD(DAY,1,EOMONTH([Date],-1)) month_start_date
, EOMONTH([Date],0) month_end_date
, format([Date],'MMM-yyyy') month_year
, year([Date]) *100 + month([Date]) year_month
, month([Date]) month_number
, DATEADD(DAY,1,EOMONTH([Date],-1* (case when month([Date])%3=0 then 3 else month([Date])%3 end ) )) qtr_start_date
, EOMONTH([Date], (case when month([Date])%3=0 then 0 else (3- (month([Date])%3)) end ) ) qtr_end_date
, 'Q' + cast( DATEPART(Q,[Date]) as varchar) qtr
, 'Q' + cast( DATEPART(Q,[Date]) as varchar) +'-' + cast( year([Date]) as varchar) qtr_year
, year([Date]) *100 + DATEPART(Q,[Date]) year_qtr
,DATEPART(Q,[Date]) qtr_no

, DATEADD(DAY,1,EOMONTH([Date],-1* (month([Date]) ))) year_start_date
, EOMONTH([Date],12-1* (month([Date]) ))year_end_date
, year([Date]) year

, DATEPART(dw,[Date]) weekday
, format([Date],'ddd') weekday_name
, DATEADD(day,-1*DATEPART(dw,[Date])+1,[Date]) week_start_date
, DATEADD(day,-1*DATEPART(dw,[Date])+7,[Date]) week_end_date
, DATEPART(wk,[Date]) weeknum
, year([Date])*100+ DATEPART(wk,[Date]) year_week
, dense_rank() over(order by (year([Date])*100+ DATEPART(wk,[Date]))) year_week_rank
from (
select DATEADD(DAY,@Counter,@StartDate) Date ) A1
SET @Counter = @Counter + 1
END
end

My sales table contains data from 2018 to 2020, so I am using

  • @StartDate = '2018-01-01': This sets the start date to January 1, 2018.
  • @EndDate = '2021-12-01': This sets the end date to December 1, 2021.

However, you can adjust the dates as per your needs.
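A quick way to confirm that the script produced a continuous date range (which matters later, when the table is marked as a date table) is a check along these lines:

-- The row count should equal the number of days between the first and last date + 1,
-- and the bounds should match @StartDate and @EndDate.
SELECT COUNT(*) AS day_count,
MIN([Date]) AS first_date,
MAX([Date]) AS last_date,
DATEDIFF(DAY, MIN([Date]), MAX([Date])) + 1 AS expected_count
FROM date_table;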

The date_table, with its data, is now available in both the SQL Database and the SQL analytics endpoint.

The time has come to create a custom semantic model and analyze the data.

To create a custom Semantic Model and analyze the data, start by opening the SQL Analytics Endpoint UI of your SQL Database in Microsoft Fabric. Once in the interface, navigate to the Reporting tab in the ribbon menu at the top. From there, click on New Semantic Model to initiate the setup process. This allows you to define and customize your Semantic Model, including setting up relationships, measures, and other metadata, enabling you to efficiently analyze and report on your data.

Provide the semantic model name and select the required tables. In this case, the tables are:

  • Sales
  • Item
  • Geography
  • Customer
  • date_table

Click on “Confirm”

When the Semantic Model UI opens, it will launch in edit mode unless blocked by a popup blocker. In this UI, you can perform various operations, such as renaming tables and columns, creating and managing relationships, and defining measures and roles. The “Edit Table” option allows you to add more tables to the model. Additionally, you can create new DAX Tables and DAX Columns directly within the Semantic Model. However, note that sometimes the UI for adding new DAX Tables and Columns may be disabled in the web version and only available in Power BI Desktop when in Semantic Model edit mode.

The ability to edit Direct Lake and Live Semantic Models was rolled out recently and is now available in Power BI Desktop. This feature enhances flexibility by enabling users to directly modify Semantic Models connected to Direct Lake mode or operating in Live mode. In Power BI Desktop, you can make adjustments such as creating measures, managing relationships, and modifying tables seamlessly within the Semantic Model.

In the Semantic Model UI, I will keep the changes minimal. First, I will rename date_table to Date for better clarity. Next, I will establish relationships between the tables using the drag-and-drop feature in the Relationship UI. The relationships will be configured as follows:

  • Sales -> Item: On ItemId (Many-to-One)
  • Sales -> Customer: On CustomerId (Many-to-One)
  • Sales -> Geography: On CityId (Many-to-One)
  • Sales -> Date: On SalesDate and Date (Many-to-One)

Alternatively, instead of using drag-and-drop, you can configure these relationships through the “Manage Relationships” UI for precise control and management. This setup ensures the Semantic Model is well-structured and ready for analysis.

The Sales Semantic Model

Now, let’s mark the Date table as a date table. On the right corner of the Date table, click on the ellipses (three dots). From the options menu, click on “Mark as Date Table”.

On the “Mark as a Date Table” pop-up:

  1. Switch on “Mark as a Date Table”.
  2. In the “Choose a Date Column” field, select “Date” from the Date table.
  3. Click on “Save”.

It should save without any errors since we have created a continuous date table using the SQL script.

I would like to add a Measure as well. Follow these steps:

  1. Click on the Sales table in the Tables pane on the right.
  2. On the Home ribbon (top-left), click on “Measure”.
  3. Create a Gross measure using the following definition (SUMX is needed because the expression multiplies two columns row by row):
Gross = SUMX(Sales, Sales[Qty] * Sales[Price])

All changes are auto-saved.

Now, it’s time to create a report. Click on “New Report” from the Home ribbon.

A blank new report will open in a new tab.

  1. Add a table visual:
  • Click on the Table icon in the Visualization Pane under Build Visual.

2. Add fields to the visual:

  • Drag Item from the Item table and Gross from the Sales table into the visual.

  3. You’ve created your first report with one visual.

4. Feel free to add more visuals or go back to the model to create additional measures like:

  • Net
  • Discount
  • Margin
  • Margin%
  • Discount%

5. Add these measures to your report visuals as needed to enhance your analysis.

Save the Report:

  1. On the top-right, click on the Save icon.
  2. A “Save Your Report” pop-up will appear.
  3. Enter a Report Name.
  4. Select the Workspace where you want to save the report.
  5. Click on “Save” to finalize and save the report.

When you save the report for the first time, it will open in Reading Mode.

This blog provides a comprehensive, end-to-end guide on SQL Database in Microsoft Fabric, covering its key features, capabilities, and practical use cases. From data loading techniques using Dataflow Gen 2, Data Pipelines, and Copy Jobs, to creating and managing Semantic Models and leveraging the SQL Analytics Endpoint for data analysis, this guide is designed to help you make the most of SQL Database within Fabric. Whether you’re building transactional workloads, analytical models, or exploring integration with other Fabric assets, this blog has you covered.

If you found this guide helpful, please like and share it with your network to help others explore the powerful features of SQL Database in Microsoft Fabric. Your support and feedback are greatly appreciated!

You can watch this video on the same topic:

SQL Database in Microsoft Fabric: End-to-End Guide | Dataflow Gen 2 | Pipeline | Copy Job | SQL Date

If you want to learn Power BI from the ground up — covering everything from basic data modeling to advanced DAX and real-world analytics — be sure to explore my 20-hour Masterclass, packed with 150+ essential topics to elevate your data analytics and business intelligence skills.

Official Links:

Release Notes

Microsoft Learn

Written by Amit Chandak

Amit has 20+ years of experience in Business Intelligence, Analytics, and Data Science. He is Chief Analytics Officer at Kanerika and a Power BI community Super User.