I took a break from blogging this week to finish the Orchestrating Big Data with Azure Data Factory course on edX and get certified. This is a verified certificate of achievement.
The topics covered include, but are not limited to:
Creating data workflows with Azure Data Factory
Scheduling data pipelines to orchestrate big data processes
Applying data transformations in a pipeline with Hive or U-SQL
Next on my wishlist is the Microsoft Azure Data Engineer Associate certification (if and when I have the time).
We’re now on Part 5 of our SQL Server Reporting Services Basics Series. In this installment, we’ll learn how to use a range parameter. If you want to review the series: Part 1, Part 2, Part 3, Part 4.
You may remember that in Part 4 we created a simple parameterized report. We’re going to build upon the same report and add a few elements to how we use the parameter.
If you followed Part 4, you have everything that you need to run this report. We only need to modify it a bit to make it work with the range parameter. Let’s dig in.
Create Dataset For Range Parameter Source
When I say “range parameter”, I mean a parameter that uses a preset range of values, for example 1 – 100.
For the purpose of this demo, I have created a lookup table for the ranges. You may remember from Part 4 that our existing parameter is called Order Total. We’re going to use the same parameter.
Let’s create a new table called DollarTotalRange:
CREATE TABLE DollarTotalRange
(
    RangeID INT NOT NULL IDENTITY(1, 1),
    MinTotal INT NULL,
    MaxTotal INT NULL,
    RangeLabel VARCHAR(150) NULL
) ;
Let’s populate that with some data that we’re going to use for our ranges:
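Here is a hypothetical set of ranges for the demo (the values are my own sample assumptions; pick whatever brackets fit your order totals):

INSERT INTO dbo.DollarTotalRange (MinTotal, MaxTotal)
VALUES (1, 50000)       -- sample brackets for the demo
     , (50001, 100000)
     , (100001, 150000)
     , (150001, NULL) ; -- NULL MaxTotal = open-ended top range, labeled 'MAX' below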
Let’s also populate the RangeLabel column with the combination of the MinTotal and MaxTotal columns as ranges:
UPDATE dbo.DollarTotalRange
SET RangeLabel = CAST(FORMAT(MinTotal, '#,#') AS VARCHAR(25)) +
                 ' - ' +
                 ISNULL(CAST(FORMAT(MaxTotal, '#,#') AS VARCHAR(25)), 'MAX') ;
Let’s take a look at our ranges:
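Assuming the sample rows above, a quick query returns labels like ‘1 - 50,000’, ‘50,001 - 100,000’, and ‘150,001 - MAX’ for the open-ended range:

SELECT RangeID, MinTotal, MaxTotal, RangeLabel
FROM dbo.DollarTotalRange
ORDER BY RangeID ;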
Modify the Stored Procedure with the new Parameter
Let’s modify the existing stored procedure so it can use the new parameter type. It is important to note that the data type of the Order Total parameter is FLOAT; we’re changing that to a string (a VARCHAR in the procedure, Text in SSRS).
We’re also going to make sure that we can accommodate a maximum possible value of 999,999,999; or you may want to set your own maximum value.
ALTER PROCEDURE [dbo].[OrderTotalQuery] (
    @vOrderTotal VARCHAR(255) = NULL
)
AS
BEGIN
    DECLARE @Min INT, @Max INT ;

    SELECT @Min = MinTotal, @Max = MaxTotal
    FROM dbo.DollarTotalRange
    WHERE RangeLabel = @vOrderTotal ;

    SELECT c.CustomerName
         , SUM(sol.Quantity * sol.UnitPrice) AS OrderTotal
    FROM Sales.Orders so
    JOIN Sales.OrderLines sol ON so.OrderID = sol.OrderID
    JOIN Sales.Customers c ON so.CustomerID = c.CustomerID
    GROUP BY c.CustomerName, sol.OrderID
    HAVING SUM(sol.Quantity * sol.UnitPrice) >= @Min
       AND SUM(sol.Quantity * sol.UnitPrice) <= ISNULL(@Max, 999999999)
    ORDER BY OrderTotal DESC ;
END
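Before wiring this into the report, you can sanity-check the procedure from SSMS. The label here is an assumption based on my sample ranges above; use one of your own labels:

-- should return only orders with totals between 50,001 and 100,000
EXEC dbo.OrderTotalQuery @vOrderTotal = '50,001 - 100,000' ;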
Create a New Report Dataset
First things first: let’s create a new stored procedure based on the DollarTotalRange table we created above.
CREATE PROCEDURE [dbo].[DollarRangeSelect]
AS
BEGIN
SELECT MinTotal, MaxTotal, RangeLabel
FROM dbo.DollarTotalRange ;
END ;
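As a quick sanity check that the label list comes back as expected:

EXEC dbo.DollarRangeSelect ;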
Open Visual Studio and load your SSRS solution if you haven’t loaded it yet. Navigate to the Solution Explorer and, under Shared Datasets, create a new dataset called ds_SSRSTutorial_001_DollarRange. Select Stored Procedure as the Query Type, then select the stored procedure we created above.
Then leave everything else as is. Click OK.
Modify the Existing Report Dataset
Under Shared Datasets, open the existing ds_SSRSTutorial_001_OrderTotalQuery. Click on the Parameters tab and change the data type of the @vOrderTotal parameter from Float to Text.
Click OK.
Add a New Report Dataset
In the report, create a new report dataset based on the shared dataset we just created above. Name it rpt_ds_SSRSTutorial_001_DollarRange.
Click OK.
Modify the Report Parameter
Navigate to the Report Data window and edit the @vOrderTotal report parameter.
From the General tab, change the data type of the parameter @vOrderTotal to Text.
On the Available Values tab, select the Get values from a query option. Select the rpt_ds_SSRSTutorial_001_DollarRange dataset we created above, then select the RangeLabel column for both the Value field and the Label field.
Let’s Preview the Report
Deploy the Report!
If you are not familiar with your options for deploying your report, go back to Part 3: Report Deployment.
Next, we will continue with another type of report parameter: the multi-value parameter.
Azure Data Factory (ADF) offers a convenient cloud-based platform for orchestrating data to and from on-premises, cloud, and hybrid sources and destinations. But it is not a full Extract, Transform, and Load (ETL) tool. For those who are well-versed in SQL Server Integration Services (SSIS), ADF would be the Control Flow portion.
You can scale out your SSIS implementation in Azure. In fact, there are two options to do this: on-premises, using the SSIS runtime hosted by SQL Server, or in Azure, using the Azure-SSIS Integration Runtime.
Azure Data Factory is not quite the ETL tool that SSIS is. There is a transformation gap that needs to be filled for ADF to become a true cloud ETL tool. The second iteration of ADF, V2, is closing that gap with the introduction of Data Flow.
Let’s build and run a Data Flow in Azure Data Factory v2.
Overview
Just to give you an idea of what we’re trying to do in this post: we’re going to take a dataset from a local, on-premises SQL Server database, copy that data into Azure SQL Database, and load it into blob storage in CSV format.
In this post we discuss the following topics in a step-by-step fashion:
Azure Data Flow
Data Transformation in Azure Data Factory with Data Flow
Data Flow Components:
Select
Derived Column
Aggregate
Pivot
Sort
CSV Sink
First things first: if this is the first time you’re using Azure Data Factory, you need to create a data factory in Azure, and for the purpose of the demo in this post, you also need to set up blob storage.
Change the copy activity source and sink as follows:
Source Query:
SELECT c.CustomerName
, p.FullName AS SalesPerson
, o.OrderDate
, s.StockItemName
, l.Quantity
, l.UnitPrice
FROM Sales.Orders o
JOIN Sales.OrderLines l
ON o.OrderID = l.OrderID
JOIN Sales.Customers c
ON o.CustomerID = c.CustomerID
JOIN Warehouse.StockItems s
ON l.StockItemID = s.StockItemID
JOIN Application.People p
ON p.PersonID = o.SalespersonPersonID ;
Navigate to your Azure Data Factory. Click the Author & Monitor tile to open the ADF home page. From there, click on the pencil icon on the left to open the author canvas.
Click on the ellipsis next to Data Flows (which is still in preview as of this writing).
Select Add Dataflow in the context menu.
Let’s name our Data Flow DataFlowTest001.
Add the Data Source
As of this writing, Data Flow supports only a few sources: Azure SQL Data Warehouse, Azure SQL Database, CSV, and Parquet (a column-oriented data storage format from the Hadoop ecosystem). Other sources such as Amazon S3, Oracle, ODBC, HTTP, etc. are coming soon.
Let’s set up the Source.
Click on the Add Source tile.
Let’s name our source DataFlowTest001DataSource.
Next, create a new dataset.
Create New Source Dataset
We need to set up a new dataset. You should have completed the prerequisite (see above) by now, so you will have working data for this demo.
In the Source Dataset, click New.
Select Azure SQL Database. Click Finish.
Set up the Source Dataset
Set the dataset as follows:
Name the dataset AzureSQLSalesReport.
Select the Linked Service AzureSqlDatabase.
For the Table, select SalesReport. Click Finish.
If you are wondering what a Linked Service is, it is the equivalent of a connection in SSIS. It is easy to set up: at the bottom of the left-hand blade, click on Connections. This gives you the list of your Linked Services and Integration Runtimes. You can also create new Linked Services from this page.
The AzureSqlDatabase linked service was set up as follows:
Add a Select Component
To add a component to the Data Flow, click the plus sign at the bottom of the DataFlowTest001DataSource component.
In the Row Modifier section of the context menu, select Select.
Set up the Select Component
What the Select component does is just that: select columns, as you would when executing a SELECT statement.
Name the component SalesPersonData.
Add the following columns if they aren’t already added:
SalesPerson
OrderDate
Quantity
UnitPrice
You may notice the information on the left-hand side that says, “4 items: 2 column(s) from the inputs left unmapped.” That’s what we want.
You may want to Inspect and Preview the data, but make sure you switch on Debug mode at the top before you preview.
Add a Derived Column Component
Name the Derived Column component (or stream) DerivedYearOrderDate. Make sure that the incoming stream is SalesPersonData.
Add a new derived column called Year and set the expression to year(OrderDate).
Add an Aggregate Component
Name the Aggregate component LineSalesTotal. The incoming stream should be DerivedYearOrderDate.
Click the Group By tab and add the following columns:
SalesPerson
Year (derived column based on the OrderDate)
Click the Aggregates tab. Add a new column and call it LineSales. Set the expression to sum(multiply(UnitPrice, Quantity)).
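In T-SQL terms, the Derived Column and Aggregate steps compute something like the sketch below. This is only an illustration; it assumes the copied data landed in a dbo.SalesReport table (the table we selected for the source dataset) with the columns from the source query:

-- rough T-SQL equivalent of DerivedYearOrderDate + LineSalesTotal
SELECT SalesPerson
     , YEAR(OrderDate) AS [Year]              -- the derived column
     , SUM(UnitPrice * Quantity) AS LineSales -- the aggregate
FROM dbo.SalesReport
GROUP BY SalesPerson, YEAR(OrderDate) ;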
Add a Pivot Component
Name the component PivotSalesPersonPerYear. Make sure that the incoming stream is from our aggregate, LineSalesTotal.
Click on the Group By tab. Select SalesPerson. Keep the alias as is or assign a different alias.
Click on the Pivot Key tab. Select Year.
Click on the Pivoted Column tab. Set it up as follows (see the SQL sketch after this list):
For the Column name pattern, select the first pattern in the dropdown.
Select Lateral for Column arrangement.
Type [space] Total Sales in the Suffix box.
In the expression box, add sum(LineSales)
Leave the column prefix empty.
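Putting the Group By, Pivot Key, and Pivoted Column settings together, the component produces roughly what this T-SQL PIVOT sketch does. The 2013–2016 year list is an assumption based on the output file we name later, and dbo.SalesReport is again the table behind the source dataset:

-- rough T-SQL equivalent of PivotSalesPersonPerYear (including the sort we add next)
SELECT SalesPerson
     , [2013] AS [2013 Total Sales]
     , [2014] AS [2014 Total Sales]
     , [2015] AS [2015 Total Sales]
     , [2016] AS [2016 Total Sales]
FROM (
    SELECT SalesPerson
         , YEAR(OrderDate) AS [Year]
         , UnitPrice * Quantity AS LineSales
    FROM dbo.SalesReport
) src
PIVOT (SUM(LineSales) FOR [Year] IN ([2013], [2014], [2015], [2016])) p
ORDER BY SalesPerson ;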
Add a Sort Component
Let’s sort the stream by salesperson name, ascending.
Name the Sort component SortSalesPersonByName. Make sure that the incoming stream is PivotSalesPersonPerYear.
On the Sort Condition, select SalesPerson.
Toggle the Order arrow so it is pointing up.
If you preview the data on the Sort component, you should now see something like this:
Add a Destination Sink Component
Let’s call the sink SalesReportByYearCSV.
I have previously created a CSV dataset on Azure Blob Storage. You can also create a new dataset in your Data Flow. Create, or in our case select, the SalesReportCSV dataset from the Sink dataset dropdown.
Navigate to the Settings tab. On the File name option, select Output to single file.
For the File name, type 2013-2016_SalesReportByYear.
Note: The column count on the last three components doesn’t look right. Even the mapping does not show the right number of streamed columns. The preview shows the expected results, though.
Overview of the Azure Data Factory Data Flow
If you’ve completed all the above steps, you should now have something like this:
Run the Data Flow
Just like in SSIS, you cannot run the Data Flow by itself in the data flow context. It needs to be inside the Control Flow, or Pipeline in ADF terms, to execute.
Create a new Pipeline. Name it DataFlowSalesReport.
Navigate to Activities > Move & Transform. Drag the Data Flow activity onto the canvas. This opens a blade on the left-hand side where you can associate the Data Flow we just created with the pipeline activity.
In the Adding Data Flow blade, select Use Existing Data Flow.
From the Existing Data Flow dropdown, select DataFlowTest001. Click Finish.
The only setting I would change on the pipeline activity is the Timeout. Set the pipeline to time out after 30 minutes.
Publish the Changes and Trigger the Activity Now
Click Publish (it should be at the top, on the right-hand side).
Click Trigger, then select Trigger Now to execute the pipeline (and, in effect, the Data Flow).
You can check the status of the pipeline run in a few places, such as the Monitor page in ADF, which shows each pipeline and activity run.
Check the CSV File Output
To check the file, open Azure Storage Explorer and navigate to your blob storage. You can download Azure Storage Explorer here.
And you should find the CSV output file produced by the Data Flow.