SQL Server's Linked Servers feature allows you to query data from other databases (even non-SQL Server databases) from within SQL Server.
While I believe large data transfer jobs are best handled using an ETL solution like SSIS, sometimes all you want is a couple hundred rows of data from another database, and setting up an ETL job might be overkill.
In this example, I am going to set up a link to a Teradata Data Warehouse. This will allow me to query data from the warehouse and use it in my SQL Server environment.
(Note that you can connect to any database, such as another SQL Server or Oracle. I am just using Teradata as an example)
First go into SSMS and connect to your SQL Server
Now, in Object Explorer go to Server Objects -> Linked Servers
Right click on Linked Servers and select New Linked Server…
For a Teradata connection, I am choosing Microsoft OLE DB Provider for ODBC Drivers
(on a side note, you will need to have the ODBC driver for Teradata installed for this to work)
Linked Server = just whatever name you want to give your linked server – I chose EDW (short for Enterprise Data Warehouse)
Product Name = again free text; I just wrote Teradata to let people know what kind of product you are connecting to.
Data Source: this is the server name you are trying to connect to
Next, click the Security tab.
I want to use my own credentials to connect to Teradata, so I selected Be made using this security context (the green arrow).
Type in your username and password, then click OK.
Now go to create a new query in SSMS (SQL Server Management Studio)
We will be using a function called OPENQUERY. The syntax is as follows
Select * from OPENQUERY(LINKEDSERVER, QUERY)
SELECT * FROM OPENQUERY(EDW, 'SELECT USER_NM, LAST_NM, FIRST_NM FROM EMPLOYEE_TABLE')
We can use the data returned from Teradata in SQL Server by simply putting it into a table or a temp table, as shown below:
SELECT * into ##TempTable FROM OPENQUERY(EDW, 'SELECT USER_NM, LAST_NM, FIRST_NM FROM EMPLOYEE_TABLE')
We can also pass variables to our Teradata query. My preferred method is simply using a stored procedure.
CREATE PROCEDURE [LINKED_SERVER_QUERY]
@USER VARCHAR(8) = NULL
AS
declare @query varchar(8000) = 'Select * from OPENQUERY(EDW, ''SELECT USER_NM, LAST_NM, FIRST_NM FROM EMPLOYEE_TABLE WHERE USER_NM = ''''' + @USER + ''''''')'
exec(@query)
GO
exec [LINKED_SERVER_QUERY] @USER = 'blarson'
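One caution with this pattern: because @USER is concatenated straight into the query string, a stray single quote in the parameter will break the query (or open you up to SQL injection). A minimal hardening sketch, assuming the same EDW linked server and EMPLOYEE_TABLE from above, is to double any embedded quotes before concatenating:

```sql
-- Sketch: escape embedded single quotes before building the dynamic SQL.
-- Assumes the EDW linked server and EMPLOYEE_TABLE from the example above.
CREATE PROCEDURE [LINKED_SERVER_QUERY_SAFE]
@USER VARCHAR(8) = NULL
AS
-- Double up any single quotes inside the parameter ('' is one literal quote)
declare @safeUser varchar(16) = REPLACE(@USER, '''', '''''')
declare @query varchar(8000) = 'Select * from OPENQUERY(EDW, ''SELECT USER_NM, LAST_NM, FIRST_NM FROM EMPLOYEE_TABLE WHERE USER_NM = ''''' + @safeUser + ''''''')'
exec(@query)
GO
```

The OPENQUERY text itself has to be a literal, so some dynamic SQL is unavoidable here; escaping the parameter is the minimum safety net.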
Shiny is an RStudio package that allows you to host your R code on a website. This is a great method for creating a dashboard or deploying a predictive model. The best part is, it is free. There are, however, paid services associated with RStudio and Shiny if you want to get professional support and such.
In this lesson, we are going to learn to build a Shiny server application step by step.
Let’s start at the beginning. First off, to use Shiny, you need to have RStudio installed on your computer. Once you have that, you will need to install the Shiny package:
install.packages('shiny')
Next, we are going to learn about the three parts of a Shiny server application.
Below is the basic framework of a Shiny app:
library(shiny)
ui <- fluidPage("Test")
server <- function(input, output) {}
shinyApp(ui = ui, server = server)
Let’s break this down:
ui: ui is the user interface. This is the code that makes up the web page the user will launch and interact with. For anyone familiar with HTML, this is the basic markup. This is where you set the wording, fonts, colors, what kinds of objects will be on the page, etc.
server: server is the brains of the operation. This is where you load and manipulate any data being used in your page. Again, if you are a web developer, think JavaScript here.
shinyApp: this is the code that launches your App.
If you run the code above, you will see something like this
Note the IP address in the upper left. If you copy this into a browser of your choice, it will run there too.
You will see the same address in the R console. As long as the Shiny server is running, you will see this Listening message. For those not familiar, 127.0.0.1 is called the loopback address: it always points back to the machine you are on, which means the server is running locally. The :4654 is the port my computer grabbed; this can be different for you. The easiest way for me to explain the port without going too deep into network theory is: think of it like a TV channel. From your computer, you could technically be running hundreds of these Shiny servers. The port (TV channel) is what tells the computer which one of these web pages you want to visit.
If you happen to be on a network, you can go to another computer and type in your host computer's network IP address (you can get it by going to a cmd console and typing ipconfig). It will be the address I have the arrows pointing to in my example.
Then, using my example, type http://10.171.46.91:4654 into the second computer’s browser and you should see your Shiny website.
Step 2
Let’s see if we can make our website more interesting. Let’s start easy: we will format the text.
First close out your shiny window to end the session.
Change your ui line to match the one below.
ui <- fluidPage(h1("Test"))
Now run your code again
library(shiny)
ui <- fluidPage(h1("Test"))
server <- function(input, output) {}
shinyApp(ui = ui, server = server)
Step 3
Okay, let’s make it a little more interesting by adding some detail to our page. We will do so by using panel methods in our fluidPage()
Let’s break down the new code. First things first, we load the built-in datasets library and grab the mtcars data set from it.
library(datasets)
mpgData <- mtcars
Next we are going to skip down to the server section, as this is where we work with our data:
First thing you will notice is function(input, output) – this is a standard call that allows you to pass information back and forth through the server code. In our example, we are only using the output right now.
We create a variable called output$mpgPlot, to which we pass the Shiny method renderPlot. Inside that method, we create a boxplot comparing mpg to cyl using the mtcars data set that we labeled mpgData.
The boxplot we rendered as output$mpgPlot is now passed to our ui
Next, in the ui I renamed the title panel, and in the mainPanel I added the command plotOutput("mpgPlot") (the name has to match the output$mpgPlot variable we created in the server). That will allow me to post the boxplot I created in the server to the ui web page.
Now let’s add a little interactivity to our web page. We are going to give the user the option to view the boxplots with or without the outliers taken into effect.
Notice the variable name is input$outliers. It is a composite of the inputId we gave our checkboxInput method and the input argument from the function(input, output) call that opens up our server.
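Since the full listing in this post only appears in screenshots, here is a complete sketch of the app described above. The names outliers and mpgPlot match the text; the layout (sidebarLayout, label wording) is my assumption:

```r
library(shiny)
library(datasets)

mpgData <- mtcars

ui <- fluidPage(
  titlePanel("MPG by Cylinder Count"),
  sidebarLayout(
    sidebarPanel(
      # inputId "outliers" is what the server reads as input$outliers
      checkboxInput("outliers", "Show outliers", value = FALSE)
    ),
    mainPanel(
      # must match the output$mpgPlot name assigned in the server
      plotOutput("mpgPlot")
    )
  )
)

server <- function(input, output) {
  output$mpgPlot <- renderPlot({
    # outline = input$outliers toggles whether the boxplot draws outliers
    boxplot(mpg ~ cyl, data = mpgData, outline = input$outliers,
            xlab = "Cylinders", ylab = "Miles per Gallon")
  })
}

shinyApp(ui = ui, server = server)
```

Because renderPlot references input$outliers, Shiny re-renders the boxplot automatically every time the checkbox is clicked.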
In this lesson, we are going to take the first basic step towards building a data warehouse. We are going to create a historical table from the table we built in Lesson 2. If you didn’t do Lesson 2, you can find the SQL needed to build the table at the bottom of this lesson.
If you remember the table we built in Lesson 2, we had a list of students and the work teams they were assigned to. In Lesson 2, this list was designed to change every week, and we only wanted to know what the current week's data was. However, now Principal Skinner has asked if we can keep a listing of all the weeks so he can go back and see how many times someone was assigned to one of the work teams.
The table below is our starting table.
What we want to do is copy the table’s contents into a new historical table and mark it somehow so we know what week each kid was on each team. We are going to do this by creating a new column called INSERT_DT which will record the date the data was added to the historical table.
First things first, let’s build our historical table in SQL Server
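If you can't see the screenshot, a create statement consistent with the table described (the three STUDENT_TEAM columns plus the new INSERT_DT column; column lengths are my assumption) would be:

```sql
-- Sketch of the historical table: the three columns from STUDENT_TEAM
-- plus INSERT_DT, which records when each row was loaded.
CREATE TABLE dbo.STUDENT_TEAM_HIST (
    STUDENT_NM   VARCHAR(50),
    STUDENT_AGE  INT,
    STUDENT_TEAM VARCHAR(10),
    INSERT_DT    DATETIME
);
```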
Drag a Data Flow object onto your design window and click on it
Now drag an OLE DB Source box over and click on it
Select dbo.Student_Name from the drop down and click OK
Now we are going to use a new object – Derived Column, drag it over and connect the Blue arrow from the bottom of the OLE DB Source box to it
Click on Derived Column: Purple Arrow (far left) – Name your new column, I named mine INSERT_DT
Green Arrow (Middle) : in the Expression column, type GETDATE()
Yellow Arrow (far right) : once you tab out of the Expression column, you will see your Data Type changed itself to be a timestamp
Red Arrow (Bottom) : hit Okay
What we just did was create a new column called INSERT_DT which will use the GETDATE() method to get the current date and time. Next we are going to use this column we just created.
Grab an OLE DB Destination and drag it over. Connect it with the outbound arrow from the Derived Column box.
Select the new dbo.STUDENT_TEAM_HIST table we created and next click on Mappings
You will see you now have four columns feeding into your new destination table. If you go back and look at your source table, you will see you still only have 3 columns there. The fourth column is the one we just derived.
Hit okay, now we are ready to run the package
Right click on Package and hit Execute Package
It should run. Notice the little comments telling you how many rows moved.
Go to SQL Server and check your new table. You will see the INSERT_DT
Okay, now to see the historical data in action
Let’s change Micah’s team from Red to Blue in our STUDENT_TEAM table
update [dbo].[STUDENT_TEAM]
set STUDENT_TEAM = 'Blue'
where STUDENT_NM = 'Micah'
select *
from dbo.STUDENT_TEAM
Go back to SSIS, hit the red box at the top to end debugging mode
Right click and execute the package again
Now run your query again. Note the two different teams Micah was on are both listed, along with timestamps so you can tell when he was on each team.
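To pull that history back out, you can query the historical table directly; a sketch (assuming the STUDENT_TEAM_HIST table built earlier in this lesson):

```sql
-- Each load adds a new row per student, so Micah appears once per team,
-- distinguished by the INSERT_DT timestamp.
SELECT STUDENT_NM, STUDENT_TEAM, INSERT_DT
FROM dbo.STUDENT_TEAM_HIST
WHERE STUDENT_NM = 'Micah'
ORDER BY INSERT_DT;
```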
In the first lesson we exported data from a table in our SQL Server database to a CSV file. Today we are going to do just the opposite, we are going to bring data from a CSV file into our database.
I like to try to provide close to real world scenarios when I make my tutorials. Obviously in the simpler intro lessons, that can be difficult. That being said, for this lesson, imagine you manage a database for a school. Every week 9 students are selected to be on either the Red Team (Hall Safety Monitors) or the Blue Team (Lunch Room Aids). Every Friday, the Vice Principal Smith picks the student names at random and puts them into a CSV file that is emailed out to all the teachers. You point out that if you could just get the names into your database, you could put the information on the school’s main Intranet page, so the teachers don’t need to download and open the same CSV file 300 times a week.
So our task is to upload the new weekly CSV file into the database every Friday at the end of the school day.
First, before heading into SSIS, we need to create a destination table in our database for the CSV file. Here is the file we want to import into our database, you’ll note it has three columns, two columns are strings and one is an integer:
So go into SSMS (SQL Server Management Studio) and run the following query to create the table:
This code will create an empty table with 3 columns named STUDENT_NM, STUDENT_AGE, STUDENT_TEAM.
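If the query itself isn't visible to you, a create statement matching that description (two string columns and one integer; the column lengths are my guess) would be along these lines:

```sql
-- Destination table for the weekly CSV import.
CREATE TABLE dbo.STUDENT_TEAM (
    STUDENT_NM   VARCHAR(50),
    STUDENT_AGE  INT,
    STUDENT_TEAM VARCHAR(10)
);
```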
Now you can minimize SSMS and open up Visual Studio Data Tools
Once open, go into your Training Project and create a new SSIS Package. I’m naming this one Lesson_2
Now grab an Execute SQL Task from the SSIS Toolbox and drag it to the Design window.
Notice the red arrow at the bottom, there is already a Connection set up for the Sandbox database. This is because when we made this connection in Lesson_1, we made it by clicking on the Connection Managers in the Solution Explorer window.
This creates a (Project) connection – one that can be seen in every package. If you only want your connection to exist inside a single package (this becomes important as the number of packages you have grows, otherwise you’d have thousands of connections in Solution Explorer) – you can right click anywhere inside the Connection Managers box at bottom of the design window.
But this particular package will work fine just using the existing connection, we can move on without creating a new one.
Double Click on your Execute SQL Task icon in the design window.
In the window that pops up, you can rename your task and provide a description of it if you would like. This does come in handy when you have to go back to an old package; it makes it easier to understand what is going on.
Down at the green arrow in the picture, you’ll have to select a Connection from the drop down. Luckily, we currently only have one to choose from.
Next go to SQL Statement and click on the little ‘…’ box that appears in the right corner.
In the pop up window, this is where you enter your SQL query. In this case, we’re going to delete all data from the table. The reason for this step is so that last week’s students won’t still be in the table on Monday morning. We only want the current students in our database.
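The statement in that box would simply be something like:

```sql
-- Clear out last week's students before loading the new file.
DELETE FROM dbo.STUDENT_TEAM;
```

(TRUNCATE TABLE dbo.STUDENT_TEAM would also work here and is faster on large tables, though it requires more permissions than DELETE.)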
Now click OK on this window and click OK again to get back to the Design Window
Next drag over a Data Flow Task. Connect the Create Table task by dragging the arrow over from the Create Table task. Note the direction of the arrow, this is the order in which tasks will run.
Now open Data Flow Task. In the SSIS Toolbox, go to Other Sources and drag a Flat File Source over to the Design Window.
Double click on your new Flat File Source box and a new window will open. Click on New… next to Flat File Connection Manager
On the new window, click Browse
Browse to your CSV File. I created a folder called SSIS Training to make it easy to store all my files.
If you click on Columns in the left window, you can see what the CSV file contains.
Finally, before exiting this window, click back on General and make sure Column names in the first data row is checked.
Now go back to the Design window and drag an OLE DB Destination box onto the design window. Connect the blue arrow from the Flat File Source to the OLE DB Destination
Now double click on OLE DB Destination
Select STUDENT_TEAM from Name of table or the view
Now Click on Mappings
Everything should line up nicely since we named the table columns to match the CSV file.
Now click okay to go back to the Design Window, Click on Control Flow in the upper left to get back to the main page of the package
Right click the package and click Execute Package
If all goes well, you should end up with two green checks
If you get an error instead, check to make sure your CSV file is not currently open, as that can cause errors. If so, close the file and try executing the package again
Finally, check the table in your SQL Server to see if it populated
If you want to see how it would work the next week, go into the CSV file and change the data. When you run the package again, the names in the SQL table will change too.
One of the main purposes of SSIS is moving data from one location to another with ease. In this lesson we are going to export data from our SQL Server database to a CSV file.
If you want to follow along, you can use the following code to create the data table I will be working with. I loaded this into a database I created for practice called Sandbox.
We will start by opening the Training_Project we created in the intro lesson (SSIS Tutorial: Introduction )and creating a new package.
I renamed my new package Lesson_1
Now we need to make a new connection so that SSIS knows what our data source will be communicating with. To do so, go to Solution Explorer, right click on Connection Manager and select New Connection Manager
Since we will be connecting to a SQL Server, select OLE DB from the list below
Since there are no existing connections to pick from, choose New
Okay, starting from the top select Native OLE DB\SQL Server Native Client 11.0 (note you might have 10.0 – that will work as well)
Since my SQL Server is locally installed on my machine, I am using Localhost as Server Name, otherwise provide the server name here.
Select your database from the drop down, again my database is Sandbox
Finally, hit Test Connection, you should get a connection successful message box.
Click Okay
You’ll see your new connection in the box, now click Okay
Now at the bottom of your Design Window, you’ll see your new connection in the box labeled Connection Managers
So next, we need to go to the SSIS Toolbox and drag a Data Flow Task over to the designer
Once in the designer, click on the Data Flow Task box, this will bring you to the Data Flow window
Data Flow is used whenever you need to move data between disparate systems. Since we will be moving data from SQL Server to a CSV file, we need to use a Data Flow Task
You should note that the SSIS Toolbox has changed, offering up new Data Flow related tools.
Scroll down to Other Sources and drag OLE DB Source to the design box
Double click on the new object
Make sure the Connection Manager you just created is selected in the top drop down. Leave Data access mode at Table or view and select dbo.employee_id as your table
If you click Preview in the bottom left, you will get a pop up of the data in the table
If you click Columns in the upper left, you will see the columns that will be exported from the table. You can change the Output Column names if you want to use a different name in your output file.
We will skip over Error Output for now, as that is easily an entire lesson all its own.
Now go back to the SSIS toolbox and under Other Destinations, click on Flat File Destination and drag it over to the design window.
Drag the blue arrow from the OLE DB source box to the Flat File Destination Box
It should look like this when done
Now click on Flat File Destination
Since we don’t have a current Flat File Connection, we will need to click on New to create one.
Select Delimited and click OK
Find a folder you want the file to end up in. Select CSV files from the bottom right drop down, and name your file. Click OK (Note, use a name of a file that does not currently exist. This will create the file)
Check the box: Column names in first data row
If you click on Columns in the upper left, you will see the names of your header columns.
Click Okay to go back to the Flat File Destination Window
If you click on Mappings, you will see you have three columns from your source going to three columns in what will be the new CSV file. In future lessons I will show how you can use this to match up differently named columns.
Click Okay to return to the design window
Go to Solution Explorer, right click on the package and click Execute Package
You should get green check marks to indicate success. This is a very small package, so they probably turned green instantly. In larger jobs, the green check will first be a yellow circle to indicate progress. It turns green when that step is complete.
Note on the connecting line between Source and Destination that you get a read out of how many rows were processed.
Go to the folder you chose as your destination, you will see your new CSV file there
Open the file, you will see your data has been exported into your CSV file
Copy, paste and run this code in your SQL Server Management Studio to build the database and table we are using in this exercise:
create database sandbox;
use sandbox;
CREATE TABLE employee_id (
emp_nm varchar(30) not null,
emp_id varchar(8),
b_emp_id varchar(8),
PRIMARY KEY(emp_nm) );
To check if a table exists in SQL Server, you can use the INFORMATION_SCHEMA.TABLES system view.
Running the following code produces the results below:
USE SANDBOX;
Select *
from INFORMATION_SCHEMA.TABLES;
You can use this view with an IF ... ELSE block to determine how your query responds depending on whether or not a table exists.
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'employee_id')
BEGIN
PRINT 'Yes'
END
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'employee_ids')
BEGIN
PRINT 'Yes'
END
ELSE
BEGIN
PRINT 'No'
End
One of the more common uses I find for this is when I need to create a table in a script. I want to make sure a table with the same name doesn’t already exist, or my query will fail, so I write a query like the one below.
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'employee_id')
BEGIN
drop table employee_id
END
ELSE
BEGIN
print 'Table does not exist'
End
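As a side note, if you are on SQL Server 2016 or later, the same drop-if-exists pattern can be written in a single statement:

```sql
-- SQL Server 2016+ shorthand: drops the table only if it exists,
-- and does nothing (no error) if it does not.
DROP TABLE IF EXISTS employee_id;
```

The IF EXISTS query above is still the way to go on older versions, or when you want to branch on existence without dropping anything.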
CREATE TABLE employee_id (
emp_nm nvarchar(30) not null,
emp_id nvarchar(8),
b_emp_id nvarchar(8),
PRIMARY KEY(emp_nm) );
INSERT INTO employee_id
(emp_nm, emp_id)
VALUES
('Bob', 'A1234567'),
('Lisa', 'A1234568')
INSERT INTO employee_id
(emp_nm, b_emp_id)
VALUES
('Priyanka', 'B1234567');
Coalesce is a simple but useful SQL function I use quite often. It returns the first non-null value in a list of values.
A common real world application for this function is when you are trying to join data from multiple sources. If you work for a company known for acquiring other companies, your HR tables are bound to be full of all kinds of inconsistencies.
In the example below, you will see a snippet of an HR table created by merging tables from two companies. The first two employees have an emp_id and the last one has a b_emp_id. The last one, Priyanka, is an employee from a newly acquired company, so her ID comes from the HR table of that newly acquired company.
So if you just want to merge the 2 emp_id columns into one, you can use the coalesce() function.
The syntax for coalesce is: coalesce (col1,col2,….) as alias
Coalesce takes the first non-null. In the example below, we check emp_id first, if it has a value, we use that. If emp_id is null, we look for the value in b_emp_id.
select emp_nm,
coalesce (emp_id, b_emp_id) as emp_id
from employee_id
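coalesce() is not limited to two arguments; it walks the whole list and returns the first non-null it finds. A sketch extending the example with a literal as a final fallback (the 'NO ID ON FILE' default is my addition, not part of the original table):

```sql
-- First non-null wins: emp_id, then b_emp_id, then a default label
-- for anyone who has no ID in either column.
SELECT emp_nm,
       COALESCE(emp_id, b_emp_id, 'NO ID ON FILE') AS emp_id
FROM employee_id;
```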
If you want to try this for yourself, here is the code to build the table out for you.
SQL Server Integration Services is the ETL tool for the Microsoft SQL Server platform. SSIS allows you to take data from various sources (from Excel files, to text files, to other databases, etc), and bring it all together.
If you are well versed in another ETL platform, SSIS is a relatively easy system to get up to speed on.
SSIS comes as part of SQL Server Data Tools, which you should be able to install with your SQL Server installation software. You will need SQL Server Standard, Developer or above editions to run SQL Server Data Tools. SQL Server Express does not support Data Tools.
For some unknown reason, once it is installed, you will not find a program called SSIS. Maybe the engineers at Microsoft think this is funny, but in order to run SSIS you will need to look for the following program instead.
I’m using 2015, but for most of what I am doing here, any version should be compatible.
When you launch Data Tools, you will notice it runs in Visual Studio
To start an SSIS job, you will either need to open an existing project, or create a new one. In this example, I will create a new one.
File ->New -> Project (or Ctrl+Shift+N)
Inside the Business Intelligence templates, select Integration Services. I always just select Integration Services Project; I’m not a big fan of the Wizard
Next step: Name your project
Now you are in SSIS. Here are the 3 main windows you will be starting with.
From left to right:
SSIS Toolbox
Package Designer
Solution Explorer
Packages
Inside solutions, packages are the collections of jobs or scripts found inside a project. It is inside the package that you will build out your ETL job.
For our first lesson, we are just going to build a simple package. Your new solution should have opened with a new package in it. If it did, right click on the green arrow to rename it; if not, right click on the red arrow to create a new package.
I renamed my package First_Package, you can name your package whatever you choose.
This first package will simply display a pop up message. In the SSIS Toolbox, go to Script Task and drag it into the package designer window.
Double click on the Script Task Box in the Design window
Note in my example, I have C# set as the scripting language. The other default option is Visual Basic. If you are more comfortable with that, feel free to use it. I prefer C# mainly because I spent more time working with it.
You don’t need to know any C# for this tutorial. This is literally a single line of code assignment. I will cover more C# in the future.
Click Edit Script… to continue
Note, this step can take a minute or so for the script editor to appear. Don’t panic if your computer appears locked up.
Once the script editor opens, don’t be alarmed by all the code you see. Luckily, Microsoft has done most of the groundwork for us. We only need to scroll down until we see:
public void Main()
Now place your cursor below //TODO: Add your code here
The code you need to type for this script is:
MessageBox.Show("This is my first SSIS Package");
Now I know you will be looking for a save button. But again, our friends at Microsoft might have been drinking when they coded this. Instead, just click the upper right X to close out the whole window –
I know – why would they do it that way? how much effort would a save and close button have cost them? I don’t know. It just is what it is. Just click the X and move on with your life.
Now click the OK button – again I guess Save was too much to type
Now right click on your package (green arrow) and click Execute Package
Your message will pop up in a Message box window
Click OK on the messagebox and Click the red square to end the package execution.
Congrats, you have just built and executed your first SSIS Package.
XML parsing is a SQL method for separating string data found in a single field in a table. Look at the table below:
This table has two columns: emp_nm (employee name) and lang (programming languages the employee is proficient in). Notice that the lang column has multiple values in each record. While this is easily human readable, if you want it to be more machine usable (think pivot tables or R statistical analysis), you are going to want your data to look more like this:
Notice now the table has the same two columns, but the lang column now only has 1 word per record. So the question is, how do you do this using SQL?
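If you want to follow along, the source table only appears in screenshots, but something like this reproduces it (types, lengths, and Lisa's row are my assumptions; Bob's languages come from the breakdown later in this post, and the table name matches the query below):

```sql
-- Sample comma-delimited source table for the XML parsing example.
CREATE TABLE employee_lang (
    emp_nm VARCHAR(30),
    lang   VARCHAR(100)
);

INSERT INTO employee_lang (emp_nm, lang)
VALUES ('Bob',  'Python,R,Java'),
       ('Lisa', 'R,Java');
```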
XML parsing
The code used to “parse” out the data in the lang column is below:
SELECT emp_nm
,SUBSTRING(LTRIM(RTRIM(m.n.value('.[1]','varchar(8000)'))),1,75) AS lang
FROM
(
SELECT
emp_nm
,CAST('<XMLRoot><RowData>' + REPLACE([lang],',','</RowData><RowData>') + '</RowData></XMLRoot>' AS XML) AS x
FROM employee_lang
)t
CROSS APPLY x.nodes('/XMLRoot/RowData')m(n)
Let’s break it down a little first.
SELECT
emp_nm
,CAST('<XMLRoot><RowData>' + REPLACE([lang],',','</RowData><RowData>') + '</RowData></XMLRoot>' AS XML) AS x
FROM employee_lang
What we are doing with the code above is using the CAST and REPLACE functions to convert the elements in the lang column into an XML string. See the results below:
For Bob, this is the result of the CAST/REPLACE code on the lang column
For example, REPLACE('SQL ROCKS', 'S', '!') returns !QL ROCK!
REPLACE is saying: everywhere an S is in the string, replace it with a !
CAST()
The CAST function is casting the string as the XML data type. This is needed for the next section, when we unpack the XML string.
Cross Apply / Value
SELECT emp_nm
,m.n.value('.[1]','varchar(8000)')
FROM
(
SELECT
emp_nm
,CAST('<XMLRoot><RowData>' + REPLACE([lang],',','</RowData><RowData>') + '</RowData></XMLRoot>' AS XML) AS x
FROM employee_lang
)t
CROSS APPLY x.nodes('/XMLRoot/RowData')m(n)
So without going into way too much detail (you can find entire pages dedicated to CROSS APPLY and value()), I’ll give you the quick breakdown.
First, notice we aliased our CAST() statement as x. So if you look at the CROSS APPLY, you will see we are asking to look at x.nodes. Had we aliased our CAST as y, we would be looking at y.nodes.
Now look at m(n) at the end of the line. This is like an array or list in programming languages. Keep that in mind for the next step.
Inside x.nodes() is '/XMLRoot/RowData'. This tells the nodes() method to break the XML apart into one row per RowData element under XMLRoot; m is the alias for the resulting rowset and n is the alias for each individual node, so for Bob:
m(n=1) = Python
m(n=2) = R
m(n=3) = Java
Now we pass each of those nodes to the value() method, hence m.n.value(). Note m and n were just letters I picked; you can use others.
Inside m.n.value('.[1]','varchar(8000)'), the '.[1]' path selects the node’s own text, and varchar(8000) was used as it should pretty much cover any size string you may have to deal with.
So the final query simply adds a SUBSTRING and LTRIM/RTRIM to clean up the result and get rid of white space.