T-SQL Tuesday #25: Sharing Tricks

The topic for this month’s T-SQL Tuesday, hosted by Allen White, is an invitation to share your tricks.  Before I get into my trick, I wanted to mention that Allen is a pretty awesome speaker too.  I got to see his session “Gather SQL Server Performance Data with PowerShell” at SQL Saturday in Columbus earlier this year.  Allen is really excited about SQL Server and the cool things you can do with PowerShell.

Now for my trick: I have a view that was created to help write more dynamic, rolling SQL queries and reports in my organization.  I’m not sure who the original author of the script is, as it has been passed around and modified several times.  I thought I would share it here in the hope that it helps someone else someday.  If you’re the original author or know who it is, please let me know so I can give due credit.

This view defines a bunch of different date boundaries relative to the current date: today, yesterday, this week (Sunday- or Monday-based), previous weeks, this month, several previous months, and next month, each with a _BEGIN and _END column.

You can take a quick look at the results returned from this view.  After creating the view (script included below), run:

SELECT  *
FROM    vw_date_ranges

With these columns defined, you can easily query a database table for rows based on a date by cross joining this view and adding the date columns to the WHERE clause.  For example, if you want to see all orders for “This Week”, you can run a query similar to:

SELECT SalesOrderID,
       OrderDate
FROM   Sales.SalesOrderHeader
       CROSS JOIN vw_date_ranges
WHERE  Sales.SalesOrderHeader.OrderDate > vw_date_ranges.SUNDAY_WEEK_BEGIN

Or if you want to see all orders for “Last Month”, you can run a query similar to:

SELECT SalesOrderID,
       OrderDate
FROM   Sales.SalesOrderHeader
       CROSS JOIN vw_date_ranges
WHERE  Sales.SalesOrderHeader.OrderDate >= vw_date_ranges.PREVIOUS_MONTH_BEGIN
       AND Sales.SalesOrderHeader.OrderDate < vw_date_ranges.MONTH_BEGIN

Using this method keeps my SQL queries clean and provides a rolling date range for queries and reports.
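The same pattern extends to any of the BEGIN/END column pairs. As one more sketch (again against the AdventureWorks sample schema used above), here is a half-open range for yesterday’s orders; using >= BEGIN and < the next period’s BEGIN sidesteps the millisecond arithmetic in the END columns entirely:

```sql
-- Orders placed yesterday, using a half-open date range
SELECT SalesOrderID,
       OrderDate
FROM   Sales.SalesOrderHeader
       CROSS JOIN vw_date_ranges
WHERE  Sales.SalesOrderHeader.OrderDate >= vw_date_ranges.YESTERDAY_BEGIN
       AND Sales.SalesOrderHeader.OrderDate < vw_date_ranges.TODAY_BEGIN
```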

Download the script here


CREATE VIEW [dbo].[vw_date_ranges]
AS
-- Relative Dates - Days
SELECT DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0)                                    AS TODAY_BEGIN,
       DATEADD(ms, -3, DATEADD(dd, DATEDIFF(dd, 0, GETDATE() + 1), 0)) - .000011574  AS TODAY_END,
       DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), -1)                                   AS YESTERDAY_BEGIN,
       DATEADD(ms, -3, DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0)) - .000011574      AS YESTERDAY_END,
       DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), -2)                                   AS DAY_BEFORE_YESTERDAY_BEGIN,
       DATEADD(ms, -3, DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), -1)) - .000011574     AS DAY_BEFORE_YESTERDAY_END,
       -- Relative Dates - Weeks
       DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), -1)                                   AS SUNDAY_WEEK_BEGIN,
       DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), -1) + 6.999988426                     AS SUNDAY_WEEK_END,
       DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), 0)                                    AS MONDAY_WEEK_BEGIN,
       DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), 0) + 7.999988426                      AS MONDAY_WEEK_END,
       DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), -8)                                   AS PREVIOUS_SUNDAY_WEEK_BEGIN,
       DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), -8) + 6.999988426                     AS PREVIOUS_SUNDAY_WEEK_END,
       DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), -7)                                   AS PREVIOUS_MONDAY_WEEK_BEGIN,
       DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), -7) + 7.999988426                     AS PREVIOUS_MONDAY_WEEK_END,
       -- Relative Dates - Months
       DATEADD(mm, DATEDIFF(mm, 0, GETDATE()), 0)                                         AS MONTH_BEGIN,
       DATEADD(ms, -3, DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) + 1, 0)) - .000011574       AS MONTH_END,
       DATEADD(mm, DATEDIFF(mm, 0, GETDATE() - 1), 0)                                     AS YESTERDAYS_MONTH_BEGIN,
       DATEADD(ms, -3, DATEADD(mm, DATEDIFF(mm, 0, GETDATE() - 1) + 1, 0)) - .000011574   AS YESTERDAYS_MONTH_END,
       DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 1, 0)                                     AS PREVIOUS_MONTH_BEGIN,
       DATEADD(ms, -3, DATEADD(mm, DATEDIFF(mm, 0, GETDATE()), 0)) - .000011574           AS PREVIOUS_MONTH_END,
       DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 2, 0)                                     AS SECOND_PREVIOUS_MONTH_BEGIN,
       DATEADD(ms, -3, DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 1, 0)) - .000011574       AS SECOND_PREVIOUS_MONTH_END,
       DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 3, 0)                                     AS THIRD_PREVIOUS_MONTH_BEGIN,
       DATEADD(ms, -3, DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 2, 0)) - .000011574       AS THIRD_PREVIOUS_MONTH_END,
       DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 4, 0)                                     AS FOURTH_PREVIOUS_MONTH_BEGIN,
       DATEADD(ms, -3, DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 3, 0)) - .000011574       AS FOURTH_PREVIOUS_MONTH_END,
       DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 12, 0)                                    AS TWELFTH_PREVIOUS_MONTH_BEGIN,
       DATEADD(ms, -3, DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 11, 0)) - .000011574      AS TWELFTH_PREVIOUS_MONTH_END,
       -- Added 2/23/09 These will be used in a claim audit report that looks at data from 6 weeks prior.
       DATEADD(wk, DATEDIFF(wk, 0, GETDATE()) - 6, 0) + 7.999988426                       AS PREVIOUS_SIXTH_MONDAY_WEEK_END,
       DATEADD(wk, DATEDIFF(wk, 0, GETDATE()) - 6, -1) + 6.999988426                      AS PREVIOUS_SIXTH_SUNDAY_WEEK_END,
       DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) + 1, 0)                                     AS NEXT_MONTH_BEGIN,
       DATEADD(ms, -3, DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) + 2, 0)) - .000011574       AS NEXT_MONTH_END

Fixing orphaned users

Whenever I restore a production database to a development server, I often end up with orphaned users. You can see in the screenshot below that the user matt has no login associated with it.

An account becomes orphaned when a database user with security rights in the database is not linked to a server login that can log in to the server.  This happens because SQL logins with the same username have different SIDs on different servers.  It is not a problem for Active Directory logins because the SID is stored in Active Directory.
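If you want to list every orphaned user in a database before fixing anything, a query along these lines works. This is just a sketch of the SID-matching idea described above; the classic alternative on older servers is EXEC sp_change_users_login 'Report':

```sql
-- Database users whose SID has no matching server login
SELECT dp.name AS orphaned_user
FROM   sys.database_principals AS dp
       LEFT JOIN sys.server_principals AS sp
              ON dp.sid = sp.sid
WHERE  sp.sid IS NULL
       AND dp.type = 'S'  -- SQL users only; Windows logins don't orphan this way
       AND dp.name NOT IN ('guest', 'dbo', 'sys', 'INFORMATION_SCHEMA');
```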

Once an account becomes orphaned, it is a pretty easy fix:

USE <database name>;
ALTER USER <username> WITH LOGIN = <username>;

But it’s still a pain to remember to run this code to relink the users.  I usually forget, and then scratch my head for a minute trying to figure out why the application won’t connect to the development database anymore.

Fix it once and for all

Microsoft has a handy script to fix orphaned users (link).  Download and run the script on your production server to create two stored procedures: sp_hexadecimal and sp_help_revlogin.

Then run:

EXEC sp_help_revlogin

Run the output from this procedure against your development server to create the appropriate logins with the correct username/password and SID.
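The output is a series of CREATE LOGIN statements that carry each login’s original hashed password and SID, which is why the restored database’s users line up automatically. The hash and SID values below are made up, but each generated statement is shaped roughly like this:

```sql
-- Sample of what sp_help_revlogin generates (hash and SID here are fabricated)
CREATE LOGIN [matt]
WITH PASSWORD = 0x0100D95C3076A5492B7F7B64EB90AC86B9823B87AE3E6B0B0F8E HASHED,
     SID = 0x23AB34CD45EF56AB67CD78EF89AB90CD,
     DEFAULT_DATABASE = [master],
     CHECK_POLICY = OFF,
     CHECK_EXPIRATION = OFF;
```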

Ever since I ran this, I have not had any more orphaned users when restoring databases back to dev.

T-SQL Tuesday #23: Fixing Joined Views

This month’s T-SQL Tuesday topic, hosted by Stuart Ainsworth, is Joins.  I am going to share a story of a performance improvement that I made with joined views.

One of the main vendor-built applications that I support has views built on top of tables.  Unfortunately, that’s not the end of the story: those views are built on top of other views, which join together still more views, which finally link back to the database tables.  In all the training materials provided by the vendor, they say to *always* use the views when writing a report on the data and never to query the tables directly.

Using the vendor-provided views generally works out OK and performs reasonably well.  One particular report I wrote kept bugging me because of how long it took to run (several minutes each time) compared to how much data was actually returned.  So I started looking at the execution times and the query plans.  Note: I took the screenshots below using SQL Sentry’s fantastic free tool, SQL Sentry Plan Explorer.

This report needed to join 5 tables to get the data that I needed.  Using the vendor provided views, here is the join diagram I started out with:

You can see that the vendor views are joining together a lot more hidden tables (table & field names blurred to protect the innocent) than the 5 I actually need.  The nested views are even hitting the same tables more than once.

Here is the original query plan:

You can’t see it in the screenshot, but one of the thick lines in the middle represents 23 million rows!

OK, time to pull out the detective hat.  I decided to rewrite the query using only the base tables.  I had to do a bit of extra work on things like the UTC-to-local datetime conversion that the views had handled.  After the rewrite, the join diagram looked like this:

Here is the final query plan:

The highest number of rows coming through is about 11,000, a far cry from the 23 million rows in the original query! The execution plan also looks a lot leaner than what I started with.

The original query was running with an average CPU time = 26062 ms, and an average elapsed time = 26424 ms.
My rewritten query is now running with an average CPU time = 0 ms, and an average elapsed time = 266 ms.

Looking at the actual execution plan, SSMS is prompting me that there is a missing index for my new query, and there is still a Clustered Index Scan that accounts for 87% of the query cost.   I may look into that more at a later date, but for now I am very happy running in well under a second a query that used to take several minutes. I don’t think it’s really worth the extra effort to try to shave another 100 ms off a query that completes in under 300 ms.

The vendor-supplied nested views were each joining together multiple views and tables, which added a lot of unnecessary bloat to my query.  An hour of work cutting out that bloat made a huge difference in this particular report’s run time.
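In schematic terms, the change looked something like this. Every object name here is hypothetical (the vendor’s real names are blurred in the screenshots); the point is simply that each nested view dragged in joins the report never used:

```sql
-- Before: report written against the vendor's stacked views.
-- vw_OrderInfo itself joins vw_OrderBase, vw_Status, vw_Customer, ...
SELECT oi.OrderID, oi.StatusDesc, od.LineTotal
FROM   vw_OrderInfo AS oi
       JOIN vw_OrderDetailInfo AS od ON od.OrderID = oi.OrderID;

-- After: the same result from just the base tables the report needs,
-- redoing the UTC-to-local conversion the views used to handle.
SELECT oh.OrderID, st.StatusDesc, od.LineTotal
FROM   dbo.OrderHeader AS oh
       JOIN dbo.OrderStatus AS st ON st.StatusID = oh.StatusID
       JOIN dbo.OrderDetail AS od ON od.OrderID = oh.OrderID
WHERE  DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), oh.CreatedUTC)
           >= DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), -30);  -- last 30 days, local time
```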

And the users rejoice at how fast their report now runs!

Creating a new template

Today I am continuing from my introduction to the SSMS Template Explorer.  Part 2 is a quick look at creating your own templates.  I mentioned in the previous post that if you delete one of the Microsoft-provided templates, it will be recreated when you next launch SSMS.  Another thing of note: if you make a change to any of the default templates, those changes persist across restarts; in other words, your edits do not get overwritten.  Even so, my personal preference is to create a new template rather than edit the Microsoft-supplied ones.

Create the template file

  1. First, right-click on the root node in Template Explorer labeled “SQL Server Templates”.
  2. Select New, and then Folder.
  3. Name your folder.  I generally start my folder names with a period so that they sort to the top of the list, for example: “.Admin”.
  4. Once you create your folder, right-click on that folder name.
  5. Select New, and then Template.
  6. Name your template whatever makes sense to you.

Edit the template file

Now that your template is created, the next logical step is to double-click it to open it.  If you do that, you will see a blank query open up in the main window.  This is not exactly what you want.  Instead, right-click on your template and then click Edit.

Now when the blank query opens in the main window, it should have the template name in the query tab at the top.  By going this route, when you save the query your script will get saved into the template file.

The first thing I usually do is start with a quick header.  I will put some general information about what the template is for, the source if I copied it from a blog post somewhere, and the shortcut key to specify values for the template because I can never seem to remember them.

-- Template Header
-- It's here so I remember what I am using this piece of script for
-- Created by Matt Nelson, 9/15/2011
-- Press CTRL + SHIFT + M to fill in variables (because I can never remember the key combination)

I can add my script now that my header is in place.  There is a special syntax that lets the CTRL+SHIFT+M shortcut fill in the blanks.  I am going to borrow Microsoft’s Backup Database template here to show you the syntax:

BACKUP DATABASE <Database_Name, sysname, Database_Name>
TO DISK = N'<Database_Name, sysname, Database_Name>.bak'
WITH NAME = N'<Database_Name, sysname, Database_Name>-Full Database Backup',
STATS = 10;

When you want to make a variable placeholder in your script, you enclose a 3-part variable in angle brackets, <Parameter, Type, Value>, like: <Database_Name, sysname, Database_Name>.

  • Parameter is basically the variable name.
  • Type is a placeholder for the type of the variable.  There are no constraints around the type like there are with a database table column.  I generally leave this blank.
  • Value can also be blank, or you can make it a sample for yourself.
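Putting the header and the placeholder syntax together, a complete template file might look like the sketch below. This is a hypothetical example of mine, not one of Microsoft’s; pressing CTRL+SHIFT+M pops the prompt for Database_Name and Table_Name, with the Value part pre-filled as a sample:

```sql
-- Template: index fragmentation check (hypothetical example)
-- Press CTRL + SHIFT + M to fill in variables
SELECT object_id, index_id, avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(
           DB_ID(N'<Database_Name, sysname, AdventureWorks>'),
           OBJECT_ID(N'<Table_Name, sysname, Sales.SalesOrderHeader>'),
           NULL, NULL, 'LIMITED');
```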

Once you get your script all set up, don’t forget to save it.  The next time you want to run it, all you have to do is double-click on it in Template Explorer.

T-SQL Tuesday #22: Data Presentation

This month’s T-SQL Tuesday topic: Data Presentation.

I am not going to show any code this month, and I am also going to veer off the topic of SQL server slightly.

My company does a lot of reporting through Cognos.  The nice thing about Cognos is that it is platform independent.  Once a connection is made to an Oracle, MSSQL, or even MySQL database, the presentation looks the same to a report developer.  The end user goes to the Cognos URL and runs their report; they may not even know what application or technical backend is supplying the data.

We have LOTS of reports that get run across the enterprise, and probably a couple hundred report developers who write reports with varying degrees of frequency.  I’m sure you can imagine that, without any standards, no two reports would look similar and end users would not know how to use or interpret the information they get out of a report.  That is why data presentation is so important.

There are several standards that have to be met in our environment in order for a report to be published for end users.  While I don’t think it would be appropriate for me to show you any screenshots, I can summarize some of the general rules here.

  1. All reports must use the corporate header and footer.  The information included must contain the business unit for the report, the title of the report, the corporate logo, a confidentiality statement, page numbers, and a note on how best to view the report (e.g. HTML, PDF, Excel).
  2. The last page of the report must be the standard documentation page.  This page has a table with specific information recorded about the report including:
    • Report Purpose: A general description of what the report is and how it should be used.
    • Data Source: name of the server, database, or application where the data in the report came from.
    • Author: who wrote the report, and how to contact them with questions.
    • Security: Security requirements of the report.  Does it contain confidential information? Who or which security groups are allowed to run the report?
    • Business Logic: List of information that may be critical to interpreting the report.  For example, “Annual sales figures are calculated based on the fiscal year of July 1 to June 30 in this report”.  Also list any abbreviations that everyone may not know.  For example, “AHG = Average Height by Gender (I made that up)”.
    • Assumptions: What assumptions, if any, were made while writing the report.   I have also seen some report authors insert any user defined variables here.  This way when someone runs a report and they think that it is wrong, the report author can see exactly what values the user entered into the prompt boxes as a point to start troubleshooting.

The documentation page on a report becomes a valuable tool for anyone trying to understand a report and also very useful for someone who is troubleshooting a report.  Of course it is also helpful when an end user does not delete the documentation page…..

UDA-SQL-0283 Metadata describing column does not match results from database

In my current job, I have a MySQL database server that I am responsible for.  We also run some reporting through Cognos against this server.  I ran across a problem while testing reports on an upgrade to Cognos 10.  I thought I would share the solution here in case it helps someone else out someday.

I kept running into error messages while testing the Cognos reports that hit the MySQL database:

  • RQP-DEF-0177 An error occurred while performing operation ‘sqlOpenResult’ status=’-28′.
  • UDA-SQL-0114 The cursor supplied to the operation “sqlOpenResult” is inactive.
  • UDA-SQL-0283 Metadata describing <column name> does not match results from the database.

I then opened up Cognos Framework Manager to look at the package.  The weird thing is that I was able to run a Test Sample and see data return in Framework Manager.  However the same data item was throwing errors in Report Studio.  When I looked at the data items in Framework Manager, they had a data type of nVarChar.

Time to bring in the big guns

I started working with our Cognos support group to figure out what was going on.  They were able to edit the definition of one of the data queries in Framework Manager.  When they updated the data item, the data type of the text columns changed from “nVarChar” to “Character Length 16”.  Once they made the change and updated the package, the reports started running again in Report Studio.

Woohoo, problem solved!

Oh wait, I spoke too soon.  I went back into Framework Manager to repeat our support group’s steps against the other query items and update the whole package.  Unfortunately, whenever I updated any of the data queries, the data type kept changing back to “nVarChar” and the reports threw errors again.  I even tried to re-update the data item that our support group had fixed; when I made my edit, the reports broke again.

The Solution

Long story short, after two weeks and plenty of headaches: I had a different version of the MySQL ODBC driver installed on my PC from the one our Cognos support group had.  I changed the driver version from 5.1 to 3.51, and finally it worked!  I figured the newer driver would be better, but apparently Cognos 10 Framework Manager has a problem with the MySQL ODBC 5.1 driver and needs the 3.51 driver instead.

With the older driver installed on my system, I was able to update the Framework Manager package and then pull back data error free in Report Studio.

One other thing of note: after I updated the query subject in the Physical layer in Framework Manager, the links broke in the Presentation layer.  I had to go back into the Presentation layer to update the links there as well.

SSMS Template Explorer

I have used several different methods for collecting reusable SQL scripts (both ones I have written and ones I have borrowed from others).  A few that come to mind: 1. saving the scripts as individual .sql files somewhere on my hard drive, 2. storing each script as an individual page in Microsoft OneNote, 3. saving scripts right in the SSMS Template Explorer.

While I still have a collection of scripts in all 3 places for various purposes, I have been moving more and more toward using Template Explorer.
The biggest reason for the change is that whenever I need to run a script against a database server I am already opening SSMS, and it is easier to open Template Explorer than to click File->Open and browse for a specific script.

Let’s get started
  1. Open SSMS.
  2. Open the Template Explorer either with the key combination CTRL + ALT + T (my preferred method) or by clicking View->Template Explorer.
  3. The Template Explorer will then open in the right-hand dock in SSMS (kinda like how it shows up in my screenshot here!).

From here you can expand any of the categories to see a list of applicable scripts.  Microsoft gives us a decent collection of templates to get started, and you can add your own scripts to the collection (covered in a future blog post).  Find a script template that you want to use and double-click it; SSMS will open that template as a new query in the main window.  WARNING: make sure the script window is connected to the appropriate server\instance before executing any scripts (not that I have ever done that….)

For this example, I expanded the Backup category and then double-clicked on “Backup Database”.

Now wait a minute, there’s a lot of funky stuff there.  “BACKUP DATABASE” won’t run.

Here comes the magic of Template Explorer.  Press CTRL + SHIFT + M on your keyboard.  Behold the Specify Values prompt box.

Fill in the database name and the file location where you want the backup saved, then press OK.  Like magic, the template placeholders are removed from the script and replaced with the values you specified a moment ago.  The script is now ready to run, as long as you double-check that you are running it against the appropriate server\instance.
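For example, if you enter AdventureWorks as the database name and C:\Backup as the location (sample values of my own, obviously), the filled-in script comes out looking something like:

```sql
-- Backup Database template after CTRL+SHIFT+M substitution
BACKUP DATABASE [AdventureWorks]
TO DISK = N'C:\Backup\AdventureWorks.bak'
WITH NAME = N'AdventureWorks-Full Database Backup',
STATS = 10;
```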

A couple gotchas
  • All of the Template Explorer scripts are saved on my PC in:
    C:\Users\<user profile>\AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell\Templates\Sql
    This is running SSMS 2008 R2 on Windows 7.
  • The save path for the templates does not get backed up on my local PC, and it does not travel with me if I use SSMS on another PC.
    • So far my solution has been to use a Remote Desktop Connection back to my main work PC if I am using another PC in the office.
    • Every now and then I will manually copy the directory up to a network location that does get backed up so that I can get my templates back if my PC dies.  Does anyone have a better solution?
  • If you delete a template or folder that Microsoft provided, it will be recreated whenever you restart SSMS.  I found that it is best to leave the Microsoft provided templates alone and add my own.

Coming soon in a future post, making your own template. [EDIT: here is part 2]


Just a quick funny for today.

For my daily to-do list, I generally work off of a half sheet of paper on my desk.  I keep a running list, and it feels good to physically scratch off a done item.  This morning I added to my list: “Update SSMS tools on <server name removed to protect the innocent>”.

The only thing is that I started writing UPDATE in all capital letters, as if I were writing a SQL query.  Part way through the word I realized that I did not need all uppercase on my to-do list.

So I ended up with: “UPDate SSMS on <server name>”

I guess I’m a SQL geek.

Applied SQL: sys.sql_modules

Jen McCown from midnightdba.com started a useful (to me, anyway) set of homework assignments called Applied SQL.  Each post has a homework assignment for us.  This first one has been useful in making me read Books Online and understand where information comes from, rather than just running a script whenever I want to get information out of the black box.

Applied SQL: sys.objects

System tables have lots of interesting and useful information stored on all facets of SQL Server.  Generally speaking, whenever I find a script that shows useful information, I store that script away for future use.  I then run the script whenever I need the information without really thinking about it.  Jen McCown from midnightdba.com started a useful (to me, anyway) set of homework assignments called Applied SQL.  Each post has a homework assignment for us.  This first one has been useful in making me read Books Online and understand where information comes from, rather than just running a script whenever I want to get information out of the black box.

Assignment 1 looks at the sys.objects catalog view.  Below is my homework; maybe I’ll get some bonus points for sorting my results too!
