12 June 2012
However, getting the logs can be a pain. Worse, we never have enough data when we need it. It can be hard to justify an expensive monitoring app like SCOM or Foglight. So I built a utility in PowerShell that does most of the work, for free.
If you give the Save-SqlHistory function a list of 1,000 databases and 40 monitoring queries, it runs all 40 queries against all 1,000 databases and logs the results to 40 CSV files, one per query.
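The heart of that behavior is just a nested loop. Here is a minimal sketch of the idea, not the actual implementation; the column names, file paths, and use of Invoke-Sqlcmd (from the SqlServer/SQLPS module) are my assumptions:

```powershell
# Sketch of the Save-SqlHistory loop: every query runs against every
# database, and each query appends to its own CSV file.
$servers = Import-Csv .\Servers.csv   # assumed columns: Server, Database
$queries = Import-Csv .\Queries.csv   # assumed columns: Name, Path

foreach ($q in $queries) {
    $sql = Get-Content -Path $q.Path -Raw
    foreach ($s in $servers) {
        Invoke-Sqlcmd -ServerInstance $s.Server -Database $s.Database -Query $sql |
            # Stamp each row so the history is useful later.
            Select-Object @{n='CollectionTime';e={Get-Date}},
                          @{n='Server';e={$s.Server}}, * |
            Export-Csv -Path ".\$($q.Name).csv" -Append -NoTypeInformation
    }
}
```

Note that Export-Csv -Append requires PowerShell 3.0 or later.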
Run Save-SqlHistory in a scheduled task or SQL Agent job, and you have an instant monitoring tool.
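For the scheduled task route, something along these lines works (the script path is yours to fill in):

```
schtasks /Create /SC MINUTE /MO 15 /TN "Save-SqlHistory" /TR "powershell.exe -NoProfile -File C:\Scripts\Save-SqlHistory.ps1"
```

That runs the collection every 15 minutes; adjust the interval to match how fresh you need the data to be.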
The best way to start is with queries that return useful information. I like to use the fabulous DMV queries by Glenn Berry (blog / @GlennAlanBerry). The example queries in Save-SqlHistory are based largely on his work, and they work with SQL Server 2012.
To get started, download the code. The two CSV files to change are Queries.csv and Servers.csv, and they are intuitive. Run the script against a test database somewhere and look at the results. Next, think about what data to collect.
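To give a feel for the shape of those files, here is what they might look like; the exact column headers in the download may differ, so treat these as illustrative:

```
Servers.csv
-----------
Server,Database
SQLPROD01,master
SQLPROD02,master

Queries.csv
-----------
Name,Path
WaitStats,C:\Scripts\Queries\WaitStats.sql
TopCpuQueries,C:\Scripts\Queries\TopCpuQueries.sql
```

Each row in Queries.csv points at a .sql file on disk, and the Name column becomes the output CSV's filename.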
What information do you wish you had? Stats about index fragmentation? Top queries by CPU usage? Page life expectancy? The runtime of each backup? All you need to do is find or write queries that return this data and add their locations to the Queries.csv file. Along with Glenn’s work, I would highly recommend Brent Ozar’s (b / t) sp_Blitz script and Adam Machanic’s (b / t) sp_WhoIsActive script.
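Some of these queries are tiny. Page life expectancy, for instance, is a single counter lookup; drop something like the standard DMV query below into a .sql file and point Queries.csv at it (this is the textbook approach, not necessarily the exact query shipped with the script):

```sql
-- Page life expectancy, in seconds, from the buffer manager counters.
SELECT [object_name],
       counter_name,
       cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';
```

Logged every few minutes, a counter like this turns into a trend line you can actually correlate with slowdowns.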
I will be refining and adding features to Save-SqlHistory. First on the list is multithreading, so querying 1,000 databases doesn’t take too long. Next on the list is whatever you suggest; just contact me if you have an idea.