Managing and Monitoring Resources - SQL Server 2008

Many options exist for managing and monitoring resources. Most features that ship in SQL Server expose a way to monitor their state not only through their own tool, but also from larger, enterprise-wide monitoring tools like Microsoft’s System Center Operations Manager. A good example is the SQL Server Agent scheduling service. You can monitor SQL Server Agent jobs using SQL Server Management Studio or the System Center Operations Manager. With that said, Microsoft realizes that some organizations don’t have the budget for these kinds of enterprise-wide management tools. In addition, some of these enterprise tools don’t have the granularity that DBAs demand. SQL Server 2008 introduces some additional tools, including the Data Collector and Resource Governor, to help you monitor and manage resources in SQL Server.

Data Collector

Throughout the evolution of SQL Server, its developers have striven to improve the tools available to users to aid in performance tuning. With SQL Server 2005, the Database Engine Tuning Advisor (DTA) tool was introduced to allow users to submit a workload and receive suggestions on how to improve query performance. SQL Server 2008 adds the ability to easily obtain and analyze server-wide performance-related information. Obtaining this information is part of a feature called the Data Collector, which allows users to pull data from a variety of sources into a data warehouse for analysis. Sources of this information include SQL Trace events, Performance Monitor counters, and the SQL Server History and Health tool, to name a few.

The Data Collector is the plumbing that allows users to easily choose which interesting properties to monitor. It places this information in a management data warehouse, which is either a new database that the user creates or an existing database on the server.

Once you have a common place for this data, you can do a variety of useful things with it, such as perform analytical operations to obtain trend information and to deduce potential issues before they occur. You can also easily communicate these findings with the rest of your organization by creating reports from the data that was collected via Reporting Services.

Configuring the Management Data Warehouse

You can configure a management data warehouse to be used by the Data Collector via Transact-SQL (T-SQL) stored procedures or by using the Configure Management Data Warehouse Wizard in SQL Server Management Studio. Here, we’ll walk through using the wizard.

To launch the wizard, in Object Explorer, expand the Management node, right-click the Data Collection node, and choose Configure Management Data Warehouse. Click Next on the welcome page to move to the Configure Management Data Warehouse Storage page.

Configure Management Data Warehouse Storage page

Here, you can specify either a local or remote database for the destination of your data collection. You can also optionally specify a local cache directory. Having a local cache improves performance, since you will not be immediately writing to the data warehouse database upon acquisition of each piece of data. This frequency of collection is a configurable property and will be discussed in the “Adding Collection Items” section a little later in this chapter.

The next page in the wizard asks you to configure the security of the database. On this page, when you select a login in the top grid, the role membership for that login is shown in the bottom grid.

Map Logins and Users page

User2 is selected as the login, and the bottom grid reveals this login is a member of the mdw_admin, mdw_reader, and mdw_writer roles. These are the three database roles within the management data warehouse database, and are defined as follows:

  • mdw_admin: Members of this role have read, write, update, and delete access to the management data warehouse database.
  • mdw_reader: Members of this role have read access to the historical data only. This role is used primarily for troubleshooting purposes.
  • mdw_writer: Members of this role can upload and write data into the management data warehouse database.

In our example, we have User1 and User2. User1 was assigned just the mdw_reader role. User2 is an mdw_admin.

The next page of the wizard will provide a summary of the actions that will be performed. When you click Finish, the wizard will create or update the selected database with the objects used by the Data Collector.

Summary page

In addition to creating database roles and various stored procedures, the wizard will create three data collection sets: Disk Usage, Query Statistics, and Server Activity. These system-installed collection sets contain a number of collection items utilizing both the T-SQL and Performance Monitor counter collector types. They not only provide some useful information upon collection, but they are also good examples for learning how to create your own data collection sets. You can see the creation scripts for these collection sets by right-clicking them and selecting Script Data Collection as ➤ CREATE To ➤ New Query Editor Window.

To demonstrate using the Data Collector, we’ll walk through creating your own data collection set.

Creating Collection Sets

Collection sets are containers that hold zero or more collection item objects. The set itself defines properties, such as whether the data that is obtained should be cached, how long the data should be retained before it is deleted from the warehouse, and SQL Server Agent proxy account information used by Agent jobs when collecting the data.

In SQL Server 2008, you will need to use a T-SQL stored procedure to create the collection set, since SQL Server Management Studio does not have a tool to perform this action. You use the sp_syscollector_create_collection_set stored procedure in the msdb database.

As an example, we will create a collection set called UserSession_Stats. In the next section, we will add a collection item that will simply make a call to a view and return the active sessions for the local instance of SQL Server.

USE msdb
DECLARE @mycollection_set_id int
DECLARE @mycollection_set_uid UNIQUEIDENTIFIER

--Create collection set, the container for various collection items
EXEC [dbo].[sp_syscollector_create_collection_set]
    @name=N'UserSession_Stats',
    @collection_mode=1, --This means noncached mode
    @description=N'Collects data about user session information',
    @logging_level=1, --Sets logging level used by SSIS
    @schedule_name=N'CollectorSchedule_Every_6h', --Uses existing SQL Agent shared schedule
    @collection_set_id=@mycollection_set_id OUTPUT,
    @collection_set_uid=@mycollection_set_uid OUTPUT

Note a few parameters in this stored procedure:

  • collection_mode: This property determines if the collection and upload are on the same or separate schedules.
  • logging_level: This property is set because under the covers, a SQL Server Integration Services (SSIS) package is performing the actual data collection.
  • proxy_name: The proxy account information may also be required, since you might need to establish the security context that will be used to obtain the data you are after. In this example, we are running under the sysadmin account, so we don’t need to specify a proxy account.
  • schedule_name: Many existing Agent schedules are available. This example uses the schedule that executes every 6 hours. You could select to run the collection on other schedules, such as every 5, 10, 15, 30, or 60 minutes. These shared schedules are created for you in SQL Agent as part of the configuration wizard.
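These shared schedules live in msdb like any other SQL Server Agent schedule, so you can list them from a query window. The following query is a sketch; it assumes the wizard's schedules all follow the CollectorSchedule naming pattern:

```sql
-- List the shared Agent schedules available for collection sets
SELECT schedule_id, name
FROM msdb.dbo.sysschedules
WHERE name LIKE N'CollectorSchedule%';
```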

After creating a collection set, you can add collection items for it.

Adding Collection Items

A collection item defines the piece of data that the user is requesting. Just as a collection set has properties associated with it, so does the collection item. One of these properties is collector type, which identifies the actual mechanism for collecting data and uploading it to the management data warehouse. SQL Server 2008 comes with three collector types out of the box: T-SQL Query Collector, SQL Trace Collector, and Performance Counters Collector.

You can also create your own collector types by using the sp_syscollector_create_collection_item stored procedure. For our example, we can take the output variables that came from the sp_syscollector_create_collection_set stored procedure and pass them into sp_syscollector_create_collection_item, which will create a collection item for our sample collection set. The following code does this:

DECLARE @mycollector_type_uid UNIQUEIDENTIFIER

SELECT @mycollector_type_uid = collector_type_uid
FROM [dbo].[syscollector_collector_types]
WHERE name = N'Generic T-SQL Query Collector Type';

DECLARE @collection_item_id INT

EXEC [dbo].[sp_syscollector_create_collection_item]
    @name=N'User_Connections',
    @parameters=N'<TSQLQueryCollector>
<Query>
<Value>SELECT COUNT(*) AS ''Active_sessions'' FROM sys.dm_exec_sessions</Value>
<OutputTable>usersession_info</OutputTable>
</Query>
<Databases UseSystemDatabases="false" UseUserDatabases="false"/>
</TSQLQueryCollector>',
    @collection_item_id=@collection_item_id OUTPUT,
    @frequency=5, --How frequently we will obtain a sample data point
    @collection_set_id=@mycollection_set_id,
    @collector_type_uid=@mycollector_type_uid

The frequency parameter determines how frequently in seconds SQL Server will sample the data.

You can see that the unique identifier for the T-SQL Query Collector type was obtained from a query to the syscollector_collector_types view and passed as one of the parameters to the sp_syscollector_create_collection_item stored procedure. The @parameters parameter is where you define the specifics of the query you wish to issue and where to place the query results.

Each collector type has a different schema. You can see the schema for each of these types by querying the parameter_schema column of the syscollector_collector_types view. All the available options for that collector type will be described in the schema. You can also refer to SQL Server Books Online for the options for the collector types that come with SQL Server.
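As a sketch, pulling the schema for each installed collector type looks like this:

```sql
-- Inspect the parameter schema for each installed collector type
SELECT name, parameter_schema
FROM msdb.dbo.syscollector_collector_types;
```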

With the T-SQL Query Collector, you have the option to issue the query against system databases, user databases, both, or none at all. Other collector types will offer different options.

Once you have defined the collection set and collection items, you are ready to start collecting the data.

Collecting Data

To start collecting data, you can issue the sp_syscollector_start_collection_set stored procedure call, or right-click the collection set and select Start Data Collection Set from the context menu. In our example, every 6 hours, we will upload our samples of data that were collected at 5-second intervals. When uploaded, the data is copied into our output table usersession_info in the management data warehouse database, which we defined as part of the TSQLQueryCollector parameter when we created the collection item.
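Starting our sample collection set from T-SQL might look like the following sketch, which identifies the set by the name we gave it earlier:

```sql
USE msdb
-- Start the UserSession_Stats collection set created earlier
EXEC dbo.sp_syscollector_start_collection_set @name = N'UserSession_Stats';
```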

After you’ve collected and stored some data, you can work with it in interesting ways, such as creating reports or data mining using Analysis Services. Some reports are available out of the box. A good example of this is the Server Activity History report. This is more of an interactive report, allowing users to zoom in and out of time ranges.

SQL Server Activity chart within the Server Activity History report

Various counters within the Server Activity History report

Being able to collect data and record it in one central place is very useful. Although it would be nice to have better tools to support this initial release of the Data Collector, the overall value of this feature is huge.

Resource Governor

How many times have you seen a query eat up system resources like memory and CPU for extended lengths of time? These types of runaway queries will be a thing of the past with a properly configured Resource Governor. Resource Governor allows the DBA to allocate database resources (CPU bandwidth and memory) to users and applications.

In earlier versions of SQL Server, it was not possible to differentiate workloads. Thus, an administrator query, an OLTP workload, and a report-generation query would all have the same priority to the SQL Server database engine. Not having the ability to limit or prioritize workloads can lead to a disproportionate distribution of resources and unpredictable performance.

Resource Governor in SQL Server 2008 allows the DBA to create workload groups and resource pools. The DBA can then define priorities on these workloads and assign workloads to resource pools. So, for example, you could configure a scenario where the OLTP queries take more of the CPU cycles than do your administrator queries, or you could make all your administrator queries take most of the CPU and let your users have whatever is left (after all, you’re the one who uses SQL Server every day, so you deserve all of the CPU bandwidth!).

Creating Resource Pools

Resource pools represent a collection of physical resources of the server. In SQL Server 2008, resource pools define minimum and maximum values for memory and CPU utilization. A resource pool is similar to the concept of a virtual instance of SQL Server. However, a true virtual instance would include much more separation than just the division of memory and CPU utilization.

SQL Server 2008 has two predefined resource pools:

  • Internal: The internal pool is used solely by the SQL Server database engine. System administrators cannot change or configure any settings for the internal pool.
  • Default: The default pool will be used by all workloads that do not have a pool assigned to them. Thus, if you never configure Resource Governor in SQL Server, all your workloads will be running in the default pool. This pool cannot be altered or dropped, but unlike with the internal pool, you can alter its minimum and maximum settings.
Most of the time, system administrators will be interested in creating their own resource pools. When defining minimum and maximum values for a resource pool, keep in mind the following:

  • The sum of minimum values across all pools cannot exceed 100 percent. This is because SQL Server makes every effort to ensure these minimum values are respected.
  • The maximum values can be set anywhere in the range between the minimum defined and 100 percent.

Theoretically, if all pools were set to a maximum of 100 percent, it would be impossible for all workloads using those pools to achieve 100 percent at the same time. Thus, when planning your minimum and maximum values, it is important to consider the effective maximum percentage. To explain this concept, let’s take a look at an example.

Suppose we have three user-defined resource pools, plus the internal and default pools. The Effective Maximum Percentage column in the table is defined as the maximum CPU percentage, taking into consideration all the other pools defined. We calculate this percentage as the smaller of the pool's defined maximum and the difference between 100 percent and the sum of the other pools' minimum percentages. For example, a pool with a defined maximum of 90 percent, competing with pools whose minimums sum to 30 percent, has an effective maximum of min(90, 100 - 30) = 70 percent.

Resource Pools for the Example

You can see that the effective maximum percentages are different from what we intended the maximum to be when we defined the pools originally.

To create a resource pool, use the CREATE RESOURCE POOL statement. The following code creates two resource pools:
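A minimal sketch of the two pool definitions, assuming no minimum reservations and only the 100 percent maximums described next:

```sql
-- Pool for administrator queries; no CPU minimum is reserved
CREATE RESOURCE POOL AdminQueries
WITH (MAX_CPU_PERCENT = 100);

-- Pool for end-user queries; also capped at 100 percent for now
CREATE RESOURCE POOL UserQueries
WITH (MAX_CPU_PERCENT = 100);
GO
```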

This example creates the resource pools AdminQueries and UserQueries. Both of these can utilize a maximum of 100 percent of the CPU. We will show how different maximum configurations affect the pools shortly. First, we need to talk about how to determine who gets to use which resource pool.

Creating Workload Groups

We have defined pools for resources, but we don’t want to assign users directly to the pools themselves, since that would be a management nightmare. Instead, we create workload groups. These groups allow system administrators to easily monitor resource consumption and move different workloads among different pools.

A workload group is mapped to a resource pool. A resource pool can have zero or more workload groups using the pool. A workload group provides a bucket to group user sessions. We will cover how to map user sessions to workload groups when we discuss the classifier function in the next section.

Just as there are preconfigured internal and default resource pools, there are predefined internal and default workload groups. The internal workload group is used solely by the SQL Server database engine and cannot be altered or deleted. The default workload group will be used by all user sessions that are not assigned a workload group.

To create a workload group, use the CREATE WORKLOAD GROUP statement. Continuing our example, let’s create four workload groups that will use the two resource pools we defined in the previous section, as follows:
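A sketch of the four group definitions, using the pool assignments described next:

```sql
-- Groups bound to the AdminQueries pool
CREATE WORKLOAD GROUP NightlyMaintenanceTasks USING AdminQueries;
CREATE WORKLOAD GROUP AdhocAdmin USING AdminQueries;

-- Groups bound to the UserQueries pool
CREATE WORKLOAD GROUP SAPUsers USING UserQueries;
CREATE WORKLOAD GROUP DailyExecReports USING UserQueries;
GO
```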

Here, we defined the NightlyMaintenanceTasks and AdhocAdmin groups to use the AdminQueries resource pool, and the SAPUsers and DailyExecReports groups to use the UserQueries resource pool. At this point, we have established which users or connections will be grouped into which workload. Now we need what is called a classifier function.

Creating a Classifier Function

A classifier function classifies the incoming sessions and assigns the session requests and queries to a specific workload group. The group differentiation can be identified by almost any property available in the connection string (IP address, application name, username, and so on). As an example, let’s create our classifier function to behave in the following manner:

  • If a user connects and has a login called SAP_Login, make him part of the SAPUsers workload group.
  • If a user connects and the application name is Microsoft SQL Server Management Studio, make him part of the AdhocAdmin workload group.
  • If a user is a member of the ReportUsers group, make him part of the DailyExecReports workload group.
  • If a user is connecting using shared memory and is in the NightlyAdmin group, assign him to the NightlyMaintenanceTasks workload group.

The code to create this classifier function is as follows:

CREATE FUNCTION class_func_1() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @val sysname
    --Handle workload groups defined by login names
    IF SUSER_SNAME() = 'SAP_Login'
    BEGIN
        SET @val='SAPUsers';
        RETURN @val;
    END
    IF APP_NAME() = 'Microsoft SQL Server Management Studio'
    BEGIN
        SET @val='AdhocAdmin';
        RETURN @val;
    END
    IF IS_MEMBER('ReportUsers') = 1
    BEGIN
        SET @val='DailyExecReports';
        RETURN @val;
    END
    IF CONNECTIONPROPERTY('net_transport') = 'Shared memory' AND
       IS_MEMBER('NightlyAdmin') = 1
    BEGIN
        SET @val='NightlyMaintenanceTasks';
        RETURN @val;
    END
    --Anything unmatched is routed to the default workload group
    RETURN N'default'
END

After you have defined the classifier function, you need to bind it to Resource Governor. To do this, issue the following statement:

ALTER RESOURCE GOVERNOR
WITH (CLASSIFIER_FUNCTION = dbo.class_func_1);

Changes won’t be in effect until you issue the ALTER RESOURCE GOVERNOR RECONFIGURE statement, as follows:

ALTER RESOURCE GOVERNOR RECONFIGURE;
At this point, we have configured our resource pools and workload groups, and have applied a classifier function for use by Resource Governor. To show the real power of this feature, let’s create two connections. The first connection will be made by the SAP_Login login. Therefore, it will be part of the SAPUsers workload group and bound to the UserQueries resource pool. The second connection will be made by a system administrator who is using SQL Server Management Studio. Therefore, this connection will be part of the AdhocAdmin workload group and bound to the AdminQueries resource pool.

In order to easily see the effects of Resource Governor, you can look at the various Performance Monitor counters available.

Viewing Resource Governor Performance Monitor Counters

Both SQLServer:Resource Pool Stats and SQLServer:Workload Group Stats are new Performance Monitor performance objects in SQL Server 2008. Together, these objects provide enough information to monitor the various workloads and resource pools that you have defined.

SQLServer:Resource Pool Stats contains counters for both memory and CPU utilization. Examples include the amount of memory used by the resource pool, the system CPU usage by all requests in the resource pool, and the number of query memory grants per second occurring in the resource pool. SQLServer:Workload Group Stats contains CPU usage counters, as well as query-related information, such as the number of requests waiting in the queue due to Resource Governor limits.
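Similar numbers are also exposed through dynamic management views, so you can check pool consumption from a query window instead of Performance Monitor. This query is a sketch using the pool-level DMV:

```sql
-- Cumulative CPU usage per resource pool since statistics were last reset
SELECT name, statistics_start_time, total_cpu_usage_ms
FROM sys.dm_resource_governor_resource_pools;
```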

To demonstrate using these counters, let’s continue with our example of two resource pools and four workload groups. Launch Performance Monitor (either by typing perfmon at a command prompt or by navigating from the Control Panel to the Administrative Tools folder and selecting Performance). In Performance Monitor, right-click the graph and select Add Counter from the context menu. Select SQLServer:Resource Pool Stats from the Performance Object drop-down list, and add the “CPU usage %” counter for both the AdminQueries and UserQueries resource pools.

Resource Governor Performance Monitor counters

At this point, we are ready to apply a workload to our two connections. For lack of a better workload, let’s use the following script, whose purpose is to stress the CPU by selecting the version of SQL Server ten million times.

DECLARE @i int = 10000000;
DECLARE @s varchar(100);
WHILE @i > 0
BEGIN
    SELECT @s = @@version;
    SET @i = @i - 1;
END

If both the SAP_Login and sysadmin users run this script on their connections, our Performance Monitor results will look like the following figure.

Performance Monitor results for both connections with equal CPU utilization

Both of our connections are competing for resources, and they share essentially the same percentage of CPU utilization. You can also see that the lines are not perfectly smooth, because other services and applications are also competing for CPU time.

Now imagine that we are starting to receive calls from our SAP users saying that their queries are running more slowly. One thing we could do is restrict our AdminQueries resource pool to a 20 percent maximum CPU and allow our UserQueries pool up to an 80 percent maximum CPU. To do this, we issue the following statements:

ALTER RESOURCE POOL AdminQueries WITH (MAX_CPU_PERCENT = 20);
ALTER RESOURCE POOL UserQueries WITH (MAX_CPU_PERCENT = 80);
In order for the changes to become effective, we need to issue the ALTER RESOURCE GOVERNOR RECONFIGURE command as well:

ALTER RESOURCE GOVERNOR RECONFIGURE;
Now when we look at our Performance Monitor counters, we can see a distinct difference in the CPU utilization between the two workloads. The counters show the administrator workloads taking a significantly lower CPU utilization than the user queries, which is what we expected.

Performance Monitor results for both connections with adjusted CPU utilization

If you want to get a quick look at which sessions are in which workload groups, you can use one of the new Resource Governor-related dynamic management views. The following code will show this information:

SELECT s.session_id, s.login_name, s.program_name, s.group_id, g.name AS group_name
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_resource_governor_workload_groups AS g
    ON s.group_id = g.group_id
WHERE s.session_id > 50

The result of this query, run against a server with our Resource Governor configuration in place, lists each session along with the workload group into which it was classified.

Resource Governor will no doubt be an incredible asset to your organization. Until now, some third-party applications have assumed they own the SQL Server box and have tuned their queries with this in mind. Now system administrators have the ability to limit memory and CPU utilization, regardless of what the applications are trying to do. Handling these runaway query scenarios is what Resource Governor is all about.

All rights reserved © 2020 Wisdom IT Services India Pvt. Ltd