General Development - SQL Server 2008

In addition to the commands you use to define your DDL and DML, T-SQL supports commands to implement functional code in stored procedures, functions, triggers, and batches. Several changes and new features have been introduced for T-SQL commands:

  • Error handling: Structured error handling with TRY...CATCH in T-SQL code.
  • .WRITE extension to the UPDATE statement: Easy mechanism to support chunked updates to (max) datatypes: varchar(max), nvarchar(max), and varbinary(max).
  • EXECUTE: Extensions to EXECUTE to specify a server from which to execute the code.
  • Code security context: Extensions to procedure and function declarations to specify security context.
  • .NET declaration statements: Extensions to declare .NET assemblies for use in T-SQL.
  • T-SQL syntax enhancements: Improvements associated with declaring and setting variables within the same statement.
  • Table-valued parameters: Table structures can be passed as parameters to user-defined functions and stored procedures.

Error Handling

For as long as any T-SQL programmer can remember, error handling has been the weakest part of writing T-SQL. The story got far better starting with SQL Server 2005, which introduced the TRY...CATCH construct for rich error handling.

Before we show you the TRY...CATCH construct, we’ll establish how this would need to be done in SQL Server 2000. We’ll create the following two tables (again, if you’re following along, create these tables in your own database, or simply in tempdb):

CREATE SCHEMA Entertainment
CREATE TABLE TV
(
TVid int PRIMARY KEY,
location varchar(20),
diagonalWidth int
CONSTRAINT CKEntertainment_tv_checkWidth CHECK (diagonalWidth >= 30)
)
GO
CREATE TABLE dbo.error_log
(
tableName SYSNAME,
userName SYSNAME,
errorNumber int,
errorSeverity int,
errorState int,
errorMessage varchar(4000)
)
GO

In previous versions of SQL Server, logging and handling errors were ugly. You needed to query the @@error system variable to see if a runtime error had occurred, and then you could do what you wanted. Generally speaking, we like to include a facility for logging that an error occurs, like this:

CREATE PROCEDURE entertainment.tv$insert
(
@TVid int,
@location varchar(30),
@diagonalWidth int
)
AS
DECLARE @Error int
BEGIN TRANSACTION
--insert a row
INSERT entertainment.TV (TVid, location, diagonalWidth)
VALUES(@TVid, @location, @diagonalWidth)
--save @@ERROR so we don't lose it.
SET @Error=@@ERROR
IF @Error<>0
BEGIN
-- an error has occurred
GOTO ErrorHandler
END
COMMIT TRANSACTION
GOTO ExitProc
ErrorHandler:
-- roll back the transaction
ROLLBACK TRANSACTION
-- log the error into the error_log table
INSERT dbo.error_log (tableName, userName,
errorNumber, errorSeverity, errorState,
errorMessage)
VALUES('TV',suser_sname(),@Error,0,0,'We do not know the message!')
ExitProc:
GO

Now suppose we execute the procedure with an invalid parameter value disallowed by our CHECK constraint:

exec entertainment.tv$insert @TVid = 1, @location = 'Bed Room', @diagonalWidth = 29

Since our table has a CHECK constraint making sure that the diagonalWidth column is 30 or greater, this returns the following:

Msg 547, Level 16, State 0, Procedure tv$insert, Line 13
The INSERT statement conflicted with the CHECK constraint
"CKEntertainment_tv_checkWidth".
The conflict occurred in database "AdventureWorks"
,table "TV", column 'diagonalWidth'.
The statement has been terminated.

Checking the error log table, we see that the error was logged, though the information is somewhat useless:

SELECT * FROM dbo.error_log

Error handling quickly becomes a rather large percentage of the code, with repetitive blocks used to check for an error after each statement. Even worse, we could not stop this message from being sent to the client. So the burden of deciding what went wrong was placed on the client, based on parsing these error messages. Needless to say, error handling in SQL Server 2000 and earlier was a real pain, and this often led to applications not using CHECK constraints.

TRY...CATCH lets us build error handling at the level we need, in the way we need to, by setting a region where if any error occurs, it will break out of the region and head to an error handler. The basic structure is as follows:

BEGIN TRY
<code>
END TRY
BEGIN CATCH
<code>
END CATCH

So if any error occurs in the TRY block, execution is diverted to the CATCH block, and the error can be dealt with. For example, take a look at the following simple example:

BEGIN TRY
RAISERROR ('Something is amiss',16,1)
END TRY
BEGIN CATCH
SELECT ERROR_NUMBER() AS ERROR_NUMBER,
ERROR_SEVERITY() AS ERROR_SEVERITY,
ERROR_STATE() AS ERROR_STATE,
ERROR_MESSAGE() AS ERROR_MESSAGE
END CATCH

In the TRY block, all we are doing is raising an error. Running this, we get a result set containing the error number, severity, state, and message. Notice that when you execute this, the error itself is never returned to the client as an error message; the CATCH block has consumed it.
Now let’s look at a more detailed and interesting example. First, we clear the tables we have built for our examples:

DELETE FROM entertainment.TV --in case you have added rows
DELETE FROM dbo.error_log

Next, we recode the procedure to employ TRY and CATCH blocks. Far less code is required, and it is much clearer what is going on.

ALTER PROCEDURE entertainment.tv$insert
(
@TVid int,
@location varchar(30),
@diagonalWidth int
)
AS
SET NOCOUNT ON
BEGIN TRY
BEGIN TRANSACTION
INSERT TV (TVid, location, diagonalWidth)
VALUES(@TVid, @location, @diagonalWidth)
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
INSERT dbo.error_log (tableName, userName,
errorNumber, errorSeverity, errorState,
errorMessage)
VALUES('TV',suser_sname(),ERROR_NUMBER(),
ERROR_SEVERITY(), ERROR_STATE(), ERROR_MESSAGE())
RAISERROR ('Error creating new TV row',16,1)
END CATCH

Execution goes into the TRY block, starts a transaction, and then creates rows in our table. If it fails, we fall into the CATCH block where the error is sent to the log procedure as it was in the previous example. The difference is that we get access to the error information, so we can insert meaningful information, rather than only the error number.

Now execute the procedure and check the error log table.

exec entertainment.tv$insert @TVid = 1, @location = 'Bed Room',
@diagonalWidth = 29
GO
SELECT * FROM dbo.error_log
GO

This returns the error message we created:

Msg 50000, Level 16, State 1, Procedure tv$insert, Line 18
Error creating new TV row

And from the SELECT against the error log, we get the row we logged, complete with the full error message.

So we can avoid showing the “ugly” error message and try to give a better one. It is not perfect, but it is leaps and bounds above what we had. The main limitation is that we will need to do some messy work to translate the constraint name into a usable message. But at least the raw message was not sent to the user.
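One way to do that translation work is to inspect ERROR_MESSAGE() for known constraint names inside the CATCH block. The following is a minimal sketch (the friendly message text is ours) of how the CATCH block of tv$insert could be written:

```sql
BEGIN CATCH
ROLLBACK TRANSACTION
INSERT dbo.error_log (tableName, userName,
errorNumber, errorSeverity, errorState,
errorMessage)
VALUES('TV',suser_sname(),ERROR_NUMBER(),
ERROR_SEVERITY(), ERROR_STATE(), ERROR_MESSAGE())
--translate known constraint names into friendlier messages
IF ERROR_MESSAGE() LIKE '%CKEntertainment_tv_checkWidth%'
    RAISERROR ('A TV must have a diagonalWidth of at least 30',16,1)
ELSE
    RAISERROR ('Error creating new TV row',16,1)
END CATCH
```

This mapping approach is only one option; for many constraints, a lookup table keyed on constraint name may be easier to maintain.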

But this is not all. TRY...CATCH blocks can be nested to give you powerful error-handling capabilities when nesting calls. For example, say we create the following procedure:

CREATE PROCEDURE dbo.raise_an_error
AS
BEGIN
BEGIN TRY
raiserror ('Boom, boom, boom, boom',16,1)
END TRY
BEGIN CATCH --just catch it, return it as a select,
--and raise another error
SELECT ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState, ERROR_LINE() AS ErrorLine,
ERROR_PROCEDURE() AS ErrorProcedure,
ERROR_MESSAGE() AS ErrorMessage
RAISERROR ('Error in procedure raise_an_error',16,1)
END CATCH
END
GO

So all this procedure will do is raise an error, causing our CATCH block to start, select the error as a result set, and then reraise the error. This reraising of the error causes there to be a single point of impact for error handling. You can decide what to do with it when you call the procedure. For example, consider the following batch that we will use to call this procedure:

SET NOCOUNT ON
BEGIN TRY
EXEC raise_an_error --@no_parm = 1 (we will uncomment this for a test)
SELECT 'I am never getting here'
END TRY
BEGIN CATCH
SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState, ERROR_LINE() AS ErrorLine,
CAST(ERROR_PROCEDURE() AS varchar(30)) AS ErrorProcedure,
CAST(ERROR_MESSAGE() AS varchar(40)) AS ErrorMessage
END CATCH

Running this simply causes an error to be raised by the subordinate procedure. We get two result sets: first, the error details selected inside the CATCH block of raise_an_error, and then the details of the reraised error, 'Error in procedure raise_an_error', as caught by the outer CATCH block.
Uncomment the @no_parm = 1 bit from the statement, and you will see that the error is trapped, and the message 'Procedure raise_an_error has no parameters and arguments were supplied.' is returned as a result set.

If you want to ignore errors altogether, you can include an empty CATCH block:

SET NOCOUNT ON
BEGIN TRY
exec raise_an_error @no_parm = 1
select 'hi'
END TRY
BEGIN CATCH
END CATCH

You can also see that, in all cases, the code never executes the select 'hi' statement. There is no RESUME type of action in the TRY...CATCH way of handling errors.
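While there is no RESUME, you can approximate one by giving each statement its own TRY...CATCH pair, so a failure in one statement does not prevent the next from running. A minimal sketch, reusing our raise_an_error procedure:

```sql
SET NOCOUNT ON
BEGIN TRY
exec raise_an_error @no_parm = 1
END TRY
BEGIN CATCH
--swallow this error and keep going
END CATCH
BEGIN TRY
select 'hi' --now this statement does execute
END TRY
BEGIN CATCH
END CATCH
```

The cost is verbosity, which is exactly the repetitive-block problem the single TRY...CATCH region was designed to avoid, so use this pattern only where continuing past a failure is genuinely the right behavior.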

While there is an error raised because of the invalid parameter, it is not visible to the caller. So it is incredibly important that you make certain that a CATCH handler is included, unless you really don’t want the error raised.

The one type of error that will not be handled by the TRY...CATCH mechanism is a syntax error. Consider the following:

SET NOCOUNT ON
BEGIN TRY
EXEEC procedure --error here is on purpose!
END TRY
BEGIN CATCH
END CATCH

This returns the following:

Msg 102, Level 15, State 1, Line 3
Incorrect syntax near 'exeec'.

The only case where TRY...CATCH captures syntax errors is when used in an EXECUTE ('<SQL code>') situation. Suppose you execute the following:

SET NOCOUNT ON
BEGIN TRY
EXEC ('seeelect *')
END TRY
BEGIN CATCH
SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState, ERROR_LINE() AS ErrorLine,
cast(ERROR_PROCEDURE() AS varchar(60)) AS ErrorProcedure,
cast(ERROR_MESSAGE() AS varchar(550)) AS ErrorMessage
END CATCH

In this case, an error will be returned via the SELECT statement in the CATCH block.

One of the limitations that you will need to deal with is when you are doing several operations in the same batch. For example, consider our tv$insert procedure. Instead of inserting a single row, let’s say we are going to insert two rows:

...
BEGIN TRANSACTION
INSERT TV (TVid, location, diagonalWidth)
VALUES(@TVid, @location, @diagonalWidth)
--second insert:
INSERT TV (TVid, location, diagonalWidth)
VALUES(@TVid, @location, @diagonalWidth / 2 )
COMMIT TRANSACTION
...

How would we tell the two inserts apart if one of them had an error? It would not be possible, as either statement could break the rules of the TV table’s CHECK constraint. In this case, one possible way to deal with this would be a custom error message value. So you might do something like this:

ALTER PROCEDURE entertainment.tv$insert
(
@TVid int,
@location varchar(30),
@diagonalWidth int
)
AS
SET NOCOUNT ON
DECLARE @errorMessage varchar(2000)
BEGIN TRY
BEGIN TRANSACTION
SET @errorMessage = 'Error inserting TV with diagonalWidth / 1'
INSERT TV (TVid, location, diagonalWidth)
VALUES(@TVid, @location, @diagonalWidth)
--second insert:
SET @errorMessage = 'Error inserting TV with diagonalWidth / 2'
INSERT TV (TVid, location, diagonalWidth)
VALUES(@TVid, @location, @diagonalWidth / 2 )
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
INSERT dbo.error_log VALUES('TV',suser_sname(),
ERROR_NUMBER(),ERROR_SEVERITY(),
ERROR_STATE(), ERROR_MESSAGE())
RAISERROR (@errorMessage,16,1)
END CATCH
GO

Now we can execute it:

exec entertainment.tv$insert @TVid = 10, @location = 'Bed Room',
@diagonalWidth = 30

This returns the following:

Msg 50000, Level 16, State 1, Procedure tv$insert, Line 28
Error inserting TV with diagonalWidth / 1

Let’s try it with a number big enough to satisfy it, but not when it is divided by two:

exec entertainment.tv$insert @TVid = 11, @location = 'Bed Room',
@diagonalWidth = 60

This returns the following:

Msg 50000, Level 16, State 1, Procedure tv$insert, Line 28
Error inserting TV with diagonalWidth / 2

The key here (other than we really like to watch TV and don’t like small ones) is to make sure to give your CATCH block enough information to raise a useful error. Otherwise the error messages you may produce using the new error handling will not be all that much better than what we had before.

Error handling in SQL Server is vastly improved over previous versions, but it is going to take a big mindshift to get us there. Once you start blocking errors that client applications have come to expect, you may break code by trying to fix it. So careful study and some reengineering will likely be in order to properly start using the new error-handling capabilities.

.WRITE Extension to the UPDATE Statement

In older versions of SQL Server, modifying the data in text and image columns was a real beast using T-SQL code. There were arcane READTEXT and WRITETEXT commands to do “chunked” reads and writes (just reading and writing part of a value to save the resources of fetching huge amounts of data). Starting with SQL Server 2005, the use of the text and image types is being deprecated for the new (max) datatypes: varchar(max), nvarchar(max), and varbinary(max). text and image are still available, but their use should be phased out in favor of the far better (max) types.

For the most part, you can treat the (max) datatypes just like their 8,000-byte-or-less counterparts, but if you are dealing with very large values, this may not be desirable. Each of the (max) types can store up to 2GB in a single column. Imagine having to fetch such a value to the client, make a two-character change, and then send the entire value back in an UPDATE statement. RAM is cheap, but not cheap enough to put 4GB on all of your machines. So we can do a “chunked” update of the data in the row using a new extension to the UPDATE statement. (Note that you can do chunked reads simply by using the substring function.)

As an example, consider the following simple table with a varchar(max) column:

CREATE TABLE testBIGtext
(
testBIGtextId int PRIMARY KEY,
value varchar(max)
)

Now we create a new value simply as an empty string:

INSERT INTO testBIGtext
VALUES(1,'')

Next, we just build a loop and, using .WRITE, we put some text into the value at an offset for some length. Note that the offset must be less than or equal to the current length of the value in the column. The syntax is shown here:

UPDATE <tableName>
SET <(max)columnName>.WRITE(<scalar value>, <offset in column>, <# of bytes>)
WHERE ...

Then we just start a loop and write 1,000 of each letter of the alphabet into the value column:

DECLARE @offset int
SET @offset = 0
WHILE @offset < 26
BEGIN
UPDATE testBIGtext
--write 1,000 copies of each letter of the alphabet, starting with
--char(97) = 'a', then char(98) = 'b', and so on; @offset * 1000
--positions each chunk after the previous one, building the pattern
--aaa...bbb...zzz with 1,000 of each letter
SET value.WRITE(replicate(char(97 + @offset),1000), @offset*1000, 1000)
WHERE testBIGTextId = 1
SET @offset = @offset + 1
END

Everything else is just plain SQL. Run the following to make sure our data is in there:

SELECT testBIGtextId, len(value) AS CharLength
FROM testBIGtext

This returns a single row with a CharLength of 26000: 26 letters times 1,000 characters each.

This is a tremendous win for SQL programmers. You can easily work with long datatypes using normal functions, plus there’s a chunking mechanism, so that when you have a column holding a couple hundred megabytes of information, you don’t need to replace the whole thing in one operation.
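For completeness, here is a sketch of the chunked-read side mentioned earlier, using SUBSTRING against the same testBIGtext table (the 8,000-character chunk size is an arbitrary choice):

```sql
DECLARE @offset int, @total int
SET @offset = 0
SELECT @total = LEN(value) FROM testBIGtext WHERE testBIGtextId = 1
WHILE @offset < @total
BEGIN
--fetch the next chunk without pulling the whole column to the client
SELECT SUBSTRING(value, @offset + 1, 8000) AS chunk
FROM testBIGtext
WHERE testBIGtextId = 1
SET @offset = @offset + 8000
END
```

Each iteration returns one chunk as its own result set, so a client can stream the value without ever holding more than one chunk in memory.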

EXECUTE

The EXECUTE command in previous versions of SQL Server could be used only to execute SQL on the same server. Starting with SQL Server 2005, EXECUTE has an AT clause to specify a linked server on which to execute the code.

To see this in action, let’s set up our local server as a remote linked server. We will create a linked server using sp_addlinkedserver, call it LocalLinkedServer, and point this to our instance of SQL Server:

--note: if you are not running SQL Server as the default instance, you may
--have to change where we have specified localhost to point to your server instance
EXECUTE sp_addlinkedserver @server= 'LocalLinkedServer', @srvproduct='',
@provider='SQLOLEDB', @datasrc='localhost'
--enable the linked server to allow remote procedure calls
EXECUTE sp_serveroption 'LocalLinkedServer','RPC OUT',True

Now we can execute our T-SQL on the linked server by specifying AT and the linked server name:

EXECUTE('SELECT * FROM AdventureWorks.Production.Culture') AT LocalLinkedServer

The query is executed on the linked server and the results returned. The AT parameter applies only to using EXECUTE on batches of statements, not on explicit stored procedure or function calls.

You can then use sp_dropserver to get rid of the linked server:

EXECUTE sp_dropserver LocalLinkedServer

For completeness, we need to mention that a method of executing a batch of commands on another SQL Server already exists: using sp_executesql. It has the added benefit of allowing for parameters and return values. This procedure can also be called remotely against a linked server.
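As a sketch of a remote sp_executesql call (assuming the LocalLinkedServer linked server created above, with RPC OUT enabled), a parameterized batch can be run via a four-part name:

```sql
EXECUTE LocalLinkedServer.master.dbo.sp_executesql
    N'SELECT name FROM sys.databases WHERE database_id = @id',
    N'@id int',
    @id = 1
```

Unlike EXECUTE ... AT, this form lets you pass strongly typed parameters to the remote batch rather than concatenating values into the SQL string.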

Code Security Context

The EXECUTE AS clause on a procedure or function declaration allows you to define the security context in which a stored procedure or function (other than in-line table-valued functions) is executed. Without this clause, the object executes in the security context of the CALLER. Note that this does not affect the execution of the procedure unless there is a break in the ownership chain. Any object owned by the creator of the procedure will be available to the user.

The syntax of this clause is as follows:

CREATE PROCEDURE <procedureName>
[parameters]
WITH EXECUTE AS <option>
AS
<Procedure definition>

It is the same syntax when used for functions. There are four possible values for the EXECUTE AS option:

  • EXECUTE AS CALLER (the default)
  • EXECUTE AS SELF
  • EXECUTE AS OWNER
  • EXECUTE AS USER=<username>

You can also use one of these options with a stand-alone EXECUTE AS statement to change the security context within a procedure or session. One additional statement is included: REVERT, which goes back to the context set in the WITH clause of the procedure declaration (or, when used stand-alone, to the previous context).

As an example, we’re going to create a user named mainOwner, along with other users, and then procedures that use the EXECUTE AS option, to show the basics of EXECUTE AS.

We’ll start by creating several users, tables, and procedures:

--this user will be the owner of the primary schema
CREATE LOGIN mainOwner WITH PASSWORD = 'mainOwnery'
CREATE USER mainOwner FOR LOGIN mainOwner
GRANT CREATE SCHEMA TO mainOwner
GRANT CREATE TABLE TO mainOwner
--this will be the procedure creator
CREATE LOGIN secondaryOwner WITH PASSWORD = 'secondaryOwnery'
CREATE USER secondaryOwner FOR LOGIN secondaryOwner
GRANT CREATE SCHEMA to secondaryOwner
GRANT CREATE PROCEDURE TO secondaryOwner
GRANT CREATE TABLE TO secondaryOwner
--this will be the average user who needs to access data
CREATE LOGIN aveSchlub WITH PASSWORD = 'aveSchluby'
CREATE USER aveSchlub FOR LOGIN aveSchlub

Then we change to the context of the main object owner, create a new schema, and create a table with some rows:

EXECUTE AS USER='mainOwner'
GO
CREATE SCHEMA mainOwnersSchema
GO
CREATE TABLE mainOwnersSchema.person
(
personId int constraint PKtestAccess_person primary key,
firstName varchar(20),
lastName varchar(20)
)
GO
INSERT INTO mainOwnersSchema.person
VALUES (1, 'Paul','McCartney')
INSERT INTO mainOwnersSchema.person
VALUES (2, 'Pete','Townsend')
GO

Next, this user gives SELECT permission to the secondaryOwner user:

GRANT SELECT on mainOwnersSchema.person to secondaryOwner

Then we set the context to the secondary user to create the procedure:

REVERT --we can step back on the stack of principals,
--but we can't change directly to secondaryOwner
GO
EXECUTE AS USER = 'secondaryOwner'
GO

Then we create a schema and another table:

CREATE SCHEMA secondaryOwnerSchema
GO
CREATE TABLE secondaryOwnerSchema.otherPerson
(
personId int constraint PKtestAccess_person primary key,
firstName varchar(20),
lastName varchar(20)
)
GO
INSERT INTO secondaryOwnerSchema.otherPerson
VALUES (1, 'Rocky','Racoon')
INSERT INTO secondaryOwnerSchema.otherPerson
VALUES (2, 'Sally','Simpson')
GO

Next, we create two procedures as the secondaryOwner user: one WITH EXECUTE AS CALLER, which is the default, and one WITH EXECUTE AS SELF, which puts it in the context of the creator, in this case secondaryOwner:

CREATE PROCEDURE secondaryOwnerSchema.person$asCaller
WITH EXECUTE AS CALLER --this is the default
AS
SELECT personId, firstName, lastName
FROM secondaryOwnerSchema.otherPerson --<-- ownership same as proc
SELECT personId, firstName, lastName
FROM mainOwnersSchema.person --<-- breaks ownership chain
GO
CREATE PROCEDURE secondaryOwnerSchema.person$asSelf
WITH EXECUTE AS SELF --now this runs in context of secondaryOwner,
--since it created it
AS
SELECT personId, firstName, lastName
FROM secondaryOwnerSchema.otherPerson --<-- ownership same as proc
SELECT personId, firstName, lastName
FROM mainOwnersSchema.person --<-- breaks ownership chain
GO

Next, we grant rights on the procedure to the aveSchlub user:

GRANT EXECUTE ON secondaryOwnerSchema.person$asCaller to aveSchlub
GRANT EXECUTE ON secondaryOwnerSchema.person$asSelf to aveSchlub

Then we change to the context of aveSchlub:

REVERT
GO
EXECUTE AS USER = 'aveSchlub'
GO

and execute the procedure:

--this proc is in context of the caller, in this case, aveSchlub
EXECUTE secondaryOwnerSchema.person$asCaller

The first result set, from secondaryOwnerSchema.otherPerson, contains the rows, because the procedure and the table have the same owner, keeping the ownership chain intact. The second SELECT, from mainOwnersSchema.person, fails with a permission error: the ownership chain is broken there, and the caller, aveSchlub, has no rights on that table.

Next, we execute the asSelf variant:

--secondaryOwner, so it works
EXECUTE secondaryOwnerSchema.person$asSelf

This time, both SELECT statements return their rows. What makes this different is that when the ownership chain is broken, the security context is that of secondaryOwner, not that of the caller, aveSchlub. This is a really cool feature, as we can now give users temporary rights that will not even be apparent to them, without granting them any permissions directly.

It is not, however, a feature that should be overused, as it would be all too easy to just build every procedure in the context of the database owner. One nice side effect is that we can use it instead of cross-database ownership chaining, by setting EXECUTE AS to a user who can access the other database directly: the impersonated user has rights to that database even though the executing user does not.

.NET Declarations

.NET is tightly integrated with SQL Server. In this section, we are simply going to cover the commands to make assemblies available for use in T-SQL code.

Assembly Maintenance

Prior to using .NET assemblies in T-SQL code, you must declare and load them from the dynamic link library (DLL) file. CREATE ASSEMBLY loads a managed assembly into SQL Server so that common language runtime (CLR) database objects (discussed in the next section) can be created from it.

The syntax of the CREATE ASSEMBLY command is as follows:

CREATE ASSEMBLY <assemblyName> FROM <assemblyLocation>

Here is an example:

CREATE ASSEMBLY dateObject FROM 'C:\projects\bin\Debug\setDate.dll'

After loading, the assembly can be subsequently removed using the DROP ASSEMBLY command:

DROP ASSEMBLY dateObject

An assembly cannot be dropped if any CLR database objects reference it. These references can be seen in Management Studio by right-clicking the assembly.
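These references can also be found by querying the catalog views; the following sketch lists the CLR database objects built from our dateObject assembly:

```sql
--list the CLR modules that reference a given assembly
SELECT a.name AS assembly_name, o.name AS object_name, o.type_desc
FROM sys.assemblies a
JOIN sys.assembly_modules am ON am.assembly_id = a.assembly_id
JOIN sys.objects o ON o.object_id = am.object_id
WHERE a.name = 'dateObject'
```

If this query returns rows, DROP ASSEMBLY will fail until those objects are dropped first.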

CLR Database Objects

Once an assembly has been compiled into a DLL and created as a database object with CREATE ASSEMBLY, it can be used to declare objects, including stored procedures, functions, triggers, user-defined types, and user-defined aggregate functions. The syntax of CREATE and ALTER for these T-SQL objects also includes the ability to reference CLR objects instead of T-SQL. The following is the basic syntax for these CLR extensions:

[CREATE][ALTER] DBOBJECTTYPE ([parameters])
AS EXTERNAL NAME assembly_name.class_name.method_name

For example, to create a procedure that points to a .NET assembly that has been created as an object on the SQL Server—in this case, a fictitious one called getDateValueString—we could build the following procedure:

CREATE PROCEDURE dateString
(
@dateValue datetime output
) AS EXTERNAL NAME dateObject.utf8string.getDateValueString

The following commands are affected by this change:

  • CREATE/ALTER PROCEDURE
  • CREATE/ALTER FUNCTION
  • CREATE/ALTER TRIGGER
  • CREATE/ALTER TYPE
  • CREATE/ALTER AGGREGATE

This has been just a very brief introduction to the new SQL Server commands that revolve around .NET integration.

Declaring and Setting Variables

How many times have you wanted to declare and set a variable all in the same statement? Developers who have worked with T-SQL may have been mystified as to why they could never issue a statement like this:

DECLARE @iCounter int = 0;

Well, ladies and gentlemen, in SQL Server 2008, we can now finally declare and set variables within the same statement. A small feature, no doubt, but a very useful one indeed.

In addition to initializing the variable, improvements have also been made to perform arithmetic operations on the variables themselves:

INSERT dbo.Counters
VALUES (@iCounter+1);

The value in Counters would be 1 after executing this statement, since @iCounter was initialized to 0. You can also use the +=, -=, *=, and /= compound assignment operators, which work on columns as well as variables. Let’s use the new += operator, as follows:

UPDATE dbo.Counters
SET Counter+=1;

The result of Counters would now be 2.
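Putting declaration-time initialization and the compound assignment operators together on a single variable, we have the following (the values in the comments assume the statements run in order):

```sql
DECLARE @i int = 10
SET @i += 5  --@i is now 15
SET @i -= 3  --@i is now 12
SET @i *= 2  --@i is now 24
SET @i /= 4  --@i is now 6
SELECT @i
```

Note that /= performs integer division here, since @i is an int.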

Passing Table-Valued Parameters

In earlier versions of SQL Server, stored procedures and user-defined functions allowed only scalar parameters to be passed. For most cases, this was acceptable. But in certain circumstances, this limitation led to the same stored procedure or function being called multiple times to perform a single task.

Consider the scenario where a customer is submitting an order on a web site. The order may be for multiple products. When the order is submitted, a server-side stored procedure is used to process the shopping cart list. Before SQL Server 2008, this stored procedure would accept the order items individually, and performance was slow. In SQL Server 2008, developers can now pass a table as a parameter to stored procedures and user-defined functions. In our scenario, this would allow the entire shopping cart to be passed to the stored procedure for processing in one trip, as opposed to potentially many round-trips to the server.

To be fair to pre-2008 database application designs, you could save on some round-trips by creating some sort of order item array or a special proprietary BLOB and pass that as a single parameter, but in the end, you would end up with a lot of complex code to maintain. SQL Server 2008 is able to easily and efficiently pass table structures to stored procedures and user-defined functions, without all this complex code to maintain.

Now that you are sold on the value of table-valued parameters, it’s time to explain how to create and use them. Table-valued parameters are defined via a new user-defined table type. Let’s continue our shopping cart scenario and assume that we have a stored procedure that will simply update the inventory at the warehouse given a specific order. Instead of calling this stored procedure for every single order item, we want to leverage table-valued parameters and pass a table containing all the products in the shopping cart. We need to first create a Products table that will contain three products and the number remaining in inventory, as follows:

CREATE TABLE Inventory
(product_id int PRIMARY KEY,
product_name varchar(50) NOT NULL,
quantity int DEFAULT(0))
GO
INSERT INTO Inventory VALUES (1,'Meat flavored dog shampoo',150)
INSERT INTO Inventory VALUES (2,'Generic canine nail clippers',261)
INSERT INTO Inventory VALUES (3,'Canine paw scissors',89)
GO

We should now create a user-defined table type that describes the table we wish to pass to our stored procedure:

CREATE TYPE OrderType AS
TABLE(order_item_id int PRIMARY KEY,
product_id int NOT NULL,
quantity int NOT NULL CHECK (quantity >= 1),
gift_wrap BIT DEFAULT (0))

Looking at the type-creation code, you may notice that you can have most of the constraints that you could otherwise use within a regular CREATE TABLE definition. The most significant exception is foreign keys, which are not allowed in user-defined table types.

It is important to note a few restrictions on user-defined table types:

• They can be used only as input parameters to user-defined functions and stored procedures; they cannot be used as OUTPUT parameters.
• They cannot be defined within the body of a user-defined function or stored procedure; they must be defined independently.

In SQL Server 2008, table-valued parameters are read-only within the stored procedure or user-defined function. Outside these modules, you can treat variables of a table type like any other table, and INSERT, UPDATE, and DELETE as needed.
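To see the read-only restriction in action, the following sketch (the procedure name is ours) attempts to modify a table-valued parameter inside a procedure; SQL Server rejects it with an error stating that the parameter is READONLY and cannot be modified:

```sql
CREATE PROCEDURE TryToModifyOrder (@myOrder OrderType READONLY)
AS
BEGIN
--this UPDATE fails, because @myOrder is a read-only
--table-valued parameter
UPDATE @myOrder SET quantity = 0
END
```

If you need to change the data, copy it into a local table variable or temporary table first and modify the copy.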

At this point in our example, we are ready to create the UpdateInventory stored procedure:

CREATE PROCEDURE UpdateInventory (@myOrder OrderType READONLY)
AS
BEGIN
UPDATE Inventory
SET Inventory.quantity-=mo.quantity
FROM Inventory
INNER JOIN @myOrder mo
ON (mo.product_id = Inventory.product_id)
END

Notice that the READONLY keyword is used for the @myOrder parameter. The READONLY keyword is required for table-valued parameters in SQL Server 2008, reflecting the fact that they cannot be modified inside the module. Within the code of the stored procedure, we are treating @myOrder like any other table. This procedure simply updates the inventory of each product by decrementing it by the quantity ordered. Finally, to test our procedure, we can do the following:

DECLARE @myOrder OrderType
INSERT INTO @myOrder VALUES(1,1,10,0)
INSERT INTO @myOrder VALUES(2,3,5,0)
EXEC UpdateInventory @myOrder

Before running the UpdateInventory stored procedure, the Inventory table shows quantities of 150, 261, and 89 for the three products. Afterwards, it shows 140, 261, and 84. The meat-flavored dog shampoo has decreased by ten units, and the canine paw scissors by five, as these were the quantities defined in our order.

User-defined table types are fully supported in ADO.NET 3.0 via a new parameter type called SqlDbType.Structured. They are also supported in the ODBC and OLE DB client stacks via a new parameter type called SQL_SS_TABLE. When clients pass table variables to SQL Server, the content gets passed by value. When table variables are used within SQL Server, they are passed by reference for maximum performance gain.

For those of you who have a lot of experience with SQL Server, you may be wondering why you couldn’t just use BULK INSERT instead of going through all this code. In some circumstances, it may be better to use BULK INSERT. Generally, BULK INSERT is better when your mission is purely to insert data, especially from a file. If you want to use joins or apply special business logic, you should consider leveraging table-valued parameters within your database application.

