In addition to the commands you use to define your DDL and DML, T-SQL supports commands to implement functional code in stored procedures, functions, triggers, and batches. Several changes and new features have been introduced for T-SQL commands.
For as long as any T-SQL programmer can remember, error handling has been the weakest part of writing T-SQL. The story got far better starting with SQL Server 2005, which introduced TRY...CATCH constructs for rich error handling.
Before we show you the TRY...CATCH construct, we’ll establish how this would need to be done in SQL Server 2000. We’ll create the following two tables (again, if you’re following along, create these tables in your own database, or simply in tempdb):
CREATE SCHEMA Entertainment
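The original listing did not survive in full; a minimal reconstruction of the two tables might look like the following (the CHECK constraint on diagonalWidth and the dbo.error_log table are referenced later in the text; the remaining column names are illustrative assumptions):

```sql
CREATE SCHEMA Entertainment
GO
CREATE TABLE Entertainment.TV
(
    TVid int PRIMARY KEY,
    location varchar(30),
    diagonalWidth int
        --the constraint that the later examples will violate
        CONSTRAINT CHKEntertainment_TV_diagonalWidth CHECK (diagonalWidth >= 30)
)
GO
--a simple table for logging errors raised in our procedures
CREATE TABLE dbo.error_log
(
    tableName sysname,
    userName sysname,
    errorNumber int,
    errorSeverity int,
    errorState int,
    errorMessage varchar(4000),
    errorDate datetime DEFAULT getdate()
)
```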
In previous versions of SQL Server, logging and handling errors were ugly. You needed to query the @@error system function to see if a runtime error had occurred, and then you could do what you wanted. Generally speaking, we like to include a facility for logging that an error occurred, like this:
CREATE PROCEDURE entertainment.tv$insert
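The rest of this listing is gone; a sketch of the SQL Server 2000 pattern, checking @@error after each statement, might look like this (parameter names follow the calls shown later; the logging columns are assumptions):

```sql
CREATE PROCEDURE entertainment.tv$insert
(
    @TVid int,
    @location varchar(30),
    @diagonalWidth int
)
AS
DECLARE @errorNumber int
BEGIN TRANSACTION
INSERT INTO Entertainment.TV (TVid, location, diagonalWidth)
VALUES (@TVid, @location, @diagonalWidth)
SET @errorNumber = @@error
IF @errorNumber <> 0
BEGIN
    ROLLBACK TRANSACTION
    --all we know at this point is the error number
    INSERT INTO dbo.error_log (tableName, userName, errorNumber,
                               errorSeverity, errorState, errorMessage)
    VALUES ('entertainment.tv', suser_sname(), @errorNumber, 0, 0,
            'An error occurred')
    RETURN -100
END
COMMIT TRANSACTION
```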
Now suppose we execute the procedure with an invalid parameter value disallowed by our CHECK constraint:
exec entertainment.tv$insert @TVid = 1, @location = 'Bed Room', @diagonalWidth = 29
Since our table has a CHECK constraint making sure that the diagonalWidth column is 30 or greater, this returns the following:
Msg 547, Level 16, State 0, Procedure tv$insert, Line 13
Checking the error log table, we see that the error was logged, though the information is somewhat useless:
SELECT * FROM dbo.error_log
Error handling quickly becomes a rather large percentage of the code, with repetitive blocks used to check for an error. Even worse, we could not stop this message from being sent to the client, so the burden of deciding what went wrong was placed on the client, based on these error messages. Needless to say, error handling in SQL Server 2000 and earlier was a real pain, and this often led to applications not using CHECK constraints.
TRY...CATCH lets us build error handling at the level we need, in the way we need to, by setting a region where if any error occurs, it will break out of the region and head to an error handler. The basic structure is as follows:
BEGIN TRY
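The skeleton referenced above is:

```sql
BEGIN TRY
    --statements to attempt
END TRY
BEGIN CATCH
    --error handler; control jumps here when any trappable error
    --occurs in the TRY block
END CATCH
```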
So if any error occurs in the TRY block, execution is diverted to the CATCH block, and the error can be dealt with. For example, take a look at the following simple example:
BEGIN TRY
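The listing itself was lost; a simple example along the lines the text describes, raising an error inside the TRY block (the message text is illustrative), could be:

```sql
BEGIN TRY
    RAISERROR ('Something bad happened', 16, 1)
    SELECT 'hi'  --this statement is never reached
END TRY
BEGIN CATCH
    --report the trapped error back as a result set
    SELECT ERROR_NUMBER() AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage
END CATCH
```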
In the TRY block, all we are doing is raising an error. Running this, we get the following:
Notice when you execute this, you never see the following:
Now let’s look at a more detailed and interesting example. First, we clear the tables we have built for our examples:
Next, we recode the procedure to employ TRY and CATCH blocks. Far less code is required, and it is much clearer what is going on.
ALTER PROCEDURE entertainment.tv$insert
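The reworked procedure is missing from the text; a sketch of a TRY...CATCH version might look like this (the original sent the error to a logging procedure; to stay self-contained, this sketch logs with a direct INSERT into dbo.error_log, whose columns are assumptions):

```sql
ALTER PROCEDURE entertainment.tv$insert
(
    @TVid int,
    @location varchar(30),
    @diagonalWidth int
)
AS
SET NOCOUNT ON
BEGIN TRY
    BEGIN TRANSACTION
    INSERT INTO Entertainment.TV (TVid, location, diagonalWidth)
    VALUES (@TVid, @location, @diagonalWidth)
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION
    --this time the full error context is available to log
    INSERT INTO dbo.error_log (tableName, userName, errorNumber,
                               errorSeverity, errorState, errorMessage)
    VALUES ('entertainment.tv', suser_sname(), ERROR_NUMBER(),
            ERROR_SEVERITY(), ERROR_STATE(), ERROR_MESSAGE())
    --raise our own, friendlier message to the caller
    RAISERROR ('Error creating new TV row', 16, 1)
END CATCH
```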
Execution goes into the TRY block, starts a transaction, and then creates rows in our table. If it fails, we fall into the CATCH block where the error is sent to the log procedure as it was in the previous example. The difference is that we get access to the error information, so we can insert meaningful information, rather than only the error number.
Now execute the procedure and check the error log table.
exec entertainment.tv$insert @TVid = 1, @location = 'Bed Room',
This returns the error message we created:
Msg 50000, Level 16, State 1, Procedure tv$insert, Line 18
And from the SELECT from the error log, we get the full error message:
So we can avoid showing the “ugly” error message and try to give a better message. It is not perfect, but it is leaps and bounds above what we had. The main limitation is that we will need to do some messy work to translate that constraint to a usable message. But at least this message was not sent to the user.
But this is not all. TRY...CATCH blocks can be nested to give you powerful error-handling capabilities when nesting calls. For example, say we create the following procedure:
CREATE PROCEDURE dbo.raise_an_error
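The body of this procedure was not preserved; based on the description that follows, it might look something like this (the message text is an assumption):

```sql
CREATE PROCEDURE dbo.raise_an_error
AS
BEGIN
    BEGIN TRY
        RAISERROR ('An error occurred in raise_an_error', 16, 1)
    END TRY
    BEGIN CATCH
        --return the error as a result set...
        SELECT ERROR_NUMBER() AS ErrorNumber,
               ERROR_MESSAGE() AS ErrorMessage
        --...then reraise it so the caller can decide what to do
        DECLARE @msg nvarchar(2048)
        SET @msg = ERROR_MESSAGE()
        RAISERROR (@msg, 16, 1)
    END CATCH
END
```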
So all this procedure will do is raise an error, causing our CATCH block to fire, select the error as a result set, and then reraise the error. This reraising gives us a single point of impact for error handling: you can decide what to do with the error when you call the procedure. For example, consider the following batch that we will use to call this procedure:
SET NOCOUNT ON
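The calling batch did not survive; it presumably looked something like the following, with the invalid-parameter call commented out as the text below describes:

```sql
SET NOCOUNT ON
BEGIN TRY
    EXEC dbo.raise_an_error --@no_parm = 1
    SELECT 'hi'  --never reached
END TRY
BEGIN CATCH
    --the reraised error from the procedure lands here
    SELECT ERROR_NUMBER() AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage
END CATCH
```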
Running this simply causes an error to be raised by the subordinate procedure. We get two result sets. First, we get this:
And then we get this:
Uncomment the @no_parm = 1 bit from the statement, and you will see that the error is trapped and the message Procedure raise_an_error has no parameters and arguments were supplied is returned as a result set.
If you want to ignore errors altogether, you can include an empty CATCH block:
SET NOCOUNT ON
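A sketch of such a batch, swallowing the error with an empty CATCH block:

```sql
SET NOCOUNT ON
BEGIN TRY
    EXEC dbo.raise_an_error @no_parm = 1  --invalid: the procedure has no parameters
    SELECT 'hi'  --still never executed
END TRY
BEGIN CATCH
END CATCH
```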
You can also see that, in all cases, the code never executes the SELECT 'hi' statement. There is no RESUME-type action in the TRY...CATCH way of handling errors.
While there is an error raised because of the invalid parameter, it is not visible to the caller. So it is incredibly important that you make certain that a CATCH handler is included, unless you really don’t want the error raised.
The one type of error that will not be handled by the TRY...CATCH mechanism is a syntax error. Consider the following:
SET NOCOUNT ON
This returns the following:
Msg 102, Level 15, State 1, Line 3
The only case where TRY...CATCH captures syntax errors is when used in an EXECUTE ('<SQL code>') situation. Suppose you execute the following:
SET NOCOUNT ON
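A batch along those lines, with a syntax error inside the dynamically executed string:

```sql
SET NOCOUNT ON
BEGIN TRY
    EXECUTE ('SELECT FROM')  --invalid syntax inside the dynamic batch
END TRY
BEGIN CATCH
    --unlike an inline syntax error, this one is trapped
    SELECT ERROR_NUMBER() AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage
END CATCH
```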
In this case, an error will be returned via the SELECT statement in the CATCH block.
One of the limitations that you will need to deal with arises when you are doing several operations in the same batch. For example, consider our tv$insert procedure. Instead of inserting a single row, let’s say we are going to insert two rows:
...
How would we tell the two inserts apart if one of them had an error? It would not be possible, as either statement could break the rules of the TV table’s CHECK constraint. In this case, one possible way to deal with this would be a custom error message value. So you might do something like this:
ALTER PROCEDURE entertainment.tv$insert
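The modified procedure is not shown in the text; a sketch that tags each statement with a custom message variable, so the CATCH block can report which insert failed, might be (the second row at half the width, the adjusted key, and the message wording are assumptions based on the calls that follow):

```sql
ALTER PROCEDURE entertainment.tv$insert
(
    @TVid int,
    @location varchar(30),
    @diagonalWidth int
)
AS
SET NOCOUNT ON
DECLARE @errorMessage varchar(2000)
BEGIN TRY
    BEGIN TRANSACTION
    SET @errorMessage = 'Error inserting TV with diagonalWidth / 1'
    INSERT INTO Entertainment.TV (TVid, location, diagonalWidth)
    VALUES (@TVid, @location, @diagonalWidth)
    --insert a second row at half the width
    SET @errorMessage = 'Error inserting TV with diagonalWidth / 2'
    INSERT INTO Entertainment.TV (TVid, location, diagonalWidth)
    VALUES (@TVid + 1, @location, @diagonalWidth / 2)
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION
    --@errorMessage tells us which insert failed
    RAISERROR (@errorMessage, 16, 1)
END CATCH
```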
Now we can execute it:
exec entertainment.tv$insert @TVid = 10, @location = 'Bed Room',
This returns the following:
Msg 50000, Level 16, State 1, Procedure tv$insert, Line 28
Let’s try it with a number big enough to satisfy it, but not when it is divided by two:
exec entertainment.tv$insert @TVid = 11, @location = 'Bed Room',
This returns the following:
Msg 50000, Level 16, State 1, Procedure tv$insert, Line 28
The key here (other than we really like to watch TV and don’t like small ones) is to make sure to give your CATCH block enough information to raise a useful error. Otherwise the error messages you may produce using the new error handling will not be all that much better than what we had before.
Error handling in SQL Server is vastly improved over previous versions, but taking advantage of it will require a big shift in mindset. Once you start blocking errors that a client has expected to receive in the past, you may break existing code while trying to fix it. So careful study and some reengineering will likely be in order to properly start using the new error-handling capabilities.
.WRITE Extension to the UPDATE Statement
In older versions of SQL Server, modifying the data in text and image columns was a real beast using T-SQL code. There were arcane READTEXT and WRITETEXT commands to do “chunked” reads and writes (just reading and writing part of a value to save the resources of fetching huge amounts of data). Starting with SQL Server 2005, the text and image types are deprecated in favor of the new (max) datatypes: varchar(max), nvarchar(max), and varbinary(max). text and image are still available, but their use should be phased out in favor of the far better (max) types.
For the most part, you can treat the (max) datatypes just like their regular 8,000-byte-or-less counterparts, but if you are dealing with very large values, this may not be desirable. Each of the (max) types can store up to 2GB in a single column. Imagine having to fetch such a value to the client, make some changes, and then issue an UPDATE statement, all for a two-character change. RAM is cheap, but not cheap enough to put 4GB on all of your machines. So we can do a “chunked” update of the data in the row using a new extension to the UPDATE statement. (Note that you can do chunked reads simply by using the substring function.)
As an example, consider the following simple table with a varchar(max) column:
CREATE TABLE testBIGtext
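The table definition was truncated; based on the SELECT used later, it might be:

```sql
CREATE TABLE testBIGtext
(
    testBIGtextId int PRIMARY KEY,
    value varchar(max)
)
--seed with an empty string so .WRITE has a value to extend
INSERT INTO testBIGtext (testBIGtextId, value)
VALUES (1, '')
```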
Next, we just build a loop and, using .WRITE, we put some text into the value at an offset for some length. Note that the offset must be less than or equal to the current length of the value in the column. The syntax is shown here:
UPDATE <tableName>
SET <columnName>.WRITE(<expression>, <offset>, <length>)
WHERE <criteria>
Then we just start a loop and write 1,000 of each letter of the alphabet into the value column:
DECLARE @offset int
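The loop itself is missing; a version that appends 1,000 copies of each letter using .WRITE could look like this:

```sql
DECLARE @offset int
SET @offset = 0
WHILE @offset < 26
BEGIN
    UPDATE testBIGtext
    --write 1,000 copies of the current letter at the end of the value;
    --the offset equals the current length, so each pass appends
    SET value.WRITE(replicate(char(ascii('a') + @offset), 1000),
                    @offset * 1000, 1000)
    WHERE testBIGtextId = 1
    SET @offset = @offset + 1
END
```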
Everything else is just plain SQL. Run the following to make sure our data is in there:
SELECT testBIGtextId, len(value) AS CharLength
This returns the following:
This is a tremendous win for SQL programmers. You can easily work with long datatypes using normal functions, plus there’s a chunking mechanism, so that when you have a column holding a couple hundred megabytes of information, you don’t need to replace the whole thing in one operation.
The EXECUTE command in previous versions of SQL Server could be used only to execute SQL on the same server. Beginning with SQL Server 2005, EXECUTE has an AT parameter to specify that the command be executed on a linked server.
To see this in action, let’s set up our local server as a remote linked server. We will create a linked server using sp_addlinkedserver, call it LocalLinkedServer, and point it to our instance of SQL Server:
--note: if you are not running SQL Server as the default instance, you may
--need to change the data source to name your instance
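The call itself is not shown; a sketch (the provider and data source values are assumptions for a default local instance) might be:

```sql
EXECUTE sp_addlinkedserver
    @server = 'LocalLinkedServer',
    @srvproduct = '',
    @provider = 'SQLNCLI',
    @datasrc = 'localhost'  --for a named instance, use 'localhost\instanceName'
--EXECUTE ... AT requires RPC to be enabled on the linked server
EXECUTE sp_serveroption 'LocalLinkedServer', 'rpc out', 'true'
```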
Now we can execute our T-SQL on the linked server by specifying AT and the linked server name:
EXECUTE('SELECT * FROM AdventureWorks.Production.Culture') AT LocalLinkedServer
The query is executed on the linked server and the results returned. The AT parameter applies only to using EXECUTE on batches of statements, not on explicit stored procedure or function calls.
You can then use sp_dropserver to get rid of the linked server:
EXECUTE sp_dropserver LocalLinkedServer
For completeness, we need to mention that a method of executing a batch of commands on another SQL Server already exists: using sp_executesql. It has the added benefit of allowing for parameter and return values. This procedure can also be called remotely, as follows:
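The example did not survive; a remote call via four-part naming might look like this (the database name is assumed to be AdventureWorks, as in the earlier example):

```sql
--call sp_executesql on the linked server using four-part naming
EXECUTE LocalLinkedServer.AdventureWorks.dbo.sp_executesql
    N'SELECT * FROM Production.Culture'
```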
Code Security Context
The EXECUTE AS clause on a procedure or function declaration allows you to define the security context in which a stored procedure or function (other than in-line table-valued functions) is executed. Without this clause, the object executes in the security context of the CALLER. Note that this does not affect the execution of the procedure unless there is a break in the ownership chain. Any object owned by the creator of the procedure will be available to the user.
The syntax of this clause is as follows:
CREATE PROCEDURE <procedureName>
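Only the first line of the syntax survived; the full shape is:

```sql
CREATE PROCEDURE <procedureName>
(
    <parameters>
)
WITH EXECUTE AS <CALLER | SELF | OWNER | 'userName'>
AS
    <procedure body>
```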
It is the same syntax when used for functions. There are four possible values for the EXECUTE AS option: CALLER (the default, executing as the calling user), SELF (executing as the user who created or altered the module), OWNER (executing as the current owner of the module), and a specific user name in quotes.
You can also execute EXECUTE AS CALLER as a stand-alone statement within the procedure to change the security context back to the caller if needed. One additional statement, REVERT, goes back to the context set in the WITH clause of the procedure declaration.
As an example, we’re going to create a user named mainOwner, and then a procedure that uses the EXECUTE AS option to show the basics of EXECUTE AS.
We’ll start by creating several users, tables, and procedures:
--this user will be the owner of the primary schema
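The user-creation listing was truncated; a self-contained sketch follows (the original may have created logins as well; WITHOUT LOGIN keeps the example local to the database):

```sql
--this user will be the owner of the primary schema
CREATE USER mainOwner WITHOUT LOGIN
--this user will create the procedures
CREATE USER secondaryOwner WITHOUT LOGIN
--this user is the average user who will execute them
CREATE USER aveSchlub WITHOUT LOGIN
```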
Then we change to the context of the main object owner, create a new schema, and create a table with some rows:
EXECUTE AS USER='mainOwner'
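The schema and table creation was elided; a sketch with illustrative columns and rows (only the mainOwnersSchema.person name is confirmed by the surrounding text):

```sql
EXECUTE AS USER = 'mainOwner'
GO
CREATE SCHEMA mainOwnersSchema
GO
CREATE TABLE mainOwnersSchema.person
(
    personId int PRIMARY KEY,
    firstName varchar(20),
    lastName varchar(20)
)
GO
INSERT INTO mainOwnersSchema.person (personId, firstName, lastName)
VALUES (1, 'Paul', 'McCartney')
INSERT INTO mainOwnersSchema.person (personId, firstName, lastName)
VALUES (2, 'Pete', 'Townshend')
```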
Next, this user gives SELECT permission to the secondaryOwner user:
GRANT SELECT on mainOwnersSchema.person to secondaryOwner
Then we set the context to the secondary user to create the procedure:
REVERT --we can step back on the stack of principals,
Then we create a schema and another table:
CREATE SCHEMA secondaryOwnerSchema
Next, we create two procedures as the secondary user: one WITH EXECUTE AS CALLER, which is the default, and one WITH EXECUTE AS SELF, which puts it in the context of the creator, in this case secondaryOwner:
CREATE PROCEDURE secondaryOwnerSchema.person$asCaller
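The procedure bodies are missing; they presumably differ only in the EXECUTE AS option, along these lines (the column list is assumed):

```sql
CREATE PROCEDURE secondaryOwnerSchema.person$asCaller
WITH EXECUTE AS CALLER  --the default
AS
    SELECT 'person$asCaller' AS procName, personId, firstName, lastName
    FROM mainOwnersSchema.person
GO
CREATE PROCEDURE secondaryOwnerSchema.person$asSelf
WITH EXECUTE AS SELF  --the context of the creator, secondaryOwner
AS
    SELECT 'person$asSelf' AS procName, personId, firstName, lastName
    FROM mainOwnersSchema.person
```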
Next, we grant rights on the procedure to the aveSchlub user:
GRANT EXECUTE ON secondaryOwnerSchema.person$asCaller to aveSchlub
Then we change to the context of aveSchlub:
REVERT
and execute the procedure:
--this proc is in context of the caller, in this case, aveSchlub
which gives us the following output:
Next, we execute the asSelf variant:
--secondaryOwner, so it works
which gives us the following output:
What makes this different is that when the ownership chain is broken, the security context we are in is that of secondaryOwner, not that of the caller, aveSchlub. This is a really cool feature, as we can now give users temporary rights that will not even be apparent to them, without granting those rights explicitly.
It is not, however, a feature that should be overused, as it would be all too easy to just build your procedures in the context of the database owner. One nice side effect is that we can use it instead of ownership chaining, by setting EXECUTE AS to a user who can access a different database directly; the proxy user may have rights to that database even though the executing user does not.
.NET is tightly integrated with SQL Server. In this section, we are simply going to cover the commands to make assemblies available for use in T-SQL code.
Prior to using .NET assemblies in T-SQL code, you must declare and load them from the dynamic link library (DLL) file. CREATE ASSEMBLY loads a managed class into SQL Server memory space so that the common language runtime (CLR) database objects (discussed in the next section) can be created.
The syntax of the CREATE ASSEMBLY command is as follows:
CREATE ASSEMBLY <assemblyName> FROM <assemblyLocation>
Here is an example:
CREATE ASSEMBLY dateObject FROM 'C:\projects\bin\Debug\setDate.dll'
After loading, the assembly can be subsequently removed using the DROP ASSEMBLY command:
DROP ASSEMBLY dateObject
An assembly cannot be dropped if any CLR database objects reference it. These references can be seen in Management Studio by right-clicking the assembly.
CLR Database Objects
Once an assembly has been loaded from its operating system file and created as a database object, it can then be used to declare objects, including stored procedures, functions, triggers, user-defined types, and user-defined aggregate functions. The syntax of CREATE and ALTER for the T-SQL objects also includes the ability to reference CLR objects instead of T-SQL. The following is the basic syntax for these CLR extensions:
[CREATE][ALTER] DBOBJECTTYPE ([parameters])
AS EXTERNAL NAME <assemblyName>.<className>.<methodName>
For example, to create a procedure that points to a .NET assembly that has been created as an object on the SQL Server—in this case, a fictitious one called getDateValueString—we could build the following procedure:
CREATE PROCEDURE dateString
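The listing is incomplete; the key piece is the EXTERNAL NAME clause, which binds the T-SQL object to a method in the loaded assembly. A sketch (the class name and parameter are placeholders, since the text calls the method fictitious):

```sql
CREATE PROCEDURE dateString
(
    @dateValue datetime
)
--bind the procedure to a method in the dateObject assembly
AS EXTERNAL NAME dateObject.[DateClass].getDateValueString
```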
The following commands are affected by this change: CREATE/ALTER PROCEDURE, CREATE/ALTER FUNCTION, CREATE/ALTER TRIGGER, CREATE TYPE, and CREATE AGGREGATE.
This has been just a very brief introduction to the new SQL Server commands that revolve around .NET integration.
Declaring and Setting Variables
How many times have you wanted to declare and set a variable all in the same statement? Developers who have worked with T-SQL may be mystified why they could never issue a statement like this:
DECLARE @iCounter int = 0;
Well, ladies and gentlemen, in SQL Server 2008, we can now finally declare and set variables within the same statement. A small feature, no doubt, but a very useful one indeed.
In addition to initializing the variable, improvements have also been made to allow arithmetic operations on the variables themselves:
INSERT dbo.Counters
The value in Counters would be 1 after executing this statement. You can also use the +=, -=, *=, and /= operators. Taking our initial @iCounter declaration, let’s use the new += operator, as follows:
UPDATE dbo.Counters
The result of Counters would now be 2.
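Putting the fragments above together, the whole sequence might look like the following (the dbo.Counters table and its single column are assumptions):

```sql
CREATE TABLE dbo.Counters (Counter int)
GO
--declare and initialize in a single statement (new in SQL Server 2008)
DECLARE @iCounter int = 0
--arithmetic in the statement itself: the stored value becomes 1
INSERT dbo.Counters (Counter)
VALUES (@iCounter + 1)
--compound assignment: the stored value becomes 2
UPDATE dbo.Counters
SET Counter += 1
```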
Passing Table-Valued Parameters
In earlier versions of SQL Server, stored procedures and user-defined functions allowed only scalar parameters to be passed. For most cases, this was acceptable. But in certain circumstances, this limitation led to the same stored procedure or function being called multiple times to perform a single task.
Consider the scenario where a customer is submitting an order on a web site. The order may be for multiple products. When the order is submitted, a server-side stored procedure is used to process the shopping cart list. Before SQL Server 2008, this stored procedure would accept the order items individually, and performance was slow. In SQL Server 2008, developers can now pass a table as a parameter to stored procedures and user-defined functions. In our scenario, this would allow the entire shopping cart to be passed to the stored procedure for processing in one trip, as opposed to potentially many round-trips to the server.
To be fair to pre-2008 database application designs, you could save on some round-trips by creating some sort of order item array or a special proprietary BLOB and pass that as a single parameter, but in the end, you would end up with a lot of complex code to maintain. SQL Server 2008 is able to easily and efficiently pass table structures to stored procedures and user-defined functions, without all this complex code to maintain.
Now that you are sold on the value of table-valued parameters, it’s time to explain how to create and use them. Table-valued parameters are defined via a new user-defined table type. Let’s continue our shopping cart scenario and assume that we have a stored procedure that will simply update the inventory at the warehouse given a specific order. Instead of calling this stored procedure for every single order item, we want to leverage table-valued parameters and pass a table containing all the products in the shopping cart. We need to first create a Products table that will contain three products and the number remaining in inventory, as follows:
CREATE TABLE Inventory
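The table and its sample rows were truncated; a sketch consistent with the products mentioned later (the column names and stock levels are assumptions):

```sql
CREATE TABLE Inventory
(
    ProductID int PRIMARY KEY,
    ProductName varchar(50),
    UnitsInStock int
)
INSERT INTO Inventory VALUES (1, 'Meat-flavored dog shampoo', 100)
INSERT INTO Inventory VALUES (2, 'Canine paw scissors', 50)
INSERT INTO Inventory VALUES (3, 'Squeaky bone', 75)
```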
We should now create a user-defined table type that describes the table we wish to pass to our stored procedure:
CREATE TYPE OrderType AS
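Only the first line of the type definition survived; the full statement presumably continued like this (column names assumed):

```sql
CREATE TYPE OrderType AS TABLE
(
    ProductID int PRIMARY KEY,
    --a CHECK constraint is allowed here, as the text notes
    Quantity int NOT NULL CHECK (Quantity > 0)
)
```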
Looking at the type-creation code, you may notice that you can include most of the constraints that you could otherwise use within a regular CREATE TABLE definition. The most significant exception is foreign keys, which are not allowed in user-defined table types.
It is important to note a few restrictions about user-defined table types:
• They can be used only as input parameters to user-defined functions and stored procedures.
• They cannot be used as OUTPUT parameters.
• They cannot be defined within the body of the user-defined function or stored procedure.
• They must be defined independently.
In SQL Server 2008, table-valued parameters are read-only within the stored procedure or user-defined function. Outside these modules, you can treat table types like any other table, and INSERT, UPDATE, and DELETE as needed.
At this point in our example, we are ready to create the UpdateInventory stored procedure:
CREATE PROCEDURE UpdateInventory (@myOrder OrderType READONLY)
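The procedure body is elided; it might simply join the parameter to the Inventory table (column names assumed):

```sql
CREATE PROCEDURE UpdateInventory (@myOrder OrderType READONLY)
AS
    --decrement the stock of each ordered product by the quantity ordered
    UPDATE i
    SET i.UnitsInStock = i.UnitsInStock - o.Quantity
    FROM Inventory AS i
    JOIN @myOrder AS o ON i.ProductID = o.ProductID
```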
Notice that the READONLY keyword is used for the @myOrder table parameter. READONLY is required for table-valued parameters in SQL Server 2008. Within the code of the stored procedure, we treat the @myOrder parameter like any other table. This procedure simply updates the inventory of each product by decrementing it by the value of the quantity ordered. Finally, to test our procedure, we can do the following:
DECLARE @myOrder OrderType
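The test script did not survive; based on the before-and-after values described below, it was presumably equivalent to:

```sql
DECLARE @myOrder OrderType
--ten units of the dog shampoo and five of the paw scissors
INSERT INTO @myOrder (ProductID, Quantity) VALUES (1, 10)
INSERT INTO @myOrder (ProductID, Quantity) VALUES (2, 5)
EXEC UpdateInventory @myOrder
```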
The Inventory table before running the UpdateInventory stored procedure is as follows:
The Inventory table after running the UpdateInventory stored procedure is as follows:
We can see that the meat-flavored dog shampoo has decreased by ten units, and the canine paw scissors units have decreased by five, as these were the values defined in our order.
User-defined table types are fully supported in ADO.NET 3.0 via a new parameter type called SqlDbType.Structured. They are also supported in the ODBC and OLE DB client stacks via a new parameter type called SQL_SS_TABLE. When clients pass table variables to SQL Server, the content gets passed by value. When table variables are used within SQL Server, they are passed by reference for maximum performance gain.
For those of you who have a lot of experience with SQL Server, you may be wondering why you couldn’t just use BULK INSERT instead of going through all this code. In some circumstances, it may be better to use BULK INSERT. Generally, BULK INSERT is better when your mission is purely to insert data, especially from a file. If you want to use joins or apply special business logic, you should consider leveraging table-valued parameters within your database application.