Now that you understand a bit about XML and XPath, we can start talking about how to get XML into SQL Server. There are several ways to do this. First, you can simply dump your XML into an nvarchar column in the database using a plain INSERT statement. Using this technique is just like entering any other text into a column. Starting with SQL Server 2005, you can also use the xml datatype rather than a text column.
There are three other ways of getting XML into your database: OPENXML, updategrams, and the SQLXML BulkLoad provider.
SQLXML is an additional set of technologies that include updategram support, the SQLXML BulkLoad provider, client-side FOR XML support, and SOAP support. For SQL Server 2000, SQLXML 3.0 shipped separately; it doesn’t need to run on a server. With SQL Server 2005 and SQL Server 2008, SQLXML 4.0 ships with the product, but it can also be redistributed on its own. Don’t confuse SQLXML with the SQL Server XML datatype.
Each technique for getting XML into the database has its strengths and weaknesses. If you are looking for the fastest, highest-performance way to get XML data into SQL Server, consider the BulkLoad provider. The BulkLoad provider doesn’t attempt to load all your XML into memory; instead, it reads your XML data as a stream, interprets it, and loads it into SQL Server. The BulkLoad provider is a separate component, so you cannot use it inside a stored procedure or user-defined function (UDF). You could use it in an extended stored procedure (XP) by calling out to it, but that is an uncommon scenario with its own set of issues (XPs are complex, hard to debug, and can open up your server to security issues if written incorrectly).
On the other hand, OPENXML can be used in stored procedures and UDFs, since it ships as part of the native T-SQL language. You’ll pay a performance penalty for this integration though. OPENXML requires you to use a stored procedure, sp_xml_preparedocument, to parse the XML for consumption. This stored procedure loads a special version of the MSXML parser called MSXMLSQL to process the XML document and, in turn, loads the entire XML document into memory.
Updategrams are very useful for applications where you want to modify your database and you are OK with building an annotated schema and applying those changes through this annotated schema. SQLXML takes the updategram and translates it to SQL Data Manipulation Language (DML) statements.
However, if you need to apply business logic to the SQL DML statements, you’ll be unable to use updategrams, since you cannot access the generated DML statements.
Before we get to the details of using these techniques, we need to look at how to configure SQL Server for SOAP.
SQL Server Configuration for SOAP
With SQLXML 3.0, SQL Server 2000 required an Internet Information Services (IIS) server to listen for SQL commands sent using either URL queries or SOAP calls. The Internet Server Application Programming Interface (ISAPI) listener would parse out the SQL, execute it, and return a result.
Starting with SQL Server 2005, SQL Server can natively listen on a port for HTTP calls, without requiring IIS. Rather than using the IIS configuration utility from SQLXML 3.0, you can use server-side endpoint support. However, if you still want to keep an IIS server in the mid-tier, you’ll need to continue to use the SQL ISAPI listener included with SQLXML 3.0 against your SQL Server back end. You can also call SQLXML from your applications, since SQLXML supports a managed object model, as you’ll learn later in this chapter in the section “Programming SQLXML from .NET and COM.”
As a simple setup for this chapter, we’ll create an endpoint so we can send our queries and updategrams to the server. The next chapter covers full web services support.
To configure our server, we’ll just issue a CREATE ENDPOINT command and allow our server to listen for T-SQL batches, as follows:

CREATE ENDPOINT pubs
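The CREATE ENDPOINT statement is truncated in this text. As a rough sketch (the path, site, and namespace values here are illustrative placeholders, not the original listing), a batch-enabled SOAP endpoint for the pubs database might look like this:

```sql
-- Sketch of a SOAP endpoint that accepts ad hoc T-SQL batches.
-- PATH, SITE, and NAMESPACE values are illustrative placeholders.
CREATE ENDPOINT pubs
    STATE = STARTED
AS HTTP (
    PATH = '/pubs',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    SITE = 'localhost'
)
FOR SOAP (
    BATCHES = ENABLED,   -- allow ad hoc T-SQL batches
    WSDL = DEFAULT,
    DATABASE = 'pubs',
    NAMESPACE = 'http://tempuri.org/'
)
```

BATCHES = ENABLED is the setting that lets the endpoint accept the queries and updategrams we send to the server later in this chapter.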
This code creates an endpoint named pubs. Remember that this endpoint will not show up in IIS’s list of virtual directories. Be careful about this, since you may create an endpoint that tries to use the same port as an existing IIS site. You cannot have both SQL Server and IIS listen on the same ports.
OPENXML

Rather than parsing XML into rows by loading the XML, iterating through it, and generating T-SQL commands, you can use the OPENXML function. The syntax for OPENXML may look difficult at first, but once you try it, you’ll see that it is very approachable:

OPENXML(idoc int [in], rowpattern nvarchar [in], [flags byte [in]])
The first parameter is the integer handle to your XML document. A handle is just a unique integer identifier for your document. You can retrieve this using the built-in sp_xml_preparedocument stored procedure. When you pass in your XML document as a parameter, the sp_xml_preparedocument procedure parses it and returns the integer you need to pass to the OPENXML function. The XML document you pass can be text based, or, starting with SQL Server 2005, you can pass the xml datatype. You can optionally pass the namespace Uniform Resource Identifier (URI) you want for your XPath expressions. Whether you use this parameter depends on whether you use namespaces in your XML. If you don’t use any namespaces, you won’t use this parameter in most of your calls to the stored procedure.
Conversely, the sp_xml_removedocument built-in procedure takes the integer handle to your XML document and removes the internal in-memory representation of your XML document that you created with sp_xml_preparedocument. You should call this stored procedure after you are finished with your XML document. If you forget, SQL Server will destroy the in-memory representation once the session that created it disconnects. However, it isn’t good practice to rely on this behavior.
The second parameter to OPENXML is the XPath expression that you want to use to parse out the rows. This expression can be as simple or complex as you require.
The third parameter is optional and allows you to switch from attribute- to element-centric mapping. By default, OPENXML uses attribute-centric mapping, which is a value of 0. You’ll want to switch this to element-centric mapping if your XML is element-centric by specifying a value of 2. By passing a value of 1, you are telling SQL Server to use attribute-centric mapping by default, and for any unprocessed columns, element-centric mapping is used. A value of 8 specifies to not copy overflow text to the @mp:xmltext metaproperty.
Finally, we have the WITH clause. This clause allows you to specify a schema definition for your newly created rowsets, or you can specify a table if you know your XML will map to the schema of a table that already exists. The schema definition uses this format:

ColName ColType [ColPattern | Metaproperty] [, ColName ColType [ColPattern | Metaproperty]] ...
The metaproperties allow you to extract information, such as local positioning and namespace information of XML nodes, which provides more detail than is visible in the textual representation. You can map these metaproperties to the rowset columns in an OPENXML statement using the ColPattern parameter; the available values include metaproperties such as @mp:id, @mp:localname, @mp:namespaceuri, and @mp:xmltext.
Let’s now take a look at a couple of examples that use the OPENXML function. The XML document that we will be using is quite simple and begins as follows:

<ROOT>
First, let’s just take our XML document and store it in a relational table. Since we’re using pubs already, we’ll take the data and store it in a new authorsXML table in pubs. We’ll simply accept the defaults and not fill in any optional parameters for the OPENXML function. The code will take our XML document using element-centric mapping, parse the document, and place it into the authorsXML table:

CREATE TABLE [authorsXML] (
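The listing above is cut off in this text; the following is a minimal, self-contained sketch of the same idea. The column names and sizes follow the pubs authors table, but the sample document and the table definition here are illustrative, not the original listing:

```sql
CREATE TABLE [authorsXML] (
    au_id    varchar(11),
    au_fname varchar(20),
    au_lname varchar(40),
    phone    char(12)
)
GO

DECLARE @idoc int
DECLARE @doc  nvarchar(max)
SET @doc = N'<ROOT>
  <authors>
    <au_id>172-32-1176</au_id><au_fname>Johnson</au_fname>
    <au_lname>White</au_lname><phone>408 496-7223</phone>
  </authors>
</ROOT>'

-- Parse the document and get a handle to the in-memory tree
EXEC sp_xml_preparedocument @idoc OUTPUT, @doc

-- Flag value 2 = element-centric mapping, matching the document above
INSERT INTO authorsXML (au_id, au_fname, au_lname, phone)
SELECT au_id, au_fname, au_lname, phone
FROM OPENXML(@idoc, '/ROOT/authors', 2)
WITH (au_id varchar(11), au_fname varchar(20),
      au_lname varchar(40), phone char(12))

-- Always release the handle when finished
EXEC sp_xml_removedocument @idoc
```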
If we tweaked the preceding statement and removed the INSERT and instead just did a SELECT on our data, such as SELECT *, SQL Server would return our parsed XML as a rowset. The results would look as follows:
Now, you may realize that we’re not storing some of the XML, such as the address, city, state, ZIP code, and contract values. If we wanted to, we could capture the XML document by creating another column and using the @mp:xmltext command in our schema definition, like this:
catchall nvarchar(1000) '@mp:xmltext'
The next example shows how to navigate an XML document using an XPath expression in OPENXML. Since OPENXML returns a relational rowset, you could actually join the results with another table and then store this rowset in your table. After calling OPENXML, your XML data can be treated just like any other relational data. Here, we’ll use the returned XML rowsets to join data with the publishers table to return only authors who have the same city as a publisher:

DECLARE @idoc int
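The original listing is truncated; a sketch of the join described above might look like the following (it assumes @doc holds the same authors document as before, extended with a city element, which is an assumption of this sketch):

```sql
DECLARE @idoc int
EXEC sp_xml_preparedocument @idoc OUTPUT, @doc

-- The OPENXML rowset joins to publishers like any other table
SELECT x.au_fname, x.au_lname, p.pub_name
FROM OPENXML(@idoc, '/ROOT/authors', 2)
     WITH (au_fname varchar(20), au_lname varchar(40),
           city varchar(20)) AS x
INNER JOIN publishers p ON p.city = x.city

EXEC sp_xml_removedocument @idoc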
The results should look as follows:
The best way to use OPENXML is in a stored procedure, especially if you are taking your XML from the mid-tier and putting it into the database. Rather than parsing in the mid-tier, you can send your XML as text to the stored procedure and have the server parse and store it in a single operation. This provides a lot better performance and a lot less network traffic than parsing the XML in the mid-tier and sending T-SQL commands to the server to store the data.
If you are going to use your newly parsed XML over and over again, then rather than calling OPENXML multiple times, just store the results in a table variable. This will speed up the processing and free resources on the server for other work. The sample stored procedure that follows uses OPENXML. Notice the use of the nvarchar(max) datatype, introduced in SQL Server 2005; in SQL Server 2000, you would have had to use the text datatype. For all new development, use nvarchar(max), since the text datatype may be removed in a future version:

CREATE PROCEDURE update_authors_OPENXML (
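The procedure body is cut off above; here is a minimal sketch of the pattern just described (parse once into a table variable, release the handle, then reuse the rows). The procedure name comes from the text; the body and the target table are illustrative:

```sql
CREATE PROCEDURE update_authors_OPENXML (
    @doc nvarchar(max)
)
AS
BEGIN
    DECLARE @idoc int
    EXEC sp_xml_preparedocument @idoc OUTPUT, @doc

    -- Parse once, keep the rows in a table variable for reuse
    DECLARE @authors TABLE (
        au_id varchar(11), au_fname varchar(20),
        au_lname varchar(40), phone char(12))

    INSERT INTO @authors
    SELECT au_id, au_fname, au_lname, phone
    FROM OPENXML(@idoc, '/ROOT/authors', 2)
    WITH (au_id varchar(11), au_fname varchar(20),
          au_lname varchar(40), phone char(12))

    -- Free the in-memory document as soon as parsing is done
    EXEC sp_xml_removedocument @idoc

    -- Reuse the parsed rows as many times as needed
    INSERT INTO authorsXML (au_id, au_fname, au_lname, phone)
    SELECT au_id, au_fname, au_lname, phone FROM @authors
END
```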
XML Views Using Annotated XML Schemas
XML Schemas define the structure of an XML document, in the same way that a relational schema defines the structure of a relational database. With schemas, you can define what makes an XML document legal according to your specifications. For example, you can define the elements, attributes, hierarchy of elements, order of elements, datatypes of your elements and attributes, and any default values for your elements and attributes. Schemas are not required in your XML documents but are recommended, especially if you’ll be sharing your XML data with other applications that may not understand your XML data or how to correctly create that XML data without understanding your schema. The standard for schemas is XML Schema Definition (XSD).
With SQL Server, you can create an XML Schema that maps to your relational structure using some special schema markup. This is useful when you want to create an XML view of your underlying relational data. This view not only allows you to query your relational data into XML, but you can also persist changes using updategrams and SQLXML bulk-loading. It takes some work to create the annotated schema, but if you are going to be working extensively with XML, the extra work is worth the effort. Plus, you’ll want to use annotated schemas with updategrams, which you’ll learn about in the section “SQLXML Updategrams” later in this chapter.
Visual Studio includes a very capable XML Schema editor so that you don’t need to generate XML Schemas by hand. Following is a typical XML Schema for the authors XML that we were using previously. As you can see, the XML Schema is an XML document. The system knows it is a schema document because we declare a namespace, xs, that uses the XSD namespace. This namespace is a reference to the World Wide Web Consortium (W3C) XSD namespace, which is http://www.w3.org/2001/XMLSchema. This reference is aliased to xs and then all elements use this alias as their prefix inside of the schema.
Also, notice how the schema declares an element called AuthorsXMLNew, which contains the rest of your XML data. Then there is a complex type that declares a sequence of XML elements. These elements include the ID, first name, last name, phone, and so on, of the authors. Notice how the elements also declare a type. Schemas can define datatypes for your elements and attributes. We declare some strings, an unsigned int, and an unsigned byte. You can declare other datatypes beyond what this schema has, such as dates, Booleans, binary, and other types.

<?xml version="1.0" encoding="utf-8"?>
Now that we have a base schema, if we want to use this annotated schema with SQLXML, we need to make some changes. First, we need to add a reference to the XML Schema mapping. To do this, we need to modify our XML Schema by first adding the namespace for SQL Server’s schema mapping, which is urn:schemas-microsoft-com:mapping-schema. This schema allows us to map our XML Schema to our relational database schema. We’ll alias this namespace with sql so that we can use the prefix sql: when we refer to it. Therefore, if we wanted to modify the preceding schema to support SQL Server mapping, we would use this new schema:

<?xml version="1.0" encoding="utf-8" ?>
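The schema listing is truncated above. A shortened sketch of the idea (element list abbreviated; only the mapping namespace is added, with default mapping still in effect) might look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
  <!-- Only the sql namespace is new; with no sql: annotations yet,
       default mapping applies (element name = table/column name) -->
  <xs:element name="authors">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="au_id" type="xs:string" />
        <xs:element name="au_lname" type="xs:string" />
        <xs:element name="au_fname" type="xs:string" />
        <xs:element name="phone" type="xs:string" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```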
You’ll also see the use of the urn:schemas-microsoft-com:xml-sql namespace in documents. This namespace provides access to SQLXML functionality that can be used in templates or XPath queries.
Default and Explicit Mapping
You may notice that the preceding schema just adds the namespace for the mapping. The schema isn’t listed since SQL Server supports default mapping between your relational schema and your XML Schema. For example, the authors complex type would be automatically mapped to the authors table. The au_id string would automatically map to the au_id column, and so on.
You can also explicitly map between your schema and your SQL datatypes. For very simple applications, you can use the default mapping. In most cases, though, you’ll use explicit mapping, since your XML and relational schemas may differ, or you’ll want more control over how the mapping is performed or which datatypes are used. You use the sql:relation markup, which is part of the SQLXML mapping schema, to specify a mapping between an XML item and a SQL Server table. For columns, you use the sql:field markup. You can also include sql:datatype to explicitly map your XML datatype to a SQL Server datatype so that implicit conversion doesn’t happen. Therefore, if we added these markups rather than using the default mapping for our schema, our schema would change to look like the following code:

<?xml version="1.0" encoding="utf-8"?>
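Again the listing breaks off; a shortened sketch of an explicitly mapped schema (element list abbreviated, datatype sizes follow the pubs authors table) might look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
  <!-- sql:relation names the table, sql:field names each column,
       sql:datatype pins the SQL type so no implicit conversion occurs -->
  <xs:element name="author" sql:relation="authors">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="au_id" type="xs:string"
                    sql:field="au_id" sql:datatype="varchar(11)" />
        <xs:element name="au_lname" type="xs:string" sql:field="au_lname" />
        <xs:element name="au_fname" type="xs:string" sql:field="au_fname" />
        <xs:element name="phone" type="xs:string" sql:field="phone" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```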
Since in a relational database you can relate data by keys, you can use annotated schemas to describe those relationships in your XML. However, annotated schemas will make those relationships hierarchical through the use of the sql:relationship mapping. You can think of this as joining a table. The relationship mapping has a parent element that specifies the parent relation or table. It also has a parent-key element, which specifies the key to use, and this key can encompass multiple columns. Also, you have child and child-key elements to perform the same functionality for the child as the other elements do for the parent.
There is also inverse functionality, so you can flip this relationship. If for some reason your mapping is different from the primary key/foreign key relationship in the underlying table, the inverse attribute will flip this relationship. This is the case with updategrams, which you’ll learn about in the section “SQLXML Updategrams” later in this chapter. You’ll use this attribute only with updategrams.
Imagine we had our authors and the authors were related to books in our relational schema through the use of an author ID. We would change our schema mapping to understand that relationship by using the following XML Schema. Notice how the relationship mapping is in a special section of our XSD schema.

<?xml version="1.0" encoding="utf-8" ?>
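The schema listing is truncated above. A shortened sketch follows, using the text's hypothetical books table keyed by au_id (the table and column names here are assumptions of the sketch). Note how sql:relationship is declared in the xs:annotation/xs:appinfo section and then referenced by name from the child element:

```xml
<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
  <!-- The relationship lives in the special annotation section -->
  <xs:annotation>
    <xs:appinfo>
      <sql:relationship name="AuthorBooks"
                        parent="authors" parent-key="au_id"
                        child="books"    child-key="au_id" />
    </xs:appinfo>
  </xs:annotation>
  <xs:element name="author" sql:relation="authors">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="au_id" type="xs:string" sql:field="au_id" />
        <!-- Child rows nest under each author via the named relationship -->
        <xs:element name="book" sql:relation="books"
                    sql:relationship="AuthorBooks">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="title" type="xs:string" sql:field="title" />
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```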
Key Column Mapping Using sql:key-fields
Now that you’ve seen how to build relationships, you also need to look at how to make SQL Server nest your XML data correctly. For nesting to occur correctly, you’ll want to specify the key columns in your table that make the most sense when creating XML hierarchies. To give SQL Server hints on the correct ordering, use the sql:key-fields mapping, which tells SQL Server which columns contain key values. The sample that follows lets SQL Server know that the au_id column is a key column:

<?xml version="1.0" encoding="utf-8" ?>
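The sample is truncated above; a shortened sketch (full element list omitted) might look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
  <!-- sql:key-fields names the column(s) that identify a row,
       so nested XML is grouped and ordered correctly -->
  <xs:element name="author" sql:relation="authors" sql:key-fields="au_id">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="au_id" type="xs:string" sql:field="au_id" />
        <xs:element name="au_lname" type="xs:string" sql:field="au_lname" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```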
Excluding Data from the XML Result Using sql:mapped
Using the sql:mapped syntax, you can specify whether to map an element or attribute in your XSD schema to a database object. If you don’t want to have the default mapping occur and you don’t want to have the XML appear in your results, you should use the sql:mapped attribute.
There may be times when there is extraneous XML that you don’t want to appear in your table; for example, if you don’t control the XML Schema and want to omit the data from your table since a column for the data doesn’t exist in the table. This attribute has a Boolean value, with true meaning that mapping should occur and false meaning that mapping shouldn’t occur.
Creating a Constant Element
If you want an element to be constant in your XML document even if there is no mapping to the underlying database, you should use the sql:is-constant mapping. This mapping is Boolean, and a value of true makes the element always appear in your XML document. This mapping is very useful for creating a root element for your XML. The following is an example of using this mapping:

<?xml version="1.0" encoding="utf-8" ?>
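The example is cut off above; a shortened sketch (the AuthorList wrapper name is illustrative) might look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
  <!-- AuthorList maps to no table; sql:is-constant (1 = true) makes it
       a constant wrapper element, useful as the XML root -->
  <xs:element name="AuthorList" sql:is-constant="1">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="author" sql:relation="authors">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="au_id" type="xs:string" sql:field="au_id" />
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```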
Limiting Values by Using a Filter
You may want to filter the results returned to your XML document by values from your database. The sql:limit-field and sql:limit-value mappings let you do this by allowing you to specify a filter column and the value to limit that column by. You don’t need to specify the limit value if you don’t want to, since SQL Server will default it to null. You can also have multiple limiting values for multiple mappings. The shortened example that follows shows a schema that limits the results to authors who live in Boston:

<?xml version="1.0" encoding="utf-8" ?>
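The example is truncated above. As a rough sketch only (in SQLXML documentation these annotations usually appear on an element joined via sql:relationship; the simplified placement here is an assumption), a Boston-only filter might look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
  <!-- Only rows whose city column equals 'Boston' are returned -->
  <xs:element name="author" sql:relation="authors"
              sql:limit-field="city" sql:limit-value="Boston">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="au_lname" type="xs:string" sql:field="au_lname" />
        <xs:element name="city" type="xs:string" sql:field="city" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```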
Other Features in Schema Mapping
The majority of your mapped schemas will use the preceding mappings; the remaining mapping features are summarized briefly below. To see annotated schemas in action, take a look at the integrated example in the “Programming SQLXML from .NET and COM” section later in this chapter.
Other Schema Mapping Features
SQLXML Updategrams
So far, you’ve seen how to shred data using OPENXML and how to get data out of SQL Server using an annotated schema. Updategrams build upon the annotated schema concept. Updategrams allow you to change data in SQL Server using an XML format. Rather than writing T-SQL, you specify your changes to your data using before-and-after images specified in an XML format. You can execute these updategrams from ADO or ADO.NET as you’ll see in the full example in the “Programming SQLXML from .NET and COM” section later in this chapter.
The first step towards understanding XML updategrams is to understand the namespace they use, namely urn:schemas-microsoft-com:xml-updategram. This namespace is usually abbreviated to updg as part of your namespace declaration.
Every updategram must contain at least one sync element, which is an XML element that contains the data you want to change in the form of before and after elements. You can have multiple sync elements, and each element is considered a transaction, which means that everything in that sync block is either completely committed or entirely rolled back. The before element contains the data as it was before the change. You’ll want to specify a key so that SQL Server can find the data that you want to change. You can modify only one row in your before element.
The after element is the changed data. You can imagine that an insertion will have an after but no before. On the other hand, a delete will have a before but no after. Finally, an update will have both a before and an after.
The following is an updategram in its simplest form:

<ROOT xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
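The listing above breaks off after the root element. A minimal complete updategram (the author ID and phone value here are illustrative) might look like this:

```xml
<ROOT xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
  <updg:sync>
    <!-- before identifies the row to change -->
    <updg:before>
      <authors au_id="172-32-1176" />
    </updg:before>
    <!-- after holds the new values -->
    <updg:after>
      <authors au_id="172-32-1176" phone="415 658-9932" />
    </updg:after>
  </updg:sync>
</ROOT>
```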
You’ll notice that you can optionally specify an annotated schema file that explicitly maps the elements in your updategram to columns in your tables. If you don’t specify an annotated schema file, SQL Server will use default mapping, as you saw in the annotated schema mapping section. It is also important to note that you can mix and match element- and attribute-based mapping. However, for the sake of clarity, our recommendation is to pick one style or the other.
To specify a null value with an updategram, you use the sync element’s nullvalue attribute to specify the placeholder for the null. For example, if you wanted the value of "nothing" to be null, you would use the following updategram, which uses attribute-based syntax:

<?xml version="1.0"?>
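The updategram is cut off above; a sketch of the idea (row values illustrative) might look like this:

```xml
<?xml version="1.0"?>
<ROOT xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
  <!-- Any attribute whose value is "nothing" is written as NULL -->
  <updg:sync updg:nullvalue="nothing">
    <updg:before>
      <authors au_id="172-32-1176" />
    </updg:before>
    <updg:after>
      <authors au_id="172-32-1176" phone="nothing" />
    </updg:after>
  </updg:sync>
</ROOT>
```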
You can also use parameters with your updategrams by specifying $parametername. For example, if you wanted to create a parameter for the selection of the author, you would use the following updategram:

<?xml version="1.0"?>
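The parameterized listing is truncated above. A sketch follows; the parameter name AuthorID is illustrative, and the updg:header/updg:param declaration shown here applies when the updategram is executed as a template:

```xml
<?xml version="1.0"?>
<ROOT xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
  <!-- Declare the parameter, then reference it as $AuthorID -->
  <updg:header>
    <updg:param name="AuthorID" />
  </updg:header>
  <updg:sync>
    <updg:before>
      <authors au_id="$AuthorID" />
    </updg:before>
    <updg:after>
      <authors au_id="$AuthorID" au_fname="Johnson" />
    </updg:after>
  </updg:sync>
</ROOT>
```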
If you want to use identity columns and you want to pass the identity values between tables, you can use the at-identity attribute. This attribute is a placeholder that you can include, and SQL Server will provide the correct value for it when processed. If you want to pass the identity value back to the client, you can use the returnid attribute. SQL Server will then return an XML document containing the identity value after the updategram is applied successfully to the server.
An example will make this all clearer. If we wanted to insert a new author into our authors table, delete an existing author, and change the values for yet another author, we would use the following updategram against our authors table. The next section shows how to program in .NET using the SQLXML classes to execute this code.

<?xml version="1.0"?>
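The combined listing is truncated above. A sketch of the three operations in one sync block follows (all au_id values and column data here are illustrative):

```xml
<?xml version="1.0"?>
<ROOT xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
  <updg:sync>
    <!-- Insert: empty before, populated after -->
    <updg:before></updg:before>
    <updg:after>
      <authors au_id="000-00-0000" au_fname="Jane" au_lname="Doe"
               phone="415 555-0100" contract="1" />
    </updg:after>
    <!-- Delete: populated before, empty after -->
    <updg:before>
      <authors au_id="172-32-1176" />
    </updg:before>
    <updg:after></updg:after>
    <!-- Update: both before and after -->
    <updg:before>
      <authors au_id="238-95-7766" />
    </updg:before>
    <updg:after>
      <authors au_id="238-95-7766" phone="415 658-9932" />
    </updg:after>
  </updg:sync>
</ROOT>
```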
XML BulkLoad

If you want to load a large set of XML data into SQL Server, you’ll want to use the XML bulk-loading capabilities of SQLXML. Don’t (we repeat, don’t) use updategrams or OPENXML: you’ll find their performance lacking when loading large amounts of XML data. Of course, you may be wondering what constitutes a large amount of XML data. Well, it depends on a number of factors, such as the size and complexity of your XML. You could be loading hundreds of small XML files, or you could be loading one big XML file. If you have fast processors, a lot of memory, and fast disks on your server, you could possibly get away with using OPENXML. Our recommendation is to run a test on your system to see which method performs acceptably with the data volume that you intend to run.
XML BulkLoad is an object that you call as part of the SQLXML object model that in turn calls the bulk-loading capabilities of SQL Server to load your data from an XML source into SQL Server. Our recommendation is to run a trace while you’re bulk-loading your XML data, and you’ll see the bulk-loading operations appear as part of that. This will give you insight into the commands that BulkLoad is running on your behalf and will allow you to troubleshoot any errors that occur or misshapen data that is imported.
XML BulkLoad leverages the mapping schema technology that we’ve been talking about in this chapter. The mapping schema tells the BulkLoad component where to place your XML data in the database. The object model for XML BulkLoad is very straightforward: there is one method, called Execute, and a number of properties that allow you to configure how the bulk load is handled, described below. The Execute method takes two parameters. The first is the path to the schema mapping file. The second, optional parameter is a path or stream to the XML file you want to import.
The first property that you should understand is the Transaction Boolean property. Normally, you want to leave this property false to make the load nontransacted. This will increase your performance at the cost of not being able to roll back if there is a failure.
The next property is the XMLFragment Boolean property. If you set this to true, BulkLoad allows XML fragments, which are XML documents with no root element.
If you are working with constraints and you want those constraints enforced as part of your BulkLoad, you’ll want to set the CheckConstraints property to true. By default, BulkLoad turns off constraint checking, which improves performance. Regardless of whether you set this to true or false, you’ll want to place primary keys ahead of a table with a foreign key in your mapping schema.
If you want to ignore duplicate keys, you need to set the IgnoreDuplicateKeys Boolean property to true. This is useful if you get data feeds where the person providing the data feed may not know what data is in your database and you don’t want the bulk-loading operation to fail because of duplicate keys. BulkLoad will not commit the row with the duplicate key, but instead just jump over that row in processing.
Many database designers use identity columns to guarantee uniqueness of keys in the table. Sometimes the XML you are loading has an identity-like element that you may want to use rather than having SQL Server generate a value using its own algorithm. To do this, set the KeepIdentity property to true. This is the default value for this property. One thing to remember is that it is a global value, so you cannot have SQL Server generate some identities and have BulkLoad pull from the XML for others.
The KeepNulls property defaults to false. With the default, BulkLoad will not automatically insert null as the value for any column that is missing a corresponding attribute or element in the XML document. If you set this property to true, be careful: BulkLoad will fail if those columns don’t allow nulls, and it will not assign a column’s default value, even if one is specified in SQL Server.
One interesting BulkLoad property is ForceTableLock, which locks the table as BulkLoad loads its data. This will speed performance of the load at the cost of locking other users out of the table. The default value is false, so BulkLoad acquires a table lock each time it inserts a record into the table.
If your target tables don’t already exist, BulkLoad can create the tables for you. You need to set the SchemaGen property to true to have BulkLoad perform this functionality. BulkLoad will take the datatypes from your schema mapping and autogenerate the correct database schema based on those datatypes. If a table or column already exists with the same name and you want to drop and re-create them, set the SGDropTables property to true.
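To make the object model concrete, here is a minimal sketch of driving BulkLoad from script via COM. The ProgID shown is the SQLXML 4.0 class, but the file names and connection string are illustrative placeholders for your environment:

```vbscript
' Minimal XML BulkLoad sketch (VBScript/COM).
' File names and connection string are illustrative placeholders.
Set objBL = CreateObject("SQLXMLBulkLoad.SQLXMLBulkload.4.0")
objBL.ConnectionString = "Provider=SQLOLEDB;Data Source=(local);" & _
                         "Initial Catalog=pubs;Integrated Security=SSPI"
objBL.ErrorLogFile = "C:\bulkload_errors.xml"  ' capture row-level failures
objBL.CheckConstraints = True   ' enforce constraints (off by default)
objBL.SchemaGen = True          ' create target tables if they don't exist
' First argument: mapping schema; second: the XML data file
objBL.Execute "AuthorsSchema.xml", "Authors.xml"
Set objBL = Nothing
```

Running a Profiler trace during the load, as suggested above, will show the commands this object issues on your behalf.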
The next section shows using BulkLoad from a managed environment. BulkLoad supports both COM and .NET so you can program from both environments with this technology.