Jerry Dixon's Exclusive New Column: SQL Server 2005 for the .NET Developer

Get the most out of your data with SQL 2005

For most of this year, I've been giving presentations and teaching classes on various aspects of Visual Studio 2005 and SQL Server 2005. In my consulting role, I've been constantly presented with problems that, while definitely solvable with current tools, simply screamed for the new products. Alas, like most of you, I haven't been in a position to use these tools in a production environment. Now the wait is over. By the time you read this, production versions of both products should be available to the general public. Now all we need is to help each other learn how to make the best use of the new and improved features that these products provide. That's where .NET Developer's Journal comes in.

The good folks at the .NETDJ have agreed to publish a series of articles in a new SQL Server 2005 column. Each month I will explore a different feature of the product, concentrating on how that feature will affect the .NET developer (after all, I am a .NET/SQL Server developer myself). I will start with those topics that I believe will affect the greatest number of developers, and will proceed from there. This month's topic, the feature that could easily be the most abused capability of the entire product, is .NET CLR Integration.

CLR Integration
What is CLR Integration? Simply put, it means that the .NET Common Language Runtime can be used by SQL Server, which in turn allows the database engine to load and execute .NET assemblies. Because of this, a developer can use a .NET language to create stored procedures, triggers, user-defined functions, user-defined data types, and custom aggregates. The resulting possibilities are enormous. SQL objects can now utilize the power, flexibility, and performance of the .NET Framework. This includes many, but as we shall see, not all, of the thousands of classes built into the Framework.

Design Considerations
Now that we have all of this additional power at our disposal, we need to carefully consider how it should be used. Some have argued that every SQL object should now be written in a .NET language such as C#, since that would give developers and DBAs a common language. Others have said that CLR integration should never be used, because it offers no practical advantage over T-SQL. Both views miss the point: CLR Integration is a tool, and as such, it can be extremely effective when used properly. Improper use can cause as many problems as it solves.

So how does one know when objects should be written in one way or the other? In general, .NET will be the better choice when there is complex logic involved, when built-in .NET Framework classes would be helpful, or when CPU-intensive tasks are performed. The native code produced by .NET excels in these situations. T-SQL, on the other hand, should be used when there are extensive data access or batch operations to be performed. Of course, specific situations may dictate a departure from these general rules. (Remember not to sell T-SQL short. As we will discuss in a future article, T-SQL has gained many new capabilities itself, and can definitely give .NET a run for its money.)

Not all such decisions should be made based on performance alone, however. Application design and maintenance should also be considered. Because we can now write stored procedures in .NET, some have proposed placing all of our business logic in the database. Few would argue in favor of this, at least in the general case. The fact that our database is much more powerful than it used to be is no reason to abandon good, solid application design principles. CLR integration should be used as an additional tool with which to build a high-performance data layer, not to remove the need for a well-designed business-logic tier.

.NET Object Creation
Once we decide to use .NET classes to build our SQL objects, we must see to their actual implementation. Visual Studio 2005 comes to our rescue here, and makes things quite easy (see Figure 1). Here we see that I have chosen a C# Database project, and selected the "SQL Server Project" template. This template creates an in-process assembly (or class library), which is the type of assembly required by SQL Server. After naming my project and clicking the "OK" button, I am asked to select a database connection, as shown in Figure 2. I can select an existing connection or create a new one. If I choose to create a new connection, I will be presented with a fairly standard database connection form, which is shown in Figure 3. I can fill out the form, or click on the "Advanced..." button, which allows me to manually change a whole list of connection parameters. I can also test the connection by clicking on the "Test Connection" button. After I have selected an appropriate connection, I accept it by clicking the "OK" button.

It is quite likely that I will want to test and debug the project while I am working on it, so Visual Studio displays the question shown in Figure 4. It is asking me whether I want to enable debugging. Selecting "Yes" allows debugging, but will cause all managed threads that are currently running on the selected SQL Server to stop. This is necessary in order to begin the debugging process. If this is unacceptable, I can select "No" and forego the ability to debug.

At this point, I have a standard-looking Visual Studio project. If I browse the project with Solution Explorer, I will see that a few assemblies have automatically been referenced. The two assemblies of most interest here are System.Data and sqlaccess. The sqlaccess assembly contains classes that allow .NET code to manipulate objects inside of SQL Server, such as tables. The System.Data assembly contains classes that we will code against inside our managed objects. One especially important namespace inside System.Data is System.Data.SqlServer. This namespace contains many classes that we will use on a regular basis. A few of these are summarized in Table 1. Why should we use these classes instead of the SqlConnection and SqlCommand classes found in System.Data.SqlClient? The answer is: performance and reliability. Remember that this code is different from code in a "normal" assembly in one major respect: it is invoked by SQL Server, and runs inside SQL Server. The SqlServer classes allow code to communicate directly with SQL Server via an internal, optimized data provider. The provider used by SqlClient classes is not as efficient in this case, as it is optimized for code that runs outside of SQL Server.
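As a hedged point of reference (the exact surface area shifted between pre-release builds and the final product): in the released version of SQL Server 2005, these in-process types ship in the Microsoft.SqlServer.Server namespace, and the optimized internal provider is reached by opening the special "context connection". A minimal managed stored procedure along those lines might look like this; the table name is illustrative (it comes from the AdventureWorks sample database):

```csharp
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;  // in-process types such as SqlContext and SqlPipe

public partial class StoredProcedures
{
    [SqlProcedure]
    public static void ListProductNames()
    {
        // "context connection=true" opens the optimized in-process
        // connection to the SQL Server instance hosting this code.
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT Name FROM Production.Product", conn);  // illustrative AdventureWorks query
            // SqlContext.Pipe streams the result set straight back to the caller.
            SqlContext.Pipe.ExecuteAndSend(cmd);
        }
    }
}
```

Note the design point: because the code already runs inside the server, no network round trip, login, or connection pooling is involved when the context connection is used.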

To create my managed SQL object, I select "Add New Item..." from the Project Menu. The dialog box shown in Figure 5 appears. This is where I select the actual object type that I want to create. In this example, I want to create a new user-defined function, so I select that template from the dialog box. I then give the new function a name and click the "Add" button. The code shown in Listing 1 is automatically created.

When working with managed SQL objects, there are two additional namespaces that we need to use. The first, System.Data.Sql, contains attribute definitions for each of the SQL object types. In this example, I am creating a user-defined function, so I need the SqlFunction attribute. If I want the name of the user-defined function to be the same as the .NET method name, I only need to include the SqlFunction attribute as-is. However, I can change the name by setting the attribute's Name property like this:

[SqlFunction(Name = "MyNewName")]

The second namespace, System.Data.SqlTypes, provides access to the SQL Server-specific data types that I must use in my managed objects. I want to create a function that returns the factorial for a given number, so I change the code as shown in Listing 2. When built, this will create a C# function that I can call from SQL Server T-SQL statements.
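The published Listing 2 is not reproduced here, but a factorial function of the kind described might be sketched as follows. The class and method names are illustrative, and the using directives reflect the released product (earlier builds placed the attributes under System.Data.Sql):

```csharp
using System.Data.SqlTypes;        // SQL Server-specific data types (SqlInt64, etc.)
using Microsoft.SqlServer.Server;  // SqlFunction attribute in the released product

public partial class UserDefinedFunctions
{
    // Exposed to T-SQL once the wrapper function is created during deployment.
    [SqlFunction(Name = "Factorial")]
    public static SqlInt64 Factorial(SqlInt64 n)
    {
        // Propagate SQL NULL rather than throwing.
        if (n.IsNull)
            return SqlInt64.Null;

        long result = 1;
        for (long i = 2; i <= n.Value; i++)
            result = checked(result * i);  // checked: overflow throws at 21! rather than wrapping

        return new SqlInt64(result);
    }
}
```

Using SqlInt64 instead of long for the parameter and return value is what lets the function round-trip SQL's BIGINT, including NULL values.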

SQL Server Deployment
The resulting assembly must somehow be "installed" in SQL Server before it can be used. This process is known as deployment. As you can see in Figure 6, Visual Studio has options on the Build menu that can be used to deploy the assembly, but there are other steps after this that must be completed using T-SQL. Therefore, let's switch to SQL Server Management Studio (SSMS) and perform the entire deployment process using T-SQL. Refer to Listing 3. As a security measure, the server must first be configured to allow CLR code to run. This is accomplished by executing sp_configure (to turn on the 'clr enabled' option) followed by RECONFIGURE, as shown. Next, the assembly must be imported into SQL Server (yes, imported - the assembly itself is stored in the database). The new CREATE ASSEMBLY statement is used for this. Using this statement, we give the assembly a name (MyFuncs), specify the path from which to read the assembly, and indicate the permission level we wish to give to the assembly.
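Roughly, these first steps look like the following in T-SQL (the file path is illustrative; see Listing 3 for the published version):

```sql
-- Enable CLR integration (a server-wide setting, off by default)
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
GO

-- Import the assembly into the current database.
-- The binary itself is copied into the database at this point.
CREATE ASSEMBLY MyFuncs
FROM 'C:\Assemblies\MyFuncs.dll'     -- illustrative build-output path
WITH PERMISSION_SET = SAFE;
GO
```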

Permission levels are very important. They control the rights that an assembly has with regard to resources outside of SQL Server. The most restrictive - and recommended - setting is SAFE. It allows no access to external resources. The next level, EXTERNAL_ACCESS, allows the use of the file system, the network, and the registry. The final level, UNSAFE, should be used with extreme caution. It allows access to any external resource to which the SQL service account has access. This can be very dangerous and should be allowed only for the most trusted assemblies. Whichever setting is selected, it is important to note that not all Framework classes can be used when creating SQL objects. Only those classes marked as safe for SQL Server can be used. Excluded classes include those that could cause SQL Server instability, such as those in the System.Threading namespace. See the Framework documentation for specifics.

When the CREATE ASSEMBLY statement has been executed, the assembly will be imported into the system. The next step is to create a T-SQL wrapper for the user-defined function. Refer again to Listing 3. The wrapper is created by using the new EXTERNAL NAME clause. We start by creating a regular T-SQL function, but instead of writing the function implementation, the EXTERNAL NAME clause is used. This clause specifies the fully qualified name of the .NET function, starting with the assembly name given in the CREATE ASSEMBLY statement.
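A wrapper of the kind described might look like this, assuming the managed code was compiled into a class named UserDefinedFunctions with a method named Factorial (those names are illustrative; only the assembly name MyFuncs comes from the CREATE ASSEMBLY step):

```sql
-- The EXTERNAL NAME clause takes assembly_name.class_name.method_name
CREATE FUNCTION dbo.Factorial (@n BIGINT)
RETURNS BIGINT
AS EXTERNAL NAME MyFuncs.UserDefinedFunctions.Factorial;
GO

-- Call it like any other scalar UDF:
SELECT dbo.Factorial(10);   -- 3628800
```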

Now our function is ready to use. Let's have some fun. I've created a pure T-SQL version of our factorial function so that we can compare the relative performance of the two implementations. It is shown in Listing 4. The test results can be seen in Table 2. The .NET version is a bit faster when called with larger parameters, and approximately the same speed with smaller parameters. This could be important in certain applications. I should admit that this particular implementation of the factorial algorithm is for demo purposes only, and wouldn't be practical in production. Factorial 21 or greater fails because the result is larger than a BIGINT can handle.
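The published Listing 4 is not reproduced here, but a pure T-SQL implementation along those lines might be sketched as:

```sql
-- Demo-only T-SQL factorial for the performance comparison;
-- like the .NET version, it fails with an arithmetic overflow at 21!
CREATE FUNCTION dbo.FactorialTSQL (@n BIGINT)
RETURNS BIGINT
AS
BEGIN
    DECLARE @result BIGINT, @i BIGINT;
    SELECT @result = 1, @i = 2;
    WHILE @i <= @n
    BEGIN
        SET @result = @result * @i;
        SET @i = @i + 1;
    END
    RETURN @result;
END
GO
```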

This month we have seen how .NET code can be used to create a user-defined function. Remember that .NET code can also be used to create stored procedures, triggers, user-defined data types, and custom aggregates. This can provide additional capabilities and improved performance over pure T-SQL; however, this is not guaranteed. T-SQL has been improved in this version of SQL Server, and can compete with .NET in many cases. Next month we will explore some of those new T-SQL features.


Jerry Dixon is a senior developer and architect for ACH Food Companies in Memphis, Tennessee. Over the past 16 years he has led development projects for a number of enterprise, mid-level, and small business organizations. While he has fulfilled multiple roles as an infrastructure designer, database administrator, and software developer, he specializes in XML, SQL and ASP.NET. He is a co-leader and frequent presenter at the Memphis .NET User Group. Jerry holds the following Microsoft certifications: MCSD (VB 6.0 and .NET), MCDBA (SQL 2000), MCSA (Windows 2000 and 2003), MCSE (Windows 2000), MCAD (.NET), MCT. He resides in Olive Branch, MS with his wife and son.
