Sunday, January 17, 2010

Two Useful SQL Queries

On my current project at Adage Technologies, I had the opportunity to take an existing SQL Server database and redesign its structure.  The existing database had no foreign key relationships and contained duplicate data across various tables and columns.  One of my first tasks was to determine which columns were unused or contained the same value for every record.  I googled and discovered a post on Stack Overflow about finding database columns that contain only null values.

http://stackoverflow.com/questions/63291/sql-select-columns-with-null-values-only

This addressed one of the scenarios I was attempting to eliminate, where a column was never used and contained only nulls.  I was also looking for columns that contained the same value repeated in every row.  I therefore modified Charles Graham’s answer from Stack Overflow and created the query below:
DECLARE @col varchar(255), @table varchar(255), @cmd varchar(max), @cmd2 varchar(max)

DECLARE getTableName CURSOR FOR
SELECT [name] FROM sys.tables

OPEN getTableName

FETCH NEXT FROM getTableName INTO @table

WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE getinfo CURSOR FOR
    SELECT c.name FROM sys.tables t JOIN sys.columns c ON t.Object_ID = c.Object_ID
    WHERE t.Name = @table

    OPEN getinfo

    FETCH NEXT FROM getinfo INTO @col

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Report columns that contain only NULL values
        SELECT @cmd = 'IF NOT EXISTS (SELECT TOP 1 * FROM [' + @table + '] ' +
                      'WHERE [' + @col + '] IS NOT NULL) ' +
                      'BEGIN PRINT ''' + @table + ' - ' + @col + ' - (null)'' END'
        EXEC(@cmd)

        -- Report columns whose value is identical in every record
        SELECT @cmd2 = 'SET ANSI_WARNINGS OFF ' +
                       'DECLARE @count int, @val varchar(255) ' +
                       'SELECT @count = (SELECT COUNT(DISTINCT [' + @col + ']) FROM [' + @table + ']) ' +
                       'IF (@count < 2) BEGIN ' +
                       'SELECT @val = (SELECT TOP 1 [' + @col + '] FROM [' + @table + ']) ' +
                       'PRINT ''' + @table + ' - ' + @col + ' - '' + @val + '''' END'
        EXEC(@cmd2)

        FETCH NEXT FROM getinfo INTO @col
    END

    CLOSE getinfo
    DEALLOCATE getinfo

    FETCH NEXT FROM getTableName INTO @table
END

CLOSE getTableName
DEALLOCATE getTableName

The modified query reports every column that holds the same value (or only nulls) in all records, printing results in the following format: Table – Column – Value.

The second query I found useful while testing my migration code was written by Mitchel Sellers (http://www.mitchelsellers.com/blogs/articletype/articleview/articleid/121/determing-sql-server-table-size.aspx).

This query returns the record count for all tables, as well as information on the amount of space used by each table.

CREATE PROCEDURE GetAllTableSizes
AS
/*
    Obtains space used data for ALL user tables in the database
*/
DECLARE @TableName VARCHAR(100)    --For storing values in the cursor

--Cursor to get the name of all user tables from the sysobjects listing
DECLARE tableCursor CURSOR
FOR
select [name]
from dbo.sysobjects
where OBJECTPROPERTY(id, N'IsUserTable') = 1
FOR READ ONLY

--A procedure level temp table to store the results
CREATE TABLE #TempTable
(
    tableName varchar(100),
    numberofRows varchar(100),
    reservedSize varchar(50),
    dataSize varchar(50),
    indexSize varchar(50),
    unusedSize varchar(50)
)

--Open the cursor
OPEN tableCursor

--Get the first table name from the cursor
FETCH NEXT FROM tableCursor INTO @TableName

--Loop until the cursor was not able to fetch
WHILE (@@Fetch_Status >= 0)
BEGIN
    --Dump the results of the sp_spaceused query to the temp table
    INSERT  #TempTable
        EXEC sp_spaceused @TableName

    --Get the next table name
    FETCH NEXT FROM tableCursor INTO @TableName
END

--Get rid of the cursor
CLOSE tableCursor
DEALLOCATE tableCursor

--Select all records so we can use the results
SELECT *
FROM #TempTable

--Final cleanup!
DROP TABLE #TempTable

GO
These two queries were both extremely useful during the database design and migration process.

Tuesday, June 2, 2009

Update

I recently left Catalyst Software Solutions to join Adage Technologies, so I am in the process of migrating my blog. I have copied over all of my old posts; however, the formatting did not come out perfectly, so I apologize for any glitches.

I decided to go with Blogger for the time being as I've read great things about it and it is simple to use. I may decide to migrate again later, but for the time being this will be my new blog's home.

Welcome.

Thursday, March 12, 2009

Registering CSS and Javascript files within Sharepoint WebParts

While working on my latest Sharepoint project, Ben Padgett and I discovered that we needed an easy and reusable way to import custom stylesheets and javascript files for individual web parts. We wrote the following two static methods, which use built-in ASP.Net and Sharepoint functionality to register stylesheets and javascript files within a webpart.

By default, Sharepoint master pages have a ContentPlaceHolder named “PlaceHolderAdditionalPageHead”. This ContentPlaceHolder is where you can register your own additional stylesheets. Sharepoint goes a step further and also includes a CssRegistration control for registering css files. We set the Name of the CssRegistration control to the path where our style sheet is located. Below is the code for registering a stylesheet:

public static void RegisterCSS(string cssPath, WebPart webpart)
{
    ContentPlaceHolder _Header = (ContentPlaceHolder)webpart.Page.Master.FindControl("PlaceHolderAdditionalPageHead");
    if (_Header != null)
    {
        CssRegistration cssControls = new CssRegistration();
        cssControls.Name = cssPath;
        _Header.Controls.Add(cssControls);
    }
}

Our method for importing javascript files takes advantage of the ASP.Net ClientScriptManager. Similar to the RegisterCSS method, the two required parameters are the relative path to the javascript file and the webpart that registers it. Below is the code for registering a custom javascript file:

public static void RegisterScript(string scriptPath, WebPart webpart)
{
    // Format string for the script include tag: {0} is the script type, {1} is the script path
    string IncludeScriptFormat = @"<script type=""text/{0}"" src=""{1}""></script>";

    if (!webpart.Page.ClientScript.IsClientScriptBlockRegistered(scriptPath))
    {
        string includescript = String.Format(IncludeScriptFormat, "javascript", scriptPath);
        webpart.Page.ClientScript.RegisterClientScriptBlock(webpart.GetType(), scriptPath, includescript);
    }
}

These methods should be called within OnInit or OnPreRender so that the stylesheet and/or javascript file is included within the page head prior to loading. To register a style sheet located in the Layouts hive directory, you just need to pass the relative path to the file as well as the WebPart that uses it. Below is an example OnInit method within a WebPart that makes use of our static functions.

protected override void OnInit(EventArgs e)
{
    MySharepointLibrary.RegisterCSS("/_layouts/css/MyStyleSheet.css", this);

    MySharepointLibrary.RegisterScript("/_layouts/js/MyJavascript.js", this);

    base.OnInit(e);
}

Overall, these methods help keep stylesheets and javascript files organized by allowing individual WebParts to import their own styles and scripts.

Feel free to leave any questions or concerns in the comments.

Thanks.

Cross-browser drag and drop “ListBoxes” using jQuery.

My latest project involved creating a public-facing website on top of Microsoft Sharepoint. For one of the custom administrative pieces, I determined that drag-and-drop lists provided the best solution for managing collections of objects. However, I ran into several problems while trying to implement this using jQuery with support for all browsers.

The ideal option was to use an ASP.Net ListBox. This control is rendered as a select html element, with all of the ListItems rendered as html option elements. Unfortunately, I quickly learned that this was a limiting factor in what I was trying to accomplish. It is not possible to bind events to option elements in any browser except Firefox. Therefore the following jQuery statement does not work in most browsers:

$("option").draggable();

After discovering this limitation, I decided to use ASP.Net BulletedList controls. These controls are rendered as unordered list elements (<ul>), with each ListItem rendered as an html list item element (<li>). Internet Explorer, Chrome, Safari and Firefox all support binding events to list item elements. Therefore, the following statement was possible in all browsers:

$("li").draggable();

Now that I had discovered how to get jQuery’s drag and drop working for lists, my next goal was to create a decent looking user interface. I needed to somehow get my unordered lists to look similar to a standard ListBox.

After some googling, I stumbled upon a great jQuery plugin, jScrollPane (not to be confused with Java's JScrollPane!). jScrollPane allows you to convert div containers into fully customizable scrollable boxes. Implementing jScrollPane is incredibly easy as well. All that is required is calling .jScrollPane() on the div element you want to convert.

Now that I had created a “ListBox” with draggable items, the final step was implementing droppable for my jScrollPanes. My layout consisted of a jScrollPaneDiv containing a SelectedItemsDiv. Within the SelectedItemsDiv, there was a <ul> with the id of SelectedItemsList. With this setup, the following jQuery snippet created a droppable container within my jScrollPane.

   1: $("#SelectedItemsDiv").droppable({ 
   2:     accept: 'li', 
   3:     hoverClass: 'ui-state-hover',
   4:     drop: function(event, ui) {    
   5:         (ui.draggable).clone().appendTo("#SelectedItemsList");
   6:     } 
   7: });    
I hope this helps clarify some of the issues and requirements involved in implementing drag-and-drop lists using jQuery. If anyone has any questions or would like a more complete example, feel free to leave a comment!

Thanks.

Thursday, November 13, 2008

LINQ to SQL: Force update on entity

In my last project, we used LINQ to SQL for both the data access layer and the business logic layer. The entities generated in a DBML (database markup language) file are partial classes, which allowed business logic to be implemented directly in the objects by extending the classes' partial definitions. This allowed for rapid development once the data model was established in SQL Server 2005. However, during the project there were quite a few problems with using LINQ to SQL generated entities as the business objects, as I have mentioned in some of my previous blog posts. The most recent problem I ran into involved the lack of support for change tracking on properties defined outside of the designer-generated file.

In my solution, I ran into a situation in which I wanted to force an Update on an entity. Since I used the DBML designer generated entities to store my business logic, in several of these objects I added new properties to the partial class definitions. I then used stored procedures to save the changes in these new properties to the database. Since all of the CRUD (create, read, update, delete) operations are handled by the DBML, I designed the system so that the update stored procedures for the custom properties would be fired from the parent object's Update method (for instance, partial void UpdateAddress(Address instance)). However, I quickly discovered that LINQ to SQL's change tracking is only done on the properties implemented in the designer file.
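To make that design concrete before describing the problem, here is a minimal sketch of the kind of partial update method involved. This is not the project's actual code: the DataContext name, the Notes property, and the SaveAddressExtras stored procedure are all hypothetical.

// Hypothetical partial class extending the designer-generated DataContext.
public partial class MyDataContext
{
    // Implementing this designer-declared partial method replaces the
    // default UPDATE behavior for Address entities.
    partial void UpdateAddress(Address instance)
    {
        // Let LINQ to SQL issue the normal UPDATE for the mapped columns.
        this.ExecuteDynamicUpdate(instance);

        // Then persist the externally defined data with our own stored
        // procedure (the procedure and the Notes property are made up
        // for this example).
        this.ExecuteCommand("EXEC dbo.SaveAddressExtras {0}, {1}",
            instance.AddressID, instance.Notes);
    }
}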

For instance, if I have an Order table in my database, the designer generates a partial Order class. I extend my Order class by adding a CustomerAddress object property. Due to databinding and some UI features I want to implement, I want the Customer and Address information to be displayed and databound to a single object. Therefore, a CustomerAddress object contains all of the properties found in the Customer and Address designer generated classes. When a user makes a change to a property in the CustomerAddress object, ideally this would trigger the parent Order object to be marked as "dirty" and require an update. However, LINQ to SQL change tracking does not support this functionality.

At first, I attempted to fix this problem by having my externally defined properties raise SendPropertyChanged("PropertyName") in the set definition of the property. For instance, when a property of the CustomerAddress object on an Order is changed, the Order object should report that its CustomerAddress changed, and an Update for the Order should therefore appear in DataContext.GetChangeSet(). However, this does not work.
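For reference, the attempt looked roughly like the sketch below (the backing field name is illustrative). Even with SendPropertyChanged in the setter, the Order never appeared in DataContext.GetChangeSet(), because CustomerAddress is not a column mapped in the designer file.

// Extending the designer-generated Order entity in a separate file.
public partial class Order
{
    private CustomerAddress _customerAddress;

    // Externally defined property; it is not mapped to a column in the DBML.
    public CustomerAddress CustomerAddress
    {
        get { return _customerAddress; }
        set
        {
            _customerAddress = value;

            // Raises the PropertyChanged event, but since the property is not
            // a mapped column, LINQ to SQL does not mark the Order as dirty.
            this.SendPropertyChanged("CustomerAddress");
        }
    }
}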

The second thought that came to mind was that perhaps there exists a method such as "UpdateOnSubmit", similar to InsertOnSubmit and DeleteOnSubmit. However, LINQ to SQL does not provide a forced Update method.

Ultimately, the solution that worked involved modifying the data model and adding a "LastChange" DateTime column to my Order table. Whenever the CustomerAddress property is modified, the LastChange property is set to the current DateTime. This will trigger an Update on the Order object in the DataContext change set. This enabled the effect I desired: the Order object would be marked as "dirty", causing UpdateOrder(Order instance) to be called. My stored procedures that implemented saving / updating CustomerAddress objects were then executed from within UpdateOrder(Order instance).
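A rough sketch of the working approach is below, assuming the CustomerAddress property from the earlier sketch and a LastChange column that is mapped in the designer file:

using System;

// Same partial extension of the Order entity, now touching a mapped column.
public partial class Order
{
    private CustomerAddress _customerAddress;

    public CustomerAddress CustomerAddress
    {
        get { return _customerAddress; }
        set
        {
            _customerAddress = value;

            // LastChange is a real column mapped in the DBML, so assigning it
            // marks the Order as dirty. The Order then shows up in the change
            // set, and UpdateOrder(Order instance) fires on SubmitChanges,
            // which is where the CustomerAddress stored procedures run.
            this.LastChange = DateTime.Now;
        }
    }
}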

If anyone has any questions, or any alternative suggestions, I would appreciate the feedback in the comments!

Thursday, November 6, 2008

Dynamically create WCF service clients.

I'm currently studying for the Microsoft .Net Framework 3.5 WCF certification exam. One of the concepts that we've been discussing around the office recently is how to properly consume self-developed WCF services. Over the past few weeks, Josh Heyse has hammered into my brain the fact that I should not be adding service references to services encapsulated within a solution.

Despite the fact that adding a service reference is incredibly easy, it can lead to complications if the service is modified or changed. Service references use metadata (wsdl) to regenerate service contracts and proxies. Because the service contracts in the client are actual copies, they don't automatically change when the original service contract changes. If my service definition changed while developing, I would have to force refreshes on my service references to generate updated proxies, and errors are harder to catch at compile time. Therefore, when you have access to the dll in which the ServiceContracts are defined, it is better to dynamically create your clients using a ChannelFactory and the ServiceContract definition.

In light of all this, I've decided to create a simple guide to dynamically create WCF clients using a ChannelFactory. In addition, I'm going to demonstrate how to create WCF clients based off of endpoint information stored in a configuration file.

Step 1: Create a Windows Class Library project named Contracts. This is where we will store all of the ServiceContracts and DataContracts for our WCF service. In this example I'm going to create a simple math service, with the service contract defined by an interface named IMathService. Below is a copy of the code for this service definition.

using System.ServiceModel;

namespace Contracts
{
    [ServiceContract]
    public interface IMathService
    {
        [OperationContract]
        int AddInt(int a, int b);

        [OperationContract]
        int SubtractInt(int a, int b);

        [OperationContract]
        double AddDouble(double a, double b);

        [OperationContract]
        double SubtractDouble(double a, double b);
    }
}

Step 2: In the same solution, create a WCF Service Library named SampleMathService. Add the Contracts project as a reference to this project. Delete the default Service1.cs and IService1.cs files. Instead, add a new class file with the name MathService.cs. In this new class, add "using Contracts;" to the top, and implement the IMathService interface. Below is a copy of the code:

using Contracts;

namespace SampleMathService
{
    public class MathService : IMathService
    {
        #region IMathService Members

        public int AddInt(int a, int b)
        {
            return a + b;
        }

        public int SubtractInt(int a, int b)
        {
            return a - b;
        }

        public double AddDouble(double a, double b)
        {
            return a + b;
        }

        public double SubtractDouble(double a, double b)
        {
            return a - b;
        }

        #endregion
    }
}

For this example, I configured the MathService using the default wsHttpBinding and gave it the base address http://localhost:8888/MathService/. Below is the configuration file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.web>
    <compilation debug="true" />
  </system.web>

  <system.serviceModel>
    <services>
      <service behaviorConfiguration="SampleMathService.MathServiceBehavior"
               name="SampleMathService.MathService">
        <endpoint address="" binding="wsHttpBinding" contract="Contracts.IMathService">
          <identity>
            <dns value="localhost" />
          </identity>
        </endpoint>
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8888/MathService/" />
          </baseAddresses>
        </host>
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior name="SampleMathService.MathServiceBehavior">
          <serviceMetadata httpGetEnabled="true" />
          <serviceDebug includeExceptionDetailInFaults="false" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

There is an endpoint for the metadata in this configuration; however, we will not be using it, since we have access to the ServiceContract definition (Contracts.IMathService)!

Step 3: Add a Windows Console Application project to your solution. Add the Contracts project as a reference, and also add a reference to the System.ServiceModel library. This is where we will create and test our dynamic client.

To allow the service configuration to be modified outside of compiled code, add a configuration file to the project. Within this App.config file, we need to add the client endpoint information within the system.serviceModel configuration section. My App.config is displayed below:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <client>
      <endpoint name="MathServiceEndpoint"
                address="http://localhost:8888/MathService/"
                contract="Contracts.IMathService"
                binding="wsHttpBinding"/>
    </client>
  </system.serviceModel>
</configuration>

I named my endpoint MathServiceEndpoint, and the address, contract, and binding are all defined there for when we create our client dynamically. This is a useful step because it allows us to change the endpoint information in the future without having to modify any code.

We are now ready to code our test client and application. The code to create a client dynamically is incredibly simple, especially since we have all of our endpoint information defined within our App.config file. The System.ServiceModel namespace contains the ChannelFactory class, which allows for the creation of channels that can connect to the service endpoint. To create a client, all we have to do is initialize a ChannelFactory typed to the service's ServiceContract. Using this ChannelFactory, we can create a new channel to use as our proxy for communicating with our WCF service. The code that demonstrates this is below:

ChannelFactory<IMathService> factory = new ChannelFactory<IMathService>("MathServiceEndpoint");

IMathService proxy = factory.CreateChannel();

Once we have created our proxy channel, we are able to call the OperationContracts defined in our IMathService ServiceContract. Below is a copy of my Console Application, which calls each of the OperationContracts in IMathService.

using System;
using System.ServiceModel;
using Contracts;

namespace MathClientTestApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            ChannelFactory<IMathService> factory = new ChannelFactory<IMathService>("MathServiceEndpoint");

            factory.Open();

            IMathService proxy = factory.CreateChannel();

            // Test AddInt(int a, int b)
            int addIntResult = proxy.AddInt(3, 2);
            Console.WriteLine(string.Format("AddInt(3, 2) = {0}", addIntResult));

            // Test SubtractInt(int a, int b)
            int subtractIntResult = proxy.SubtractInt(3, 2);
            Console.WriteLine(string.Format("SubtractInt(3, 2) = {0}", subtractIntResult));

            // Test AddDouble(double a, double b)
            double addDoubleResult = proxy.AddDouble(4.5, 5.5);
            Console.WriteLine(string.Format("AddDouble(4.5, 5.5) = {0}", addDoubleResult));

            // Test SubtractDouble(double a, double b)
            double subtractDoubleResult = proxy.SubtractDouble(9.5, 6.1);
            Console.WriteLine(string.Format("SubtractDouble(9.5, 6.1) = {0}", subtractDoubleResult));

            factory.Close();

            Console.ReadLine();
        }
    }
}

As you can see, I call each of the operations that IMathService offers and print the results to the console. In addition, I added factory.Open() and factory.Close() to explicitly indicate when the factory is available for creating channels and when it should be disposed.

I hope you enjoyed this simple guide to consuming WCF services dynamically! If you have any questions, feel free to leave them in the comments section.