Archive for May, 2008

Introducing TextBox Limiter Control Ajax Control Toolkit Extender

May 29th, 2008 by Sidar Ok

You can download the sources from here

The ASP.NET TextBox has an integer attribute "MaxLength" which corresponds to the HTML text input's property of the same name. It works perfectly when the textbox is a single line, rendered as a normal input of type "text".

But when we want to work in a multiline box, such as an e-mail message or an SMS, we want to limit it in the same way, and what happens? We see that the generated control is a "textarea", and it doesn't support a maximum length! Gee!

Now of course we can use Regular Expression validators to validate and tell the user at the client side, but we don't want to just tell! We want to prevent the text from exceeding the predefined size too!

That's why I came up with this Ajax Control Toolkit extender that I called TextboxLimitExtender. We just give it the multiline textbox to operate on and the maximum length. I also added an option to show how many characters are left on a text control of your choice. The extender also contains a server side method to double check the length on the server.

Here is a screenshot of what you will expect to get at the end of it:


Picture 1. Extender in action

How to Use It

After adding the TextboxLimitExtender and Ajax Control Toolkit assemblies to your project as references, add the following at the beginning of the page or user control where you want to use the TextboxLimitExtender:

<%@ Register Assembly="TextboxLimitExtender" 
    Namespace="TextboxLimitExtender" TagPrefix="cc1" %>

Of course, we have to be sure that we have a script manager:

<asp:ScriptManager ID="sm" runat="server" />

Now let’s assume that our target textbox is defined like the following:

<asp:TextBox ID="limitedTextBox" runat="server" TextMode="MultiLine" />

And just beneath it we have our static text and a label to show how many characters are left:

You have <asp:Label ID="charsLeftLabel" runat="server" ForeColor="Red" />
chars left.
Now the moment of truth: with these controls in place, the extender goes like this:

<cc1:TextboxLimitExtender ID="TextboxLimitExtender1" runat="server"
    MaxLength="50" TargetControlID="limitedTextBox"
    TargetCountTextControlId="charsLeftLabel">
</cc1:TextboxLimitExtender>

How it Works

It handles every key hit and checks whether the textbox length has exceeded the maximum length. If it hasn't, it does nothing. If it has, it cancels the event so the offending characters never get typed.

In addition, we need to handle copy & paste behaviors to prevent them from exceeding the limit, for the same reasons as above.

Implementation

Server Side

We will have 2 properties: one for the ID of the control to write how many characters are left on, and another to keep the maximum length.

Here is TextboxLimitExtender.cs, which injects these values for the script:

[Designer(typeof(TextboxLimitExtenderDesigner))]
[ClientScriptResource("TextboxLimitExtender.TextboxLimitExtenderBehavior",
    "TextboxLimitExtender.TextboxLimitExtenderBehavior.js")]
[TargetControlType(typeof(ITextControl))]
public class TextboxLimitExtender : ExtenderControlBase
{
    [ExtenderControlProperty]
    [DefaultValue("")]
    [IDReferenceProperty(typeof(ITextControl))]
    public string TargetCountTextControlId
    {
        get
        {
            return GetPropertyValue("TargetCountTextControlId", string.Empty);
        }
        set
        {
            SetPropertyValue("TargetCountTextControlId", value);
        }
    }

    [ExtenderControlProperty]
    [DefaultValue(1000)]
    public int MaxLength
    {
        get
        {
            return GetPropertyValue<int>("MaxLength", 0);
        }
        set
        {
            SetPropertyValue<int>("MaxLength", value);
        }
    }

    /// <summary>
    /// Validates the target textbox's text against the maximum length.
    /// </summary>
    /// <returns></returns>
    public bool Validate()
    {
        return ((ITextControl)this.TargetControl).Text.Length <= MaxLength;
    }
}

As you can see, both the target control and the control that receives the count are of type ITextControl. This interface is implemented by every control that has a Text property, so you can swap between TextBoxes and Labels. Here is a screenshot where the count is written to a TextBox instead of a Label:


Picture 2. Textbox Limiter outputting to a Textbox instead of a Label
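
On the server side, the double check mentioned at the start of the post is done by calling the extender's Validate() method before trusting the posted text. Here is a minimal usage sketch (the button click handler and its name are made up for illustration; the control IDs match the markup above):

protected void sendButton_Click(object sender, EventArgs e)
{
    // Reject the postback if someone bypassed the client side limit.
    if (!TextboxLimitExtender1.Validate())
    {
        charsLeftLabel.Text = "Message is too long.";
        return;
    }

    // Safe to work with limitedTextBox.Text from here on.
}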

Client Side

In the behaviour file we define the variables that come from the server side and the events needed to achieve the behaviour. The code below shows how the behaviour is created; we also initialise the handlers that we are going to use:

TextboxLimitExtender.TextboxLimitExtenderBehavior = function(element) {
    TextboxLimitExtender.TextboxLimitExtenderBehavior.initializeBase(this, [element]);

    // initializing property values
    this._TargetCountTextControlId = null;
    this._MaxLength = 1000;

    // initializing handlers
    this._onKeyPressHandler = null;
    this._onBeforePasteHandler = null;
    this._onPasteHandler = null;
    this._onKeyDownHandler = null;
    this._onKeyUpHandler = null;
}

The rest is the same as a standard implementation of an Ajax Control Toolkit extender, but I'll show some of the important methods listed above.

The _refreshCountTextBox method calculates the characters left and updates the count on the target count text control.

_refreshCountTextBox: function() {

        var control = this.get_element();
        var maxLength = this.get_MaxLength();
        var tbId = this.get_TargetCountTextControlId();
        var countTextBox;

        if (tbId) {
            countTextBox = $get(tbId);
        }
        else {
            return; // nowhere to write.
        }

        // IE exposes innerText, other browsers use textContent
        var innerTextEnabled = (document.getElementsByTagName("body")[0].innerText != undefined);

        if (countTextBox)
        {
            if (innerTextEnabled)
            {
                countTextBox.innerText = maxLength - control.value.length;
            }
            else
            {
                countTextBox.textContent = maxLength - control.value.length;
            }
        }
    },

On pasting, things get a bit more interesting. We need to cancel the default paste in order to perform our own, so we handle onbeforepaste:

_onBeforePaste: function(e) {
        // cancel default behaviour
        if (e) {
            e.preventDefault();
        }
        else {
            event.returnValue = false;
        }

        this._refreshCountTextBox();
    },

And now that we have cancelled the paste, we have the responsibility to reach what the user wanted to paste and trim it so that it doesn't exceed the maximum length. If it does, the trailing bits won't make it into the box:

_onPaste: function(e) {
        var control = this.get_element();
        var maxLength = this.get_MaxLength();

        // cancel default behaviour to override it
        if (e) {
            e.preventDefault();
        }
        else {
            event.returnValue = false;
        }

        var oTR = control.document.selection.createRange();
        var insertLength = maxLength - control.value.length + oTR.text.length;
        var copiedData = window.clipboardData.getData("Text").substr(0, insertLength);
        oTR.text = copiedData;

        this._refreshCountTextBox();
    },

Limitations & Remarks

Although the sample project is in .NET 3.5, the code is fully 2.0 compatible. It works fine in IE 6.0 and 7.0; in Firefox it limits the textbox but, for some reason, doesn't print the number of characters left, and I was too lazy to investigate it (see update).

Conclusion

This extender wraps up the strategy needed for limiting a textbox and showing how many characters are left. You can download the source code from here and use it in any way you want.

Feel free to post suggestions, improvements or criticism under this post or to my mail address sidarok at sidarok dot com.

UPDATE: Thanks to Michael, it works for Firefox now. Source is updated. See comments.

UPDATE 2: I am not developing the source any further, including doing compatibility checks or new updates. Please see the comments below, where people are graciously providing information on the issues they come across; don't hesitate to share with others as they are doing.


Linq to SQL with WCF in a Multi Tiered Action – Part 1

May 26th, 2008 by Sidar Ok

In many places (forums, blogs, or techy talks with colleagues) I keep hearing some ongoing urban legends about Linq to SQL:

  • You cannot implement multi tiered applications with Linq to SQL

  • Linq to SQL cannot be used for enterprise level applications

I can't say that these statements are entirely right or wrong. Of course Linq to SQL cannot handle every scenario, but in fairness it handles most scenarios, sometimes even better than some other RAD oriented ORMs. In this post I will create a simulation of an enterprise web application, having its Data Access, Services, and Presentation layers separated, and let them communicate with each other (err.., at least from service to UI) through WCF, Windows Communication Foundation.

This will be a couple of posts (maybe more), and this is the first part. I'll post the sample code with the next post.

I have to say that this article is neither an introduction to Linq to SQL nor to WCF, so you need basic knowledge of both worlds in order to benefit from this mash up. We will develop an application step by step with an easy scenario, but it will have the most important characteristics of a disconnected (from the DataContext's perspective), multi layered enterprise architecture.

While this architecture is more scalable and reliable, implementing it with Linq to SQL also has some tricks to keep in mind:

  • Our DataContext will be dead most of the time. So we won't be able to benefit from Object Tracking to generate our SQL statements out of the box.

  • This also brings to the table that we have to know which entities to delete, which to insert, and which to update. We cannot just "do it" and submit changes as we do in connected mode. This means that we have to maintain the state of the objects manually (sorry folks, I feel the same pain).

  • The transport of the data over the wire is another problem, since we don't write the entities on our own (and if we amend them, the Linq to SQL designer can be very aggressive), which brings us to 2 common options:

  • We can create our own entities, and write translators to convert from Linq Entities to our very own ones.

  • We can try to customize Linq Entities in the ways we are able to.

Since the first one is obvious and straightforward to implement, we will go down the second route to explore the boundaries of this customization.

To make it clearer what I will do, here is a basic but functional schema of the resulting n-tier application:


Picture 1 – Architectural schema of the sample app.

In our example, we are going to use Linq to SQL as an ORM. So, as you can see in the schema, Linq to SQL doesn't give us the heaven of not writing a DAL at all, but it reduces both the number of stored queries/procedures and the amount of mapping that we had to do manually before.

Developing the Application

Scenario

The scenario I came up with is a favorites web site that consists of 2 simple pages enabling its users to insert, delete, update and retrieve users and their favorites when requested. One user can have many favorites.

We will simply place 2 GridViews on the page and handle their events to make the necessary modifications on the model itself. This will also demonstrate a common usage.

Design

Entities

Here is the object diagram of the entities; they are the same as the DB tables:


Picture 2.Entity Diagram

Note the additional "Version" fields in the entities; they are of type Binary in .NET and timestamp in SQL Server 2005. We will use them to let Linq to SQL handle the concurrency issues for us.

Since we are going to expose a web service with the help of WCF, we need to mark our entities as DataContracts to make them available for serialization through the DataContractSerializer. We can do that by right clicking on the designer, going to Properties, and changing the Serialization property to Unidirectional as in the following picture:


Picture 3. Properties window

After doing this and saving, we will see in the designer.cs file that our entities are marked as DataContracts and their members as DataMembers.
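
To give a rough idea of what that generated code looks like, here is an abbreviated sketch of an entity (the real designer.cs contains more plumbing such as the INotifyPropertyChanging/INotifyPropertyChanged implementation, and the column names besides Version are only illustrative). Note the Version column mapped with IsVersion = true, which is what lets Linq to SQL do the concurrency checks mentioned above:

[Table(Name = "dbo.Users")]
[DataContract]
public partial class User
{
    private int _UserId;
    private string _Name;
    private System.Data.Linq.Binary _Version;

    [Column(Storage = "_UserId", IsPrimaryKey = true, IsDbGenerated = true)]
    [DataMember(Order = 1)]
    public int UserId
    {
        get { return this._UserId; }
        set { this._UserId = value; }
    }

    [Column(Storage = "_Name", DbType = "NVarChar(50)")]
    [DataMember(Order = 2)]
    public string Name
    {
        get { return this._Name; }
        set { this._Name = value; }
    }

    // The timestamp column that Linq to SQL uses for optimistic concurrency.
    [Column(Storage = "_Version", AutoSync = AutoSync.Always, DbType = "rowversion NOT NULL",
        CanBeNull = false, IsDbGenerated = true, IsVersion = true)]
    [DataMember(Order = 3)]
    public System.Data.Linq.Binary Version
    {
        get { return this._Version; }
        set { this._Version = value; }
    }
}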

As mentioned earlier, we need to maintain our entities' state, to know whether they are deleted, inserted, or updated. To do this I am going to define an enumeration as follows:

/// <summary>
/// The enum helps to identify what is the latest state of the entity.
/// </summary>
public enum EntityStatus
{
    /// <summary>
    /// The entity mode is not set.
    /// </summary>
    None = 0,
    /// <summary>
    /// The entity is brand new.
    /// </summary>
    New = 1,
    /// <summary>
    /// Entity is updated.
    /// </summary>
    Updated = 2,
    /// <summary>
    /// Entity is deleted.
    /// </summary>
    Deleted = 3,
}

We are going to have this field in every entity, so let’s define a Base Entity with this field in it:

[DataContract]
public class BaseEntity
{
    /// <summary>
    /// Gets or sets the status of the entity.
    /// </summary>
    /// <value>The status.</value>
    [DataMember]
    public EntityStatus Status { get; set; }
}

 

And then, all we need to do is create partial classes for our entities and derive them from BaseEntity:

public partial class User : BaseEntity
{
}

public partial class Favorite : BaseEntity
{
}

Now our entities are ready to travel safely along with their arsenal.

Service Layer Design

As we are going to use WCF, we need to have our:

  • Service Contracts (Interfaces)
  • Service Implementations (Concrete classes)
  • Service Clients (Consumers)
  • Service Host (Web service in our case)

Service Contracts

We will have 2 services: a Users Service and a Favorites Service. The Users Service will have 4 methods: 2 gets and 2 updates. We will do the insertion, update, and deletion depending on the entity status, so there is no need to define separate operations for each. Here is the contract for users:

/// <summary>
/// Contract for user operations
/// </summary>
[ServiceContract]
public interface IUsersService
{
    /// <summary>
    /// Gets all users.
    /// </summary>
    /// <returns></returns>
    [OperationContract]
    IList<User> GetAllUsers();

    /// <summary>
    /// Updates the user.
    /// </summary>
    /// <param name="user">The user.</param>
    [OperationContract]
    void UpdateUser(User user);

    /// <summary>
    /// Gets the user by id.
    /// </summary>
    /// <param name="id">The id.</param>
    /// <returns></returns>
    [OperationContract]
    User GetUserById(int id);

    /// <summary>
    /// Updates the users in the list according to their state.
    /// </summary>
    /// <param name="updateList">The update list.</param>
    [OperationContract]
    void UpdateUsers(IList<User> updateList);
}

And here is the contract for Favorites Service:

/// <summary>
/// Contract for favorites service
/// </summary>
[ServiceContract]
public interface IFavoritesService
{
    /// <summary>
    /// Gets the favorites for user.
    /// </summary>
    /// <param name="user">The user.</param>
    /// <returns></returns>
    [OperationContract]
    IList<Favorite> GetFavoritesForUser(User user);

    /// <summary>
    /// Updates the favorites for user.
    /// </summary>
    /// <param name="user">The user.</param>
    [OperationContract]
    void UpdateFavoritesForUser(User user);
}

Service Implementations (Concrete classes)

Since we are developing a db application with no business logic at all, the service layer implementations are pretty lean & mean. Here is the implementation of UsersService:

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class UsersService : IUsersService
{
    IUsersDataAccess DataAccess { get; set; }

    public UsersService()
    {
        DataAccess = new UsersDataAccess();
    }

    #region IUsersService Members

    /// <summary>
    /// Gets all users.
    /// </summary>
    /// <returns></returns>
    [OperationBehavior]
    public IList<User> GetAllUsers()
    {
        return DataAccess.GetAllUsers();
    }

    /// <summary>
    /// Updates the user.
    /// </summary>
    /// <param name="user">The user.</param>
    [OperationBehavior]
    public void UpdateUser(User user)
    {
        DataAccess.UpdateUser(user);
    }

    /// <summary>
    /// Gets the user by id.
    /// </summary>
    /// <param name="id">The id.</param>
    /// <returns></returns>
    [OperationBehavior]
    public User GetUserById(int id)
    {
        return DataAccess.GetUserById(id);
    }

    /// <summary>
    /// Updates the users in the list according to their state.
    /// </summary>
    /// <param name="updateList">The update list.</param>
    [OperationBehavior]
    public void UpdateUsers(IList<User> updateList)
    {
        DataAccess.UpdateUsers(updateList);
    }

    #endregion
}

And as you can imagine the favorite service implementation is pretty much the same.
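
For completeness, here is a sketch of how it might look (IFavoritesDataAccess and FavoritesDataAccess are assumed to exist in the sample's data access layer, analogous to their user counterparts):

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class FavoritesService : IFavoritesService
{
    IFavoritesDataAccess DataAccess { get; set; }

    public FavoritesService()
    {
        DataAccess = new FavoritesDataAccess();
    }

    #region IFavoritesService Members

    [OperationBehavior]
    public IList<Favorite> GetFavoritesForUser(User user)
    {
        return DataAccess.GetFavoritesForUser(user);
    }

    [OperationBehavior]
    public void UpdateFavoritesForUser(User user)
    {
        DataAccess.UpdateFavoritesForUser(user);
    }

    #endregion
}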

This has been long enough, so let's cut it here. In the next post, I will talk about the presentation, service, and data layer implementations. There we will see how best to approach modifying these entities in a data grid, passing them through the WCF proxy, and committing the changes (insert, update, delete) to the SQL 2005 database. I will also provide the source code with the next post. Stay tuned until then.

For part 2 : http://www.sidarok.com/web/blog/content/2008/06/02/linq-to-sql-with-wcf-in-a-multi-tiered-action-part-2.html .


A Basic Hands on Introduction to Unity DI Container

May 15th, 2008 by Sidar Ok

Hey folks, here we are with another interesting article. There are already some introductions to Unity on the internet providing the theoretical information, so I won't go deeper down that route. In this article I will be more practical and provide a concrete implementation of the concepts. You can download the sample code by clicking here.

The Microsoft Patterns and Practices team has been developing Enterprise Library to enable the use of general patterns and practices on the .NET platform; it has great pluggable application blocks such as the Logging and Validation application blocks. One of them used to be DIAB, an acronym for Dependency Injection Application Block. But folks thought it should be named differently from the other application blocks, and came up with the fancy name "Unity".

Now I won't go into the details of the Inversion of Control and Dependency Injection patterns, as I can imagine you are sick of them and I want to keep this post short, but the basic value they bring to enterprise systems is decoupling. They promote programming to interfaces and isolate you from the creation process of your collaborators, letting you concentrate on what you need to deliver while improving testability.

Out in the universe, there are big frameworks such as Spring.NET or Castle Windsor with its Castle MicroKernel. The choice coming from the Microsoft Patterns and Practices team is the Unity framework, which went live in April. It is open source and hosted on CodePlex, along with its community contributions project that is awaiting developers' help to extend Unity.

Enough talking, let's see some action. We will develop a simple set of classes that applies naming conventions, using the strategy pattern. This is also a good fit because a common best practice is to inject your strategies into their consumers through containers and interfaces.
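
For instance, a consumer of a strategy can take it through its constructor, and when the container resolves that consumer it supplies whatever implementation has been registered. A minimal sketch, using the INamingStrategy interface defined later in this post (the VariableNameGenerator class is made up purely for illustration):

// A hypothetical consumer: it only knows about the INamingStrategy interface.
public class VariableNameGenerator
{
    private readonly INamingStrategy namingStrategy;

    // Unity picks this constructor and injects the registered INamingStrategy.
    public VariableNameGenerator(INamingStrategy namingStrategy)
    {
        this.namingStrategy = namingStrategy;
    }

    public string MakeName(string words)
    {
        return namingStrategy.ConvertString(words);
    }
}

// Usage:
// container.RegisterType<INamingStrategy, PascalNamingStrategy>();
// VariableNameGenerator generator = container.Resolve<VariableNameGenerator>();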

Setting Up the Environment to Use Unity

In the example I used Visual Studio 2008 and .NET 3.5. You need to download the latest drop of Unity from here and add it as a reference to the projects you want to use it in, and that's it really.

Members of the Solution

In the UnitySample project there are Strategy Contracts and Strategy Implementations. The contracts are interfaces, as you may already have discovered, while their implementations reside in the implementations project.

So in the Contracts we have a naming strategy contract as follows:

/// <summary>
/// Defines the contract of changing strings per conventions.
/// </summary>
public interface INamingStrategy
{
    /// <summary>
    /// Converts the string according to the convention.
    /// </summary>
    /// <param name="toApplyNaming">The string that the naming strategy will be applied onto.
    /// Assumes that the words are separated by spaces.</param>
    /// <returns>The naming applied string.</returns>
    string ConvertString(string toApplyNaming);
}

And we will have 2 concrete implementations in the implementations project, one for Pascal and one for camel casing. Being good TDD guys, we write the test first. Let's see the test method for Pascal casing (the camel one is pretty similar):

/// <summary>
/// A test for ConvertString
/// </summary>
[TestMethod()]
public void ConvertStringTest()
{
    INamingStrategy strategy = new PascalNamingStrategy();

    string testVar = "the variable to be tested";
    string expectedVar = "TheVariableToBeTested";

    string resultVar = strategy.ConvertString(testVar);

    Assert.AreEqual(expectedVar, resultVar);
}

After we write the test and watch it fail, we are ready to write the concrete implementation of the Pascal casing to make it pass:

/// <summary>
/// Pascal naming convention, all title case.
/// </summary>
public class PascalNamingStrategy : INamingStrategy
{
    #region INamingStrategy Members

    /// <summary>
    /// Converts the string according to the convention.
    /// </summary>
    /// <param name="toApplyNaming">The string that the naming strategy will be applied onto. Assumes that the words are separated by spaces.</param>
    /// <returns>The naming applied string.</returns>
    public string ConvertString(string toApplyNaming)
    {
        Debug.Assert(toApplyNaming != null);
        Debug.Assert(toApplyNaming.Length > 0);

        // trivial example, not considering edge cases.
        string retVal = CultureInfo.InvariantCulture.TextInfo.ToTitleCase(toApplyNaming);
        return retVal.Replace(" ", string.Empty);
    }

    #endregion
}

You can see the corresponding implementation of the camel casing in the source code provided.

Having finished with the fundamentals, let's utilize and test Unity with our design. For this purpose I am creating a project called "Unity Strategies Test" to see how the container can be used to inject an implementation when an INamingStrategy is requested. The following test method shows a very simple injection and tests whether the injection succeeded, in a few lines:

/// <summary>
/// Test if injecting dependencies succeeds.
/// </summary>
[TestMethod]
public void ShouldInjectDependencies()
{
    IUnityContainer container = new UnityContainer();

    container.RegisterType<INamingStrategy, PascalNamingStrategy>(); // we will abstract this later

    INamingStrategy strategy = container.Resolve<INamingStrategy>();

    Assert.IsNotNull(strategy, "strategy injection failed !!");
    Assert.IsInstanceOfType(strategy, typeof(PascalNamingStrategy), "Strategy injected, but type wrong!");
}

And the testing of PascalNamingStrategy becomes much easier and more loosely coupled now:

/// <summary>
/// Tests the pascal strategy through injection.
/// </summary>
[TestMethod]
public void TestPascalStrategy()
{
    IUnityContainer container = new UnityContainer();

    container.RegisterType<INamingStrategy, PascalNamingStrategy>(); // we will abstract this later

    // notice that we don't know what strategy will be used, and we don't care either really
    INamingStrategy strategy = container.Resolve<INamingStrategy>();

    string testVar = "the variable to be tested";
    string expectedVar = "TheVariableToBeTested";
    string resultVar = strategy.ConvertString(testVar);

    Assert.AreEqual(expectedVar, resultVar);
}

This very basic example showed how your tests and code can become loosely coupled. In the next posts I will talk about configuring the container and how to utilize it in your web applications. Stay tuned till then.


Cross Browser Guide Part 3 – Event Handling in Different Browsers

May 10th, 2008 by Sidar Ok

For the first two articles of the series: Part 1 and Part 2.

The worst part of making an application work in multiple browsers is that every browser interprets JavaScript differently (you know what I mean). One of the most obvious differences is the event handling architecture, between Internet Explorer and the browsers that follow the W3C standards for DOM event handling.

This is a very important topic because everything starts with events. No events, no scripting. If your event handling fails at one point of your script, it is very likely that the rest of it will not be executed. So we need to understand the event models of, at least, the major browsers. We can group them into three major categories:

1 – Traditional Model

In the old browsers we were able to attach handlers only through inline scripting, such as:

<input type="button" id="myButton" value="Press" onclick="alert('hello world!')" />

But this is not easily maintainable and is not recommended now. So the Netscape notation is a common way to hook your events up:

element.onclick = doSomething;

As you can see, there is a certain drawback: we cannot add more than one listener to an event, as we can in today's modern languages. This model is supported by most browsers, so don't worry, you don't need to write any extra code here.

2 – W3C Model

In 2000, the W3C published the DOM Level 2 Events specification to address the problems of the traditional model.

In this model, handlers for a specific event are added to and removed from a specific element. For example, to add one you say:

myButton.addEventListener('click', doSomething, false);

Whereas to remove it you need to write:

myButton.removeEventListener('click', doSomething, false);

As you can see, you can add or remove multiple listeners for an event in this model. For example, the following fires both doSomething1 and doSomething2 when myButton is clicked:

myButton.addEventListener('click', doSomething1, false);
myButton.addEventListener('click', doSomething2, false);

The W3C model also supports anonymous functions, which are very similar to anonymous methods in C# 2.0.

The last Boolean parameter states whether the handler runs in the capturing phase (true) or in the bubbling phase (false).

3 – Microsoft Event Bubbling Model

This event model is similar to the W3C one, but it is not the same. The name of the method used to attach the event is different, as shown below:

myButton.attachEvent('onclick', doSomething);

and to remove the handler you use:

myButton.detachEvent('onclick', doSomething);

As you can see, there is no third parameter specifying capture or bubble; in the MS programming environment events always bubble and are never captured.

As a result of this, it is impossible to know exactly which element raised the event without doing extra work (I advise looking at the MS Ajax source code to see how they handled this situation).

That's why, while working in IE 7.0, we need to be careful about the window.event behavior. It stores the latest event that happened on the window's event stack, but it is not supported by the other browsers. For example, say you want to cancel the default behavior in a specific circumstance; the way to do this in IE 7.0 is:

window.event.returnValue = false;

But this will not work in Firefox. You then need to check for the event argument that Firefox automatically passes to the handler, and our event handler transforms as follows:

function doSomething(e)
{
  if (!e) // the parameter was not provided by the browser, so we are in IE
  {
    e = window.event;
  }

  if (e.preventDefault) // firefox (W3C) style
  {
    e.preventDefault();
  }
  else
  {
    e.returnValue = false; // IE style
  }
}

Also, if you return false from the listener, the default action will be prevented (such as the postback of a button or the redirection of a link). This is very useful, especially in client side validation of forms.

We will continue to talk about the JavaScript problems across browsers in the following posts, stay cool!


10 Tips to Improve your LINQ to SQL Application Performance

May 2nd, 2008 by Sidar Ok

Hey there, back again. In my first post about LINQ I tried to provide a brief (okay, a bit detailed) introduction for those who want to get involved with LINQ to SQL. In that post I promised to write about a basic integration of WCF and LINQ to SQL working together, but this is not that post.

Since LINQ to SQL is a code generator and an ORM that offers a lot of things, it is normal to be suspicious about its performance. Those suspicions are justified up to a point, as LINQ comes with its own penalties. But there are several benchmarks showing that DLINQ gets you up to 93% of the ADO.NET SqlDataReader performance if the optimizations are done correctly.

Hence I summed up the 10 points that I find most important to consider when tuning LINQ to SQL's data retrieval and data modification:

1 – Turn off ObjectTrackingEnabled Property of Data Context If Not Necessary

If you are only retrieving data as read only and not modifying anything, you don't need object tracking, so turn it off as in the example below:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  context.ObjectTrackingEnabled = false;
}

This turns off the unnecessary identity management of the objects; the DataContext will not have to store them, because it can be sure that there will be no change statements to generate.

2 – Do NOT Dump All Your DB Objects into One Single DataContext

A DataContext represents a single unit of work, not your whole database. If you have several database objects that are not connected, or that are not used at all (log tables, objects used by batch processes, etc.), they just unnecessarily consume memory, increasing the identity management and object tracking costs in the CUD engine of the DataContext.

Instead, think of separating your workspace into several DataContexts, each representing a single unit of work. You can still configure them to use the same connection via their constructors so as not to lose the benefit of connection pooling.
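
As a minimal sketch of the idea (OrdersDataContext and ReportingDataContext are made-up names for two smaller, focused contexts, and the "Northwind" connection string entry is assumed):

// Two lean contexts instead of one huge one. Both point at the same
// connection string, so they still share the ADO.NET connection pool.
string connectionString =
    ConfigurationManager.ConnectionStrings["Northwind"].ConnectionString;

using (OrdersDataContext ordersContext = new OrdersDataContext(connectionString))
using (ReportingDataContext reportingContext = new ReportingDataContext(connectionString))
{
    // each context tracks only the handful of tables that belong to its unit of work
}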

3 – Use CompiledQuery Wherever Needed

When you create and execute your query, there are several steps involved in generating the appropriate SQL from the expression; to name the important ones:

  1. Create expression tree

  2. Convert it to SQL

  3. Run the query

  4. Retrieve the data

  5. Convert it to the objects

As you may notice, when you use the same query over and over, the first and second steps are just wasted time. This is where a tiny class in the System.Data.Linq namespace achieves a lot. With CompiledQuery, you compile your query once and store it somewhere for later use. This is achieved by the static CompiledQuery.Compile method.

Below is a code snippet showing an example usage:

Func<NorthwindDataContext, IEnumerable<Category>> func =
   CompiledQuery.Compile<NorthwindDataContext, IEnumerable<Category>>
   ((NorthwindDataContext context) => context.Categories.
      Where<Category>(cat => cat.Products.Count > 5));


And now, "func" is my compiled query. It will only be compiled once, when it is first run. We can now store it in a static utility class as follows:

/// <summary>
/// Utility class to store compiled queries
/// </summary>
public static class QueriesUtility
{
  // Compiled once and kept in a static field, so repeated access does not recompile the query.
  private static readonly Func<NorthwindDataContext, IEnumerable<Category>> categoriesWithMoreThanFiveProducts =
    CompiledQuery.Compile<NorthwindDataContext, IEnumerable<Category>>
      ((NorthwindDataContext context) => context.Categories.
        Where<Category>(cat => cat.Products.Count > 5));

  /// <summary>
  /// Gets the query that returns categories with more than five products.
  /// </summary>
  /// <value>The query containing categories with more than five products.</value>
  public static Func<NorthwindDataContext, IEnumerable<Category>>
    GetCategoriesWithMoreThanFiveProducts
  {
    get { return categoriesWithMoreThanFiveProducts; }
  }
}

And we can use this compiled query (since it is now nothing but a strongly typed function for us) very easily, as follows:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  QueriesUtility.GetCategoriesWithMoreThanFiveProducts(context);
}

Storing and using it this way also reduces the cost that would otherwise be paid each time you access the property; effectively it comes down to a single compilation. If you don't call the query, don't worry about the compilation cost either, since the query is only compiled when it is first executed.

4 – Filter Data Down to What You Need Using DataLoadOptions.AssociateWith

When we retrieve data with Load or LoadWith, we retrieve all the associated data bound to the primary key (and object id). In most cases, though, we need additional filtering. This is where the generic DataLoadOptions.AssociateWith method comes in very handy. It takes the criteria for loading the data as a parameter and applies it to the query, so you get only the data that you need.

The code below associates the categories with only their non-discontinued products:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  DataLoadOptions options = new DataLoadOptions();
  options.AssociateWith<Category>(cat=> cat.Products.Where<Product>(prod => !prod.Discontinued));
  context.LoadOptions = options;
}

5 – Turn Optimistic Concurrency Off Unless You Need It

LINQ to SQL comes with out of the box optimistic concurrency support via SQL timestamp columns, which are mapped to the Binary type. You can turn this feature on and off both in the mapping file and in the attributes on the properties. If your application can afford to run on a "last update wins" basis, doing an extra update check is just a waste.

UpdateCheck.Never is used to turn optimistic concurrency off in LINQ to SQL.

Here is an example of turning optimistic concurrency off implemented as attribute level mapping:

[Column(Storage="_Description", DbType="NText",
            UpdateCheck=UpdateCheck.Never)]
public string Description
{
  get
  {
    return this._Description;
  }
  set
  {
    if ((this._Description != value))
    {
      this.OnDescriptionChanging(value);
      this.SendPropertyChanging();
      this._Description = value;
      this.SendPropertyChanged("Description");
      this.OnDescriptionChanged();
    }
  }
}

6 – Constantly Monitor Queries Generated by the DataContext and Analyze the Data You Retrieve

As your SQL is generated on the fly, there is a possibility that you are not aware of additional columns or extra data being retrieved behind the scenes. Use the DataContext's Log property to see what SQL is being run by the DataContext. An example is as follows:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  context.Log = Console.Out;
}


Using this snippet while debugging, you can see the generated SQL statements in the Output window of Visual Studio and spot performance leaks by analyzing them. Don't forget to comment that line out for production systems, as it may create a bit of overhead. (Wouldn't it be great if this was configurable in the config file?)
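
As a rough sketch of that idea (the "EnableLinqToSqlLog" appSetting key is made up here), you could gate the logging on a configuration flag so production builds pay nothing:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  // Hypothetical appSetting; any TextWriter (a log file, Debug output) would do as the target.
  if (ConfigurationManager.AppSettings["EnableLinqToSqlLog"] == "true")
  {
    context.Log = Console.Out;
  }

  // run queries as usual...
}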

To see your DLINQ expressions as SQL statements, you can also use the SQL Query Visualizer, which needs to be installed separately from Visual Studio 2008.

7 – Avoid Unnecessary Attaches to Tables in the Context

Object Tracking is a great mechanism, but nothing comes for free. When you Attach an object to your context, you are saying that this object was disconnected for a while and now you want to get it back into the game. The DataContext then marks it as an object that will potentially change, which is just fine when you really intend to do that.

But there are some circumstances that aren't very obvious and may lead you to attach objects that haven't changed. One such case is doing an AttachAll for a collection without checking whether each object has changed. For better performance, you should make sure you attach ONLY the objects in the collection that have actually changed.

I will provide a sample code for this soon.
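
In the meantime, here is a rough sketch of the idea (the IsDirty flag is hypothetical and stands for whatever change flag your UI or client layer maintains; attaching as modified also relies on a timestamp column or UpdateCheck.Never, as discussed in tip 5):

public void UpdateProducts(IList<Product> changedCandidates)
{
  using (NorthwindDataContext context = new NorthwindDataContext())
  {
    foreach (Product product in changedCandidates)
    {
      // Attach as modified only the objects that have really changed.
      if (product.IsDirty)
      {
        context.Products.Attach(product, true);
      }
    }

    context.SubmitChanges();
  }
}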

8 – Be Careful of Entity Identity Management Overhead

When working with a context that is not read only, the objects are still being tracked, so be aware of the non-intuitive scenarios this can cause. Consider the following DLINQ code:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  var a = from c in context.Categories
  select c;
}

Very plain, basic DLINQ, isn't it? True; there doesn't seem to be anything bad in the code above. Now let's see the code below:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  var a = from c in context.Categories
  select new Category
  {
    CategoryID = c.CategoryID,
    CategoryName = c.CategoryName,
    Description = c.Description
  };
}

The intuition is to expect that the second query will work slower than the first one, which is WRONG. It is actually much faster than the first one.

The reason is that in the first query, each returned object needs to be stored by the context, since there is a possibility that you will still change it. In the second one, you are throwing the tracked object away and projecting into a new one, which is more efficient.

9 – Retrieve Only the Number of Records You Need

When you are binding to a data grid and doing paging, consider the easy to use methods that LINQ to SQL provides, mainly the Skip and Take methods. The code snippet below shows a method which retrieves enough products for a ListView with paging enabled:

/// <summary>
/// Gets the products page by page.
/// </summary>
/// <param name="startingPageIndex">Index of the starting page.</param>
/// <param name="pageSize">Size of the page.</param>
/// <returns>The list of products in the specified page</returns>
private IList<Product> GetProducts(int startingPageIndex, int pageSize)
{
  using (NorthwindDataContext context = new NorthwindDataContext())
  {
    // skip the previous pages first, then take one page worth of rows
    return context.Products
           .Skip<Product>(startingPageIndex * pageSize)
           .Take<Product>(pageSize)
           .ToList<Product>();
  }
}

10 – Don’t Misuse CompiledQuery

I can hear you saying “What? Are you kiddin’ me? How can such a class like this be misused?”

Well, as with all optimization, LINQ to SQL is no exception:

"Premature optimization is the root of all evil" – Donald Knuth

If you are using CompiledQuery, make sure that you use it more than once, as it is more costly than normal querying the first time. But why?

That's because the resulting function coming from CompiledQuery is an object holding the SQL statement and the delegate to apply it. It is not compiled the way regular expressions are compiled, and your delegate has the ability to replace the variables (or parameters) in the resulting query.

That's the end, folks. I hope you'll enjoy these tips while programming with LINQ to SQL. Any comments or questions, via sidarok at sidarok dot com or here under this post, are welcome.


Technorati Tags: LINQ,SQL,Performance,.NET 3.5

