Building a Configuration Binder for MEF with POCO Support

July 14th, 2008 by Sidar Ok

After taking the extensibility points of the Managed Extensibility Framework for a spin (they will be called “primitives” from the next version on), Jason Olson has posted a nice way of enabling fluent interfaces, making MEF work in a more DI-ish way, and trying to enable support for POCOs.

When Krzysztof Cwalina announced the first CTP of MEF, he commented that a non-attribute-based programming model is feasible, and Jason has demonstrated one in his post. But apparently the team is going to keep the Import and Export model in the first CTP, according to David Kean’s reply to one of the comments on his blog post.

Now, that makes me cringe. Clearly, I don’t like this kind of magic. Somebody exports, somebody imports, and a-ha! I have a list of import info in my domain (which is another form of intrusiveness in your design).

In this post, I will build a Configuration Resolver that uses the application’s configuration to resolve the values in the container, with pure POCO support. It is based on the first CTP and contains a lot of hacks, but I think it involves some work worth looking at.

All I want is to feed the container via an XML configuration (an XmlRepository in our case) and get my dependencies injected in the configured way. If I had to mimic the MEF approach, I would have come up with a configuration like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="xmlBinder" type="XmlBinder.Configuration.XmlBinderConfigurationSection, XmlBinder" />
  </configSections>
  <xmlBinder>
    <objects>
      <object name="HelloWorld" type="XmlBinder.TestClasses.HelloWorld, XmlBinder.TestClasses">
        <import name="Outputter" type="XmlBinder.TestClasses.Interfaces.IOutputter, XmlBinder.TestClasses" contract="outputContract"/>
        <import name="Greeter" type="XmlBinder.TestClasses.Interfaces.IGreeter, XmlBinder.TestClasses" contract="greetingContract"/>
      </object>

      <object name="Outputter">
        <export name="Outputter" type="XmlBinder.TestClasses.Interfaces.Outputter, XmlBinder.TestClasses" contract="outputContract" />
      </object>

      <object name="Greeter">
        <export name="" type="XmlBinder.TestClasses.Interfaces.Greeter, XmlBinder.TestClasses" contract="greetingContract" />
      </object>
    </objects>
  </xmlBinder>
</configuration>

But this didn’t seem natural to me. First, both sides need to share the contract name, which is prone to configuration errors. Second, the export/import model is still in place, and in a worse format. I have gone for a model like this instead (it ended up looking like a bit of a Spring-Unity mixture):

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="xmlBinder" type="XmlBinder.Configuration.XmlBinderConfigurationSection, XmlBinder" />
  </configSections>
  <xmlBinder>
    <objects>
      <object name="HelloWorld" type="XmlBinder.TestClasses.HelloWorld, XmlBinder.TestClasses">
        <properties>
          <!--<property name="PropertyName" destination="XmlBinder.TestClasses.ConsoleOutputter, XmlBinder.TestClasses" /> Will be supported in the future, hopefully :)-->
          <property name="Outputter" type="XmlBinder.TestClasses.Interfaces.IOutputter, XmlBinder.TestClasses" mapTo="XmlBinder.TestClasses.ConsoleOutputter, XmlBinder.TestClasses" />
          <property name="Greeter" type="XmlBinder.TestClasses.Interfaces.IGreeter, XmlBinder.TestClasses" mapTo="XmlBinder.TestClasses.Greeter, XmlBinder.TestClasses" />
        </properties>
      </object>
    </objects>
  </xmlBinder>
</configuration>

Here I have my XmlBinder.Configuration namespace to store my configuration related classes.


Figure 1. Configuration Classes

As you see, I am defining a configuration section which has a list of object configurations in it. An object has properties, and properties have names, source types and destination types to be mapped. Although there is a fair amount of code there, I am not going to talk about how I parse the configuration information; if you are interested, you can download the source and read the tests.
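For context, the configuration classes can be sketched roughly as below. This is a minimal, hypothetical version built on the standard System.Configuration API; the class names come from the config section above, but the exact member layout is my assumption, and the nested properties collection is omitted for brevity:

```csharp
using System;
using System.Configuration;

// Sketch only: parses <xmlBinder><objects><object .../></objects></xmlBinder>
public class XmlBinderConfigurationSection : ConfigurationSection
{
    [ConfigurationProperty("objects")]
    [ConfigurationCollection(typeof(XmlBinderObjectElement), AddItemName = "object")]
    public XmlBinderObjectElementCollection Objects
    {
        get { return (XmlBinderObjectElementCollection)this["objects"]; }
    }
}

public class XmlBinderObjectElementCollection : ConfigurationElementCollection
{
    protected override ConfigurationElement CreateNewElement()
    {
        return new XmlBinderObjectElement();
    }

    protected override object GetElementKey(ConfigurationElement element)
    {
        return ((XmlBinderObjectElement)element).Name;
    }
}

public class XmlBinderObjectElement : ConfigurationElement
{
    [ConfigurationProperty("name", IsRequired = true)]
    public string Name
    {
        get { return (string)this["name"]; }
    }

    [ConfigurationProperty("type", IsRequired = true)]
    public string TypeName
    {
        get { return (string)this["type"]; }
    }

    // resolve the assembly-qualified name from the config into a System.Type
    public Type Type
    {
        get { return System.Type.GetType(TypeName); }
    }
}
```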

With all these in place, I want my POCOs, which look like the ones in Jason’s example, without any Imports or Exports:

/// <summary>
/// Plain old Hello World
/// </summary>
public class HelloWorld
{
    public IOutputter Outputter
    {
        get;
        set;
    }

    public IGreeter Greeter
    {
        get;
        set;
    }

    public void SayIt()
    {
        Outputter.Output(Greeter.Greet());
    }
}

public class ConsoleOutputter : IOutputter
{
    #region IOutputter Members

    public void Output(string message)
    {
        Console.WriteLine(message);
    }

    #endregion
}

public class Greeter : IGreeter
{
    #region IGreeter Members

    public string Greet()
    {
        return "Hello World";
    }

    #endregion
}

to be injected at bind time by the magic of only this code:

CompositionContainer container = new CompositionContainer(resolver);
container.Bind();
var helloWorld = container.TryGetBoundValue<HelloWorld>().Value;

To achieve this, we have to give the container the types and contracts in the format that it needs, and this should be cooked and ready to eat: since we are not giving the container Imports and Exports, we have to tell it what to import and export. Finding out what the container wants in order to do the binding was not as easy as I expected; I had to do a lot of reverse engineering. Here, TDD saved my day and helped me divide my problem space into two distinct parts: (1) provide the instances to the CompositionContainer correctly, and (2) compose the requested objects by using the types provided by (1).

We need to write a resolver and a binder to achieve this. The ValueResolver needs a repository to use in the resolving process, so it takes the repository as a parameter. In my design this parameter is an ITypeRepository interface, which means that one can write, say, a DbConfigRepository that implements it, pass it to the ConfigValueResolver, and expect the resolver to work the same way. This approach decouples the value resolver from the internals of the repository. The ITypeRepository interface is defined as follows:

public interface ITypeRepository
{
    /// <summary>
    /// Gets the object meta.
    /// </summary>
    /// <returns>A list of object meta data info. This can be changed to return IEnumerable to enable lazy loading in the future.</returns>
    IList<ObjectMeta> GetObjectMeta();
}

And the implementation that takes the object meta data from the configuration repository, in this case the application’s XML configuration, is as follows:

public class XmlTypeRepository : ITypeRepository
{
    #region ITypeRepository Members
    public IList<ObjectMeta> GetObjectMeta()
    {
        XmlBinderConfigurationSection section = ConfigurationManager.GetSection("xmlBinder") as XmlBinderConfigurationSection;
        Debug.Assert(section != null);

        IList<ObjectMeta> retVal = BuildObjectMetaListFromConfigurationSection(section);

        return retVal;
    }
    #endregion

    private IList<ObjectMeta> BuildObjectMetaListFromConfigurationSection(XmlBinderConfigurationSection section)
    {
        List<ObjectMeta> retVal = new List<ObjectMeta>();

        foreach (XmlBinderObjectElement objectElement in section.Objects)
        {
            ObjectMeta meta = BuildObjectMetaFromConfiguration(objectElement);
            retVal.Add(meta);
        }
        return retVal;
    }

    private ObjectMeta BuildObjectMetaFromConfiguration(XmlBinderObjectElement element)
    {
        Debug.Assert(element != null);

        ObjectMeta retVal = new ObjectMeta()
        {
            ObjectType = element.Type,
        };

        foreach (XmlBinderPropertyElement propertyElement in element.PropertyElements)
        {
            retVal.MappingPairs.Add(new TypeMappingPair(element.Type.GetProperty(propertyElement.Name), propertyElement.TypeToMap, propertyElement.Name));
        }

        return retVal;
    }
}
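To illustrate the decoupling point above: any other source can feed the resolver by implementing ITypeRepository. Here is a hypothetical in-memory repository; the InMemoryTypeRepository name is my invention, while ObjectMeta and ITypeRepository are the article’s own types:

```csharp
using System.Collections.Generic;

// Hypothetical: serves a fixed list of ObjectMeta, handy for unit tests
// or as a starting point for something like the DbConfigRepository mentioned above.
public class InMemoryTypeRepository : ITypeRepository
{
    private readonly IList<ObjectMeta> metas;

    public InMemoryTypeRepository(IEnumerable<ObjectMeta> metas)
    {
        this.metas = new List<ObjectMeta>(metas);
    }

    public IList<ObjectMeta> GetObjectMeta()
    {
        // return a copy so callers cannot mutate the repository's state
        return new List<ObjectMeta>(metas);
    }
}
```

The resolver would then be constructed as `new ConfigValueResolver(new InMemoryTypeRepository(myMetas))` without any other change.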

Here, ObjectMeta represents an object’s meta data, to be processed further into something meaningful to bind.


Figure 2: TypeMappingPair and ObjectMeta entity structures
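In case the diagram does not render, a rough C# equivalent of these two entities is below, reconstructed from how the rest of the code uses them; the exact member shapes are my assumption:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Reconstruction of the Figure 2 entities, inferred from their usage in XmlTypeRepository.
public class TypeMappingPair
{
    public TypeMappingPair(PropertyInfo propertyToInject, Type concreteImplementation, string name)
    {
        PropertyToInject = propertyToInject;
        ConcreteImplementation = concreteImplementation;
        Name = name;
    }

    public PropertyInfo PropertyToInject { get; private set; }  // e.g. HelloWorld.Outputter
    public Type ConcreteImplementation { get; private set; }    // e.g. ConsoleOutputter
    public string Name { get; private set; }
}

public class ObjectMeta
{
    public ObjectMeta()
    {
        MappingPairs = new List<TypeMappingPair>();
    }

    public Type ObjectType { get; set; }
    public IList<TypeMappingPair> MappingPairs { get; private set; }
}
```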

Now that we have the repository, we can safely build the resolver. The relationship between a resolver and a binder, as far as I could find out, is this: a binder exists for a type and is responsible for it being built properly. The binder tells the container, for the type: “this type exports these, and imports these. Now go build.” So it is reasonable for a binder to take three pieces of information: the target type, its imports and its exports. I wrapped them up in a BindingInfo entity, whose class diagram is shown below side by side with ObjectMeta:


Figure 3: BindingInfo and ObjectMeta objects
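Again as a fallback for the diagram, BindingInfo can be reconstructed from how GetBindingInfo() and XmlBinder use it; the member shapes here are my assumption:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Reconstruction of BindingInfo from Figure 3.
public class BindingInfo
{
    // the type the binder is responsible for building
    public Type TypeToCompose { get; set; }

    // contracts this type offers (itself plus the interfaces it satisfies)
    public IList<Type> ExportsOfTypeToCompose { get; set; }

    // properties on TypeToCompose that need to be injected
    public IList<PropertyInfo> ImportsOfTypeToCompose { get; set; }
}
```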

Have you noticed the mismatch between the two? That’s the key point: BindingInfo is what the binder (and so the container) needs, while ObjectMeta is what we have; it is more intuitive and is there to support the POCO model. Now we need to implement the magic ourselves to convert the ObjectMeta list to a BindingInfo list. I implemented this in a method called GetBindingInfo() on the resolver. The resolver queries the underlying repository the first time it is asked to do so and retrieves a set of ObjectMeta from it. GetBindingInfo does the necessary conversion for us, so we can easily create our binder (XmlBinder in this case).

The following test shows what we expect from the resolver’s GetBindingInfo method; I expect it to be rather self-explanatory:

[TestMethod()]
public void should_transform_metadata_format_into_the_needed_format_for_mef()
{
    ITypeRepository rep = new XmlTypeRepository();
    ConfigValueResolver target = new ConfigValueResolver(rep);
    IList<ObjectMeta> objectsFromRepository = rep.GetObjectMeta(); // get from the xml repository
    IList<BindingInfo> actual;
    actual = target.GetBindingInfo();
    Assert.AreEqual(3, actual.Count);

    // see if types are registered
    Assert.IsTrue(actual.Any<BindingInfo>(bi => bi.TypeToCompose == typeof(ConsoleOutputter)));
    Assert.IsTrue(actual.Any<BindingInfo>(bi => bi.TypeToCompose == typeof(HelloWorld)));
    Assert.IsTrue(actual.Any<BindingInfo>(bi => bi.TypeToCompose == typeof(Greeter)));

    // see if infos are set properly
    BindingInfo helloWorld = actual.First<BindingInfo>(bi => bi.TypeToCompose == typeof(HelloWorld));
    BindingInfo consoleOutputter = actual.First<BindingInfo>(bi => bi.TypeToCompose == typeof(ConsoleOutputter));
    BindingInfo greeter = actual.First<BindingInfo>(bi => bi.TypeToCompose == typeof(Greeter));

    // for the parent type
    Assert.IsTrue(helloWorld.ExportsOfTypeToCompose.Count > 0);
    Assert.IsTrue(helloWorld.ExportsOfTypeToCompose.Any<Type>(t => t == typeof(HelloWorld)));

    // verify expectations on injection
    Assert.AreEqual(2, helloWorld.ImportsOfTypeToCompose.Count);
    Assert.IsTrue(helloWorld.ImportsOfTypeToCompose.Any<PropertyInfo>(t => t.PropertyType == typeof(IOutputter)));
    Assert.IsTrue(helloWorld.ImportsOfTypeToCompose.Any<PropertyInfo>(t => t.PropertyType == typeof(IGreeter)));

    Assert.AreEqual(2, consoleOutputter.ExportsOfTypeToCompose.Count);
    Assert.IsTrue(consoleOutputter.ExportsOfTypeToCompose.Any<Type>(t => t == typeof(ConsoleOutputter)));
    Assert.IsTrue(consoleOutputter.ExportsOfTypeToCompose.Any<Type>(t => t == typeof(IOutputter)));

    Assert.AreEqual(2, greeter.ExportsOfTypeToCompose.Count);
    Assert.IsTrue(greeter.ExportsOfTypeToCompose.Any<Type>(t => t == typeof(Greeter)));
    Assert.IsTrue(greeter.ExportsOfTypeToCompose.Any<Type>(t => t == typeof(IGreeter)));
}

As you see, every exporter needs to export both itself and the shared contract, which is why I am checking for a count of 2. To make this test pass, I came up with the following implementation for the resolver and its over-smart GetBindingInfo:

public class ConfigValueResolver : ValueResolver
{
    ITypeRepository Repository
    {
        get;
        set;
    }

    private IList<ObjectMeta> metaList;
    private IList<ObjectMeta> Objects
    {
        get
        {
            if (metaList == null)
            {
                metaList = Repository.GetObjectMeta();
            }
            return metaList;
        }
    }

    public ConfigValueResolver(ITypeRepository repository)
    {
        this.Repository = repository;
    }

    protected override void OnContainerSet()
    {
        base.OnContainerSet();
        ConfigureContainer();
    }

    protected override void OnContainerDisposed()
    {
        base.OnContainerDisposed();
    }

    public override CompositionResult<IImportInfo> TryResolveToValue(string name, IEnumerable<string> requiredMetadata)
    {
        CompositionResult<ImportInfoCollection> result = TryResolveToValues(name, requiredMetadata);

        return new CompositionResult<IImportInfo>(result.Succeeded, result.Issues, result.Value.First());
    }

    public override CompositionResult<ImportInfoCollection> TryResolveToValues(string name, IEnumerable<string> requiredMetadata)
    {
        return TryGetContainerLocalImportInfos(name, requiredMetadata);
    }

    private void ConfigureContainer()
    {
        // load up the types and add the binder for them
        IList<BindingInfo> bindingList = GetBindingInfo();

        foreach (var bindingInfo in bindingList)
        {
            this.Container.AddBinder(new XmlBinder(bindingInfo));
        }
    }

    public IList<BindingInfo> GetBindingInfo()
    {
        Debug.Assert(Objects != null);

        List<BindingInfo> retVal = new List<BindingInfo>();
        foreach (var objectMeta in Objects)
        {
            IList<Type> exports = new List<Type>();
            exports.Add(objectMeta.ObjectType);
            IList<PropertyInfo> imports = new List<PropertyInfo>();
            // TODO: if imported properties are not in the objects list themselves, it means that they aren't exporting anything,
            // so we can add them safely.
            Debug.Assert(objectMeta.MappingPairs != null);

            foreach (var propertyToBeInjected in objectMeta.MappingPairs)
            {
                // mapping pairs themselves should be in the container in order to be considered for binding
                Debug.Assert(propertyToBeInjected != null);

                retVal.Add(new BindingInfo()
                {
                    TypeToCompose = propertyToBeInjected.ConcreteImplementation,
                    // exports itself and its contract
                    ExportsOfTypeToCompose = new List<Type>()
                    {
                        propertyToBeInjected.ConcreteImplementation,
                        propertyToBeInjected.PropertyToInject.PropertyType
                    },
                    ImportsOfTypeToCompose = new List<PropertyInfo>(), // currently not implemented
                });

                imports.Add(propertyToBeInjected.PropertyToInject);
            }

            retVal.Add(new BindingInfo()
            {
                TypeToCompose = objectMeta.ObjectType,
                ExportsOfTypeToCompose = exports,
                ImportsOfTypeToCompose = imports
            });
        }

        return retVal;
    }
}

As you see in the implementation, for every BindingInfo I am adding its binder. This makes the binder’s implementation relatively straightforward but crucial: it extends the ComponentBinder base class and provides the export info, import info and contract names for the composition operation. Here, I am using relevantType.ToString() like Jason does in the fluent interface example, but the rest of the approach is a bit different:

/// <summary>
/// Each XML Binder stands for a type to resolve.
/// </summary>
/// <remarks>No lifetime supported</remarks>
public class XmlBinder : ComponentBinder
{
    private object instance;
    private static object SyncRoot = new object();

    public IList<Type> Exports
    {
        get;
        set;
    }

    /// <summary>
    /// List of the properties that are determined to be injected.
    /// Since the binder is a one-use-only object, the setter is private and the import list cannot be changed
    /// during the composition, to stay in sync with the current nature of the container.
    /// </summary>
    /// <value>The imports.</value>
    public IList<PropertyInfo> Imports
    {
        get;
        private set;
    }

    /// <summary>
    /// Gets or sets the target resolve type.
    /// </summary>
    /// <value>The type to resolve.</value>
    public Type TargetResolveType
    {
        get;
        private set;
    }

    /// <summary>
    /// Gets the current instance.
    /// </summary>
    /// <value>The current instance; it is a singleton for the time being.</value>
    private object CurrentInstance
    {
        get
        {
            if (instance == null)
            {
                lock (SyncRoot)
                {
                    if (instance == null)
                    {
                        // a really dummy instance; we could take constructor parameters from the types in the XML repository
                        // if we wanted to enable constructor injection. Assuming the resolved type has a default constructor for the time being.
                        instance = TargetResolveType.GetConstructor(new Type[] { }).Invoke(new object[] { });
                    }
                }
            }
            return instance;
        }
    }

    public XmlBinder(BindingInfo bindingInfo)
    {
        this.Exports = bindingInfo.ExportsOfTypeToCompose;
        this.Imports = bindingInfo.ImportsOfTypeToCompose;
        this.TargetResolveType = bindingInfo.TypeToCompose;
    }

    /// <summary>
    /// Gets the export names.
    /// </summary>
    /// <value>The export names.</value>
    public override IEnumerable<string> ExportNames
    {
        get
        {
            return Exports.Select<Type, string>(t => t.ToString());
        }
    }

    /// <summary>
    /// Gets the import names.
    /// </summary>
    /// <value>The import names.</value>
    public override IEnumerable<string> ImportNames
    {
        get
        {
            return Imports.Select(info => info.PropertyType.ToString());
        }
    }

    /// <summary>
    /// Exports this instance.
    /// </summary>
    /// <returns></returns>
    public override CompositionResult Export()
    {
        foreach (var type in Exports)
        {
            AddValueToContainer(type.ToString(), CurrentInstance);
        }

        return CompositionResult.SucceededResult;
    }

    /// <summary>
    /// Imports the specified changed value names.
    /// </summary>
    /// <param name="changedValueNames">The changed value names, not really used.</param>
    /// <returns></returns>
    public override CompositionResult Import(IEnumerable<string> changedValueNames)
    {
        foreach (var info in Imports)
        {
            CompositionResult<object> component = Container.TryGetBoundValue(info.PropertyType.ToString(), info.PropertyType);
            // do the injection. Currently assuming that only non-indexed values are to be resolved
            if (component.Succeeded)
            {
                info.SetValue(CurrentInstance, component.Value, null);
            }
            else
            {
                throw new InvalidOperationException(component.Issues[0].Description, component.Issues[0].Exception);
            }
        }

        return CompositionResult.SucceededResult;
    }

    public override bool Equals(object obj)
    {
        XmlBinder binder = obj as XmlBinder;
        if (binder != null)
        {
            return binder.TargetResolveType == this.TargetResolveType;
        }
        return false;
    }
}

And now we have done all the dirty infrastructure work. That was all for this test to pass:

[TestMethod]
public void should_print_hello_world_to_console()
{
    ConfigValueResolver resolver = new ConfigValueResolver(new XmlTypeRepository());
    HelloWorld helloWorld = null;
    try
    {
        CompositionContainer container = new CompositionContainer(resolver);
        container.Bind();
        helloWorld = container.TryGetBoundValue<HelloWorld>().Value;
    }
    catch (CompositionException ex)
    {
        foreach (CompositionIssue issue in ex.Issues)
        {
            Console.WriteLine("issue = {0}", issue.ToString());
        }
    }

    Assert.IsNotNull(helloWorld);
    Assert.IsNotNull(helloWorld.Outputter);
    Assert.IsNotNull(helloWorld.Greeter);

    helloWorld.SayIt();
}

Ignore the try/catch; that’s there because of the *damn* error handling mechanism of MEF, which reports individual issues and does not provide a concatenated representation of the error message. At the end, dependencies are injected, “hello world” is printed to the test console, and the world is a better place to live. Thank you all!

You can download the sources from here; I have 80% test coverage now. Any comments, criticisms, or cheques with a lot of trailing zeros are well appreciated as always.

Exploring MEF Extensibility Points

July 4th, 2008 by Sidar Ok

After I had a chance to dance with MEF, I wanted to go a step further, create my own logic to bind the dependencies, and integrate it into the existing composition system. I have to say that, oh god, it was NOT as easy as I expected. The current nature of the APIs depends heavily on counter-intuitive usage. Again, I am having a difficult time saving my criticisms for a separate post, as you can see :)

There are currently two main extensibility points in MEF: custom binders and value resolvers.

Custom Binders

The current nature of the CompositionContainer is, basically, that the container is god. It needs to have everything in order to bind; if you want to add type-binding logic, you are welcome to, but first you need to add your type to the container. Well, that’s a fair game, since our logic is binding (wiring, really) logic, not type or instance management. This means that we have to write the following as usual:

CompositionContainer container = new CompositionContainer();
container.AddComponent<Consumer>(consumer);
container.AddComponent<Provider>(provider);
container.Bind();

Where Provider and Consumer are just defined with the Export & Import model, as MEF needs them:

public class Consumer
{
    [Import(typeof(IProvider))]
    public IProvider Provider
    {
        get;
        set;
    }

    public Consumer()
    {
    }

    public int GetAnInteger()
    {
        return this.Provider.GetAnInteger();
    }
}

And here is the provider:

[Export(typeof(IProvider))]
public class FavoredProvider : IProvider
{
    #region IProvider Members

    public int GetAnInteger()
    {
        return 5;
    }

    #endregion
}

And this makes the following test pass, which just expects the hard-coded “5” value to be returned:

/// <summary>
/// A test for GetAnInteger
/// </summary>
[TestMethod()]
public void GetAnIntegerTest()
{
    Consumer target = new Consumer();
    CompositionHelper.InitializeContainerForConsumer(target); // this is where container configuration goes
    int expected = 5;
    int actual;
    actual = target.GetAnInteger();
    Assert.AreEqual(expected, actual);
}

Now, let’s come up with a hypothetical scenario: we need the same instance for every consumer (a singleton). How hypothetical for a DI container! So while configuring, I would expect to write something close to the following, and still expect the test to pass! (Note that you can use IsSingleton on the contract, but the one and only lifetime-controlling mechanism shouldn’t be singleton, should it? ;) )

CompositionContainer container = new CompositionContainer();
var myVeryCustomBinder = new SampleBinder();
container.AddComponent<Consumer>(consumer);
// not adding the provider explicitly, expecting the binder to handle it
container.AddBinder(myVeryCustomBinder);
container.Bind();

Q: So we didn’t add the provider to the container, haven’t you just said that this is a deal breaker ?

A: Yes it is, you are right. My test now breaks. But I can make up for that in my custom binder. Gimme a break.

To write a custom binder I need to inherit from the abstract class ComponentBinder. In ComponentBinder, there are several virtual methods waiting to be overridden depending on my logic:

  1. public override CompositionResult Export(): This is where we add the results of our resolve operations to the container. The return value also indicates whether the operation succeeded or not.
  2. public override CompositionResult Import(IEnumerable<string> changedValueNames): And this is where we retrieve the requested objects for the services that are exported.
  3. public override CompositionResult BindCompleted(): This acts somewhat like an event (why is it not one?) and is called when the binding is completed. Here you can do the sanity checks your binder needs after its bind operations.
  4. public override IEnumerable<string> ExportNames: These are the “names” for the types. This can be the name specified in the Export attribute declaration, or a default one.
  5. public override IEnumerable<string> ImportNames: The same goes for imports, but for the import process instead.

So here would be my Custom Binder:

public class SampleBinder : ComponentBinder
{
    private FavoredProvider ProviderToInject
    {
        get;
        set;
    }

    private static readonly object syncRoot = new object();

    public SampleBinder()
    {
        if (ProviderToInject == null)
        {
            lock (syncRoot)
            {
                ProviderToInject = new FavoredProvider();
            }
        }
    }

    public override CompositionResult Export()
    {
        this.AddValueToContainer(CompositionServices.GetContractName(typeof(IProvider)), this.ProviderToInject, "Provider");
        return CompositionResult.SucceededResult;
    }

    public override CompositionResult Import(IEnumerable<string> changedValueNames)
    {
        return base.Import(changedValueNames);
    }

    public override CompositionResult BindCompleted()
    {
        return base.BindCompleted();
    }

    public override IEnumerable<string> ExportNames
    {
        get
        {
            return base.ExportNames;
        }
    }

    public override IEnumerable<string> ImportNames
    {
        get
        {
            return base.ImportNames;
        }
    }
}

As you see, we are not doing anything on import. For any import of the IProvider type, we are providing the FavoredProvider. Also notice the badly named utility class CompositionServices, which currently has only one method, and it is a useful one: GetContractName. Without it, I’d have to hardcode the contract name or get it with a drop of reflection magic.

Adding this binder makes my test pass, and I am happy.

Value Resolvers

This model is enabled in this CTP, but its usage is not (at least I couldn’t find a way, since the Resolver value in the container does not have a setter; any feedback on this is appreciated (1)). The value resolver is the part that provides the types, and in a sane implementation the lifetime logic would probably go here. It is an abstract class with two abstract methods waiting to be overridden; below is a sample value resolver:

   1: public class CustomValueResolver : ValueResolver
   2: {
   3:     public override CompositionResult<IImportInfo>
   4: TryResolveToValue(string name, IEnumerable<string> requiredMetadata)
   5:     {
   6:         // do your resolve, and send it back   
   7:         ImportInfo<FavoredProvider> resolvedValue =
   8:             new ImportInfo<FavoredProvider>(null);
   9:         return new CompositionResult<IImportInfo>(true,
  10:             Enumerable.Empty<CompositionIssue>(), resolvedValue);
  11:     }
  12:
  13:     public override CompositionResult<ImportInfoCollection>
  14: TryResolveToValues(string name, IEnumerable<string> requiredMetadata)
  15:     {
  16:     // do the same if you have more than one service to provide to one consumer
  17:         ImportInfo<FavoredProvider> resolvedValue =
  18:             new ImportInfo<FavoredProvider>(null);
  19:
  20:         return new CompositionResult<ImportInfoCollection>(true,
  21:             Enumerable.Empty<CompositionIssue>(), new ImportInfoCollection { resolvedValue });
  22:     }
  23: }

In the future hopefully we will be able to create our own component catalogs and associate resolvers with them, the way we now can with the Container and the AssemblyComponentCatalog.(2)

Hope this article gave a deeper hint of what's going on on the MEF side of things. Any comments on this are welcome as always.

(1): Jason pointed out that there is an overload of CompositionContainer that takes a ValueResolver as a parameter. Doh! (see comments)
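Given that overload, plugging the custom resolver in would presumably look like the following (hypothetical usage against this CTP, untested; Consumer is a made-up consumer type):

```csharp
// Hypothetical: hand the resolver to the container at construction time,
// since the container's Resolver property has no public setter.
CompositionContainer container = new CompositionContainer(new CustomValueResolver());
container.AddComponent<Consumer>(new Consumer());
container.Bind();
```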

(2): Still couldn't find a way of doing the same for the Component Catalog.


Managed Extensibility Framework (MEF) at a Glance

June 16th, 2008 by Sidar Ok

After Krzysztof Cwalina announced Microsoft's plans to release extensibility features for the .NET Framework, the CTP of MEF made its way into the market very quickly. As a result, we have an immature DLL called ComponentModel.dll born into our hands, far from meeting the community's needs and lacking lots of features & architectural concerns - and of course you know it, with nearly no XML comments and not very informative error messages.

But it is still a CTP, and that's what a CTP is for. Criticism of it is a matter for another post; but as .NETters we need to know about this vibe coming, because this young baby is going to be a part of the core framework and one day, with an update, will be pushed to millions of computers.

So I'd better stop judging for the time being, and let's get our hands dirty with what we currently have as early adopters.

Dependency Injection with the Managed Extensibility Framework

In the DLL, there are 2 main namespaces: System.ComponentModel.Composition and System.Reflection.StructuredValues. So as you see, MEF uses reflection to do its magic. This also means that one can define contracts using hard-coded strings instead of strongly typed contracts. You can see this usage in the examples that ship with MEF, but I think everybody will agree that it is not a good practice. So let's see how we can build a strongly typed, contract-based dependency injection mechanism.
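To make the difference concrete, here are the two styles side by side; the string-based style mirrors the shipped samples, and a typo in the string only surfaces at composition time (types and contract string here are illustrative):

```csharp
// Stringly typed contract - nothing checks this string at compile time.
[Export("MEFSample.Interfaces.IDataRetriever")]
public class StringlyTypedRetriever { /* ... */ }

// Strongly typed contract - the compiler and refactoring tools see it.
[Export(typeof(IDataRetriever))]
public class TypedRetriever : IDataRetriever
{
    public IEnumerable<ExampleData> GetSampleData(int count)
    {
        yield break; // stub for illustration
    }
}
```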

So let's define a very simple contract:

   1: /// <summary>
   2: /// Contract for retrieving pretty dummy data
   3: /// </summary>
   4: public interface IDataRetriever
   5: {
   6:     /// <summary>
   7:     /// Gets the sample data.
   8:     /// </summary>
   9:     /// <param name=”count”>The count.</param>
  10:     /// <returns></returns>
  11:     IEnumerable<ExampleData> GetSampleData(int count);
  12: }

This is expected to return a list of #count example data items. ExampleData is a POCO; its structure contains just a string key:

   1: /// <summary>
   2: /// Model for Sample Data
   3: /// </summary>
   4: public class ExampleData
   5: {
   6:     /// <summary>
   7:     /// Gets or sets the data key.
   8:     /// </summary>
   9:     /// <value>The data key.</value>
  10:     public string DataKey
  11:     {
  12:         get;
  13:         set;
  14:     }
  15: }

So we expect this method to return a list of ExampleData, with their DataKey fields populated by their indexes. Here is the test that ensures this basic expectation:

   1: /// <summary>
   2: ///A test for GetSampleData
   3: ///</summary>
   4: [TestMethod()]
   5: public void GetSampleDataTest()
   6: {
   7:     IDataRetriever target = new DataRetriever();
   8:     int expectedCount, count;
   9:     expectedCount = count = 10;
  10:
  11:     IEnumerable<ExampleData> actual = target.GetSampleData(count);
  12:     Assert.AreEqual(expectedCount, actual.Count<ExampleData>());
  13:
  14:     IEnumerator<ExampleData> enumerator = actual.GetEnumerator();
  15:     int expectedKey = 0;
  16:     while (enumerator.MoveNext())
  17:     {
  18:         Assert.IsNotNull(enumerator.Current);
  19:         Assert.AreEqual(enumerator.Current.DataKey, expectedKey.ToString());
  20:         expectedKey++;
  21:     }
  22: }

And after a couple of failures (yes, even at this level of simplicity I manage to fail), here is the implementation that passes this test:

   1: [Export(typeof(IDataRetriever))]
   2: public class DataRetriever : IDataRetriever
   3: {
   4:     #region IDataRetriever Members
   5:
   6:     //[Export(”Retriever”)]
   7:     public IEnumerable<ExampleData> GetSampleData(int count)
   8:     {
   9:
  10:         for (int i = 0; i < count; i++)
  11:         {
  12:             yield return new ExampleData()
  13:             {
  14:                  DataKey = i.ToString()
  15:             };
  16:         }
  17:     }
  18:
  19:     #endregion
  20: }

Now, the syntax of MEF needs us to shout about what we have, and explicitly define by attributes what we want to expose as services to be injected (intrusive, girrrr…). And since there is an exporter, there should be an importer too, which in this scenario is a page. Beware: the importing property is marked public, as MEF can inject only public dependencies - the same choice made by many of the other IoC containers in the wild.

   1: [Import(typeof(IDataRetriever), IsOptional = false)]
   2: public IDataRetriever Retriever
   3: {
   4:   get;
   5:   set;
   6: }

Note that these Import and Export attributes are under the System.ComponentModel.Composition namespace, and they both have another overload that takes strings as contract names instead of the contract types shown above.

Please also note that as a client of DataRetriever, this page doesn't know a bit about which implementation of IDataRetriever it will retrieve (DI mission accomplished). So in whose house is all the party happening? I placed initialization code inside the page constructor:

   1: public _Default()
   2: {
   3:     DataRetrieverHelper.InitializeContainer<_Default>(this);
   4: }

And this helper is a very smart guy who knows everything about this magic (so, from what we learnt from Italian mafia movies, it should be killed - by a DSL or an XML configuration. MEF doesn't currently support that out of the box, but with a bit of a hack it can be done):

   1: public static class DataRetrieverHelper
   2: {
   3:     public static void InitializeContainer<T>(T toFillDependency)
   4:         where T:class
   5:     {
   6:         CompositionContainer container = new CompositionContainer();
   7:         container.AddComponent<T>(toFillDependency);
   8:         container.AddComponent<DataRetriever>(new DataRetriever());
   9:         container.Bind();
  10:     }
  11: }

As you see, you add the consumer, add the service, and call Bind - the MEF container takes care of the rest.

Of course, the first question expected after “how” is: what if we need to add another implementation, which is a very likely scenario? For example, if we have 2 implementations of the contract - say, one returning the list of keys in reverse order and the normal one - which one is the container going to choose to bind?

Handling Multiple Exports Within the Container

Well, since the requirements are extended, we need to write another test for the new requirement (the reversed-order implementation that passes it is trivial, so I'll only show the test):

   1: /// <summary>
   2: ///A test for GetSampleData
   3: ///</summary>
   4: [TestMethod()]
   5: public void GetSampleDataReverseTest()
   6: {
   7:     IDataRetriever target = new DataRetreiverReverse();
   8:
   9:     int expectedCount, count;
  10:     expectedCount = count = 10;
  11:
  12:     IEnumerable<ExampleData> actual = target.GetSampleData(count);
  13:     Assert.AreEqual(expectedCount, actual.Count<ExampleData>());
  14:
  15:     IEnumerator<ExampleData> enumerator = actual.GetEnumerator();
  16:     int expectedKey = count - 1;
  17:     while (enumerator.MoveNext())
  18:     {
  19:         Assert.IsNotNull(enumerator.Current);
  20:         Assert.AreEqual(enumerator.Current.DataKey, expectedKey.ToString());
  21:         expectedKey--;
  22:     }
  23: }

It is obvious that if we don't change the helper class that does the magic, we won't get the new implementation. So let's add it by using the generic AddComponent method:

   1: public static void InitializeContainer<T>(T toFillDependency)
   2:     where T:class
   3: {
   4:     CompositionContainer container = new CompositionContainer();
   5:     container.AddComponent<T>(toFillDependency);
   6:     container.AddComponent<DataRetriever>(new DataRetriever());
   7:     container.AddComponent<DataRetreiverReverse>(new DataRetreiverReverse());
   8:     container.Bind();
   9: }

Ok, let’s run the application, and face a very nice error message:

“There was at least one composition issue of severity level ‘error’. Review the Issues collection for detailed information”

“WTF is the Issues collection?” were the first words out of my mouth, unfortunately :) . The exception we get here is a System.ComponentModel.Composition.CompositionException, and the “issues” are in the Issues property of the exception. This is a collection of System.ComponentModel.Composition.CompositionIssue objects, and their Description field is a string that carries the meaningful explanation of what's happening. In the list I got, there were 2 issues that we were already expecting:

  1. “Multiple exports were found for contract name ‘MEFSample.Interfaces.IDataRetriever’. The import for this contract requires a single export only.”
  2. “A failure occurred while trying to satisfy member ‘Retriever’ on type ‘default_aspx’ while trying to import value ‘MEFSample.Interfaces.IDataRetriever’. Please review previous issues for details about the failure.”

Apart from not getting the exceptions on the first go, and ignoring the fact that the first message is cryptic, well, this is nice. Theoretically I can see all the errors that happened during the build-up process, instead of getting stuck on the first one.

Back to the game: now that I have 2 implementations in the container, I need a mechanism to choose between the two - this is where System.ComponentModel.Composition.ImportInfoCollection comes into the game. This collection holds a list of ImportInfo objects, which are basically information about the injected members, nothing more. Now, the new property will go as follows:

   1: [Import(typeof(IDataRetriever), IsOptional = false)]
   2: public ImportInfoCollection<IDataRetriever> ResolvedDependencies
   3: {
   4:     get;
   5:     set;
   6: }

When “Bind” is called, this property will be filled automatically instead of the old one. So now I have a list, I am able to decide between the resolved dependencies. Here is my new Retriever property:

   1: public IDataRetriever Retriever
   2: {
   3:    get
   4:    {
   5:        return ResolvedDependencies[0].GetBoundValue();
   6:    }
   7: }

Here I am choosing the first one; I could choose the 2nd one as well, since the ResolvedDependencies collection will have 2 values. Okay, [0] seems a cumbersome way of selecting the “correct” one, admitted :) . Frankly, the MEF team included a mechanism in this CTP so we can also specify metadata information along with the injected interfaces, which will help us decide at run time which implementation to choose. But this post got long, so hopefully I'll throw up another post to explore what we can do within MEF's boundaries.
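For what it's worth, a slightly less arbitrary pick than [0] can be made by inspecting the bound values themselves - a sketch against this CTP's API, assuming ImportInfoCollection&lt;T&gt; is enumerable:

```csharp
public IDataRetriever Retriever
{
    get
    {
        // Prefer the reverse implementation when it is present,
        // otherwise fall back to the first export found.
        return ResolvedDependencies
            .Select(info => info.GetBoundValue())
            .OrderByDescending(retriever => retriever is DataRetreiverReverse)
            .First();
    }
}
```

Metadata will eventually make this kind of selection declarative instead of type-sniffing like this.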

You can download the sources by clicking here.


Linq to SQL Wish List

June 4th, 2008 by Sidar Ok

As we are not Microsoft marketers, we tend to see the cons of the products that Microsoft builds. As every product has its flaws, Linq to SQL is of course no exception. Here is my list of things, compiled from various sites & forums or from limitations I hit myself:

A) Architecture

1 – Enabled Provider Model & More providers than SQL

Since Matt Warren explained in his blog post that LINQ to SQL supported multiple providers internally but that this was disabled for some “non-technical” reasons, there is an unstoppable desire in each of us to see it enabled in the next version. (I am not even asking for Persistence Ignorance.)

2 – Fully Mockable, A Design by Contract Framework

If this happened, we wouldn’t look for hacks like this one. I want to be able to mock DataContext out without doing any funky tweaks.

3 – A Disconnected Data Context

I cannot remember how many times I have seen this requested, everywhere. This DataContext would be serializable & deserializable, and even when it is dead we should still be able to benefit from Object Tracking and Deferred Loading.

By this I mean a stateless DAL, where I don’t have to say “delete these children” or “update these but not these”.

4 – Support for more inheritance models

Currently only the Table per Hierarchy model is enabled; multiple entities constituting 1 table, or vice versa, are not.

5 – Out of the box many to many relationships

The title explains it, as we currently can't do this mapping.
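The usual workaround is to map the junction table as an explicit entity and traverse it by hand; a rough sketch, with hypothetical UserRole and Role entities generated from a UserRoles link table:

```csharp
// LINQ to SQL maps the two one-to-many halves (User->UserRoles, Role->UserRoles),
// so the many-to-many hop has to be written out manually:
public partial class User
{
    public IEnumerable<Role> Roles
    {
        get { return this.UserRoles.Select(userRole => userRole.Role); }
    }
}
```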

6 – Batch Statements Execution

Currently Linq to SQL sends multiple queries to the DB if an operation needs it. A batch statement like NHibernate’s would have been more than cool.

7 – More control on the resulting statement

Advanced users should be able to sneak into the generation or submission process - like the interceptors in NHibernate, again.

B) Tools and Designer

1 – Code generation into different structures

The ability to separate each entity and the DataContext into different files or assemblies. Partial files do not let our extensions reside in different assemblies.

2 – Make DBML designer support giving Entity Base Class

SQL Metal has this, so why does the designer not?

3 – Make DBML designer support external mapping

Again, this is a SQL metal specific “magic”.

4 – Enable partial generation in SQL Metal

Sometimes we human beings do not want to generate the whole database - just one table, for instance.

This is usually followed by a request to be able to “refresh” an object on the design surface. I don't know anybody who fancies having to delete & drop from the connection explorer each time something changes in the DB, and lose custom associations.

It would also be good if the user's changes to the designer were kept, not overridden every time by the tool (a smart merge, maybe?).

5 – A tool to generate POCO translators from/to Linq Entities

This could be configured in code or via XML files. Some of us (including me) are using Software Factories to generate them; it would have been nice to have an out of the box support in Visual Studio.

That's all I can remember at midnight. What would you like to have in this list, apart from these?


Linq to SQL with WCF in a Multi Tiered Action - Part 2

June 2nd, 2008 by Sidar Ok

In the first part of this article, I tried to define a Users & Favorites scenario and the things to keep in mind about Linq to SQL. In this post I’ll continue building that application and show its implementation in different tiers connected with WCF.

Here are the sources for the article.

Service Layer Design (Cont’d from Part 1)

Service Host (Web Service in our case)

This is a host project (a plain Web project) needed to host our web service. It has our .svc files and needed configuration. In .svc file we have the mapping from contract to the implementation:

<%@ ServiceHost Language=”C#” Debug=”true” 
Service=”ServiceImplementations.UsersService” %>

And the endpoint configuration goes as follows:

   1: <system.serviceModel>
   2:         <behaviors>
   3:             <serviceBehaviors>
   4:                 <behavior name=”FavoritesServiceBehavior”>
   5:                     <serviceMetadata httpGetEnabled=”true” />
   6:                     <serviceDebug includeExceptionDetailInFaults=”false” />
   7:                 </behavior>
   8:                 <behavior name=”UsersServiceBehavior”>
   9:                     <serviceMetadata httpGetEnabled=”true” />
  10:                     <serviceDebug includeExceptionDetailInFaults=”false” />
  11:                 </behavior>
  12:             </serviceBehaviors>
  13:         </behaviors>
  14:         <services>
  15:             <service behaviorConfiguration=”FavoritesServiceBehavior”
  16: name=”ServiceImplementations.FavoritesService”>
  17:                 <endpoint address=”" binding=”wsHttpBinding”
  18: name=”IFavoritesService_Endpoint”
  19:                     contract=”ServiceContracts.IFavoritesService”>
  20:                     <identity>
  21:                         <dns value=”localhost” />
  22:                     </identity>
  23:                 </endpoint>
  24:             </service>
  25:             <service behaviorConfiguration=”UsersServiceBehavior”
  26: name=”ServiceImplementations.UsersService”>
  27:                 <endpoint address=”" binding=”wsHttpBinding”
  28: name=”IUsersService_Endpoint”
  29:                     contract=”ServiceContracts.IUsersService”>
  30:                     <identity>
  31:                         <dns value=”localhost” />
  32:                     </identity>
  33:                 </endpoint>
  34:             </service>
  35:         </services>
  36:     </system.serviceModel>

Service Clients (Consumers)

The client layer is a very thin façade that invokes the requested methods through the channel. Clients are meant to be called through controllers if you are using MVC; in our case, the web application will consume the service, so the endpoint configuration will live in the web tier:

   1: <system.serviceModel>
   2:     <client>
   3:       <endpoint binding=”wsHttpBinding” bindingConfiguration=”"
   4: contract=”ServiceContracts.IFavoritesService”
   5: address=”http://localhost/WebServiceHost/FavoritesService.svc”
   6:         name=”FavoritesClient”>
   7:         <identity>
   8:           <dns value=”localhost” />
   9:           <certificateReference storeName=”My” storeLocation=”LocalMachine”
  10:             x509FindType=”FindBySubjectDistinguishedName” />
  11:         </identity>
  12:       </endpoint>
  13:       <endpoint binding=”wsHttpBinding” bindingConfiguration=”"
  14: contract=”ServiceContracts.IUsersService”
  15: address=”http://localhost/WebServiceHost/UsersService.svc”
  16:         name=”UsersClient”>
  17:         <identity>
  18:           <dns value=”localhost” />
  19:           <certificateReference storeName=”My” storeLocation=”LocalMachine”
  20:             x509FindType=”FindBySubjectDistinguishedName” />
  21:         </identity>
  22:       </endpoint>
  23:     </client>
  24: </system.serviceModel>

Presentation

The challenge in the presentation tier is that we need to maintain the state of each entity according to the user's interaction. For this purpose, I put up 2 GridViews, one for Users and one for Favorites, to enable insert, update, delete and select operations.

We will bind strongly typed collections (IList&lt;User&gt; and IList&lt;Favorite&gt;) to our GridViews, use the ID fields of the objects as the grids' data keys, and then use them in the code behind.

Here is the definition for Users GridView:

   1: <asp:GridView ID=”usersGrid” runat=”server”
   2:     AutoGenerateColumns=”False” CellPadding=”4″
   3:     ForeColor=”#333333″ GridLines=”None”
   4:     DataKeyNames=”UserId”
   5:     OnRowDeleting=”usersGrid_RowDeleting”
   6:     OnRowUpdating=”usersGrid_RowUpdating”
   7:     OnSelectedIndexChanged=”usersGrid_SelectedIndexChanged”
   8:     OnSelectedIndexChanging=”usersGrid_SelectedIndexChanging”
   9:     OnRowCancelingEdit=”usersGrid_RowCancelingEdit”
  10:     OnRowEditing=”usersGrid_RowEditing”>
  11:     <RowStyle BackColor=”#F7F6F3″ ForeColor=”#333333″ />
  12:     <Columns>
  13:         <asp:CommandField ShowDeleteButton=”True” />
  14:         <asp:TemplateField HeaderText=”First Name”>
  15:             <ItemTemplate>
  16:                 <asp:Label ID=”firstNameLabel” runat=”server”
  17: Text=’<%# Bind(”FirstName”) %>’></asp:Label>
  18:             </ItemTemplate>
  19:             <EditItemTemplate>
  20:                 <asp:TextBox ID=”firstNameTextBox” runat=”server”
  21: Text=’<%# Bind(”FirstName”) %>’></asp:TextBox>
  22:             </EditItemTemplate>
  23:         </asp:TemplateField>
  24:         <asp:TemplateField HeaderText=”Last Name”>
  25:             <ItemTemplate>
  26:                 <asp:Label ID=”lastNameLabel” runat=”server”
  27: Text=’<%# Bind(”LastName”) %>’></asp:Label>
  28:             </ItemTemplate>
  29:             <EditItemTemplate>
  30:                 <asp:TextBox ID=”lastNameTextBox” runat=”server”
  31: Text=’<%# Bind(”LastName”) %>’></asp:TextBox>
  32:             </EditItemTemplate>
  33:         </asp:TemplateField>
  34:         <asp:CommandField ShowEditButton=”True” />
  35:         <asp:CommandField ShowSelectButton=”True” />
  36:     </Columns>
  37:     <FooterStyle BackColor=”#5D7B9D” Font-Bold=”True” ForeColor=”White” />
  38:     <PagerStyle BackColor=”#284775″ ForeColor=”White” HorizontalAlign=”Center” />
  39:     <SelectedRowStyle BackColor=”#E2DED6″ Font-Bold=”True” ForeColor=”#333333″ />
  40:     <HeaderStyle BackColor=”#5D7B9D” Font-Bold=”True” ForeColor=”White” />
  41:     <EditRowStyle BackColor=”#999999″ />
  42:     <AlternatingRowStyle BackColor=”White” ForeColor=”#284775″ />
  43:     </asp:GridView>

The one for Favorites is pretty much the same, so I'll only go over the Users grid.

Let's go to the code behind, which is more important to us. We are going to do a batch update and send the list of Users, and each user in the list will have their favorites. All the entities will carry their latest status in their Status field.

Here is a sequence diagram to make things easier and clearer to understand:


Picture 1. Sequence diagram of what’s happening

Now, in the page load, we are going to populate the Users GridView:

   1: if (!IsPostBack)
   2: {
   3:    try
   4:    {
   5:        if (SessionStateUtility.Users == null)
   6:        {
   7:            // error may occur during disposal, not caring for the time being
   8:            using (UsersClient client = new UsersClient())
   9:            {
  10:                SessionStateUtility.Users = client.GetAllUsers().ToList<User>();
  11:            }
  12:        }
  13:        BindUsersGrid(SessionStateUtility.Users, -1);
  14:    }
  15:    catch (Exception ex)
  16:    {
  17:        Response.Write(ex.ToString());
  18:    }
  19: }

In the grid, the user can update and delete users from the session. For insert, there is a separate panel at the bottom with an add button. What we do in the add button is quite simple - just adding a new user to the session:

   1: protected void addUserButton_Click(object sender, EventArgs e)
   2: {
   3:     Debug.Assert(sender != null);
   4:     Debug.Assert(e != null);
   5: 
   6:     User u = new User()
   7:     {
   8:         FirstName = firstNameTextBox.Text,
   9:         LastName = lastNameTextBox.Text,
  10:         EMail = emailTextBox.Text,
  11:         Status = EntityStatus.New,
  12:         UserId = SessionStateUtility.NextUserId,
  13:     };
  14: 
  15:     SessionStateUtility.Users.Add(u);
  16: 
  17:     BindUsersGrid(SessionStateUtility.Users, -1);
  18: }

You'll notice 2 things here: one is that the Status is set to EntityStatus.New. The other is the SessionStateUtility class. This acts as a provider and a helper for user lists. The Users list that it provides is below:

   1: /// <summary>
   2: /// Gets or sets the users.
   3: /// </summary>
   4: /// <value>The users.</value>
   5: public static List<User> Users
   6: {
   7:     get
   8:     {
   9:         Debug.Assert(HttpContext.Current != null);
  10:         Debug.Assert(HttpContext.Current.Session != null);
  11:
  12:         return HttpContext.Current.Session[“Users”] as List<User>;
  13:     }
  14:     set
  15:     {
  16:         Debug.Assert(HttpContext.Current != null);
  17:         Debug.Assert(HttpContext.Current.Session != null);
  18: 
  19:         HttpContext.Current.Session[“Users”] = value;
  20:     }
  21: }

And it provides another method to get the NextUserId. This is necessary because there can be multiple new records on the screen, and we need to identify them. NextUserId brings the next highest negative number that is available:

   1: /// <summary>
   2: /// Gets the next id.
   3: /// </summary>
   4: /// <value>The next id.</value>
   5: public static int NextUserId
   6: {
   7:    get
   8:    {
   9:        if (SessionStateUtility.Users.Count == 0)
  10:        {
  11:            return -1;
  12:        }
  13:        int minId = SessionStateUtility.Users.Min<User>(user => user.UserId);
  14:
  15:        if (minId > 0)
  16:        {
  17:            return -1;
  18:        }
  19: 
  20:        return --minId;
  21:    }
  22: }

And then we need to handle the grid events. I wrote a helper function to get the User object from the selected row index in the grid (it retrieves it from the session):

   1: private User GetUserFromRowIndex(int index)
   2: {
   3:     int userId = usersGrid.DataKeys[index].Value as int? ?? 0;
   4: 
   5:     //retrieve the instance in the session
   6:     User user = SessionStateUtility.Users.Single<User>(usr => usr.UserId == userId);
   7:     return user;
   8: }

Another helper function is there just to get the user's full name, formatted:

   1: private string GetFullNameForUser(User u)
   2: {
   3:     return String.Format(CultureInfo.InvariantCulture, “{0} {1}”, u.FirstName, u.LastName);
   4: }

And this one updates the UI fields for a selected user:

   1: private void UpdateUiForUser(User u)
   2: {
   3:    if (u != null)
   4:    {
   5:        favoritesPanel.Visible = true;
   6:        userNameLabel.Text = GetFullNameForUser(u);
   7:        BindFavoritesGrid(u.Favorites.ToList<Favorite>(), -1);
   8:    }
   9: }

And of course one method for binding the grid:

   1: private void BindUsersGrid(IList<User> users, int editIndex)
   2: {
   3:     usersGrid.DataSource = users
   4:     .Where<User>(usr=>usr.Status != EntityStatus.Deleted);// only bind non deleted ones
   5:     usersGrid.EditIndex = editIndex;
   6:     usersGrid.DataBind();
   7: }

As you can see, we are not binding the deleted ones, but we are still keeping them in the session because we need to know what was deleted when we send them back to the data tier.

Then, in the light of these methods, here goes the SelectedIndexChanging event handler. It updates the favorites grid for the selected user:

   1: protected void usersGrid_SelectedIndexChanging(object sender, GridViewSelectEventArgs e)
   2: {
   3:     Debug.Assert(sender != null);
   4:     Debug.Assert(e != null);
   5:     usersGrid.SelectedIndex = e.NewSelectedIndex;
   6: 
   7:     User u = GetUserFromRowIndex(e.NewSelectedIndex);
   8:     UpdateUiForUser(u);
   9: }

And when a row is being edited, the following event handler will be executed:

   1: protected void usersGrid_RowEditing(object sender, GridViewEditEventArgs e)
   2: {
   3:     Debug.Assert(sender != null);
   4:     Debug.Assert(e != null);
   5:
   6:     usersGrid.SelectedIndex = e.NewEditIndex;
   7: 
   8:     BindUsersGrid(SessionStateUtility.Users, e.NewEditIndex);
   9: }

And after the user clicks edit, when he/she clicks update, the following handler will run:

   1: protected void usersGrid_RowUpdating(object sender, GridViewUpdateEventArgs e)
   2: {
   3:     Debug.Assert(sender != null);
   4:     Debug.Assert(e != null);
   5: 
   6:     int userId = usersGrid.DataKeys[e.RowIndex].Value as int? ?? 0;
   7:     //retrieve the instance in the session
   8:     User user = SessionStateUtility.Users.Single<User>(usr => usr.UserId == userId);
   9:     user.FirstName = (usersGrid.Rows[e.RowIndex].FindControl(“firstNameTextBox”)
  10: as TextBox).Text;
  11:     user.LastName = (usersGrid.Rows[e.RowIndex].FindControl(“lastNameTextBox”)
  12: as TextBox).Text;
  13:
  14:     user.Status = user.Status == EntityStatus.New ?
  15: EntityStatus.New :EntityStatus.Updated; // manage the state
  16: 
  17:     BindUsersGrid(SessionStateUtility.Users, -1);// back to plain mode
  18: }

As you see, if the edited user's current status is already New, we do not modify it. Otherwise, the status is changed to Updated.

A similar situation also exists for deletion. Have a look at the handler below:

   1: protected void usersGrid_RowDeleting(object sender, GridViewDeleteEventArgs e)
   2: {
   3:     Debug.Assert(sender != null);
   4:     Debug.Assert(e != null);
   5: 
   6:     User user = GetUserFromRowIndex(e.RowIndex);
   7:     // If user is new and deleted now, we shouldnt send it over the wire again
   8:     if (user.Status == EntityStatus.New)
   9:     {
  10:         SessionStateUtility.Users.Remove(user);
  11:     }
  12:     else
  13:     {
  14:         user.Status = EntityStatus.Deleted;
  15:     }
  16: 
  17:     BindUsersGrid(SessionStateUtility.Users, -1);// back to plain mode
  18: }

We have done our work as the presentation layer, and we now send all the data through the service to the data layer, along with all the information it needs to manage the generation of the SQL statements (fingers crossed).

Data Layer Design

Since we are going to implement the IUsersDataAccess contract, we need to implement 4 methods, but I'll focus on 2 of them in particular. The first one is GetAllUsers:

   1: /// <summary>
   2: /// Gets all users.
   3: /// </summary>
   4: /// <returns>The list of all users along with their favorites.</returns>
   5: public IList<User> GetAllUsers()
   6: {
   7:     using (FavoritesEntitiesDataContext context = new FavoritesEntitiesDataContext())
   8:     {
   9:         DataLoadOptions options = new DataLoadOptions();
  10:         options.LoadWith<User>(u => u.Favorites);
  11: 
  12:         context.LoadOptions = options; // load with favorites
  13:         context.ObjectTrackingEnabled = false; // retrieving data read only
  14: 
  15:         return context.Users.ToList<User>();
  16:     }
  17: }

As you see, we are telling the context to load every user with their favorites. This can cause some damage if these tables are very big, and there are methods to improve this experience.
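One such method is DataLoadOptions.AssociateWith, which lets us filter or trim the child collection that gets eagerly loaded; for example, capping the favorites at 10 per user (an arbitrary number for illustration):

```csharp
DataLoadOptions options = new DataLoadOptions();
options.LoadWith<User>(u => u.Favorites);
// Only eager-load the first 10 favorites of each user rather than all of them.
options.AssociateWith<User>(u => u.Favorites.Take(10));
context.LoadOptions = options;
```

Paging at the query level or splitting the calls per aggregate are other options.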

The UpdateUsers(IList) method is a bit more complicated. Here is the list of things we are going to do:

  • Attach to the context the users that have status “Updated” - the obvious one.

  • Attach to the context the users that have status “Deleted” - since the context does not know about an object that is not attached, we need to attach these too.

  • We aren't going to attach the objects to insert, because the DataContext doesn't need to know about the objects that are being added.

  • Call the relevant one of these by looking at the status: InsertAllOnSubmit or DeleteAllOnSubmit.

  • Do the same for the child entities of each. (Keep in mind that we need to delete all children, regardless of their status, if their parent is deleted.)

So now hopefully the following implementation will be more understandable:

/// <summary>
/// Updates the users list.
/// </summary>
/// <param name="updateList">The list of users to perform the operations.</param>
public void UpdateUsers(IList<User> updateList)
{
    using (FavoritesEntitiesDataContext context = new FavoritesEntitiesDataContext())
    {
        context.Users.AttachAll<User>(
            updateList.Where<User>(
                usr => usr.Status == EntityStatus.Updated ||
                       usr.Status == EntityStatus.Deleted), true);
        context.Users.InsertAllOnSubmit<User>(
            updateList.Where<User>(usr => usr.Status == EntityStatus.New));
        context.Users.DeleteAllOnSubmit<User>(
            updateList.Where<User>(usr => usr.Status == EntityStatus.Deleted));

        // Do the same for the children. If the parent is deleted,
        // we need to delete the children too to prevent orphan records.
        foreach (User user in updateList)
        {
            context.Favorites.AttachAll<Favorite>(
                user.Favorites.Where<Favorite>(
                    fav => fav.Status == EntityStatus.Updated
                        || fav.Status == EntityStatus.Deleted
                        || fav.User.Status == EntityStatus.Deleted
                        || fav.User.Status == EntityStatus.Updated));
            // We shouldn't insert the new child records of deleted entities.
            context.Favorites.InsertAllOnSubmit<Favorite>(
                user.Favorites.Where<Favorite>(
                    fav => fav.Status == EntityStatus.New
                        && fav.User.Status != EntityStatus.Deleted));
            context.Favorites.DeleteAllOnSubmit<Favorite>(
                user.Favorites.Where<Favorite>(
                    fav => fav.Status == EntityStatus.Deleted
                        || fav.User.Status == EntityStatus.Deleted));
        }

        context.SubmitChanges();
    }
}

That's the end of fun(!) folks. As you have seen, there is some work involved in making Linq to SQL work in a multi-tiered architecture, but it is still doable. Again, download the sources and please don't hesitate to post any comments, criticisms or crossword puzzles here or to sidarok@sidarok.com; they are all welcome.



Introducing TextBox Limiter Control Ajax Control Toolkit Extender

May 29th, 2008 by Sidar Ok

You can download the sources from here

The ASP.NET TextBox has an integer attribute "MaxLength" which corresponds to the HTML text input's property of the same name. It works perfectly when the textbox is a single-line, normal input of type "text".

But when we want to work with a multiline box, such as an e-mail message or an SMS, we want to limit it in the same way, and what happens? We see that the generated control is a "textarea", and it doesn't support maximum length! Gee!

Now of course we can use Regular Expression validators to validate and warn on the client side, but we don't want to just warn! We want to prevent the text from exceeding the predefined size too!

That's why I came up with this Ajax Control Toolkit extender that I called TextboxLimitExtender. We just give it the multiline textbox to operate on, and the maximum length. I also added an option to show how many characters are left on a text control of your choice. The extender contains a server-side method to do the double check on the server side.

Here is a screenshot of what you will expect to get at the end of it:


Picture 1. Extender in action

How to Use It

After adding the TextboxLimitExtender and Ajax Control Toolkit assemblies to your project as references, add the following at the beginning of the page or user control where you want to use the TextboxLimitExtender:

<%@ Register Assembly="TextboxLimitExtender"
Namespace="TextboxLimitExtender" TagPrefix="cc1" %>

Of course, we have to be sure that we have a script manager:

<asp:ScriptManager ID="sm" runat="server" />

Now let’s assume that our target textbox is defined like the following:

<asp:TextBox ID="limitedTextBox" runat="server" TextMode="MultiLine" />

And just beneath it we have our static text and a label to show how many characters are left:

You have <asp:Label ID="charsLeftLabel" runat="server" ForeColor="Red" /> chars left.

Now the moment of truth: with these controls, the extender goes like this:

<cc1:TextboxLimitExtender ID="TextboxLimitExtender1" runat="server"
    MaxLength="50" TargetControlID="limitedTextBox"
    TargetCountTextControlId="charsLeftLabel" />

How it Works

It handles every key hit and checks whether the textbox length has exceeded the maximum length. If it hasn't, it does nothing. If it has, it cancels the event so the offending characters never get typed.
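The key handler itself isn't reproduced in this post, but the idea can be sketched like this (the function and its names are illustrative, not the extender's actual members):

```javascript
// Builds a key-press handler that blocks further printable input once the
// text has reached maxLength. getText is a function returning the current
// textbox content, so the sketch stays independent of the DOM.
function makeKeyPressHandler(getText, maxLength) {
    return function (e) {
        if (getText().length >= maxLength) {
            if (e.preventDefault) {
                e.preventDefault();    // W3C browsers
            } else {
                e.returnValue = false; // older IE
            }
            return false;              // cancel the keystroke
        }
        return true;                   // below the limit: let it through
    };
}
```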

In addition, we need to handle copy & paste behaviours to prevent them from happening, for the same reasons as above.

Implementation

Server Side

We will have 2 properties: one for the ID of the control to write how many characters are left, and another one to keep the maximum length.

Here is TextboxLimitExtender.cs, which injects the values for the script:

[Designer(typeof(TextboxLimitExtenderDesigner))]
[ClientScriptResource("TextboxLimitExtender.TextboxLimitExtenderBehavior",
    "TextboxLimitExtender.TextboxLimitExtenderBehavior.js")]
[TargetControlType(typeof(ITextControl))]
public class TextboxLimitExtender : ExtenderControlBase
{
    [ExtenderControlProperty]
    [DefaultValue("")]
    [IDReferenceProperty(typeof(ITextControl))]
    public string TargetCountTextControlId
    {
        get
        {
            return GetPropertyValue("TargetCountTextControlId", string.Empty);
        }
        set
        {
            SetPropertyValue("TargetCountTextControlId", value);
        }
    }

    [ExtenderControlProperty]
    [DefaultValue(1000)]
    public int MaxLength
    {
        get
        {
            return GetPropertyValue<int>("MaxLength", 0);
        }
        set
        {
            SetPropertyValue<int>("MaxLength", value);
        }
    }

    /// <summary>
    /// Validates the textbox against the maximum length.
    /// </summary>
    /// <returns>True if the text fits within MaxLength.</returns>
    public bool Validate()
    {
        return ((ITextControl)this.TargetControl).Text.Length <= MaxLength;
    }
}

As you can see, both the target control and the control to write the count to are of type ITextControl. This is an interface implemented by every control that has a Text property, so you can swap between TextBoxes and Labels. Here is a screenshot that writes the count to a TextBox instead of a label:


Picture 2. Textbox Limiter outputting to a Textbox instead of a Label

Client Side

In the behaviour file we define the variables that come from the server side, and the events needed to achieve the behaviour. The code below shows how to create the behaviour; we are also initialising the methods that we are going to use:

TextboxLimitExtender.TextboxLimitExtenderBehavior = function(element) {
    TextboxLimitExtender.TextboxLimitExtenderBehavior.initializeBase(this, [element]);

    // initializing property values
    this._TargetCountTextControlId = null;
    this._MaxLength = 1000;

    // initializing handlers
    this._onKeyPressHandler = null;
    this._onBeforePasteHandler = null;
    this._onPasteHandler = null;
    this._onKeyDownHandler = null;
    this._onKeyUpHandler = null;
}

The rest is the same as a standard implementation of an Ajax Control Toolkit extender, but I'll show some of the important methods listed above.

The _refreshCountTextBox method calculates the characters left and updates the count on the targetCountTextControl.

_refreshCountTextBox: function() {
    var control = this.get_element();
    var maxLength = this.get_MaxLength();
    var tbId = this.get_TargetCountTextControlId();
    var countTextBox;

    if (tbId) {
        countTextBox = $get(tbId);
    }
    else {
        return; // nowhere to write.
    }

    // innerText is supported by IE; standards browsers use textContent.
    var innerTextEnabled =
        (document.getElementsByTagName("body")[0].innerText !== undefined);

    if (countTextBox) {
        if (innerTextEnabled) {
            countTextBox.innerText = maxLength - control.value.length;
        }
        else {
            countTextBox.textContent = maxLength - control.value.length;
        }
    }
},

On pasting, things get a bit more interesting. We need to cancel the default paste in order to perform our own, so we handle onbeforepaste:

_onBeforePaste: function(e) {
    // cancel the default behaviour
    if (e) {
        e.preventDefault();
    }
    else {
        event.returnValue = false;
    }

    this._refreshCountTextBox();
},

And now that we have cancelled the paste, we have the responsibility to reach what the user wanted to copy and trim it so that it doesn't exceed the max length. If it does exceed it, the trailing characters won't make it into the box:

_onPaste: function(e) {
    var control = this.get_element();
    var maxLength = this.get_MaxLength();

    // cancel the default behaviour so we can override it
    if (e) {
        e.preventDefault();
    }
    else {
        event.returnValue = false;
    }

    // Note: document.selection and window.clipboardData are IE-only APIs.
    var oTR = control.document.selection.createRange();
    var insertLength = maxLength - control.value.length + oTR.text.length;
    var copiedData = window.clipboardData.getData("Text").substr(0, insertLength);
    oTR.text = copiedData;

    this._refreshCountTextBox();
},

Limitations & Remarks

Although the sample project is in .NET 3.5, the code is fully 2.0 compatible. It works fine in IE 6.0 and 7.0, but in Firefox it limits the textbox yet doesn't print the number of characters left for some reason, and I was too lazy to investigate it (see update).

Conclusion

This extender wraps up the strategy needed for limiting a textbox and showing how many characters are left. You can download the source code from here and use it any way you want.

Feel free to post suggestions, improvements or criticisms under this post or to my mail address sidarok at sidarok dot com.

UPDATE: Thanks to Michael, it works for Firefox now. Source is updated. See comments.

UPDATE 2: I am not developing the source any further, including doing compatibility checks or new updates. Please see the comments below from people who are graciously sharing the issues they come across, and don't hesitate to share with others as they are doing.



Linq to SQL with WCF in a Multi Tiered Action – Part 1

May 26th, 2008 by Sidar Ok

In many places – forums, blogs, or techy talks with colleagues – I keep hearing some urban legends about Linq to SQL:

  • You can not implement multi tiered applications with Linq to SQL

  • Linq to SQL can not be used for enterprise level applications

I can't say that either of these statements is entirely wrong or right; of course Linq to SQL can not handle every scenario, but in fairness it handles most scenarios, sometimes even better than some other RAD-oriented ORMs. In this post I will create a simulation of an enterprise web application, with its Data Access, Services, and Presentation Layers separated, and let them communicate with each other (err.., at least from service to UI) through WCF – Windows Communication Foundation.

This will be a couple of (maybe more) posts, and this is the first part. I'll post the sample code with the next post.

I have to say that this article is neither an introduction to Linq to SQL nor to WCF, so you need basic knowledge of both worlds in order to benefit from this mash up. We will develop an application step by step with an easy scenario, but it will have the most important characteristics of a disconnected (from the DataContext's perspective), multi-layered enterprise architecture.

While this architecture is more scalable and reliable, implementing it with Linq to SQL also has some tricks to keep in mind:

  • Our DataContext will be dead most of the time, so we won't be able to benefit from object tracking to generate our SQL statements out of the box.

  • This also brings to the table that we have to know which entities to delete, which to insert, and which to update. We can not just "do it" and submit changes as we do in connected mode. This means that we have to maintain the state of the objects manually (sorry folks, I feel the same pain).

  • The transport of the data over the wire is another problem. Since we don't write the entities on our own (and in the case of an amendment to them, the Linq to SQL designer can be very aggressive), this brings us to 2 common options:

  • We can create our own entities and write translators to convert from Linq entities to our very own ones.

  • We can try to customize the Linq entities in the ways we are able to.

Since the first one is obvious and straightforward to implement, we will go down the second route to explore the boundaries of this customization.
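For completeness, the first option would look roughly like this: a hand-written DTO plus a translator. Both type names and the entity property names below are hypothetical, not part of the sample:

```csharp
// Hypothetical sketch of the translator route (option 1). UserDto and
// UserTranslator do not exist in the sample; they only illustrate the idea.
public class UserDto
{
    public int UserId { get; set; }
    public string Name { get; set; }
}

public static class UserTranslator
{
    // Maps the Linq to SQL entity onto the hand-written DTO.
    public static UserDto ToDto(User entity)
    {
        return new UserDto { UserId = entity.UserId, Name = entity.Name };
    }
}
```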

To make it clearer what I will do, here is a basic but functional schema of the resulting n-tier application:


Picture 1 – Architectural schema of the sample app.

In our example, we are going to use Linq to SQL as an ORM. So as you see in the schema, Linq to SQL doesn't give us the heaven of not writing a DAL at all. But it reduces both the stored queries/procedures and the amount of mapping that we had to do manually before.

Developing the Application

Scenario

The scenario I came up with is a favorites web site that consists of 2 simple pages enabling its users to insert, delete, update and retrieve users and their favorites when requested. 1 user can have many favorites.

We will simply place 2 GridViews in the page and handle their events to make the necessary modifications on the model itself. This also demonstrates a common usage.

Design

Entities

Here is the object diagram of the entities; they are the same as the DB tables:


Picture 2. Entity Diagram

See the additional "Version" fields in the entities; they are of type Binary in .NET and timestamp in SQL Server 2005. We will use them to let Linq to SQL handle the concurrency issues for us.

Since we are going to expose a web service with the help of WCF, we need to mark our entities as DataContracts to make them available for serialization through the DataContractSerializer. We can do that by right clicking on the designer, going to properties, and changing the Serialization property to Unidirectional, as in the following picture:


Picture 3. Properties window

After doing this and saving, we will see in the designer.cs file that our entities are marked as DataContracts and their members as DataMembers.

As mentioned earlier, we need to maintain our entities' state – to know whether they are deleted, inserted, or updated. To do this I am going to define an enumeration as follows:

/// <summary>
/// The enum helps to identify the latest state of the entity.
/// </summary>
public enum EntityStatus
{
    /// <summary>
    /// The entity state is not set.
    /// </summary>
    None = 0,
    /// <summary>
    /// The entity is brand new.
    /// </summary>
    New = 1,
    /// <summary>
    /// Entity is updated.
    /// </summary>
    Updated = 2,
    /// <summary>
    /// Entity is deleted.
    /// </summary>
    Deleted = 3,
}

We are going to have this field in every entity, so let’s define a Base Entity with this field in it:

[DataContract]
public class BaseEntity
{
    /// <summary>
    /// Gets or sets the status of the entity.
    /// </summary>
    /// <value>The status.</value>
    [DataMember]
    public EntityStatus Status { get; set; }
}

And then, all we need to do is create partial classes for our entities and derive them from the base entity:

public partial class User : BaseEntity
{
}

public partial class Favorite : BaseEntity
{
}

Now our entities are ready to travel safely along with their arsenal.
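To make the manual state tracking concrete, client code would mark entities roughly like this before calling the update service. The property names and the client variable are illustrative, not taken from the sample:

```csharp
// Sketch: the presentation layer is responsible for setting Status by hand.
existingUser.Name = "Changed name";         // edit something...
existingUser.Status = EntityStatus.Updated; // ...and record the fact

User newUser = new User { Name = "Brand new", Status = EntityStatus.New };

staleUser.Status = EntityStatus.Deleted;    // mark for deletion

client.UpdateUsers(new List<User> { existingUser, newUser, staleUser });
```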

Service Layer Design

As we are going to use WCF, we need to have our:

  • Service Contracts (Interfaces)
  • Service Implementations (Concrete classes)
  • Service Clients (Consumers)
  • Service Host (Web service in our case)

Service Contracts

We will have 2 services: a Favorites service and a Users service. The Users service will have 4 methods: 2 gets and 2 updates. We will do the insertion, update, and deletion depending on the status, so there is no need to define separate operations for each. Here is the contract for users:

/// <summary>
/// Contract for user operations.
/// </summary>
[ServiceContract]
public interface IUsersService
{
    /// <summary>
    /// Gets all users.
    /// </summary>
    /// <returns></returns>
    [OperationContract]
    IList<User> GetAllUsers();

    /// <summary>
    /// Updates the user.
    /// </summary>
    /// <param name="user">The user.</param>
    [OperationContract]
    void UpdateUser(User user);

    /// <summary>
    /// Gets the user by id.
    /// </summary>
    /// <param name="id">The id.</param>
    /// <returns></returns>
    [OperationContract]
    User GetUserById(int id);

    /// <summary>
    /// Updates the users in the list according to their state.
    /// </summary>
    /// <param name="updateList">The update list.</param>
    [OperationContract]
    void UpdateUsers(IList<User> updateList);
}

And here is the contract for Favorites Service:

/// <summary>
/// Contract for favorites service.
/// </summary>
[ServiceContract]
public interface IFavoritesService
{
    /// <summary>
    /// Gets the favorites for user.
    /// </summary>
    /// <param name="user">The user.</param>
    /// <returns></returns>
    [OperationContract]
    IList<Favorite> GetFavoritesForUser(User user);

    /// <summary>
    /// Updates the favorites for user.
    /// </summary>
    /// <param name="user">The user.</param>
    [OperationContract]
    void UpdateFavoritesForUser(User user);
}

Service Implementations (Concrete classes)

Since we are developing a db application with no business logic at all, the service layer implementations are pretty lean & mean. Here is the service implementation for UsersService:

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class UsersService : IUsersService
{
    IUsersDataAccess DataAccess { get; set; }

    public UsersService()
    {
        DataAccess = new UsersDataAccess();
    }

    #region IUsersService Members

    /// <summary>
    /// Gets all users.
    /// </summary>
    /// <returns></returns>
    [OperationBehavior]
    public IList<User> GetAllUsers()
    {
        return DataAccess.GetAllUsers();
    }

    /// <summary>
    /// Updates the user.
    /// </summary>
    /// <param name="user">The user.</param>
    [OperationBehavior]
    public void UpdateUser(User user)
    {
        DataAccess.UpdateUser(user);
    }

    /// <summary>
    /// Gets the user by id.
    /// </summary>
    /// <param name="id">The id.</param>
    /// <returns></returns>
    [OperationBehavior]
    public User GetUserById(int id)
    {
        return DataAccess.GetUserById(id);
    }

    /// <summary>
    /// Updates the users in the list according to their state.
    /// </summary>
    /// <param name="updateList">The update list.</param>
    [OperationBehavior]
    public void UpdateUsers(IList<User> updateList)
    {
        DataAccess.UpdateUsers(updateList);
    }

    #endregion
}

And as you can imagine, the favorites service implementation is pretty much the same.

This has been long enough, so let's cut it here. In the next post, I will talk about the presentation, service and data layer implementations. With that, we will see the best approach to modifying these entities in a data grid, passing them through the WCF proxy and committing the changes (insert, update, delete) to the SQL 2005 database. I will also provide the source code with the next post. Stay tuned until then.

For part 2 : http://www.sidarok.com/web/blog/content/2008/06/02/linq-to-sql-with-wcf-in-a-multi-tiered-action-part-2.html .


A Basic Hands on Introduction to Unity DI Container

May 15th, 2008 by Sidar Ok

Hey folks, here we are with another interesting article. There are some introductions on the internet already providing the theoretical information about Unity, so I won't go deeper down that route. In this article, I will be more practical and provide a concrete implementation of the concepts. You can download the sample code by clicking here.

The Microsoft Patterns and Practices team has been developing the Enterprise Library to enable the use of general patterns and practices on the .NET platform, with great pluggable application blocks such as the Logging and Validation application blocks. One of them used to be DIAB, an acronym for Dependency Injection Application Block. But folks thought it should be named differently from the other application blocks, and came up with the fancy name "Unity".

Now I won't go into the details of the Inversion of Control and Dependency Injection patterns, as I can imagine you are sick of them and I want to keep this post short, but the basic value they bring to enterprise systems is decoupling. They promote programming to interfaces and isolate you from the creation process of your collaborators, letting you concentrate on what you need to deliver while improving testability.

Out in the universe, there are big frameworks such as Spring.NET or Castle Windsor, containing the Castle MicroKernel. The choice coming from the Microsoft Patterns and Practices team is the Unity framework, which went live in April. It is open source and hosted on CodePlex, along with its community contributions project that is awaiting developers' help to extend Unity.

Enough talking, let's see some action. We will develop a simple set of classes that applies naming conventions, using the strategy pattern. This is also a good fit, because a common best practice is to inject your strategies into their consumers through containers and interfaces.

Setting Up the Environment to Use Unity

In the example, I used Visual Studio 2008 and .NET 3.5. You need to download the latest drop of Unity from here and add it as a reference to the projects we want to use it in, and that's it really.

Members of the Solution

In the UnitySample project, there are Strategy Contracts and Strategy Implementations projects. The contracts are interfaces, as you may have discovered already, while their implementations reside in the implementations project.

So in the Contracts we have a naming strategy contract as follows:

/// <summary>
/// Defines the contract of changing strings per conventions.
/// </summary>
public interface INamingStrategy
{
    /// <summary>
    /// Converts the string according to the convention.
    /// </summary>
    /// <param name="toApplyNaming">The string that the naming strategy will be applied to.
    /// Assumes that the words are separated by spaces.</param>
    /// <returns>The naming-applied string.</returns>
    string ConvertString(string toApplyNaming);
}

And we will have 2 concrete implementations, one for Pascal and one for Camel casing, in the implementations project. Being good TDD guys, we write the test first. Let's see the test method for Pascal casing (the camel one is pretty similar):

/// <summary>
/// A test for ConvertString.
/// </summary>
[TestMethod()]
public void ConvertStringTest()
{
    INamingStrategy strategy = new PascalNamingStrategy();

    string testVar = "the variable to be tested";
    string expectedVar = "TheVariableToBeTested";

    string resultVar = strategy.ConvertString(testVar);

    Assert.AreEqual(expectedVar, resultVar);
}

After writing the test and watching it fail, we are ready to write the concrete implementation of Pascal casing to make it pass:

/// <summary>
/// Pascal naming convention, all title case.
/// </summary>
public class PascalNamingStrategy : INamingStrategy
{
    #region INamingStrategy Members

    /// <summary>
    /// Converts the string according to the convention.
    /// </summary>
    /// <param name="toApplyNaming">The string that the naming strategy will be applied to. Assumes that the words are separated by spaces.</param>
    /// <returns>The naming-applied string.</returns>
    public string ConvertString(string toApplyNaming)
    {
        Debug.Assert(toApplyNaming != null);
        Debug.Assert(toApplyNaming.Length > 0);

        // trivial example, not considering edge cases.
        string retVal = CultureInfo.InvariantCulture.TextInfo.ToTitleCase(toApplyNaming);
        return retVal.Replace(" ", string.Empty);
    }

    #endregion
}

You can see the relevant implementation of Camel casing in the source code provided.

After finishing with the fundamentals, let's utilize & test Unity with our design. For this purpose I am creating a project called "Unity Strategies Test" to see how the container can be used to inject a dependency when an INamingStrategy is requested. The following test method shows a very simple injection, and tests whether the injection succeeded, in a few lines:

/// <summary>
/// Tests if injecting dependencies succeeds.
/// </summary>
[TestMethod]
public void ShouldInjectDependencies()
{
    IUnityContainer container = new UnityContainer();

    container.RegisterType<INamingStrategy, PascalNamingStrategy>(); // we will abstract this later

    INamingStrategy strategy = container.Resolve<INamingStrategy>();

    Assert.IsNotNull(strategy, "strategy injection failed !!");
    Assert.IsInstanceOfType(strategy, typeof(PascalNamingStrategy), "Strategy injected, but type wrong!");
}

And the testing of PascalNamingStrategy now becomes much easier and more loosely coupled:

/// <summary>
/// Tests the pascal strategy through injection.
/// </summary>
[TestMethod]
public void TestPascalStrategy()
{
    IUnityContainer container = new UnityContainer();

    container.RegisterType<INamingStrategy, PascalNamingStrategy>(); // we will abstract this later

    // notice that we don't know what strategy will be used, and we don't really care either
    INamingStrategy strategy = container.Resolve<INamingStrategy>();

    string testVar = "the variable to be tested";
    string expectedVar = "TheVariableToBeTested";
    string resultVar = strategy.ConvertString(testVar);

    Assert.AreEqual(expectedVar, resultVar);
}

This very basic example showed how your tests and code can become loosely coupled. In the next posts I will talk about configuring the container and how to utilize it in your web applications. Stay tuned till then.
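As a taste of what that configuration looks like, the type mapping above can be moved into App.config roughly like this. This is a sketch against the Unity 1.0 schema, and the Contracts/Implementations assembly names are made up for illustration; check the exact element names against the Unity documentation:

```xml
<configuration>
  <configSections>
    <section name="unity"
             type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
  </configSections>
  <unity>
    <containers>
      <container>
        <types>
          <!-- maps INamingStrategy onto PascalNamingStrategy, replacing RegisterType -->
          <type type="Contracts.INamingStrategy, Contracts"
                mapTo="Implementations.PascalNamingStrategy, Implementations" />
        </types>
      </container>
    </containers>
  </unity>
</configuration>
```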


Cross Browser Guide Part 3 – Event Handling in Different Browsers

May 10th, 2008 by Sidar Ok

For the first two articles of the series: Part 1 and Part 2 .

The worst part of making an application work in multiple browsers is the different interpretation of JavaScript by every browser (you know what I mean). One of the most obvious differences is the event handling architecture, which differs between Internet Explorer and the browsers that follow the W3C standards for DOM event handling.

This is a very important topic, because everything starts with events. No events, no scripting. If at one point your event handling fails, it is very likely that the rest of your script will not be executed. So we need to understand the event models of – at least – the major browsers. We can group them into three major categories:

1 – Traditional Model

In the old browsers we were able to attach handlers only through inline scripting, such as:

<input type="button" id="myButton" value="Press" onclick="alert('hello world!')" />

But this is neither easily maintainable nor recommended now. So the Netscape notation is a common way to hook your events up:

element.onclick = doSomething;

As you can see, there is a certain drawback: we can not add more than one listener to an event, as we can in today's modern languages. This model is supported by most of the browsers, so don't worry, you don't need to write any extra code here.
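A common workaround for that drawback, sketched below, is to chain handlers by hand; this helper is illustrative and not part of any standard API:

```javascript
// Chain multiple handlers onto a traditional-model event property, so that
// assigning a second handler no longer overwrites the first.
function addHandler(element, name, handler) {
    var previous = element[name];
    element[name] = previous
        ? function (e) { previous.call(this, e); handler.call(this, e); }
        : handler;
}
```

Calling addHandler(el, 'onclick', f) twice with different functions leaves both attached, firing in registration order.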

2 – W3C Model

In 2000, the W3C published the DOM Level 2 Events Specification to address the problems of the traditional model.

In this model, assignments to a specific event are done by add and remove methods on a specific element. For example, to add a handler you say:

myButton.addEventListener('click', doSomething, false);

Whereas to remove a listener you write:

myButton.removeEventListener('click', doSomething, false);

As you can see, you can add or remove multiple listeners on an event in this model. For example, the following manages to fire both doSomething1 and doSomething2 when myButton is clicked:

myButton.addEventListener('click', doSomething1, false);
myButton.addEventListener('click', doSomething2, false);

The W3C model also accepts anonymous functions, which are very similar to anonymous methods in C# 2.0.

The last Boolean parameter states whether the handler will run in the capturing phase (true) or in the bubbling phase (false).

3 – Microsoft Event Bubbling Model

This event model is similar to the W3C one, but it is not the same. The name of the method used to attach the event is different, as below:

myButton.attachEvent('onclick', doSomething);

and to remove the handler you use:

myButton.detachEvent('onclick', doSomething);

As you see, there is no third parameter specifying capture or bubble: in the MS programming environment events always bubble and are never captured.

As a result of this, inside the handler it is not straightforward to tell exactly which element raised the event (I advise looking at the MS AJAX source code to see how they handled this situation).

That's why, while working with IE 7.0, we need to be careful about the window.event behavior. window.event stores the latest event that happened in the window, but it is not supported by the other browsers. For example, say you want to cancel the default behavior in a specific circumstance; the way to do this in IE 7.0 is:

window.event.returnValue = false;

But this will not work in Firefox. In Firefox the event object is instead passed into the handler as a parameter, so our event handler transforms into the following:

function doSomething(e)
{
  if (!e) // the browser (IE) did not pass the event object as a parameter
  {
    e = window.event;
  }

  if (e.preventDefault)  // W3C model (Firefox)
  {
    e.preventDefault();
  }
  else
  {
    e.returnValue = false; // Microsoft model
  }
}

Also, if you return false from a listener attached in the traditional model, the default action will be prevented (such as the postback of a button or the redirection of a link). This is very useful, especially in client-side validation of forms.
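To tie the three models together, a common pattern of the era is a small helper that feature-detects which model the browser supports. This is a minimal sketch (the addEvent and cancelDefault names are my own, not from any particular library):

```javascript
// Attach a handler using whichever event model the browser supports.
function addEvent(element, type, handler) {
  if (element.addEventListener) {            // W3C model
    element.addEventListener(type, handler, false);
  } else if (element.attachEvent) {          // Microsoft model
    element.attachEvent('on' + type, handler);
  } else {                                   // traditional model fallback
    element['on' + type] = handler;
  }
}

// Cancel the default action in a cross-browser way.
function cancelDefault(e) {
  e = e || window.event;                     // IE stores the event globally
  if (e.preventDefault) {                    // W3C model
    e.preventDefault();
  } else {
    e.returnValue = false;                   // Microsoft model
  }
}
```

With this in place, the rest of your script can call addEvent(myButton, 'click', doSomething) without caring which browser it runs in.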

We will continue to talk about the JavaScript problems across the browsers in the following posts, stay cool!


10 Tips to Improve your LINQ to SQL Application Performance

May 2nd, 2008 by Sidar Ok

Hey there, back again. In my first post about LINQ I tried to provide a brief (okay, a bit detailed) introduction for those who want to get involved with LINQ to SQL. In that post I promised to write about a basic integration of WCF and LINQ to SQL, but this is not that post.

Since LINQ to SQL is both a code generator and an ORM, and it offers a lot of things, it is normal to be suspicious about its performance. Such suspicions are justified up to a point, as LINQ comes with its own penalties. But there are several benchmarks showing that DLINQ achieves up to 93% of the performance of the ADO.NET SqlDataReader if optimizations are done correctly.

Hence I have summed up 10 important points that need to be considered while tuning your LINQ to SQL data retrieval and data modification code:

1 – Turn off ObjectTrackingEnabled Property of Data Context If Not Necessary

If you are only retrieving data as read-only and not modifying anything, you don't need object tracking. So turn it off, as in the example below:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  context.ObjectTrackingEnabled = false;
}

This turns off the unnecessary identity management of the objects: the DataContext will not have to store them, because it can be sure there will be no change statements to generate.

2 – Do NOT Dump All Your DB Objects into One Single DataContext

A DataContext represents a single unit of work, not your whole database. If you have several database objects that are not connected, or that are not used at all (log tables, objects used by batch processes, etc.), they just unnecessarily consume space in memory, increasing the identity management and object tracking costs in the CUD engine of the DataContext.

Instead, think of separating your workspace into several DataContexts, each representing a single unit of work. You can still configure them to use the same connection via their constructors, so as not to lose the benefit of connection pooling.
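As a rough sketch of the idea (SalesDataContext and CatalogDataContext are hypothetical names here, standing in for whatever smaller contexts you generate), two contexts can share one connection string so that pooling still kicks in:

```csharp
// Hypothetical split: one context per unit of work, not one for the whole DB.
string connectionString =
    ConfigurationManager.ConnectionStrings["Northwind"].ConnectionString;

// Sales-related tables only.
using (SalesDataContext sales = new SalesDataContext(connectionString))
{
    // ... work with Orders, OrderDetails ...
}

// Catalog-related tables only; same connection string, same pool.
using (CatalogDataContext catalog = new CatalogDataContext(connectionString))
{
    // ... work with Products, Categories ...
}
```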

3 – Use CompiledQuery Wherever Needed

When creating and executing your query, there are several steps involved in generating the appropriate SQL from the expression and getting results back. To name the important ones:

  1. Create expression tree

  2. Convert it to SQL

  3. Run the query

  4. Retrieve the data

  5. Convert it to the objects

As you may notice, when you are using the same query over and over, the first and second steps are just wasted time. This is where a tiny class in the System.Data.Linq namespace achieves a lot: with CompiledQuery, you compile your query once and store it somewhere for later use. This is achieved by the static CompiledQuery.Compile method.

Below is a Code Snippet for an example usage:

Func<NorthwindDataContext, IEnumerable<Category>> func =
   CompiledQuery.Compile<NorthwindDataContext, IEnumerable<Category>>
   ((NorthwindDataContext context) => context.Categories.
      Where<Category>(cat => cat.Products.Count > 5));


And now “func” is my compiled query. It will only be compiled once, when it is first run. We can store it in a static utility class as follows:

/// <summary>
/// Utility class to store compiled queries.
/// </summary>
public static class QueriesUtility
{
  /// <summary>
  /// The query that returns categories with more than five products.
  /// Held in a static readonly field so the query is compiled only once;
  /// a property that called CompiledQuery.Compile in its getter would
  /// silently recompile on every access.
  /// </summary>
  public static readonly Func<NorthwindDataContext, IEnumerable<Category>>
    GetCategoriesWithMoreThanFiveProducts =
      CompiledQuery.Compile<NorthwindDataContext, IEnumerable<Category>>
      ((NorthwindDataContext context) => context.Categories.
        Where<Category>(cat => cat.Products.Count > 5));
}

And we can use this compiled query (since it is now nothing but a strongly typed function for us) very easily, as follows:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  QueriesUtility.GetCategoriesWithMoreThanFiveProducts(context);
}

Storing and using it in this way means the compilation cost is paid only once, no matter how many times the query is used. If you never call the query, don't worry about the compilation cost either, since the query is only compiled when it is first executed.

4 – Filter Data Down to What You Need Using DataLoadOptions.AssociateWith

When we retrieve data with Load or LoadWith, we are assuming that we want all of the associated data bound to the primary key (and object id). But in most cases we need additional filtering on top of that. This is where the generic DataLoadOptions.AssociateWith method comes in very handy. It takes the criteria for loading the data as a parameter and applies it to the query, so you get only the data that you need.

The code below associates categories with only their products that are not discontinued:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  DataLoadOptions options = new DataLoadOptions();
  options.AssociateWith<Category>(cat=> cat.Products.Where<Product>(prod => !prod.Discontinued));
  context.LoadOptions = options;
}

5 – Turn Optimistic Concurrency Off Unless You Need It

LINQ to SQL comes with out-of-the-box optimistic concurrency support via SQL timestamp columns, which are mapped to the Binary type. You can turn this feature on and off both in the mapping file and with attributes on the properties. If your application can afford to run on a “last update wins” basis, then doing an extra update check is just a waste.

UpdateCheck.Never is used to turn optimistic concurrency off in LINQ to SQL.

Here is an example of turning optimistic concurrency off, implemented as attribute-level mapping:

[Column(Storage="_Description", DbType="NText",
            UpdateCheck=UpdateCheck.Never)]
public string Description
{
  get
  {
    return this._Description;
  }
  set
  {
    if ((this._Description != value))
    {
      this.OnDescriptionChanging(value);
      this.SendPropertyChanging();
      this._Description = value;
      this.SendPropertyChanged("Description");
      this.OnDescriptionChanged();
    }
  }
}

6 – Constantly Monitor Queries Generated by the DataContext and Analyze the Data You Retrieve

As your query is generated on the fly, there is a possibility that you may not be aware of additional columns or extra data being retrieved behind the scenes. Use the DataContext's Log property to see what SQL is being run by the DataContext. An example is as follows:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  context.Log = Console.Out;
}


Using this snippet while debugging, you can see the generated SQL statements in the Output window in Visual Studio and spot performance leaks by analyzing them. Don't forget to comment that line out on production systems, as it may create a bit of overhead. (Wouldn't it be great if this was configurable in the config file?)

To see your DLINQ expressions as SQL statements, you can use the SQL Query Visualizer, which needs to be installed separately from Visual Studio 2008.

7 – Avoid Unnecessary Attaches to Tables in the Context

Object tracking is a great mechanism, but nothing comes for free. When you Attach an object to your context, you are saying that this object was disconnected for a while and now you want to get it back in the game. The DataContext then marks it as an object that will potentially change, and this is just fine when you really intend to do that.

But there are some circumstances that aren't very obvious and may lead you to attach objects that aren't changed. One such case is doing an AttachAll on a collection without checking whether each object has changed. For better performance, you should make sure you attach only the objects in the collection that have changed.

In short: filter the collection down to the changed objects before attaching.
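A rough sketch of the idea follows; it assumes your entities carry a hypothetical IsChanged flag maintained by your own code (LINQ to SQL does not expose one on POCOs), and that the mapping allows attaching as modified (a timestamp column or UpdateCheck.Never):

```csharp
// Attach only the modified products instead of AttachAll on the whole list.
public void SaveProducts(IList<Product> products)
{
    using (NorthwindDataContext context = new NorthwindDataContext())
    {
        foreach (Product product in products)
        {
            // IsChanged is a hypothetical flag your own code maintains.
            if (product.IsChanged)
            {
                context.Products.Attach(product, true); // true: treat as modified
            }
        }
        context.SubmitChanges();
    }
}
```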

8 – Be Careful of Entity Identity Management Overhead

While working with a context that is not read-only, the objects are still being tracked, so be aware of the non-intuitive scenarios this can cause. Consider the following DLINQ code:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  var a = from c in context.Categories
  select c;
}

Very plain, basic DLINQ, isn't it? True, there doesn't seem to be anything bad in the above code. Now let's see the code below:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  var a = from c in context.Categories
  select new Category
  {
    CategoryID = c.CategoryID,
    CategoryName = c.CategoryName,
    Description = c.Description
  };
}

The intuition is to expect that the second query will work slower than the first one, which is WRONG. It is actually much faster than the first one.

The reason is that in the first query, the object for each row needs to be stored by the context, since there is a possibility that you may still change it. In the second, you throw the tracked object away and project into a new one, which is more efficient.

9 – Retrieve Only the Number of Records You Need

When you are binding to a data grid and doing paging, consider the easy-to-use methods that LINQ to SQL provides, mainly Skip and Take. Note that Skip must come before Take; otherwise you page within the first pageSize rows instead of the whole table. The method below retrieves just enough products for a ListView with paging enabled:

/// <summary>
/// Gets the products page by page.
/// </summary>
/// <param name="startingPageIndex">Index of the starting page.</param>
/// <param name="pageSize">Size of the page.</param>
/// <returns>The list of products in the specified page</returns>
private IList<Product> GetProducts(int startingPageIndex, int pageSize)
{
  using (NorthwindDataContext context = new NorthwindDataContext())
  {
    return context.Products
           .Skip(startingPageIndex * pageSize)
           .Take(pageSize)
           .ToList();
  }
}

10 – Don’t Misuse CompiledQuery

I can hear you saying: “What? Are you kidding me? How can a class like this be misused?”

Well, as with all optimization, LINQ to SQL is no exception:

“Premature optimization is the root of all evil” – Donald Knuth

If you are using CompiledQuery, make sure that you use it more than once, since the first execution is more costly than a normal query. But why?

That's because the function you get back from CompiledQuery is an object holding the SQL statement and a delegate to apply it. It is not compiled the way regular expressions are compiled, and your delegate keeps the ability to replace the variables (or parameters) in the resulting query.

That's the end, folks. I hope you'll enjoy these tips while programming with LINQ to SQL. Any comments or questions, either via sidarok at sidarok dot com or here on this post, are welcome.
