Archive for the ‘C# 3.0’ Category

Achieving POCOs in Linq to SQL

October 14th, 2008 by Sidar Ok

After the nice talk, it is really nice to see that people have interest in the topic. Unfortunately the quality of the recording was not great and the connection dropped twice, so I decided to put together this blog post to show how we can leave persistence-polluted entities behind.

Why is it so important?

I can hear lots of comments from people around me, mainly revolving around "Why do we need this much hassle, when we can already have designer support and the VS-integrated goodies of an ORM mapper?". First, I have to say that it is fair enough to think this way. But when things start to go beyond trivial, you start to have problems with persistence- or technology-polluted entities. Off the top of my head, I can think of the following:

  1. Technology agnosticism is a bliss: This concept usually revolves around PI (Persistence Ignorance), but it is not only that. Persistence Ignorance means that your entities should be cleared of any persistence-related code constraints that a framework - usually an ORM - forces on you. For example, if you have attribute-level mapping where those attributes are not part of your domain but are there just because some framework wants them there, then your domain is not persistence ignorant. Or, if your framework requires you to use specific types for handling associations, like EntitySet and EntityRef in Linq to SQL, the same applies. It can also be another technology that wants your entities to be serializable for some reason. We need to avoid these as much as possible and concentrate on our business concerns, not bend our domain to fit those technological constraints. This approach also promotes testability. The same goes for the need to implement an abstract class, or interfaces like INotifyPropertyChanged, when you don't want them.

  2. Relying on the Linq to SQL designer is painful: The designer puts everything in one file and regenerates the files each time you save, so you lose your changes, such as XML comments. Needless to say, the only OOTB support is attribute-level configuration; even for XML you need to use the sqlmetal tool outside of the designer process.

  3. Configuration should not be anything that your domain is concerned about: Unless you are building a configuration system :)

Let’s get geared

In the light of this, when we are working with the Linq to SQL designer, we tend to think that it is impossible to achieve POCOs, but indeed it is possible: the solution is, don't ditch POCOs, just ditch the designer :) While implementing POCOs, we need to know a couple of things beforehand about Linq to SQL internals, because we will be on our own when we have any problems.

  1. EntitySet and EntityRef are indeed useful classes, and they are there to achieve something. When you add an entity to an association, EntitySet manages the identity and back references. That is, for children you need to assign the correct parent id to the child, otherwise you will lose the relationship. The same goes for EntityRef and 1-1 relations.

  2. INotifyPropertyChanging and INotifyPropertyChanged are there not only to inform us, by providing the ability to subscribe to the necessary events and get notified when a property changes, but to leverage lazy loading as well. When we discard them, we are back to eager loading.
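To make point 1 concrete, here is a minimal, hypothetical sketch (plain Parent/Child types of my own, not the real EntitySet API) of the kind of identity and back-reference bookkeeping that EntitySet normally does behind the scenes:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical types for illustration only: when a child joins the
// association, both the back reference and the foreign key must be
// kept in sync, which EntitySet normally does for us.
public class Child
{
    public int ParentId;   // foreign key column
    public Parent Parent;  // back reference
}

public class Parent
{
    public int Id;
    private List<Child> _children = new List<Child>();

    public void AddChild(Child child)
    {
        child.Parent = this;   // set the back reference
        child.ParentId = Id;   // sync the FK, or the relationship is lost on insert
        _children.Add(child);
    }

    public IList<Child> Children
    {
        get { return _children; }
    }
}

public class Demo
{
    public static void Main()
    {
        Parent p = new Parent { Id = 42 };
        Child c = new Child();
        p.AddChild(c);
        Console.WriteLine(c.ParentId); // prints 42
    }
}
```

Once we drop EntitySet, this is exactly the bookkeeping we have to replicate by hand.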

Enough Rambling, let me see the wild world of code

For this post, I will only focus on the first part, so lazy loading is a matter for another one. The approach we are going to take is to use XML mapping instead of attribute-based mapping. I am gonna use the trivial Questions and Answers model, where one question can have multiple Answers associated with it. Here is how it looks:



Question and Answers entities

And their related code is pretty simple, nothing fancy. Here is the Answer POCO :


    public class Answer
    {
        public Answer()
        {
        }

        private int _QuestionId;

        public int QuestionId
        {
            get { return _QuestionId; }
            set { _QuestionId = value; }
        }

        private int _AnswerId;

        public int AnswerId
        {
            get { return _AnswerId; }
            set { _AnswerId = value; }
        }

        private string _AnswerText;

        public string AnswerText
        {
            get { return _AnswerText; }
            set { _AnswerText = value; }
        }

        private bool _IsMarkedAsCorrect;

        public bool IsMarkedAsCorrect
        {
            get { return _IsMarkedAsCorrect; }
            set { _IsMarkedAsCorrect = value; }
        }

        private int _Vote;

        public int Vote
        {
            get { return this._Vote; }
            set { _Vote = value; }
        }
    }

Yeah, clean, pure C#: no attributes, no EntityRefs, nothing. The same goes for Question, where the association is achieved through the good old simple List<T>:


    public class Question
    {
        private int _QuestionId;

        public int QuestionId
        {
            get { return _QuestionId; }
            set { _QuestionId = value; }
        }

        private string _QuestionText;

        public string QuestionText
        {
            get { return _QuestionText; }
            set { _QuestionText = value; }
        }

        private List<Answer> _Answer;

        public List<Answer> Answer
        {
            get { return _Answer; }
            set { _Answer = value; }
        }
    }

To use these entities as POCOs, I need a way to externally define the mappings between db tables and columns and the relevant object fields. I chose the other OOTB-supported way, XML. As I am too lazy to write it on my own, I ran the following sqlmetal command to generate it from the DB:


    sqlmetal /server:sidarok-pc /database:QuestionsAnswers /code:a.cs /map:Questions.xml

As you can see, it also generates code into the a.cs file, but I am gonna throw that away. Let's check that the generated XML maps to our fields:


    <?xml version="1.0" encoding="utf-8"?>
    <Database Name="questionsanswers" xmlns="">
      <Table Name="dbo.Answer" Member="Answer">
        <Type Name="Answer">
          <Column Name="AnswerId" Member="AnswerId" Storage="_AnswerId" DbType="Int NOT NULL IDENTITY" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" />
          <Column Name="QuestionId" Member="QuestionId" Storage="_QuestionId" DbType="Int NOT NULL" />
          <Column Name="AnswerText" Member="AnswerText" Storage="_AnswerText" DbType="Text NOT NULL" CanBeNull="false" UpdateCheck="Never" />
          <Column Name="IsMarkedAsCorrect" Member="IsMarkedAsCorrect" Storage="_IsMarkedAsCorrect" DbType="Bit NOT NULL" />
          <Column Name="Vote" Member="Vote" Storage="_Vote" DbType="Int NOT NULL" />
          <Association Name="FK_GoodAnswer_Question" Member="Question" Storage="_Question" ThisKey="QuestionId" OtherKey="QuestionId" IsForeignKey="true" />
        </Type>
      </Table>
      <Table Name="dbo.Question" Member="Question">
        <Type Name="Question">
          <Column Name="QuestionId" Member="QuestionId" Storage="_QuestionId" DbType="Int NOT NULL IDENTITY" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" />
          <Column Name="QuestionText" Member="QuestionText" Storage="_QuestionText" DbType="NVarChar(300) NOT NULL" CanBeNull="false" />
          <Association Name="FK_GoodAnswer_Question" Member="Answer" Storage="_Answer" ThisKey="QuestionId" OtherKey="QuestionId" DeleteRule="NO ACTION" />
        </Type>
      </Table>
    </Database>

Now, let's write a simple select test to see if it just works. This repository test is intentionally an integration test, to verify that I can get the question entity along with its children:


    [TestMethod()]
    public void GetQuestionTest()
    {
        QuestionsRepository target = new QuestionsRepository(); // TODO: Initialize to an appropriate value
        int id = 2; // TODO: Initialize to an appropriate value
        Question actual;
        actual = target.GetQuestion(id);
        Assert.IsNotNull(actual);
        Assert.IsTrue(actual.Answer.Count > 0);
    }

And after this, the implementation is quite trivial. Just note that eager loading is needed explicitly, because otherwise the Answers list will never get assigned and will remain null:


    public Question GetQuestion(int id)
    {
        using (QuestionDataContext context = new QuestionDataContext())
        {
            DataLoadOptions options = new DataLoadOptions();
            options.LoadWith<Question>(q => q.Answer);
            context.LoadOptions = options;
            return context.Questions.Single<Question>(q => q.QuestionId == id);
        }
    }

Aha, we don't have a DataContext yet! Let's create it; we need to feed it a connection string and the XML file. Note that the Table<T> properties are there just for convenience:


    public class QuestionDataContext : DataContext
    {
        static XmlMappingSource source = XmlMappingSource.FromXml(File.ReadAllText(@"C:\Users\sidarok\Desktop\PocoDemo\PocoDemo\questions.xml"));
        static string connStr = "Data Source=sidarok-pc;Initial Catalog=QuestionsAnswers;Integrated Security=True";

        public QuestionDataContext()
            : base(connStr, source)
        {
        }

        public Table<Question> Questions
        {
            get { return base.GetTable<Question>(); }
        }

        public Table<Answer> Answers
        {
            get { return base.GetTable<Answer>(); }
        }
    }

Now the test passes, hurray, we are happy, let's party! But before that, let's take a step forward and write a test for Insert:


    [TestMethod()]
    public void InsertQuestionTest()
    {
        QuestionsRepository target = new QuestionsRepository(); // TODO: Initialize to an appropriate value
        Question question = new Question()
        {
            QuestionText = "Temp Question",
            Answer = new List<Answer>()
            {
                new Answer()
                {
                    AnswerText = "Temp Answer 1",
                    IsMarkedAsCorrect = true,
                    Vote = 10,
                },
                new Answer()
                {
                    AnswerText = "Temp Answer 2",
                    IsMarkedAsCorrect = false,
                    Vote = 10,
                },
                new Answer()
                {
                    AnswerText = "Temp Answer 3",
                    IsMarkedAsCorrect = true,
                    Vote = 10,
                },
            }
        };

        using (TransactionScope scope = new TransactionScope())
        {
            target.InsertQuestion(question);
            Assert.IsTrue(question.QuestionId > 0);
            Assert.IsTrue(question.Answer[0].AnswerId > 0);
            Assert.IsTrue(question.Answer[1].AnswerId > 0);
            Assert.IsTrue(question.Answer[2].AnswerId > 0);
        }
    }

A simple insert test: insert a question along with its children, the answers, and check whether they have been assigned any Ids. The implementation is, again, nothing different from the usual:


    public void InsertQuestion(Question q)
    {
        using (QuestionDataContext context = new QuestionDataContext())
        {
            context.Questions.InsertOnSubmit(q);
            context.SubmitChanges();
        }
    }

When we run this test, we will run into this error:


    Test method QuestionRepositoryTest.QuestionsRepositoryTest.InsertQuestionTest threw exception: System.Data.SqlClient.SqlException: The INSERT statement conflicted with the FOREIGN KEY constraint "FK_GoodAnswer_Question". The conflict occurred in database "QuestionsAnswers", table "dbo.Question", column 'QuestionId'.
    The statement has been terminated.

Aha, well, this was kinda expected. We knew that we had to maintain the identity and back references, but we didn't. Shame on us. But how are we gonna do that? We don't know the ID value before we insert, so how do we tell Linq to SQL to pick up the new identity? Are we back to square one, SCOPE_IDENTITY()?

Of course, since I am writing this post, the answer has to be no :) The secret is in the back reference; the back reference is there just for this matter.

What we need to do now is, in each Answer, preserve a reference to the parent Question; and for each answer that is added, or when the list is overridden, assign the Answer's QuestionId property from this back reference. As we no longer have the EntitySet, we need to do that on our own, but it is easy enough. For Answer, here is the back reference:


    private Question _Question;

    public Question Question
    {
        get { return this._Question; }
        set
        {
            this._Question = value;
            this._QuestionId = value.QuestionId;
        }
    }

And for the Question POCO, when the List is overridden, we need to put in our own logic to handle this: for every child answer, ensure that the back reference and the reference id are set:


    private List<Answer> _Answer;

    public List<Answer> Answer
    {
        get { return _Answer; }
        set
        {
            _Answer = value;
            foreach (var answer in _Answer)
            {
                answer.QuestionId = this.QuestionId;
                answer.Question = this;
            }
        }
    }
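To see the wiring in action outside of any database, here is a self-contained check using trimmed-down versions of the two POCOs, keeping only the members relevant to the association:

```csharp
using System;
using System.Collections.Generic;

// Trimmed versions of the Answer and Question POCOs, keeping only the
// association plumbing that replaces EntitySet/EntityRef.
public class Answer
{
    private int _QuestionId;
    private Question _Question;

    public int QuestionId
    {
        get { return _QuestionId; }
        set { _QuestionId = value; }
    }

    public Question Question
    {
        get { return _Question; }
        set
        {
            _Question = value;
            _QuestionId = value.QuestionId; // sync the FK from the back reference
        }
    }
}

public class Question
{
    private int _QuestionId;
    private List<Answer> _Answer;

    public int QuestionId
    {
        get { return _QuestionId; }
        set { _QuestionId = value; }
    }

    public List<Answer> Answer
    {
        get { return _Answer; }
        set
        {
            _Answer = value;
            foreach (var answer in _Answer)
            {
                answer.QuestionId = this.QuestionId;
                answer.Question = this; // set the back reference
            }
        }
    }
}

public class Demo
{
    public static void Main()
    {
        Question q = new Question { QuestionId = 5 };
        q.Answer = new List<Answer> { new Answer(), new Answer() };
        Console.WriteLine(q.Answer[0].QuestionId); // prints 5
        Console.WriteLine(object.ReferenceEquals(q.Answer[1].Question, q)); // prints True
    }
}
```

Assigning the list is enough to wire every child to its parent, which is exactly what the failing insert needed.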

And the test passes after doing this. Hope this gives some idea of what you can do and what you need to know beforehand.


The desire to decouple domain entities from technological aspects is important to SoC & SRP, and these principles matter for nearly everything, from the pure basics to DDD. To achieve this in Linq to SQL, we need to say goodbye to EntityRef, EntitySet, and the INotifyPropertyChanged and INotifyPropertyChanging interfaces.

The next subject I am going to attack is Lazy Loading with POCOs, stay tuned till then!


Localizing Linq to SQL Entities

August 18th, 2008 by Sidar Ok

Back from the holidays! Not getting too much sun certainly encourages writing code rather than chilling out. Writing on this subject has been on my list; as Linq to SQL got more mature, the need for this in multi-cultural applications has risen accordingly. Also, an old post of Ayende's prompted me to think about how a similar problem could be solved in Linq to SQL.

I'll use the same model that he provided, which is the following:


Figure 1. Table structure for multi-lingual products


As in the original post, the challenge is to load only the Product Names for the current (or a specific) culture, not all of the ones related to a product. In NHibernate, there are filters to solve this problem in an elegant way. It is elegant because it is externally configurable and introduces no intrusiveness into your design.

When internationalising/localizing comes into play, there are 2 main approaches from a domain perspective, and the choice lies behind the answer to the question:

“Is localization a concern of my domain?”

In fairness, the answer changes for every domain (in my experience, in most cases it is either no, or part of a different domain, such as administration). A simple way of determining this is to check whether the domain needs to know about different cultures, or whether domain elements in different languages need to talk to each other (can Reuters publish news in Portuguese?). If the answer is yes, then even eager loading all language translations can be an option. But otherwise, we'll need to abstract it away so that the domain won't know about this infrastructural concern.

In the original post, Ayende uses filters in NHibernate. In Linq to SQL we don’t have filters but as mentioned before, we have Load Options to give a criteria and reduce the amount of data we retrieve.

As a matter of fact, we expect the following test to pass. Note that this is a state-based test, verifying that no more data is retrieved than the single record we need.

    /// <summary>
    ///A test for GetProduct
    ///</summary>
    [TestMethod()]
    public void GetProductTest()
    {
        ProductsRepository target = new ProductsRepository(); // TODO: Initialize to an appropriate value
        int prodId = 1; // TODO: Initialize to an appropriate value
        int lcId = 3; // TODO: Initialize to an appropriate value
        Product actual = target.GetProduct(prodId, lcId);
        Assert.AreEqual("Prod13", actual.Name);
        Assert.IsTrue(actual.ProductNames.Count == 1);
    }

Where the records in the table are as follows:


Figure 2. Records in Product Names Table. As seen, there are 2 records for product id '1'

The entity structure that we have to use with Linq to SQL (generated by the courtesy of the designer) is as follows:


Figure 3. Object Model of Product and Product Name

Looks innocent, doesn't it? The tricky thing is that Product will always have a list of ProductNames, which in my case will always have 1 element. If I want to keep my domain ignorant of this, this certainly is a bad thing, but it is what L2S gives me by default. There are ways to overcome this issue of course, but those are not the point of this post.

In addition to the model, I'll add another field called "Name" that's not mapped to any column in the db, to reproduce the original example. This is achieved by a partial class:

    partial class Product
    {
        public string Name
        {
            get;
            set;
        }
    }

Now we are ready to write the code that passes the test. Note that we are utilizing the AssociateWith generic method to do the necessary filtering.

    /// <summary>
    /// Gets the product for the current culture.
    /// </summary>
    /// <param name="prodId">The prod id.</param>
    /// <param name="lcId">The lc id to do localization filter.</param>
    /// <returns></returns>
    public Product GetProduct(int prodId, int? lcId)
    {
        using (ProductsDataContext context = new ProductsDataContext())
        {
            // set load options if localizable filter needed
            if (lcId.HasValue)
            {
                DataLoadOptions options = new DataLoadOptions();
                options.AssociateWith<Product>(p => p.ProductNames.Where<ProductName>(pn => pn.CultureId == lcId));
                context.LoadOptions = options;
            }

            Product pFromDb = context.Products.Single<Product>(p => p.ProductId == prodId);

            return new Product()
            {
                Amount = pFromDb.Amount,
                ProductId = pFromDb.ProductId,
                Size = pFromDb.Size,
                Name = pFromDb.ProductNames.First<ProductName>().Name,
                ProductNames = pFromDb.ProductNames
            };
        }
    }

Now that we are done with the original post, let's go beyond the bar and implement inserts & updates too. With inserts, there are 2 cases that I am going to handle: 1 - it is a brand new insert; 2 - it is just an insert of a new product name in another language.

For the first one, here is the test:

    /// <summary>
    ///A test for InsertProduct
    ///</summary>
    [TestMethod()]
    public void Should_Insert_for_Completely_New_Prod()
    {
        ProductsRepository target = new ProductsRepository(); // TODO: Initialize to an appropriate value
        Product p = new Product()
        {
            Amount = 31,
            Name = "English Name",
            ProductId = 0,
            Size = 36,
        };
        int lcId = 7;
        using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress))
        {
            target.InsertProduct(p, lcId);
            Assert.IsTrue(p.ProductId > 0);
            Assert.IsTrue(p.ProductNames.Count > 0);
        }
    }

And for the second one:

    /// <summary>
    ///A test for InsertProduct
    ///</summary>
    [TestMethod()]
    public void Should_Insert_Name_for_Existing_Prod()
    {
        ProductsRepository target = new ProductsRepository(); // TODO: Initialize to an appropriate value
        Product p = target.GetProduct(1);
        int firstCount = p.ProductNames.Count;
        p.Name = "Kurdish Name";
        int lcId = 9;
        using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress))
        {
            target.InsertProduct(p, lcId);
            Product prAfterInsert = target.GetProduct(p.ProductId);
            Assert.AreEqual(firstCount + 1, prAfterInsert.ProductNames.Count);
        }
    }

So, making the tests pass is straightforward: I need to do an extra insert into the product table if it is a new product, and that's it:

    /// <summary>
    /// Inserts the product.
    /// </summary>
    /// <param name="p">The p.</param>
    /// <param name="lcId">The lc id.</param>
    public void InsertProduct(Product p, int lcId)
    {
        using (ProductsDataContext context = new ProductsDataContext())
        {
            if (p.ProductId == 0)
            {
                // insert only if it is new
                context.Products.InsertOnSubmit(p);
            }

            InsertProductNameForProduct(context, p, lcId);
            context.SubmitChanges();
        }
    }

    /// <summary>
    /// Inserts the product name for product.
    /// </summary>
    /// <param name="context">The context.</param>
    /// <param name="p">The p.</param>
    /// <param name="lcId">The lc id.</param>
    private void InsertProductNameForProduct(ProductsDataContext context, Product p, int lcId)
    {
        context.ProductNames.InsertOnSubmit(new ProductName()
        {
            CultureId = lcId,
            Name = p.Name,
            ProductId = p.ProductId,
            Product = p,
        });
    }

And last, the update; apart from the obvious part, there is one situation we need to handle: if the name of the product has changed, then we need to update it as well. For the other fields, we go on with the regular update. Here is the test that codifies this:

    /// <summary>
    ///A test for UpdateProduct
    ///</summary>
    [TestMethod()]
    public void should_update_product_and_its_current_name()
    {
        ProductsRepository target = new ProductsRepository(); // TODO: Initialize to an appropriate value
        Product p = target.GetProduct(1, 2);
        p.Name = "French Name";
        p.Amount = 40;
        p.Size = 55;
        using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress))
        {
            target.UpdateProduct(p);
            Assert.AreEqual("French Name", p.Name);
            Assert.AreEqual(40, p.Amount);
            Assert.AreEqual(55, p.Size);
        }
    }

After writing the test, the implementation below becomes obvious:

    public void UpdateProduct(Product p)
    {
        // since we don't load more than one product name, we can assume that the one is updated
        using (ProductsDataContext context = new ProductsDataContext())
        {
            context.Products.Attach(p, true);
            ProductName currentName = p.ProductNames.Single<ProductName>();
            if (p.Name != currentName.Name)
            {
                // it is updated, update it
                currentName.Name = p.Name;
            }
            context.SubmitChanges();
        }
    }

I showed a possible strategy to localize Linq to SQL entities in this post. Of course, more complex scenarios such as child entities and lazy loading issues could be thought through more thoroughly, but I hope this gave some initiative to attack the whole idea.

Comments and critiques are well appreciated, as always.


Building a Configuration Binder for MEF with POCO Support

July 14th, 2008 by Sidar Ok

After taking the extensibility points of the Managed Extensibility Framework for a spin (which will be called "primitives" from the next version on), Jason Olson has posted a nice way of enabling fluent interfaces, making MEF behave in a more DI-ish way and trying to enable support for POCOs.

When Krzysztof Cwalina announced the first CTP of MEF, he commented that a non-attribute-based programming model is feasible, and Jason has provided one in his post. But apparently the team is going to keep the Import and Export model of the first CTP, according to David Kean's reply to one of the comments on his blog post.

Now, that makes me cringe. Clearly, I don’t like this kind of magic. Somebody exports, somebody imports, and a-ha! I have a list of import info in my domain (which is another form of intrusiveness in your design).

In this post, I will build a Configuration Resolver that uses the application's configuration to resolve the values in the container, with pure POCO support. This is based on the first CTP and involves a lot of hacks, but I think it includes some good work worth looking at.

All I want is to feed the container via an XML configuration (XmlRepository in our case) and get my dependencies injected in the configured way. If I had to mimic the MEF approach, I would have come up with a configuration like this:

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <configSections>
        <section name="xmlBinder" type="XmlBinder.Configuration.XmlBinderConfigurationSection, XmlBinder" />
      </configSections>
      <xmlBinder>
        <objects>
          <object name="HelloWorld" type="XmlBinder.TestClasses.HelloWorld, XmlBinder.TestClasses">
            <import name="Outputter" type="XmlBinder.TestClasses.Interfaces.IOutputter, XmlBinder.TestClasses" contract="outputContract"/>
            <import name="Greeter" type="XmlBinder.TestClasses.Interfaces.IGreeter, XmlBinder.TestClasses" contract="greetingContract"/>
          </object>

          <object name="Outputter">
            <export name="Outputter" type="XmlBinder.TestClasses.Interfaces.Outputter, XmlBinder.TestClasses" contract="outputContract" />
          </object>

          <object name="Greeter">
            <export name="" type="XmlBinder.TestClasses.Interfaces.Greeter, XmlBinder.TestClasses" contract="greetingContract" />
          </object>
        </objects>
      </xmlBinder>
    </configuration>

But this didn't seem natural to me. First, they need to share the contract name, which is prone to configuration errors. Second, the export/import model is still in place - and in a worse format. I have gone for a model like this instead (it looked a bit like a Spring-Unity mixture at the end):

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <configSections>
        <section name="xmlBinder" type="XmlBinder.Configuration.XmlBinderConfigurationSection, XmlBinder" />
      </configSections>
      <xmlBinder>
        <objects>
          <object name="HelloWorld" type="XmlBinder.TestClasses.HelloWorld, XmlBinder.TestClasses">
            <properties>
              <!--<property name="PropertyName" destination="XmlBinder.TestClasses.ConsoleOutputter, XmlBinder.TestClasses" /> Will be supported in the future, hopefully :)-->
              <property name="Outputter" type="XmlBinder.TestClasses.Interfaces.IOutputter, XmlBinder.TestClasses" mapTo="XmlBinder.TestClasses.ConsoleOutputter, XmlBinder.TestClasses" />
              <property name="Greeter" type="XmlBinder.TestClasses.Interfaces.IGreeter, XmlBinder.TestClasses" mapTo="XmlBinder.TestClasses.Greeter, XmlBinder.TestClasses" />
            </properties>
          </object>
        </objects>
      </xmlBinder>
    </configuration>

Here I have my XmlBinder.Configuration namespace to store my configuration related classes.


Figure 1. Configuration Classes

As you can see, I am defining a configuration section which has a list of object configurations in it. An object has properties, and properties have names, source types, and destination types to be mapped. Now, although there is a fair amount of code there, I am not gonna talk about how I parse the configuration information; if you are interested, you can download the source and read the tests.

With all these in place, I want my POCOs, which look like the ones in Jason's example, without any Imports or Exports:

    /// <summary>
    /// Plain old Hello World
    /// </summary>
    public class HelloWorld
    {
        public IOutputter Outputter
        {
            get;
            set;
        }

        public IGreeter Greeter
        {
            get;
            set;
        }

        public void SayIt()
        {
            Outputter.Output(Greeter.Greet());
        }
    }

    public class ConsoleOutputter : IOutputter
    {
        #region IOutputter Members

        public void Output(string message)
        {
            Console.WriteLine(message);
        }

        #endregion
    }

    public class Greeter : IGreeter
    {
        #region IGreeter Members

        public string Greet()
        {
            return "Hello World";
        }

        #endregion
    }

to be injected by the magic of only this code at bind time:

    CompositionContainer container = new CompositionContainer(resolver);
    container.Bind();
    var helloWorld = container.TryGetBoundValue<HelloWorld>().Value;

To achieve this, we have to give the container the types and contracts in the format that it needs, and this should be a cooked, ready-to-eat thing: since we are not decorating the types with Imports and Exports, we have to tell the container what to import and export. Finding out what the container wanted in order to do the binding was not as easy as I expected; I had to do a lot of reverse engineering. Here, TDD saved my day and helped me divide my problem space into two distinct parts: provide the instances to the CompositionContainer correctly (1), and compose the requested objects by using the types provided by (1).

We need to write a resolver and a binder to achieve this. The ValueResolver needs a repository to use in the resolving process, and it takes the repository as a parameter. This parameter is an ITypeRepository interface in my design, which means that one can write a DbConfigRepository that implements it, pass it to the ConfigValueResolver, and expect the resolver to work the same way. This approach decouples the value resolver from the internals of the repository. The ITypeRepository interface is defined as follows:

   1: public interface ITypeRepository
   2: {
   3:   /// <summary>
   4:   /// Gets the object meta.
   5:   /// </summary>
   6:   /// <returns>A list of object meta data info. This can be changed to return IEnumerable to enable lazy loading in the future.</returns>
   7:   IList<ObjectMeta> GetObjectMeta();
   8: }

And the implementation, which takes the object metadata from a configuration repository (in this case the application's XML configuration), is as below:

   1: public class XmlTypeRepository : ITypeRepository
   2: {
   3:    #region ITypeRepository Members
   4:     public IList<ObjectMeta> GetObjectMeta()
   5:     {
   6:         XmlBinderConfigurationSection section = ConfigurationManager.GetSection("xmlBinder") as XmlBinderConfigurationSection;
   7:         Debug.Assert(section != null);
   9:         IList<ObjectMeta> retVal = BuildObjectMetaListFromConfigurationSection(section);
  11:         return retVal;
  12:     }
  13:    #endregion
  15:     private IList<ObjectMeta> BuildObjectMetaListFromConfigurationSection(XmlBinderConfigurationSection section)
  16:     {
  17:             List<ObjectMeta> retVal = new List<ObjectMeta>();
  19:         foreach (XmlBinderObjectElement objectElement in section.Objects)
  20:         {
  21:             ObjectMeta meta = BuildObjectMetaFromConfiguration(objectElement);
  22:             retVal.Add(meta);
  23:         }
  24:         return retVal;
  25:     }
  27:     private ObjectMeta BuildObjectMetaFromConfiguration(XmlBinderObjectElement element)
  28:     {
  29:         Debug.Assert(element != null);
  31:         ObjectMeta retVal = new ObjectMeta()
  32:         {
  33:              ObjectType = element.Type,
  34:         };
  36:         foreach (XmlBinderPropertyElement propertyElement in element.PropertyElements)
  37:         {
  38:             retVal.MappingPairs.Add(new TypeMappingPair(element.Type.GetProperty(propertyElement.Name), propertyElement.TypeToMap, propertyElement.Name));
  40:         }
  42:         return retVal;
  43:     }
  44: }

Here, ObjectMeta represents an object's metadata, which will be processed further into something meaningful to bind.


Figure 2: TypeMappingPair and ObjectMeta entity structures
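In case the diagram does not come through, the two metadata types can be sketched roughly as follows, reconstructed from how they are used elsewhere in this post (the exact member shapes in the download may differ):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Rough sketch of the metadata entities, inferred from their usage in this post.
public class TypeMappingPair
{
    public PropertyInfo PropertyToInject { get; private set; }  // property on the owning type
    public Type ConcreteImplementation { get; private set; }    // type to inject into it
    public string PropertyName { get; private set; }

    public TypeMappingPair(PropertyInfo propertyToInject,
                           Type concreteImplementation,
                           string propertyName)
    {
        PropertyToInject = propertyToInject;
        ConcreteImplementation = concreteImplementation;
        PropertyName = propertyName;
    }
}

public class ObjectMeta
{
    public Type ObjectType { get; set; }                         // the type being described
    public IList<TypeMappingPair> MappingPairs { get; private set; }

    public ObjectMeta()
    {
        MappingPairs = new List<TypeMappingPair>();              // filled from configuration
    }
}
```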

Now that we have the repository, we can safely build the resolver. The relationship between a resolver and a binder, as far as I could find out, is this: a binder exists for a type and is responsible for it being built properly. The binder tells the container, for that type, "this type exports these and imports these; now go build." So it is reasonable for a binder to take three pieces of information: the target type, its imports, and its exports. I wrapped them up in a BindingInfo entity, whose class diagram is shown below side by side with ObjectMeta:


Figure 3: BindingInfo and ObjectMeta objects
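Again for readability, BindingInfo can be sketched like this, based on how the resolver and binder consume it (a sketch, not necessarily the exact class from the download):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Rough sketch of BindingInfo: exactly the three pieces a binder needs.
public class BindingInfo
{
    public Type TypeToCompose { get; set; }                         // target type to build
    public IList<Type> ExportsOfTypeToCompose { get; set; }         // contracts it satisfies
    public IList<PropertyInfo> ImportsOfTypeToCompose { get; set; } // properties to inject
}
```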

Have you noticed the mismatch between the two? That's the key point: BindingInfo is what the binder (and so the container) needs, while ObjectMeta is what we have — more intuitive, and there to support the POCO model. Now we need to implement the magic ourselves to convert the ObjectMeta list into a BindingInfo list. I implemented this in a method called GetBindingInfo() on the resolver. The resolver queries the underlying repository the first time it is asked to do so and retrieves a set of ObjectMeta from it. GetBindingInfo then does the necessary conversion so we can easily create our binder (XmlBinder in this case).

The following test shows what we expect from the resolver's GetBindingInfo method; I expect it to be rather self-explanatory:

   1: [TestMethod()]
   2: public void should_transform_metadata_format_into_the_needed_format_for_mef()
   3: {
   4:     ITypeRepository rep = new XmlTypeRepository();
   5:     ConfigValueResolver target = new ConfigValueResolver(rep); // TODO: Initialize to an appropriate value
   6:     IList<ObjectMeta> objectsFromRepository = rep.GetObjectMeta(); // get from xml repository
   7:     IList<BindingInfo> actual;
   8:     actual = target.GetBindingInfo();
   9:     Assert.AreEqual(3, actual.Count);
  11:     // see if types are registered
  12:     Assert.IsTrue(actual.Any<BindingInfo>(bi => bi.TypeToCompose == typeof(ConsoleOutputter)));
  13:     Assert.IsTrue(actual.Any<BindingInfo>(bi => bi.TypeToCompose == typeof(HelloWorld)));
  14:     Assert.IsTrue(actual.Any<BindingInfo>(bi => bi.TypeToCompose == typeof(Greeter)));
  16:     // see if infos set properly
  17:     BindingInfo helloWorld = actual.First<BindingInfo>(bi => bi.TypeToCompose == typeof(HelloWorld));
  18:     BindingInfo consoleOutputter = actual.First<BindingInfo>(bi => bi.TypeToCompose == typeof(ConsoleOutputter));
  19:     BindingInfo greeter = actual.First<BindingInfo>(bi => bi.TypeToCompose == typeof(Greeter));
  21:     // for parent type
  22:     Assert.IsTrue(helloWorld.ExportsOfTypeToCompose.Count > 0);
  23:     Assert.IsTrue(helloWorld.ExportsOfTypeToCompose.Any<Type>(t => t == typeof(HelloWorld)));
  25:     // verify expectations on injection
  26:     Assert.AreEqual(2, helloWorld.ImportsOfTypeToCompose.Count);
  27:     Assert.IsTrue(helloWorld.ImportsOfTypeToCompose.Any<PropertyInfo>(t => t.PropertyType == typeof(IOutputter)));
  28:     Assert.IsTrue(helloWorld.ImportsOfTypeToCompose.Any<PropertyInfo>(t => t.PropertyType == typeof(IGreeter)));
  30:     Assert.AreEqual(2, consoleOutputter.ExportsOfTypeToCompose.Count);
  31:     Assert.IsTrue(consoleOutputter.ExportsOfTypeToCompose.Any<Type>(t => t == typeof(ConsoleOutputter)));
  32:     Assert.IsTrue(consoleOutputter.ExportsOfTypeToCompose.Any<Type>(t => t == typeof(IOutputter)));
  34:     Assert.AreEqual(2, greeter.ExportsOfTypeToCompose.Count);
  35:     Assert.IsTrue(greeter.ExportsOfTypeToCompose.Any<Type>(t => t == typeof(Greeter)));
  36:     Assert.IsTrue(greeter.ExportsOfTypeToCompose.Any<Type>(t => t == typeof(IGreeter)));
  37: }

As you see, every exporter needs to export both itself and the mutual contract; that's why I am checking for a count of 2. To make this test pass, I came up with the following implementation for the resolver and its over-smart GetBindingInfo:

   1: public class ConfigValueResolver : ValueResolver
   2: {
   3:     ITypeRepository Repository
   4:     {
   5:         get;set;
   6:     }
   8:     private IList<ObjectMeta> metaList;
   9:     private IList<ObjectMeta> Objects
  10:     {
  11:         get
  12:         {
  13:             if (metaList == null)
  14:             {
  15:                 metaList = Repository.GetObjectMeta();
  16:             }
  17:             return metaList;
  18:         }
  19:     }
  22:     public ConfigValueResolver(ITypeRepository repository)
  23:     {
  24:         this.Repository = repository;
  25:     }
  27:     protected override void OnContainerSet()
  28:     {
  29:         base.OnContainerSet();
  30:         ConfigureContainer();
  31:     }
  33:     protected override void OnContainerDisposed()
  34:     {
  35:         base.OnContainerDisposed();
  36:     }
  38:     public override CompositionResult<IImportInfo> TryResolveToValue(string name, IEnumerable<string> requiredMetadata)
  39:     {
  40:         CompositionResult<ImportInfoCollection> result = TryResolveToValues(name, requiredMetadata);
  42:         return new CompositionResult<IImportInfo>(result.Succeeded, result.Issues, result.Value.First());
  43:     }
  45:     public override CompositionResult<ImportInfoCollection> TryResolveToValues(string name, IEnumerable<string> requiredMetadata)
  46:     {
  47:         return TryGetContainerLocalImportInfos(name, requiredMetadata);
  48:     }
  50:     private void ConfigureContainer()
  51:     {
  52:         // load up the types and add the binder for them
  54:         IList<BindingInfo> bindingList = GetBindingInfo();
  56:         foreach (var bindingInfo in bindingList)
  57:         {
  58:             this.Container.AddBinder(new XmlBinder(bindingInfo));
  59:         }
  62:     }
  64:     public IList<BindingInfo> GetBindingInfo()
  65:     {
  66:         Debug.Assert(Objects != null);
  68:         List<BindingInfo> retVal = new List<BindingInfo>();
  69:         foreach (var objectMeta in Objects)
  70:         {
  71:             IList<Type> exports = new List<Type>();
  72:             exports.Add(objectMeta.ObjectType);
  73:             IList<PropertyInfo> imports = new List<PropertyInfo>();
  74:             //TODO: if imported properties are not in the objects list themselves, it means that they aren't exporting anything. 
  75:             // So we can add them safely.
  76:             Debug.Assert(objectMeta.MappingPairs != null);
  78:             foreach (var propertyToBeInjected in objectMeta.MappingPairs)
  79:             {
  80:                 // mapping pairs themselves should be in the container in order to be considered to bind
  81:                 Debug.Assert(propertyToBeInjected != null);
  83:                 retVal.Add(new BindingInfo()
  84:                 {
  85:                     TypeToCompose = propertyToBeInjected.ConcreteImplementation,
  86:                     // exports itself and its contract
  87:                     ExportsOfTypeToCompose = new List<Type>()
  88:                     {
  89:                         propertyToBeInjected.ConcreteImplementation,
  90:                         propertyToBeInjected.PropertyToInject.PropertyType
  91:                     },
  92:                     ImportsOfTypeToCompose = new List<PropertyInfo>(), // currently not implemented
  93:                 });
  95:             imports.Add(propertyToBeInjected.PropertyToInject);
  96:          }
  98:          retVal.Add(new BindingInfo()
  99:          {
 100:            TypeToCompose = objectMeta.ObjectType,
 101:            ExportsOfTypeToCompose = exports,
 102:            ImportsOfTypeToCompose = imports
 103:           });
 104:         }
 106:         return retVal;
 107:     }
 108: }

As you see in the implementation, for every BindingInfo I am adding its binder. This makes the binder's implementation relatively straightforward but crucial: it extends the ComponentBinder base class and provides the export info, import info, and contract names for the composition operation. Here I am using relevantType.ToString() as the contract name, like Jason does in the fluent interface example, but the rest of the approach is a bit different:

   1: /// <summary>
   2: /// Each XML Binder stands for a type to resolve. 
   3: /// </summary>
   4: /// <remarks>No lifetime supported</remarks>
   5: public class XmlBinder : ComponentBinder
   6: {
   7:     private object instance;
   8:     private static object SyncRoot = new object();
  10:     public IList<Type> Exports
  11:     {
  12:         get;
  13:         set;
  14:     }
  16:     /// <summary>
  17:     /// List of the properties that are determined to be injected. 
  18:     /// Since the binder is a one-use-only object, the setter is private and the import list cannot be changed
  19:     /// during composition, to sync with the current nature of the container.
  20:     /// </summary>
  21:     /// <value>The imports.</value>
  22:     public IList<PropertyInfo> Imports
  23:     {
  24:         get;
  25:         private set;
  26:     }
  28:     /// <summary>
  29:     /// Gets or sets the target resolve type.
  30:     /// </summary>
  31:     /// <value>The type of the resolve.</value>
  32:     public Type TargetResolveType
  33:     {
  34:         get;
  35:         private set;
  36:     }
  38:     /// <summary>
  39:     /// Gets the current instance.
  40:     /// </summary>
  41:     /// <value>The current instance; it is a singleton for the time being.</value>
  42:     private object CurrentInstance
  43:     {
  44:         get
  45:         {
  46:             if (instance == null)
  47:             {
  48:                 lock (SyncRoot)
  49:                 {
  50:                     if (instance == null)
  51:                     {
  52:                         // a really dummy instance; we could take constructor info from the types in the XML repository
  53:                         // if we wanted to enable constructor injection. Assuming the resolved type has a default constructor for the time being. 
  54:                         instance = TargetResolveType.GetConstructor(new Type[] { }).Invoke(new object[] { });
  55:                     }
  56:                 }
  57:             }
  58:             return instance;
  59:         }
  60:     }
  62:     public XmlBinder(BindingInfo bindingInfo)
  63:     {
  64:         this.Exports = bindingInfo.ExportsOfTypeToCompose;
  65:         this.Imports = bindingInfo.ImportsOfTypeToCompose;
  66:         this.TargetResolveType = bindingInfo.TypeToCompose;
  67:     }
  69:     /// <summary>
  70:     /// Gets the export names.
  71:     /// </summary>
  72:     /// <value>The export names.</value>
  73:     public override IEnumerable<string> ExportNames
  74:     {
  75:         get
  76:         {
  77:             return Exports.Select<Type, string>(t => t.ToString());
  78:          }
  79:     }
  81:     /// <summary>
  82:     /// Gets the import names.
  83:     /// </summary>
  84:     /// <value>The import names.</value>
  85:     public override IEnumerable<string> ImportNames
  86:     {
  87:         get
  88:         {
  89:             return Imports.Select(info => info.PropertyType.ToString());
  90:          }
  91:     }
  93:     /// <summary>
  94:     /// Exports this instance.
  95:     /// </summary>
  96:     /// <returns></returns>
  97:     public override CompositionResult Export()
  98:     {
  99:         foreach (var type in Exports)
 100:         {
 101:             AddValueToContainer(type.ToString(), CurrentInstance);
 102:         }
 104:         return CompositionResult.SucceededResult;
 105:     }
 107:     /// <summary>
 108:     /// Imports the specified changed value names.
 109:     /// </summary>
  110:     /// <param name="changedValueNames">The changed value names, not really used.</param>
 111:     /// <returns></returns>
 112:     public override CompositionResult Import(IEnumerable<string> changedValueNames)
 113:     {
 114:             foreach (var info in Imports)
 115:             {
 116:                 CompositionResult<object> component = Container.TryGetBoundValue(info.PropertyType.ToString(), info.PropertyType);
 117:                 // do the injection. Currently assuming that only non-indexed values are to be resolved
 118:                 if (component.Succeeded)
 119:                 {
 120:                     info.SetValue(CurrentInstance, component.Value, null);
 121:                 }
 122:                 else
 123:                 {
 124:                     throw new InvalidOperationException(component.Issues[0].Description, component.Issues[0].Exception);
 125:                 }
 127:             }
 129:             return CompositionResult.SucceededResult;
 130:         }
 132:     public override bool Equals(object obj)
 133:     {
 134:         XmlBinder binder = obj as XmlBinder;
 135:         if (binder != null)
 136:         {
 137:             return binder.TargetResolveType == this.TargetResolveType;
 138:         }
 139:         return false;
 140:     }
 141: }
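One detail worth spelling out is the contract naming above: ExportNames and ImportNames both derive contract names from Type.ToString(), so an export and an import line up whenever they involve the same type. A tiny illustration (the namespace here is assumed, not taken from the download):

```csharp
// Contract names are just full type names, so matching is plain string equality.
// Assuming IGreeter lives in a namespace called HelloWorldSample:
string exportName = typeof(IGreeter).ToString();   // "HelloWorldSample.IGreeter"
string importName = typeof(IGreeter).ToString();   // the same string

// Greeter exports under this name; HelloWorld's IGreeter property imports
// under the same name, so the container can pair them up during Bind().
```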

And now we have done all the dirty infrastructure work. That was all for this test to pass:

   1: [TestMethod]
   2: public void should_print_hello_world_to_console()
   3: {
   4:     ConfigValueResolver resolver = new ConfigValueResolver(new XmlTypeRepository());
   5:     //IOutputter outputter = null;
   6:     //IGreeter greeter = null;
   7:     HelloWorld helloWorld = null;
   8:     try
   9:     {
  10:         CompositionContainer container = new CompositionContainer(resolver);
  11:         container.Bind();
  12:         helloWorld = container.TryGetBoundValue<HelloWorld>().Value;
  13:     }
  14:     catch (CompositionException ex)
  15:     {
  16:         foreach (CompositionIssue issue in ex.Issues)
  17:         {
  18:             Console.WriteLine("issue = {0}", issue.ToString());
  19:         }
  20:     }
  22:     Assert.IsNotNull(helloWorld);
  23:     Assert.IsNotNull(helloWorld.Outputter);
  24:     Assert.IsNotNull(helloWorld.Greeter);
  26:     helloWorld.SayIt();
  27: }

Ignore the try/catch; it is there because of the *damn* error handling mechanism of MEF, which reports a list of issues but doesn't provide a concatenated representation of the error message. At the end, the dependencies are injected, "hello world" is printed to the test console, and the world is a better place to live in. Thank you all!

You can download the sources from here; I have 80% test coverage now. Any comments, criticisms, or cheques with a lot of trailing zeros are well appreciated, as always.

Linq to SQL with WCF in a Multi Tiered Action – Part 1

May 26th, 2008 by Sidar Ok

In many places (forums, blogs, techy talks with colleagues) I keep hearing some ongoing urban legends about Linq to SQL:

  • You cannot implement multi-tiered applications with Linq to SQL

  • Linq to SQL cannot be used for enterprise-level applications

I can't say that either of these statements is entirely right or wrong; of course Linq to SQL cannot handle every scenario, but in fairness it handles most of them, sometimes even better than some other RAD-oriented ORMs. In this post I will create a simulation of an enterprise web application, with its Data Access, Services, and Presentation layers separated, and let them communicate with each other (err... at least from service to UI) through WCF, Windows Communication Foundation.

This will take a couple of posts (maybe more), and this is the first part. I'll post the sample code with the next post.

I have to say that this article is an introduction to neither Linq to SQL nor WCF, so you need basic knowledge of both worlds in order to benefit from this mash-up. We will develop an application step by step with an easy scenario, but it will have the most important characteristics of a disconnected (from the DataContext perspective), multi-layered enterprise architecture.

While this architecture is more scalable and reliable, implementing it with Linq to SQL has some tricks to keep in mind:

  • Our DataContext will be dead most of the time. So we won't be able to benefit from Object Tracking to generate our SQL statements out of the box.

  • This also brings to the table the fact that we have to know which entities to delete, which to insert, and which to update. We cannot just "do it" and submit changes as we would in connected mode. This means we have to maintain the state of the objects manually (sorry folks, I feel the same pain).

  • The transport of the data over the wire is another problem. Since we don't write the entities ourselves (and when we amend them, the Linq to SQL designer can be very aggressive), this leaves us with two common options:

  • We can create our own entities and write translators to convert from the Linq entities to our very own ones.

  • We can try to customize the Linq entities in the ways we are able to.

Since the first one is obvious and straightforward to implement, we will go down the second route to explore the boundaries of this customization.
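For contrast, the first (translator) route would look something like the hypothetical sketch below. The UserDto and UserTranslator types, and the Id/Name properties, are made up for illustration, since this is the route we are not taking:

```csharp
using System.Runtime.Serialization;

// Hypothetical hand-written DTO for the translator approach (names made up).
[DataContract]
public class UserDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// A translator converts between the Linq to SQL entity and the DTO by hand.
public static class UserTranslator
{
    public static UserDto ToDto(User entity)
    {
        return new UserDto { Id = entity.Id, Name = entity.Name };
    }

    public static User ToEntity(UserDto dto)
    {
        return new User { Id = dto.Id, Name = dto.Name };
    }
}
```

The cost is obvious: every new entity or column means another DTO and another pair of mapping methods, which is exactly the busywork we are trying to avoid.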

To make it clearer what I will do, here is a basic but functional schema of the resulting n-tier application:


Picture 1 – Architectural schema of the sample app.

In our example, we are going to use Linq to SQL as the ORM. As you see in the schema, Linq to SQL doesn't give us the heaven of not writing a DAL at all, but it reduces both the stored queries/procedures and the amount of mapping we had to do manually before.

Developing the Application


The scenario I came up with is a favorites web site that consists of two simple pages, enabling its users to insert, delete, update, and retrieve users and their favorites on request. One user can have many favorites.

We will simply place two GridViews on the page and handle their events to make the necessary modifications to the model itself. This will also demonstrate a common usage.



Here is the object diagram of the entities; they are the same as the DB tables:


Picture 2.Entity Diagram

Note the additional "Version" fields in the entities; they are of type Binary in .NET and timestamp in SQL Server 2005. We will use them to let Linq to SQL handle the concurrency issues for us.
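For reference, such a column shows up in the generated designer.cs marked with IsVersion=true, which is what tells Linq to SQL to use it for optimistic concurrency checks. Roughly (simplified from what the designer actually emits, and assuming a backing field named _Version):

```csharp
// Simplified sketch of the designer-generated Version property.
// IsVersion=true makes Linq to SQL include this column in the WHERE clause of
// generated UPDATEs, so a lost update surfaces as a ChangeConflictException.
[Column(Storage = "_Version", AutoSync = AutoSync.Always,
        DbType = "rowversion NOT NULL", CanBeNull = false,
        IsDbGenerated = true, IsVersion = true)]
public System.Data.Linq.Binary Version
{
    get { return this._Version; }
    set { /* the real generated setter also raises change notifications */ }
}
```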

Since we are going to employ a web service with the help of WCF, we need to mark our entities as DataContract to make them available for serialization through the DataContractSerializer. We can do that by right-clicking on the designer, going to Properties, and changing the Serialization Mode property to Unidirectional, as in the picture that follows:


Picture 3. Properties window

After doing this and saving, we will see in the designer.cs file that our entities are marked as DataContract and their members as DataMember.

As mentioned earlier, we need to maintain our entities' state, i.e. to know whether they are deleted, inserted, or updated. To do this I am going to define an enumeration as follows:

   1: /// <summary>
   2: /// This enum helps to identify the latest state of the entity.
   3: /// </summary>
   4: public enum EntityStatus
   5: {
   6:     /// <summary>
   7:     /// The entity state is not set.
   8:     /// </summary>
   9:     None = 0,
  10:     /// <summary>
  11:     /// The entity is brand new.
  12:     /// </summary>
  13:     New = 1,
  14:     /// <summary>
  15:     /// The entity is updated. 
  16:     /// </summary>
  17:     Updated = 2,
  18:     /// <summary>
  19:     /// The entity is deleted. 
  20:     /// </summary>
  21:     Deleted = 3,
  22: }

We are going to have this field in every entity, so let’s define a Base Entity with this field in it:

   1: [DataContract]
   2: public class BaseEntity
   3: {
   4:   /// <summary>
   5:   /// Gets or sets the status of the entity.
   6:   /// </summary>
   7:   /// <value>The status.</value>
   9:   [DataMember]
  10:   public EntityStatus Status { get; set; }
  11: }


And then, all we need to do is to create partial classes for our Entities and extend them from base entity:

   1: public partial class User : BaseEntity
   2: {
   4: }
   6: public partial class Favorite : BaseEntity
   7: {
   9: }

Now our entities are ready to travel safely, along with their arsenal.
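To make the intent concrete, here is how calling code would flag entities before shipping them off (a sketch; the User property names are assumed):

```csharp
// Hypothetical calling code: with no live DataContext, state is flagged by hand.
User newUser = new User { Name = "sidar" };   // property name assumed
newUser.Status = EntityStatus.New;            // will become an INSERT

existingUser.Name = "renamed";
existingUser.Status = EntityStatus.Updated;   // will become an UPDATE

staleUser.Status = EntityStatus.Deleted;      // will become a DELETE on submit
```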

Service Layer Design

As we are going to use WCF, we need to have our:

  • Service Contracts (Interfaces)
  • Service Implementations (Concrete classes)
  • Service Clients (Consumers)
  • Service Host (Web service in our case)

Service Contracts

We will have two services: a Users service and a Favorites service. The Users service will have four methods: two gets and two updates. We will do the insertion, update, and deletion depending on the status, so there is no need to define separate operations for each. Here is the contract for users:

   1: /// <summary>
   2: /// Contract for user operations 
   3: /// </summary>
   5: [ServiceContract]
   6: public interface IUsersService
   7: {
   8: /// <summary>
   9: /// Gets all users.
  10: /// </summary>
  11: /// <returns></returns>
  13:   [OperationContract]
  14:   IList<User> GetAllUsers();
  16: /// <summary>
  17: /// Updates the user.
  18: /// </summary>
  19: /// <param name="user">The user.</param>
  21:   [OperationContract]
  22:   void UpdateUser(User user);
  24: /// <summary>
  25: /// Gets the user by id.
  26: /// </summary>
  27: /// <param name="id">The id.</param>
  28: /// <returns></returns>
  30:   [OperationContract]
  31:   User GetUserById(int id);
  33: /// <summary>
  34: /// Updates the users in the list according to their state.
  35: /// </summary>
  36: /// <param name="updateList">The update list.</param>
  38:   [OperationContract]
  39:   void UpdateUsers(IList<User> updateList);
  40: }

And here is the contract for Favorites Service:

   1: /// <summary>
   2: /// Contract for favorites service
   3: /// </summary>
   4: [ServiceContract]
   5: public interface IFavoritesService
   6: {
   7:   /// <summary>
   8:   /// Gets the favorites for user.
   9:   /// </summary>
  10:   /// <param name="user">The user.</param>
  11:   /// <returns></returns>
  12:   [OperationContract]
  13:   IList<Favorite> GetFavoritesForUser(User user);
  15:   /// <summary>
  16:   /// Updates the favorites for user.
  17:   /// </summary>
  18:   /// <param name="user">The user.</param>
  19:   [OperationContract]
  20:   void UpdateFavoritesForUser(User user);
  21: }

Service Implementations (Concrete classes)

Since we are developing a database application with no business logic at all, the service layer implementations are pretty lean and mean. Here is the implementation for UsersService:

   1: [ServiceBehavior(IncludeExceptionDetailInFaults=true)]
   2: public class UsersService : IUsersService
   3: {
   4:     IUsersDataAccess DataAccess { get; set; }
   6:     public UsersService()
   7:     {
   8:         DataAccess = new UsersDataAccess();
  10:     }
  12:     #region IUsersService Members
  14:     /// <summary>
  15:     /// Gets all users.
  16:     /// </summary>
  17:     /// <returns></returns>
  18:     [OperationBehavior]
  19:     public IList<User> GetAllUsers()
  20:     {
  21:         return DataAccess.GetAllUsers();
  22:     }
  24:     /// <summary>
  25:     /// Updates the user.
  26:     /// </summary>
  27:     /// <param name="user">The user.</param>
  28:     [OperationBehavior]
  29:     public void UpdateUser(User user)
  30:     {
  31:         DataAccess.UpdateUser(user);
  32:     }
  34:     /// <summary>
  35:     /// Gets the user by id.
  36:     /// </summary>
  37:     /// <param name="id">The id.</param>
  38:     /// <returns></returns>
  39:     [OperationBehavior]
  40:     public User GetUserById(int id)
  41:     {
  42:         return DataAccess.GetUserById(id);
  43:     }
  45:     /// <summary>
  46:     /// Updates the users in the list according to their state.
  47:     /// </summary>
  48:     /// <param name="updateList">The update list.</param>
  49:     [OperationBehavior]
  50:     public void UpdateUsers(IList<User> updateList)
  51:     {
  52:         DataAccess.UpdateUsers(updateList);
  53:     }
  55:     #endregion
  56: }

And as you can imagine, the favorites service implementation is pretty much the same.
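The part doing the real work sits on the data access side, which has to dispatch on EntityStatus because the DataContext never tracked these disconnected objects. The actual implementation comes with part 2; as a rough sketch of the idea (the DataContext name is assumed, and conflict handling is glossed over):

```csharp
// Sketch only: replaying disconnected changes against a fresh DataContext.
public void UpdateUsers(IList<User> updateList)
{
    using (var context = new FavoritesDataContext()) // context name assumed
    {
        foreach (User user in updateList)
        {
            switch (user.Status)
            {
                case EntityStatus.New:
                    context.Users.InsertOnSubmit(user);
                    break;
                case EntityStatus.Updated:
                    // attach as modified; the Version timestamp drives the concurrency check
                    context.Users.Attach(user, true);
                    break;
                case EntityStatus.Deleted:
                    context.Users.Attach(user);
                    context.Users.DeleteOnSubmit(user);
                    break;
            }
        }
        context.SubmitChanges(); // a lost update throws ChangeConflictException here
    }
}
```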

This has been long enough, so let's cut it here. In the next post, I will talk about the presentation, service, and data layer implementations. There we will see how best to approach modifying these entities in a data grid, passing them through the WCF proxy, and committing the changes (insert, update, delete) to the SQL Server 2005 database. I will also provide the source code with the next post. Stay tuned until then.

For part 2 : .


A Basic Hands on Introduction to Unity DI Container

May 15th, 2008 by Sidar Ok

Hey folks, here we are with another interesting article. There are already some introductions to Unity on the internet providing the theoretical information, so I won't go down that route. In this article I will be more practical and provide a concrete implementation of the concepts. You can download the sample code by clicking here.

The Microsoft Patterns and Practices team has been developing the Enterprise Library to enable the use of general patterns and practices on the .NET platform, with great pluggable application blocks such as the Logging and Validation application blocks. One of them used to be DIAB, an acronym for Dependency Injection Application Block. But folks thought it should be named differently from the other application blocks, and came up with the fancy name "Unity".

Now, I won't go into the details of the Inversion of Control and Dependency Injection patterns, as I can imagine you are sick of them and I want to keep this post short; but the basic value they bring to enterprise systems is decoupling. They promote programming to interfaces and isolate you from the creation process of your collaborators, letting you concentrate on what you need to deliver while improving testability.

Out in the universe, there are big frameworks such as Spring.NET, or Castle Windsor with its Castle MicroKernel. The choice coming from the Microsoft Patterns and Practices team is the Unity framework, which went live in April. It is open source and hosted on CodePlex, along with a community contributions project that is awaiting developers' help to extend Unity.

Enough talking, let's see some action. We will develop a simple set of classes that apply naming conventions using the strategy pattern. This is also a good example because a common best practice is to inject your strategies into their consumers through containers and interfaces.

Setting Up the Environment to Use Unity

In the example, I used Visual Studio 2008 and .NET 3.5. You need to download the latest drop of Unity from here and add it as a reference to the projects that will use it, and that's it really.

Members of the Solution

In the UnitySample project, there are strategy contracts and strategy implementations. The contracts are interfaces, as you may already have discovered, while their implementations reside in the implementations project.

So in the Contracts we have a naming strategy contract as follows:

   1: /// <summary>
   2: /// Defines the contract of changing strings per conventions.
   3: /// </summary>
   4: public interface INamingStrategy
   5: {
   6:   /// <summary>
   7:   /// Converts the string according to the convention.
   8:   /// </summary>
   9:   /// <param name="toApplyNaming">The string that the naming strategy will be applied to. 
  10:   /// Assumes that the words are separated by spaces.</param>
  11:   /// <returns>The naming applied string.</returns>
  12:   string ConvertString(string toApplyNaming);
  13: }

And we will have two concrete implementations in the implementations project, one for Pascal and one for camel casing. Being good TDD guys, we write the test first. Let's see the test method for Pascal casing (the camel one is pretty similar):

   1: /// <summary>
   2: ///A test for ConvertString
   3: ///</summary>
   4: [TestMethod()]
   5: public void ConvertStringTest()
   6: {
   7:   INamingStrategy strategy = new PascalNamingStrategy();
   9:   string testVar = "the variable to be tested";
  10:   string expectedVar = "TheVariableToBeTested";
  12:   string resultVar = strategy.ConvertString(testVar);
  14:   Assert.AreEqual(expectedVar, resultVar);
  15: }

After we write the test and watch it fail, we are ready to write the concrete implementation of Pascal casing to make it pass:

   1: /// <summary>
   2: /// Pascal naming convention, all title case.
   3: /// </summary>
   4: public class PascalNamingStrategy : INamingStrategy
   5: {
   6:    #region INamingStrategy Members
   8:     /// <summary>
   9:     /// Converts the string according to the convention.
  10:     /// </summary>
  11:     /// <param name="toApplyNaming">The string that the naming strategy will be applied to. Assumes that the words are separated by spaces.</param>
  12:     /// <returns>The naming applied string.</returns>
  13:     public string ConvertString(string toApplyNaming)
  14:     {
  15:         Debug.Assert(toApplyNaming != null);
  16:         Debug.Assert(toApplyNaming.Length > 0);
  18:         // trivial example, not considering edge cases.
  19:         string retVal = CultureInfo.InvariantCulture.TextInfo.ToTitleCase(toApplyNaming);
  20:         return retVal.Replace(” “, string.Empty);
  21:     }
  23:     #endregion
  24: }

You can see the corresponding Camel casing implementation in the source code provided.
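For the impatient, here is one possible Camel casing implementation along the same lines. This is just a sketch of mine, and the version in the downloadable source may differ; the interface is repeated only so the snippet compiles standalone:

```csharp
using System;
using System.Globalization;

// Repeated here only so the sketch compiles on its own.
public interface INamingStrategy
{
    string ConvertString(string toApplyNaming);
}

/// <summary>
/// Camel naming convention: first word lower case, subsequent words title case.
/// </summary>
public class CamelNamingStrategy : INamingStrategy
{
    public string ConvertString(string toApplyNaming)
    {
        // Same trick as the Pascal version, then lower-case the first character.
        string pascal = CultureInfo.InvariantCulture.TextInfo
            .ToTitleCase(toApplyNaming)
            .Replace(" ", string.Empty);
        return char.ToLowerInvariant(pascal[0]) + pascal.Substring(1);
    }
}
```

Like the Pascal version, it assumes a non-empty, space-separated input.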

Having finished with the fundamentals, let's wire up and test Unity with our design. For this purpose I am creating a project called "Unity Strategies Test" to see how the container can inject an implementation when an INamingStrategy is requested. The following test method shows a very simple injection, and verifies in a few lines that it succeeded:

/// <summary>
/// Test if injecting dependencies succeeds.
/// </summary>
[TestMethod]
public void ShouldInjectDependencies()
{
    IUnityContainer container = new UnityContainer();

    container.RegisterType<INamingStrategy, PascalNamingStrategy>(); // we will abstract this later

    INamingStrategy strategy = container.Resolve<INamingStrategy>();

    Assert.IsNotNull(strategy, "strategy injection failed !!");
    Assert.IsInstanceOfType(strategy, typeof(PascalNamingStrategy), "Strategy injected, but type wrong!");
}

And testing PascalNamingStrategy now becomes much easier and more loosely coupled:

/// <summary>
/// Tests the pascal strategy through injection.
/// </summary>
[TestMethod]
public void TestPascalStrategy()
{
   IUnityContainer container = new UnityContainer();

   container.RegisterType<INamingStrategy, PascalNamingStrategy>(); // we will abstract this later

   // notice that we don't know which strategy will be used, and we don't really care either
   INamingStrategy strategy = container.Resolve<INamingStrategy>();

   string testVar = "the variable to be tested";
   string expectedVar = "TheVariableToBeTested";
   string resultVar = strategy.ConvertString(testVar);

   Assert.AreEqual(expectedVar, resultVar);
}

This very basic example showed how your tests and code can become loosely coupled. In the next posts I will try to talk about configuring the container, and how to utilize it in your web applications. Stay tuned till then.



10 Tips to Improve your LINQ to SQL Application Performance

May 2nd, 2008 by Sidar Ok

Hey there, back again. In my first post about LINQ I tried to provide a brief (okay, a bit detailed) introduction for those who want to get involved with LINQ to SQL. In that post I promised to write about a basic integration of WCF and LINQ to SQL working together, but this is not that post.

Since LINQ to SQL is both a code generator and an ORM, and it offers a lot of things, it is normal to be suspicious about its performance. Such suspicion is fair up to a certain point, as LINQ comes with its own penalties. But there are several benchmarks showing that DLINQ can reach up to 93% of raw ADO.NET SqlDataReader performance if the optimizations are done correctly.

Hence, I have summed up the 10 points that I consider most important when tuning LINQ to SQL's data retrieval and data modification:

1 – Turn off ObjectTrackingEnabled Property of Data Context If Not Necessary

If you are only retrieving data for read-only purposes and not modifying anything, you don't need object tracking. So turn it off, as in the example below:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  context.ObjectTrackingEnabled = false;
}

This turns off the unnecessary identity management of the objects: the DataContext will not have to store them, because it can be sure that there will be no change statements to generate.

2 – Do NOT Dump All Your DB Objects into One Single DataContext

A DataContext represents a single unit of work, not your whole database. You may have several database objects that are unconnected, or that are not used at all in the current scenario (log tables, objects used by batch processes, etc.). These objects just consume memory unnecessarily and increase the identity management and object tracking costs in the CUD engine of the DataContext.

Instead, think of separating your model into several DataContexts, where each one represents a single unit of work. You can still configure them to use the same connection via their constructors, so as not to lose the benefit of connection pooling.

3 – Use CompiledQuery Wherever Needed

When creating and executing your query, there are several steps involved in generating the appropriate SQL from the expression; to name the most important of them:

  1. Create expression tree

  2. Convert it to SQL

  3. Run the query

  4. Retrieve the data

  5. Convert it to the objects

As you may notice, when you run the same query over and over, the first and second steps are just wasted time. This is where this tiny class in the System.Data.Linq namespace achieves a lot. With CompiledQuery, you compile your query once and store it somewhere for later usage. This is achieved by the static CompiledQuery.Compile method.

Below is a Code Snippet for an example usage:

Func<NorthwindDataContext, IEnumerable<Category>> func =
   CompiledQuery.Compile<NorthwindDataContext, IEnumerable<Category>>
   ((NorthwindDataContext context) => context.Categories.
      Where<Category>(cat => cat.Products.Count > 5));

And now "func" is my compiled query. It will only be compiled once, when it is first run. We can store it in a static utility class as follows:

/// <summary>
/// Utility class to store compiled queries
/// </summary>
public static class QueriesUtility
{
  /// <summary>
  /// Gets the query that returns categories with more than five products.
  /// </summary>
  /// <value>The query containing categories with more than five products.</value>
  public static Func<NorthwindDataContext, IEnumerable<Category>>
    GetCategoriesWithMoreThanFiveProducts
  {
    get
    {
      Func<NorthwindDataContext, IEnumerable<Category>> func =
        CompiledQuery.Compile<NorthwindDataContext, IEnumerable<Category>>
        ((NorthwindDataContext context) => context.Categories.
          Where<Category>(cat => cat.Products.Count > 5));
      return func;
    }
  }
}

And we can use this compiled query (since it is now a nothing but a strongly typed function for us) very easily as follows:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  IEnumerable<Category> categories =
    QueriesUtility.GetCategoriesWithMoreThanFiveProducts(context);
}

Storing and using it this way also reduces the cost of the virtual call that is otherwise paid each time you execute the query – effectively it is reduced to one call. And if you never invoke the query, don't worry about the compilation cost either, since the query is only compiled when it is first executed.

4 – Filter Data Down to What You Need Using DataLoadOptions.AssociateWith

When we retrieve data with Load or LoadWith, we retrieve all the associated data bound by the primary key (and object id). But in most cases we need additional filtering. This is where the generic DataLoadOptions.AssociateWith method comes in very handy. It takes the criteria for loading the data as a parameter and applies it to the query, so you get only the data that you need.

The code below associates and retrieves the categories with their non-discontinued products only:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  DataLoadOptions options = new DataLoadOptions();
  options.AssociateWith<Category>(cat => cat.Products.Where<Product>(prod => !prod.Discontinued));
  context.LoadOptions = options;
}

5 – Turn Optimistic Concurrency Off Unless You Need It

LINQ to SQL comes with out-of-the-box optimistic concurrency support, using SQL timestamp columns that are mapped to the Binary type. You can turn this feature on and off both in the mapping file and via attributes on the properties. If your application can afford to run on a "last update wins" basis, doing an extra update check is just a waste.

UpdateCheck.Never is used to turn optimistic concurrency off in LINQ to SQL.

Here is an example of turning optimistic concurrency off implemented as attribute level mapping:

[Column(Storage="_Description", DbType="NText", UpdateCheck=UpdateCheck.Never)]
public string Description
{
    get
    {
        return this._Description;
    }
    set
    {
        if ((this._Description != value))
        {
            this._Description = value;
        }
    }
}

6 – Constantly Monitor Queries Generated by the DataContext and Analyze the Data You Retrieve

As your SQL is generated on the fly, there is a possibility that you are not aware of additional columns or extra data being retrieved behind the scenes. Use the DataContext's Log property to see exactly what SQL is being run by the DataContext. An example is as follows:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  context.Log = Console.Out;
}

Using this snippet while debugging, you can see the generated SQL statements in the Output window in Visual Studio and spot performance leaks by analyzing them. Don't forget to comment that line out for production systems, as it may create a bit of overhead. (Wouldn't it be great if this were configurable in the config file?)
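One pragmatic workaround is to decide the log target once and always assign it, so nothing needs commenting out. This is a sketch of mine, not a LINQ to SQL feature; the helper name and the idea of driving the flag from configuration are my own:

```csharp
using System;
using System.IO;

public static class LogHelper
{
    /// <summary>
    /// Returns Console.Out when verbose logging is wanted, and a writer that
    /// swallows everything otherwise, so context.Log can always be assigned.
    /// </summary>
    public static TextWriter CreateLogWriter(bool verbose)
    {
        return verbose ? Console.Out : TextWriter.Null;
    }
}

// usage (hypothetical): context.Log = LogHelper.CreateLogWriter(verboseFlagFromConfig);
```

The verbose flag could come from an appSettings entry, giving you the config-file switch wished for above.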

To see your DLINQ expressions as SQL statements, you can use the SQL Query Visualizer, which needs to be installed separately from Visual Studio 2008.

7 – Avoid Unnecessary Attaches to Tables in the Context

Object tracking is a great mechanism, but nothing comes for free. When you Attach an object to your context, you are saying that this object was disconnected for a while and you now want to bring it back into the game. The DataContext then marks it as an object that will potentially change, which is just fine when you really intend to do that.

But there are some circumstances that aren't very obvious, which may lead you to attach objects that aren't changed. One such case is calling AttachAll on a collection without checking whether each object has changed. For better performance, make sure you attach ONLY the objects in the collection that have actually changed.

I will provide a sample code for this soon.
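In the meantime, the idea can be sketched roughly as follows. Note that IChangeAware and IsDirty are hypothetical names of mine – LINQ to SQL provides no such interface out of the box, so you would maintain the flag yourself:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical marker interface: entities flag themselves when edited.
public interface IChangeAware
{
    bool IsDirty { get; }
}

public static class AttachHelper
{
    /// <summary>
    /// Filters a disconnected collection down to the entities that actually
    /// changed, so only those are handed to Table&lt;T&gt;.AttachAll.
    /// </summary>
    public static IEnumerable<T> OnlyChanged<T>(IEnumerable<T> items)
        where T : IChangeAware
    {
        return items.Where(item => item.IsDirty);
    }
}
```

You would then call something like `context.Products.AttachAll(AttachHelper.OnlyChanged(products), true)` instead of attaching the whole collection.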

8 – Be Careful of Entity Identity Management Overhead

While working with a non-read-only context, the objects are still being tracked, so be aware of the non-intuitive scenarios this can cause. Consider the following DLINQ code:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  var a = from c in context.Categories
          select c;
}

Very plain, basic DLINQ, isn't it? True; there doesn't seem to be anything wrong with the code above. Now let's see the code below:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  var a = from c in context.Categories
          select new Category
          {
            CategoryID = c.CategoryID,
            CategoryName = c.CategoryName,
            Description = c.Description
          };
}
Intuition says the second query will run slower than the first one, which is WRONG. It is actually much faster.

The reason is that in the first query, an object needs to be stored for each row, since there is a possibility that you may still change it. In the second one, you are throwing the tracked object away and creating a new one, which is more efficient.

9 – Retrieve Only the Number of Records You Need

When you are binding to a data grid and doing paging, consider the easy-to-use methods that LINQ to SQL provides, mainly the Take and Skip methods. The code snippet below shows a method which retrieves just enough products for a ListView with paging enabled:

/// <summary>
/// Gets the products page by page.
/// </summary>
/// <param name="startingPageIndex">Index of the starting page.</param>
/// <param name="pageSize">Size of the page.</param>
/// <returns>The list of products in the specified page</returns>
private IList<Product> GetProducts(int startingPageIndex, int pageSize)
{
  using (NorthwindDataContext context = new NorthwindDataContext())
  {
    return context.Products
           .Skip<Product>(startingPageIndex * pageSize)
           .Take<Product>(pageSize)
           .ToList<Product>();
  }
}
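The Skip/Take semantics themselves are easy to sanity-check against plain LINQ to Objects. This is a sketch of mine over an in-memory sequence; with LINQ to SQL, the same operators are translated into SQL paging constructs instead:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class PagingDemo
{
    // Same shape as the method above, but against an in-memory sequence.
    public static IList<int> GetPage(IEnumerable<int> source, int pageIndex, int pageSize)
    {
        return source.Skip(pageIndex * pageSize).Take(pageSize).ToList();
    }

    public static void Main()
    {
        IEnumerable<int> products = Enumerable.Range(1, 10); // 1..10
        // Third page (index 2) of size 3: skips 6 items, takes the next 3.
        Console.WriteLine(string.Join(",", GetPage(products, 2, 3))); // prints "7,8,9"
    }
}
```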

10 – Don’t Misuse CompiledQuery

I can hear you saying "What? Are you kiddin' me? How can a class like this be misused?"

Well, as with all optimization, LINQ to SQL is no exception:

"Premature optimization is the root of all evil" – Donald Knuth

If you use CompiledQuery, make sure you use it more than once, as the first time it is more costly than normal querying. But why?

That's because what CompiledQuery returns is an object holding the SQL statement and a delegate to apply it; it is not compiled eagerly the way regular expressions can be. The delegate then substitutes your variables (or parameters) into the resulting query.
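The cost profile is easy to feel with plain expression trees, which is the same machinery CompiledQuery builds on. This is a sketch of mine illustrating compile-once versus compile-per-use, not a benchmark of LINQ to SQL itself:

```csharp
using System;
using System.Linq.Expressions;

public class CompileOnceDemo
{
    public static void Main()
    {
        Expression<Func<int, bool>> expr = n => n > 5;

        // Costly path: compiling the tree on every use.
        for (int i = 0; i < 3; i++)
        {
            Func<int, bool> f = expr.Compile(); // pays the compilation cost each time
            Console.WriteLine(f(i + 4));        // prints False, False, True
        }

        // Cheap path: compile once, reuse the delegate.
        Func<int, bool> compiledOnce = expr.Compile();
        Console.WriteLine(compiledOnce(10));    // prints True
    }
}
```

If you compile a query that only ever runs once, you have paid the setup cost for nothing; that is the misuse the tip warns about.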

That's the end, folks. I hope you enjoy these tips while programming with LINQ to SQL. Any comments or questions, via sidarok at sidarok dot com or here on this post, are welcome.


Technorati Tags: LINQ,SQL,Performance,.NET 3.5


A Brief Introduction to LINQ to SQL

April 21st, 2008 by Sidar Ok

Introduction to LINQ to SQL

I know there are a lot of LINQ 2 SQL introductions, and you are already sick of them. So I will keep this section as brief as possible.

Over the years, the community kept fancying strongly typed objects over the OO-unfriendly DataTables and DataSets, while Microsoft kept pushing the latter. There were understandable reasons for this (such as performance), but that didn't change the fact that in an object-oriented world, whose requirements were already getting more and more complex, typed DataSets couldn't be the answer, even if they made things easier to manage.

This approach also had the consequence of ignoring multi-tier, professional, real-world applications. (It would be more difficult to support those scenarios in a 5-minute drag & drop presentation!)

Lots of companies (such as CodeSmith) saw an opportunity here, and they were damn right: they made lots of money out of it.

So what is Linq 2 SQL, and where does it fit in this picture?

Linq 2 SQL is both an ORM and a code generator. Although some (mostly Java guys) belittle it, I see it as a great step forward for the .NET environment, with a lot of nice features. It speeds up managing data and generating entities (no tricks this time: really drag & drop & tweak, and it works). Of course it also has plenty of cons, which I will talk about in later posts.

The following gives an idea of where Linq 2 SQL resides in the whole LINQ 2 X family:


                                            Figure 1.1 Linq Architecture Overview

Linq 2 SQL comes with very easy-to-use tools to ease code generation. The generated code is surprisingly good, using the new .NET Framework 3.5 features such as automatic properties, extension methods and lambda expressions. It is fully object oriented and provides a very good level of abstraction.

Scott Guthrie explained a while ago how to use the designer, in several blog posts and a very nice video. So rest easy, I won't go down that road.

The good thing about the designer is that it picks up just about everything you defined in your db: tables, functions, stored procedures, primary and foreign keys and other constraints, etc. It builds the object structure and the relations between objects based on these, and puts it all into the DataContext that you specified. You can manage your own DataContext and inherit from your DataContextBase for your architectural and business needs; the designer tool is friendly to this approach.

To understand how LINQ 2 SQL works, understanding the DataContext is essential. The DataContext itself has lots of good features.

Being Aware of Context

There are 4 DataContext constructors that let you create a data context. When you create a DataContext of your choice through the designer, it creates and associates a connection string in the settings file, or in app.config if it exists. Beware that if you want to override this in a configuration file (a common practice is to put it into web.config for web applications, for instance), you need to make sure that a connection string with the same name exists in the current context.

But if you want to do it programmatically, you need to create the DataContext with an IDbConnection.

Understanding How DataContext Works

If I said that the LINQ 2 SQL world revolves around the DataContext, I wouldn't be exaggerating much. Our aim is to do basic insert, update and delete operations as quickly and efficiently as possible, so we need to understand that the DataContext is the gateway through which to achieve this goal.

DataContext is pretty flexible and enables you to work directly with objects or run your own queries / stored procedures or user functions.

1 – Working with the Objects

A cool way of using the DataContext is making it generate Insert, Update and Delete SQL statements against your data model. This works pretty well if you really know what you are dealing with. After generating your context via the designer or the SqlMetal code generator tool, you have your DataContext with your entities generated and attached to it. Below is how the Northwind DataContext looks after generation:


                                                   Picture 1 : Northwind Entities

As you see, the Categories table is mapped to a generic Categories Table, consisting of a Category type that is mapped to each row. On this table class there is a set of extension methods that we can use (and will, in later posts) to construct SQL statements in a better, object-oriented way.

However, as you would expect, the relations between types shouldn't be Table objects or anything else that reminds us of the db. EntitySet and EntityRef are there for this purpose. An EntitySet is a generic set of the specified entities, Category in our case. It implements the IList interface, so there is no harm in saying it is basically a list, and we can use every method, including extension methods, that a generic IList benefits from.

If the relationship is one-to-one between the tables then an EntityRef is created for that entity.

Just keep in mind that these two are NOT serializable; we are going to discuss this issue in later posts.

There are 2 cool features of data context that we need to know here: Object Tracking and Deferred (or Lazy) Loading.

- Object Tracking

This is the change tracking system that Linq to SQL provides. If you want your queries to be generated on the fly automatically, this is the system that provides it. However, if you only want to select and perform read-only operations and don't want to track any changes, disabling it will improve retrieval performance.

To disable it, you need to set the ObjectTrackingEnabled property of your context to false. The default is true.

The golden rule of working with object tracking is: "The DataContext needs to be aware of every insert, update and delete to generate the appropriate SQL statements." You need to tell the DataContext what objects to insert, what to delete and what to update.

You can work with a DataContext that has ObjectTrackingEnabled in 2 ways: connected and disconnected.

In connected mode everything is easy, and the world is a very nice place to live. You add, delete and update from/on the context, and these are all trivial operations: you just manage the collection (add, update or remove an object), call the magical context.SubmitChanges() method, and that's it. Your statements are generated and executed. Everyone is happy.

But things aren't always that simple, and this is usually not the case. In a service-oriented, n-tier world, or with long-running sessions, keeping the DataContext alive is an unreasonable hope. So in disconnected mode you need to say explicitly what you want to insert, update and delete.

You do this with a bunch of methods provided on the associated generic table. I will try to show these methods in action in another post, but to name them here:

For Insertions => InsertOnSubmit for a single entity, InsertAllOnSubmit for collections.

For Deletions => DeleteOnSubmit for a single entity and DeleteAllOnSubmit for collections.

For Updates => You need to use Attach for a single entity and AttachAll for collections. You'll see that the Attach methods have 2 overloads: one taking just the entity, and the other taking an additional asModified boolean. Setting this to true means that your entity will be included in the generation process even if you haven't made any changes to it.

The only way of informing the context that a disconnected object has been changed is through the Attach methods. For instance, if you want to delete an object that is not in the context, you first need to attach it to the context. Otherwise, your delete will fail with this message:

System.Data.Linq.ChangeConflictException: Row not found or changed

We do all this to inform the context about object tracking states. Each method changes the object's state to the corresponding value, so that the appropriate query can be generated at the end.

- Lazy Loading (Deferred Loading)

In LINQ, if you query for an object, you get exactly the object that you requested: nothing else, no child relations or back references are loaded. This is because DeferredLoadingEnabled is true by default. You can set it to false to disable it, but that is usually not the best thing to do; we usually want to customize what to load and what not to load.

For this purpose, there is a property called LoadOptions of type DataLoadOptions, in the System.Data.Linq namespace.

The code below illustrates an example usage of this.

private IQueryable<Category> GetDescribedCategoriesWithPicturesAndProducts()
{
   using (NorthwindDataContext context = new NorthwindDataContext())
   {
       System.Data.Linq.DataLoadOptions options = new System.Data.Linq.DataLoadOptions();
       options.LoadWith<Category>(ct => ct.Picture);
       options.LoadWith<Category>(ct => ct.Products); // first level
       options.LoadWith<Product>(p => p.Supplier);
       options.LoadWith<Product>(p => p.Order_Details); // second level

       context.LoadOptions = options; // don't forget to assign the options to the context

       return context.Categories.Where<Category>(ct => !string.IsNullOrEmpty(ct.Description));
   }
}

As you have noticed, you can ask for second or even Nth level loading (I would appreciate it if you let me know the upper limit of this N, if you know it). One good thing to keep in mind here is to avoid cyclic loads.

2 – Running Plain Text-Based SQL Queries

Another usage of the context is as a gateway to run your predefined SQL statements.

a) ExecuteCommand

You can call DataContext.ExecuteCommand to achieve this. Note that it returns the number of rows affected, so it is meant for commands rather than for queries that return rows:

using (NorthwindDataContext context = new NorthwindDataContext())
{
   int rowCount = context.ExecuteCommand(
       "UPDATE Categories SET Description = {0} WHERE CategoryID = {1}",
       "Soft drinks, coffees, teas", 1);
}

b) ExecuteQuery

You can call the generic ExecuteQuery method and have the results mapped to an entity of your choice, as follows:

using (NorthwindDataContext context = new NorthwindDataContext())
{
  IEnumerable<Category> categories =
      context.ExecuteQuery<Category>("SELECT * FROM CATEGORIES");
}

This was the end of the introduction. I hope it helped you figure out the concepts and get started. In later posts I will get into more advanced topics and create a real-world application using LINQ in a multi-tiered environment in conjunction with WCF.

