Feed update

December 12th, 2008 by Sidar Ok

I am doing a feed migration before starting a series of posts.

The old feed is obsolete as of now and will stop updating. The new feed is http://feeds.feedburner.com/sidarok . Please update your subscription, as I am depressed to see such a small number on the right.

Thanks all.


Lazy Loading with Linq to SQL POCOs

October 29th, 2008 by Sidar Ok

Yes, here is that post. While doing the Linq to SQL POCO screencast, and writing the last post about how to achieve POCOs, one of the issues that came up was how to do lazy loading - since we are using a pure IList, we of course were not getting lazy loading. That's something we didn't want, so there had to be some workaround for this limitation.

The reason it took me a while to come up with a fair solution is not only that I am very lazy, but also that this was a challenging task. Before I outline the solution, let me explain my train of thought so you can understand my pain while hitting the Linq to SQL design decision blocks.

PITA Points

So, to get lazy loading, what we need to do is intercept a call to a collection and load it at the moment it is requested. An obvious solution for this is using dynamic proxies. For this matter, I chose the LinFu dynamic proxy, and it gave me lots of playground. That's one problem in the pocket.
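Before the pain points, here is a minimal sketch of how LinFu's interception hook works, using the same IInvokeWrapper and ProxyFactory types this post builds on later (TracingWrapper is my illustrative name, not part of the actual solution):

    using System;
    using LinFu.DynamicProxy;

    // Minimal sketch: every virtual member call on a proxy funnels through
    // DoInvoke, which is exactly the hook we need to load a collection on demand.
    public class TracingWrapper : IInvokeWrapper
    {
        public void BeforeInvoke(InvocationInfo info) { }

        public void AfterInvoke(InvocationInfo info, object returnValue) { }

        public object DoInvoke(InvocationInfo info)
        {
            Console.WriteLine("Intercepted: " + info.TargetMethod.Name);
            // Fall through to the original member.
            return info.TargetMethod.Invoke(info.Target, info.Arguments);
        }
    }

    // Usage:
    // ProxyFactory factory = new ProxyFactory();
    // Question proxy = factory.CreateProxy<Question>(new TracingWrapper());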

The most natural thing to do would be to make the process as transparent to the user as possible. So my idea was to proxy out the table definitions in the context, and I went ahead with that, but what's this? Table<T> is sealed! Well done! So I can't proxy the sealed tables, because DynamicProxy works on an inheritance basis (with LinFu duck typing it is possible to work around the sealing, but again, Table<T> doesn't have one aggregate interface to choose as a contract. Nice, isn't it?).

So instead I have to proxy the entities and their related properties. This comes with the implication of marking the to-be-lazily-loaded properties as virtual.

Another point is that EntitySet doesn't have a non-generic implementation, so I can't reach the association without the type. This comes with another implication: the interceptor I am going to write needs to know about the relationship to load. But hey, we can hide this in a repository, and that's what repositories are for, aren't they? (This will make more sense at the end of the post.)

Let's go ahead!

I am still going to use the same simple Questions - Answers model from the last article, but I need to slightly change the Question entity as follows (it is still a POCO):

public class Question
{
    private int _QuestionId;

    public virtual int QuestionId
    {
        get
        {
            return _QuestionId;
        }
        set
        {
            _QuestionId = value;
        }
    }

    private string _QuestionText;

    public virtual string QuestionText
    {
        get
        {
            return _QuestionText;
        }
        set
        {
            _QuestionText = value;
        }
    }

    private IList<Answer> _Answer;

    public virtual IList<Answer> Answer
    {
        get
        {
            return _Answer;
        }
        set
        {
            _Answer = value;
        }
    }
}

The Answers list is virtual from now on, so that LinFu can override it comfortably.

So let's look at the tests for lazy loading. The first thing I am going to check is, of course, whether after getting the instance I can demand the list and get it successfully. For example, can I do a count?

[TestMethod()]
public void should_get_correct_answer_count_when_lazily_loaded()
{
  LazyLoadingRepository target = new LazyLoadingRepository(); // TODO: Initialize to an appropriate value
  int id = 1; // TODO: Initialize to an appropriate value
  using (QuestionDataContext context = new QuestionDataContext())
  {
    Question actual;
    actual = target.GetQuestion(context, id);
    Assert.IsNotNull(actual);
    Assert.AreEqual(actual.Answer.Count, 1);
  }
}

Note that this can always be refactored further to get by specification, which I am leaving as an exercise for the reader (I always wanted to say that).
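A rough sketch of what that specification-based refactoring could look like; GetQuestionBySpecification is a hypothetical name, and for brevity this version skips the proxying that the rest of this post adds:

    using System;
    using System.Linq;
    using System.Linq.Expressions;

    // Hypothetical sketch: query by an arbitrary specification instead of a hard-coded id.
    public Question GetQuestionBySpecification(QuestionDataContext context,
        Expression<Func<Question, bool>> specification)
    {
        return context.Questions.Single(specification);
    }

    // Usage:
    // Question q = repository.GetQuestionBySpecification(context, question => question.QuestionId == 1);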

Now I need a negative test, to see that the function does not fool me by eagerly loading everything. So when I try to access the answers without a context, I should get a DataContext disposed exception:

[TestMethod()]
public void should_throw_when_lazily_loaded_and_reached_outside_the_context()
{
  LazyLoadingRepository target = new LazyLoadingRepository(); // TODO: Initialize to an appropriate value
  int id = 1; // TODO: Initialize to an appropriate value
  Question actual;
  using (QuestionDataContext context = new QuestionDataContext())
  {
     actual = target.GetQuestion(context, id);
     Assert.IsNotNull(actual);
  }

  try
  {
    int count = actual.Answer.Count;
  }
  catch (Exception ex)
  {
     Assert.IsInstanceOfType(ex, typeof(ObjectDisposedException));
     return;
  }
  throw new Exception("Should have thrown Object disposed exception, sorry!");
}

Good, my tests are failing; what a depressive world we inhabit. With the previous implementation, which was without lazy loading, the first test would fail with an “Object reference not set to an instance of an object” exception, and in the second test the Assert would fail because the exception is a NullReferenceException, not an ObjectDisposedException.

Now let's try to pass these tests with some magic. Here is where LinFu calls us to the dark side. In GetQuestion, we are not going to return the actual object, but a proxied Question object instead. To create a proxy with LinFu, we need a custom interceptor that implements LinFu.DynamicProxy.IInvokeWrapper. Our invoke wrapper needs to know the DataContext to load the entities, and the relationship specification to load the related data. In the light of this info, here is how GetQuestion looks:

public Question GetQuestion(QuestionDataContext context, int id)
{
  EntityInvokeWrapper<Answer> interceptor = new EntityInvokeWrapper<Answer>(context, (Answer a) => a.QuestionId == id);
  ProxyFactory factory = new ProxyFactory();
  Question retVal = factory.CreateProxy<Question>(interceptor);
  return retVal;
}

This will help me override the Answers list and replace it with my own implementation. In IInvokeWrapper, we need to implement BeforeInvoke, DoInvoke and AfterInvoke; we are only interested in DoInvoke. I am trying not to reinvent the wheel, so in the background I am using EntitySet's lazy loading mechanism, but that's transparent to the user, since I am proxying the IList<T> with an EntitySet<T> too. But how do I associate it with the table in the context and the relationship that I get? Here is the answer:

   1: public class EntityInvokeWrapper<TChi> : IInvokeWrapper
   2:        where TChi : class
   3: {
   4:   private DataContext Context
   5:   {
   6:     get;
   7:     set;
   8:   }
   9:
  10:   private Func<TChi, bool> RelationshipSpecification
  11:   {
  12:     get;
  13:     set;
  14:   }
  15:
  16:   public EntityInvokeWrapper(DataContext context, Func<TChi, bool> relationship)
  17:   {
  18:     this.Context = context;
  19:     this.RelationshipSpecification = relationship;
  20:   }
  21:
  22:   #region IInvokeWrapper Members
  23:
  24:   public void AfterInvoke(InvocationInfo info, object returnValue)
  25:   {
  26:      //Console.WriteLine("After");
  27:   }
  28:
  29:   public void BeforeInvoke(InvocationInfo info)
  30:   {
  31:     //Console.WriteLine("Before");
  32:   }
  33:
  34:   public object DoInvoke(InvocationInfo info)
  35:   {
  36:     //Console.WriteLine("During");
  37:     string name = info.TargetMethod.Name;
  38:     if (name.StartsWith("get_") &&
  39:   info.TargetMethod.ReturnType.GetInterfaces().Contains(typeof(IEnumerable<TChi>)))
  40:     {
  41:       //Console.WriteLine("Enumerable detected!");
  42:       EntitySet<TChi> wrapper = new EntitySet<TChi>();
  43:       wrapper.SetSource(this.Context.GetTable<TChi>().Where<TChi>(this.RelationshipSpecification));
  44:       return wrapper;
  45:     }
  46:
  47:     return OriginalCall(info);
  48:   }
  49:
  50:   private object OriginalCall(InvocationInfo info)
  51:   {
  52:     //Console.WriteLine("Original = " + info.Target);
  53:     return info.TargetMethod.Invoke(info.Target, info.Arguments);
  54:   }
  55:
  56:   #endregion
  57: }

So, as you see, we do the magic in DoInvoke: we check whether the call is a property getter whose return type is an enumeration of the child type (TChi) we are interested in, and if so we silently step in and say: hey, what you need for this is an EntitySet, but you can use it as an IList ;) Line 43 does the association between our EntitySet and the context table, and we return the EntitySet, which has full lazy loading support - when it is accessed, it will perform the necessary query on its source. The actual entity is clueless about what's happening, and all the consumers of the entity that need lazy loading will treat it as the IList<T> it appears to be.
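To see the laziness at work, here is a small usage sketch under the same assumptions as the tests above; nothing hits the Answer table until the Answer property is actually touched:

    using (QuestionDataContext context = new QuestionDataContext())
    {
        LazyLoadingRepository repository = new LazyLoadingRepository();
        Question question = repository.GetQuestion(context, 1);

        // No answers have been loaded yet; we only hold a proxy.
        foreach (Answer answer in question.Answer) // the query executes here
        {
            Console.WriteLine(answer.AnswerText);
        }
    }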

Conclusion

In this article, I outlined a solution to enable lazy loading while using POCOs in Linq to SQL. I tried to reuse as much as possible, and wanted to show the pain points that kept me away from a better design. Of course I don't expect this to be a comprehensive solution, but as always, if it gives you some ideas, I am happy.

Before leaving, be sure to check out the great LinFu stuff from Philip Laureano, as it also has other goodies such as simulated duck typing, mixin support, and even a dependency injection framework. Comments and free beers are welcome as usual.


Achieving POCOs in Linq to SQL

October 14th, 2008 by Sidar Ok

After the nice talk with developers.ie, it is really nice to see that people have interest in the topic. Unfortunately the quality of the recording was not very good and the connection dropped twice, so I decided to put together this blog post to show how we can work while leaving persistence-polluted entities behind.

Why is it so important?

I hear lots of comments from people around me, mainly along the lines of “Why do we need this much hassle, when we already have designer support and the VS-integrated goodies of an ORM mapper?”. First, I have to say that it is fair enough to think this way. But when things start to go beyond trivial, you start to have problems with persistence- or technology-polluted entities. Off the top of my head, I can think of the following:

  1. Technology agnosticism is bliss: This concept usually revolves around PI (Persistence Ignorance), but it is not only that. Persistence Ignorance means that your entities should be cleared of any persistence-related code constraints that a framework - usually an ORM - forces on you. For example, if you have attribute-level mapping where those attributes are not part of your domain but are there just because some framework wants them there, then your domain is not persistence ignorant (see the sketch after this list). The same goes if your framework requires specific types for handling associations, like EntitySet and EntityRef in Linq to SQL. It can also be another technology that wants your entities to be serializable for some reason. We should try to avoid these as much as possible and concentrate on our business concerns, not bend our domain to fit those technological discrepancies. This approach also promotes testability. The same goes for having to implement an abstract class, or interfaces like INotifyPropertyChanged, when you don't want them.

  2. Relying on the Linq to SQL designer is painful: The designer puts everything in one file and regenerates the files every time you save, so you lose changes such as XML comments. Needless to say, the only OOTB support is attribute-level configuration; even for XML you need to use the sqlmetal tool outside of the designer process.

  3. Configuration should not be anything your domain is concerned about: Unless you are building a configuration system :)
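To make point 1 concrete, here is a trimmed, illustrative sketch of what attribute-level mapping pollution looks like on an entity (not actual generated output):

    using System.Data.Linq.Mapping;

    // The mapping attributes below are persistence concerns living inside the
    // domain class - exactly the pollution we want to get rid of.
    [Table(Name = "dbo.Question")]
    public class AttributeMappedQuestion
    {
        [Column(IsPrimaryKey = true, IsDbGenerated = true)]
        public int QuestionId { get; set; }

        [Column(DbType = "NVarChar(300) NOT NULL")]
        public string QuestionText { get; set; }
    }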

Let’s get geared

In the light of this, when we are working with the Linq to SQL designer we tend to think it is impossible to achieve POCOs, but indeed it is possible: the solution is don't ditch POCOs, just ditch the designer :) While implementing POCOs, we need to know a couple of things about Linq to SQL internals beforehand, because we will be on our own when we have any problems.

  1. EntitySet and EntityRef are indeed useful classes, and they are there to achieve something. When you add an entity to an association, EntitySet manages the identity and the back references: for children, you need to assign the correct parent id to the child, otherwise you will lose the relationship. The same goes for EntityRef and 1-1 relations (a sketch of this plumbing follows this list).

  2. INotifyPropertyChanging and INotifyPropertyChanged are there not only to inform us, by providing the ability to subscribe to the necessary events and get notified when a property changes, but to leverage lazy loading as well. When we discard them, we are back to eager loading.
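For reference, here is a trimmed sketch of the kind of association plumbing the designer generates with EntitySet (member names are illustrative); it shows the back-reference fix-up we will have to reproduce by hand once we switch to List<T>:

    using System.Data.Linq;

    public class GeneratedQuestion
    {
        private EntitySet<GeneratedAnswer> _Answers;

        public GeneratedQuestion()
        {
            // EntitySet fixes up the back reference whenever a child is
            // attached to or detached from the association.
            _Answers = new EntitySet<GeneratedAnswer>(
                a => a.Question = this,    // on attach
                a => a.Question = null);   // on detach
        }

        public EntitySet<GeneratedAnswer> Answers
        {
            get { return _Answers; }
            set { _Answers.Assign(value); }
        }
    }

    public class GeneratedAnswer
    {
        public GeneratedQuestion Question { get; set; }
    }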

Enough Rambling, let me see the wild world of code

For this post I will only focus on the first part, so lazy loading is a matter for another one. The approach we are going to take is to use XML mapping instead of attribute-based modeling. I am gonna use the trivial Questions and Answers model, where one question can have multiple answers associated with it. Here is how it looks:

 

Figure: Question and Answers entities

And their related code is pretty simple, nothing fancy. Here is the Answer POCO:

 

public class Answer
{
    public Answer()
    {
    }

    private int _QuestionId;

    public int QuestionId
    {
        get
        {
            return _QuestionId;
        }
        set
        {
            _QuestionId = value;
        }
    }

    private int _AnswerId;

    public int AnswerId
    {
        get
        {
            return _AnswerId;
        }
        set
        {
            _AnswerId = value;
        }
    }

    private string _AnswerText;

    public string AnswerText
    {
        get
        {
            return _AnswerText;
        }
        set
        {
            _AnswerText = value;
        }
    }

    private bool _IsMarkedAsCorrect;

    public bool IsMarkedAsCorrect
    {
        get
        {
            return _IsMarkedAsCorrect;
        }
        set
        {
            _IsMarkedAsCorrect = value;
        }
    }

    private int _Vote;

    public int Vote
    {
        get
        {
            return this._Vote;
        }
        set
        {
            _Vote = value;
        }
    }
}

Yeah, clean, pure C#: no attributes, no EntityRefs, nothing. The same goes for Question, where the association is achieved through the good old simple List<T>:

 

public class Question
{
    private int _QuestionId;

    public int QuestionId
    {
        get
        {
            return _QuestionId;
        }
        set
        {
            _QuestionId = value;
        }
    }

    private string _QuestionText;

    public string QuestionText
    {
        get
        {
            return _QuestionText;
        }
        set
        {
            _QuestionText = value;
        }
    }

    private List<Answer> _Answer;

    public List<Answer> Answer
    {
        get
        {
            return _Answer;
        }
        set
        {
            _Answer = value;
        }
    }
}

To use these entities as POCOs, I need a way to externally define the mappings between the db tables and columns and the relevant object fields. I chose the other OOTB-supported way, XML. As I am too lazy to write it on my own, I ran the following sqlmetal command to generate it from the DB:

 

sqlmetal /server:sidarok-pc /database:QuestionsAnswers /code:a.cs /map:Questions.xml

As you see, it also generates the code in the a.cs file, but I am gonna throw that out. Let's check that the generated XML maps to our fields:

 

<?xml version="1.0" encoding="utf-8"?>
<Database Name="questionsanswers" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007">
  <Table Name="dbo.Answer" Member="Answer">
    <Type Name="Answer">
      <Column Name="AnswerId" Member="AnswerId" Storage="_AnswerId" DbType="Int NOT NULL IDENTITY" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" />
      <Column Name="QuestionId" Member="QuestionId" Storage="_QuestionId" DbType="Int NOT NULL" />
      <Column Name="AnswerText" Member="AnswerText" Storage="_AnswerText" DbType="Text NOT NULL" CanBeNull="false" UpdateCheck="Never" />
      <Column Name="IsMarkedAsCorrect" Member="IsMarkedAsCorrect" Storage="_IsMarkedAsCorrect" DbType="Bit NOT NULL" />
      <Column Name="Vote" Member="Vote" Storage="_Vote" DbType="Int NOT NULL" />
      <Association Name="FK_GoodAnswer_Question" Member="Question" Storage="_Question" ThisKey="QuestionId" OtherKey="QuestionId" IsForeignKey="true" />
    </Type>
  </Table>
  <Table Name="dbo.Question" Member="Question">
    <Type Name="Question">
      <Column Name="QuestionId" Member="QuestionId" Storage="_QuestionId" DbType="Int NOT NULL IDENTITY" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" />
      <Column Name="QuestionText" Member="QuestionText" Storage="_QuestionText" DbType="NVarChar(300) NOT NULL" CanBeNull="false" />
      <Association Name="FK_GoodAnswer_Question" Member="Answer" Storage="_Answer" ThisKey="QuestionId" OtherKey="QuestionId" DeleteRule="NO ACTION" />
    </Type>
  </Table>
</Database>

Now, let's write a simple select test to see if it just works. This repository test is intentionally an integration test, to check that I can get the question entity along with its children:

 

[TestMethod()]
public void GetQuestionTest()
{
  QuestionsRepository target = new QuestionsRepository(); // TODO: Initialize to an appropriate value
  int id = 2; // TODO: Initialize to an appropriate value
  Question actual;
  actual = target.GetQuestion(id);
  Assert.IsNotNull(actual);
  Assert.IsTrue(actual.Answer.Count > 0);
}

And after this, the implementation is quite trivial. Just note that eager loading is needed explicitly, because otherwise the Answers list will never get assigned and will remain null:

 

public Question GetQuestion(int id)
{
    using (QuestionDataContext context = new QuestionDataContext())
    {
        DataLoadOptions options = new DataLoadOptions();
        options.LoadWith<Question>(q => q.Answer);

        context.LoadOptions = options;
        return context.Questions.Single<Question>(q => q.QuestionId == id);
    }
}

Aha, we don't have a DataContext yet! Let's create it; we need to feed it the connection string and the XML file. Note that the Table<T> properties are there just for convenience:

 

public class QuestionDataContext : DataContext
{
  static XmlMappingSource source = XmlMappingSource.FromXml(File.ReadAllText(@"C:\Users\sidarok\Desktop\PocoDemo\PocoDemo\questions.xml"));
  static string connStr = "Data Source=sidarok-pc;Initial Catalog=QuestionsAnswers;Integrated Security=True";

  public QuestionDataContext()
    : base(connStr, source)
  {
  }

  public Table<Question> Questions
  {
    get
    {
      return base.GetTable<Question>();
    }
  }

  public Table<Answer> Answers
  {
    get
    {
      return base.GetTable<Answer>();
    }
  }
}

Now the test passes, hurray, we are happy, let's party! But before that, let's take one more step and write a test for insert:

 

[TestMethod()]
public void InsertQuestionTest()
{
  QuestionsRepository target = new QuestionsRepository(); // TODO: Initialize to an appropriate value
  Question question = new Question()
  {
    QuestionText = "Temp Question",
    Answer = new List<Answer>()
    {
      new Answer()
      {
        AnswerText = "Temp Answer 1",
        IsMarkedAsCorrect = true,
        Vote = 10,
      },
      new Answer()
      {
        AnswerText = "Temp Answer 2",
        IsMarkedAsCorrect = false,
        Vote = 10,
      },
      new Answer()
      {
        AnswerText = "Temp Answer 3",
        IsMarkedAsCorrect = true,
        Vote = 10,
      },
    }
  };

  using (TransactionScope scope = new TransactionScope())
  {
    target.InsertQuestion(question);
    Assert.IsTrue(question.QuestionId > 0);
    Assert.IsTrue(question.Answer[0].AnswerId > 0);
    Assert.IsTrue(question.Answer[1].AnswerId > 0);
    Assert.IsTrue(question.Answer[2].AnswerId > 0);
  }
}

A simple insert test: insert a question along with its children, the answers, and check whether they have been assigned any ids. The implementation is, again, nothing different from the usual:

 

public void InsertQuestion(Question q)
{
    // QuestionDataContext picks up its connection string internally (see above)
    using (QuestionDataContext context = new QuestionDataContext())
    {
        context.Questions.InsertOnSubmit(q);
        context.SubmitChanges();
    }
}

When we run this test, we will run into this error:

 

Test method QuestionRepositoryTest.QuestionsRepositoryTest.InsertQuestionTest threw exception: System.Data.SqlClient.SqlException: The INSERT statement conflicted with the FOREIGN KEY constraint "FK_GoodAnswer_Question". The conflict occurred in database "QuestionsAnswers", table "dbo.Question", column 'QuestionId'.
The statement has been terminated.

Aha, well, this was kind of expected. We knew we had to maintain the identity and back references, but we didn't. Shame on us. But how are we gonna do that? We don't know the id value before we insert, so how do we tell Linq to SQL to pick up the new identity? Are we back to square one, SCOPE_IDENTITY()?

Of course, since I am writing this post, the answer has to be no :) The secret is in the back reference; the back reference is there for exactly this matter.

What we need to do now is, in each Answer, preserve a reference to the parent Question; and for each answer that is added, or when the list is overridden, assign the Answer's QuestionId property from this back reference. As we no longer have the EntitySet, we need to do that on our own, but it is easy enough. For Answer, here is the back reference:

 

private Question _Question;

public Question Question
{
    get
    {
        return this._Question;
    }
    set
    {
        this._Question = value;
        this._QuestionId = value.QuestionId;
    }
}

And for the Question POCO, when the list is overridden, we need to add our own logic to handle this: for every child answer, ensure that the back reference and the reference id are set:

 

private List<Answer> _Answer;

public List<Answer> Answer
{
    get
    {
        return _Answer;
    }
    set
    {
        _Answer = value;
        foreach (var answer in _Answer)
        {
            answer.QuestionId = this.QuestionId;
            answer.Question = this;
        }
    }
}

And the test passes after doing this. Hope this gives some idea of what you can do and what you need to know beforehand.

Conclusion

The desire to decouple domain entities from technological aspects is important for SoC & SRP, and these principles matter for nearly everything, from the pure basics to DDD. To achieve this in Linq to SQL, we need to say goodbye to EntityRef, EntitySet, and the INotifyPropertyChanged and INotifyPropertyChanging interfaces.

The next subject I am going to attack is Lazy Loading with POCOs, stay tuned till then!


Screencast on supporting POCO with Linq to SQL

September 29th, 2008 by Sidar Ok

http://www.developers.ie was kind enough to invite me to do a screencast for them, where I will be talking on the .NET Coffee Break show about POCO support in Linq to SQL.

The event is on Thursday at 11:00 A.M. Greenwich time. Registration is free through http://www.developers.ie/Webcasts.aspx .

Hope to see you all there!

UPDATE: Thanks to Paschal and http://www.developers.ie for providing me with this opportunity. Here is the source for the demo. Sorry for the bad luck that the connection dropped twice (yes, twice :( ). Thanks everybody for listening; as soon as Paschal provides the link, I will post it here.

UPDATE 2: Still waiting for the show to go online.

UPDATE 3: Here it is: Part 1, Part 2.


What is this Managed Extensibility Framework thing all about?

September 26th, 2008 by Sidar Ok

No, this is not another introduction post. Being one of the earliest adopters of MEF, I started to see great confusion around what MEF is and what it is really supposed to do in this wild world of DI containers... Well, OK, I can't lie about it anymore, as people have Twitter proofs to throw at me: I was a part of that confusion.

[Screenshot: the tweet in question]

As you see, as an avid DI user I was pretty biased towards MEF being an IoC container, so why didn't it behave in a normal DI-ish way? I was so stuck on seeing MEF as just another IoC container that, in that limited perspective, its design sucked big time for me, especially the unnatural Export - Import system. And for that matter, I even implemented a way of doing external configuration for MEF.

This proved to be really painful. Glenn Block saw my pain through Twitter and called me for an hour of a very nice and informative chat.

Here is the result: MEF is not a DI container. DI is only a part of the solution that MEF tries to bring to the scene. Ayende just got quicker in the game and explained it well, so I couldn't have the joy of telling the world first :)

I won't repeat what Ayende said; I'll just try to make things a bit more concrete, from the perspective of my understanding, as a person who likes to talk over examples.

Understanding MEF with an example

Now it is time to take those DI glasses off and look at a different view. Forget about DI, containers, IoC and all. Let's start thinking with a concrete example we all know about. Visual Studio has a plug-in infrastructure. Say we are tasked with building that; what do we do?

  1. First, we need to come up with an interface (be it an SDK or an API reference), a sort of hook to the outside, so add-in implementors can code against it.

    1. This involves the lifecycle of the add-in. You should be able to meet the basic needs: create an instance / retrieve an already created one (singleton).

  2. We should be able to provide a way to interact with the add-in. The add-in should be able to easily send us messages about, say, showing the solution explorer or opening a file in the editor.

  3. We should be able to lazy load it if needed. E.g., there is no need to load at start-up an add-in that is supposed to perform within a context menu event.

  4. Adding a new add-in, or replacing an old one, should be as easy as throwing the dll into the bin folder. This means that if a new set of add-ins is in place, you need to be able to choose amongst them (metadata-based discovery).

  5. When an add-in is replaced or simply dead, we need to unload it.

So in this sense, VS is a composite application: it needs to be composed dynamically. Beware: during all these steps we still don't care about the actual add-in. We just care whether it complies with the needs we specified in the contract. We need something that quacks and looks like a duck; anything that meets these demands, even if it is a dog, is more than welcome to be part of our Visual Studio.

Now, these are all huge amounts of work. Just the lazy loading of a component has a lot of edge cases already (#3), let alone solving the unloading problem from the AppDomain (#5). But let's say we finished and shipped our Visual Studio. Happy days, counting the money.

And now we are assigned a new task: writing Lutz Roeder's great tool, Reflector. And guess what, Lutz Roeder wants us to implement an add-in infrastructure so that add-ins can perform several tasks on the reflected code. To accomplish this, we need to do ALL the steps above again, from 1 to 5. All that drudge work, and none of it is reusable, because we designed it in an application-specific way (and that perfectly makes sense, since our concern was not to build such a framework). Now you get the point: MEF is a core .NetFx component that adds composability to any app by default. Its short-term aim is not replacing IoC containers but working with IoC containers, which fits this greater vision.
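To make that composability claim concrete, here is a minimal sketch of the add-in discovery scenario above. MEF was still an early preview when this was written, so I am using the attribute shapes of the System.ComponentModel.Composition bits MEF eventually shipped with, and the IAddin contract is just an illustration:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    // The contract the host publishes for add-in authors (step 1).
    public interface IAddin
    {
        void Execute();
    }

    // An add-in author only declares what they export.
    [Export(typeof(IAddin))]
    public class HelloAddin : IAddin
    {
        public void Execute() { Console.WriteLine("Hello from an add-in"); }
    }

    public class Host
    {
        // MEF discovers everything exported as IAddin and injects it here.
        [ImportMany]
        public IEnumerable<IAddin> Addins { get; set; }

        public void Compose()
        {
            // "Throw the dll in the bin folder" discovery (step 4):
            // scan a directory for exports and compose this host from them.
            DirectoryCatalog catalog = new DirectoryCatalog(".");
            CompositionContainer container = new CompositionContainer(catalog);
            container.ComposeParts(this);

            foreach (IAddin addin in Addins)
            {
                addin.Execute();
            }
        }
    }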

Another Example - Publisher Subscriber

A lot of confusion also exists between MEF and the System.Addin namespace, similar to the confusion addressed above. Let's look at another example that has nothing to do with add-ins, to show that the problem is not bound to add-in systems.

Say we have a banking system with a strong domain model. We need services like Accounting, Personal Info, Leasing, etc., and in this bank a new service arrives every two weeks or so. For the discovery and configuration of these new services, you would normally use an ESB (Enterprise Service Bus). That discovery part has indeed very similar roots to MEF: MEF provides a lightweight bus-like approach for your extensions. By hosting MEF, you are delegating this discovery system to MEF, and it provides you with rich metadata to be used. The Export - Import model indeed looks very similar to the Publisher - Subscriber model in this sense.

Conclusion

MEF is by no means a replacement for today's IoC containers, nor for System.Addin. It aims to be a reusable solution providing a common base for the heavy-lifting code of extensible applications, so that we will (hopefully) need to worry less about the extensibility concerns of an application. If you like, you can use MEF as your IoC container, but that doesn't mean you should. We can also use our favorite IoC container of choice in free (as in speech) collaboration with MEF, via resolvers. I am planning to cover the concepts with a hands-on application which will hopefully help build a better understanding; stay tuned.


After Linq to Sql Talk in Cork

September 23rd, 2008 by Sidar Ok

The talk last night was awesome. Thanks to all who showed up; there was a good turnout. Also thanks to Joe Gill and MTUG for organizing the event, and to Microsoft for sponsoring.


I had the great chance of sharing my thoughts, knowledge and experience on Linq to SQL, analyzing its upsides and downsides. It was targeted at an advanced audience, so I enjoyed talking to a bunch of geeks.

Here are the slides for the presentation. I also did a demo on how we can support domain-first design and create POCOs with Linq to SQL, but didn't have time to do the second demo, a short multi-tier development demonstration. You can find them here in rar format.

It is always great to have good techies around you, and free beers along with chicken wings!!

This was the first event of the year, and I am honored to have done it. I hope it will encourage more people to share their views and experiences on the subject matters.


Agenda on Linq to SQL talk

September 20th, 2008 by Sidar Ok

I have decided on some of the bullet points I will talk about on the 22nd. This will be an MS-200 level talk, meaning it also requires basic knowledge of Linq to SQL. Although if you don't have the basic knowledge, you are still welcome to fill the seats and make the place look crowded :-)

In this first event of the year, here is what I am going to talk about regarding ORMs, and Linq to SQL in particular:

  1. Building a Common Glossary
  2. Defining the Problem
  3. Building an in-house ORM/DAL vs. using an existing one
  4. Linq to SQL comes into play: Myths and Realities
  5. Linq to SQL beyond drag and drop: Concepts
  6. Linq to SQL Entity Model
  7. Mapping Engine
  8. Attribute Level or External?

    SQL Metal to rescue

    What it does, what it lacks

  9. Understanding DataContext
  10. Change Management & Change Communication Strategies
  11. Advanced Topics (If time permits)
    1. Debugging and Troubleshooting
    2. Transaction Handling
    3. Concurrency & Conflict Handling Scenarios
    4. Entity Validation
    5. Security Model
    6. Serialization
    7. Performance Advice & Best Practices

I have also prepared a couple of demos to show some practical implementations of the concepts I want to share.

Looking forward to seeing you at the Imperial Hotel on Monday!


Talking on Linq to SQL on 22nd of September

September 5th, 2008 by Sidar Ok

I am going to be talking about Linq to SQL, one of the ORM mappers coming out of Redmond. I am planning to cover how to utilize Linq to SQL to get the best benefit out of it, its place in the ORM world with pros and cons, and of course ways to apply patterns and practices, with a couple of demos. I'll be more than happy to see any of you there, do some geek chat, and have a couple of pints. Pints and chicken tenders are free, courtesy of Microsoft (Hmm, free tenders... That is more appealing than the talk... tenders...)

Registration is free through this link: http://www.cork.mtug.ie/Events/EventInfo.aspx?ID=b2515894-866f-4d70-8c87-ebaa69c69b7d

See you there!


Documents 2.0 - Consumable Documents

September 3rd, 2008 by Sidar Ok

I was in Brussels last weekend to visit a friend, and had the chance to meet up with a group of highly skilful and visionary people. We had a very interesting chat, and one point that came up was about documentation. It seemed like everybody has the same pain: reading and writing technical documents consumes an incredible part of any developer's day, and usually most of the work is about searching for the right needle in a pile of garbage.

It is amazing how much hardware technology has evolved over the years. Software, while not as much, has also progressed and proven its quality.

But one thing couldn't make the big move: documentation. No, I know we have better word processors; heck, I am using one for this entry. We have more than enough technology to keep writing old-style documents. Thirty years ago we read tons of paper to accomplish a single development task; thanks to the internet and whatever web 2.0 brought us, we now have wikis, podcasts, screencasts and blogs, which speak to different senses and styles. Of course there are still some technical problems, such as indexability, searchability, and relevancy to the need, but it is getting better over time. So what is my problem?

OK, I am gonna say it. It is that we DON'T get enough benefit from all of this in our daily business environment. How many of you have been left with legacy code? Surely most of you. How much of it was documented? I bet not much. And why don't we ever get a podcast or a webcast on, let's say, Provider Infrastructure for Insurance, or Dealing with Leasing Options in Banking, where we are more than happily provided hundreds of pages instead?

Now, knowing that pairing with the person who originally wrote the code is one of the best things you can do while handing over legacy code, why does this information fly away? When that person goes, either you lose all the info, or, if you have a documenting process, it might be even worse: you'll face the hungry monster of 200 pages which hasn't been maintained in 10 years.

My point is, let's produce documents keeping in mind that a person will read them - not because writing a thick book on the subject makes us seem more intelligent and hardworking. I want to see more agile documents. That includes:

  1. If a document is longer than 20 pages, please consider breaking it into multiple ones.
  2. If you can tell more with fewer words, please do so. The usage of patterns, for example, applies well here.
  3. If you can tell more without any words, even better. Pictures, diagrams, or sometimes even comics usually get to the point.
  4. Know your audience, and produce the document for them, not for your own comfort. If I am your audience, know that I am a highly technical guy and not a native English speaker. Don't assume I know what you know, such as your sector's details, and don't think I am dying to look at your design document to learn what the Memento Pattern is all about.
  5. If you think you can express something better by speaking, do a podcast and link it from the document. If you want to show a process, let's say a migration chain, don't hesitate to record a screencast. It will take only 2 minutes and will stay there forever, so you won't spend your time going over and over it with the newbies joining the team. You don't need to be an expert, as these will stay internal.
  6. Blog about what you found. Put it in the company wiki. Get angry with people asking you questions before looking at the wiki. Grr!
  7. Use documents to increase the quality of communication, not the time spent on it. If you need a meeting to talk about a 100-page document, either the document or the meeting is not serving its full intent.

In business environments, I see an increasing Agile Documenting Fear (term adapted from M. Fowler's Parser Fear) that leads us to the apocalypse of inconsumable documents. IMHO, that's something we should avoid for our own good.


Localizing Linq to SQL Entities

August 18th, 2008 by Sidar Ok

Back from the holidays! Not getting too much sun certainly encourages writing code rather than chilling out. Writing on this subject was on my list: as Linq to SQL has matured, the need for it in multi-cultural applications has risen respectively. An old post of Ayende's also prompted me to think about how a similar problem could be solved in Linq to SQL.

I'll use the same model that he provided, which is the following:

Figure 1. Table structure for multi-lingual products

 

As in the original post, the challenge is to load the product names only for the current (or a specific) culture, not all of the names related to one product. In NHibernate, there are filters to solve this problem in an elegant way. It is elegant because it is externally configurable and involves no intrusiveness in your design.

When internationalisation-localization comes into play, there are 2 main approaches from a domain perspective, and the choice lies behind the answer to the question:

“Is localization a concern of my domain?”

In fairness, the answer changes for every domain (in my experience, in most cases it is either no, or part of a different domain, such as administration). A simple way of determining this is to check whether the domain needs to know about different cultures, or whether domain elements in different languages need to talk to each other (can Reuters publish news in Portuguese?). If the answer is yes, then even eager loading all language translations can be an option. But otherwise, we'll need to abstract it away so that the domain won't know about this infrastructural concern.

In the original post, Ayende uses filters in NHibernate. In Linq to SQL we don't have filters, but as mentioned before, we have load options to specify criteria and reduce the amount of data we retrieve.

As a matter of fact, we expect the following test to pass. Note that this is a state-based test, verifying that the data retrieved is not more than one name:

/// <summary>
///A test for GetProduct
///</summary>
[TestMethod()]
public void GetProductTest()
{
  ProductsRepository target = new ProductsRepository(); // TODO: Initialize to an appropriate value
  int prodId = 1; // TODO: Initialize to an appropriate value
  int lcId = 3; // TODO: Initialize to an appropriate value
  Product actual = target.GetProduct(prodId, lcId);
  Assert.AreEqual("Prod13", actual.Name);
  Assert.IsTrue(actual.ProductNames.Count == 1);
}

Where the records in the table are as follows:

Figure 2. Records in the ProductNames table. As seen, there are 2 records for product id '1'.

The entity structure we have to use with Linq to SQL (generated by courtesy of the designer) is as follows:

Figure 3. Object model of Product and ProductName

Looks innocent, doesn't it? The subtle thing is that Product will always have a list of ProductNames, which in my case will always have 1 element. If I want to keep my domain ignorant of this, it is certainly a bad thing, but it is what L2S gives me by default. There are ways to overcome this issue, of course, but those are not the point of this post.

In addition to the model, I'll add another field called “Name” that is not mapped to any column in the db, to match the original example. This is achieved by a partial class:

partial class Product
{
    public string Name
    {
        get;
        set;
    }
}

Now we are ready to write the code that passes the test. Note that we are utilizing the AssociateWith generic method to do the necessary filtering.

/// <summary>
/// Gets the product for the current culture.
/// </summary>
/// <param name="prodId">The prod id.</param>
/// <param name="lcId">The lc id to do localization filter.</param>
/// <returns></returns>
public Product GetProduct(int prodId, int? lcId)
{
    using (ProductsDataContext context = new ProductsDataContext())
    {
        // set load options if localizable filter needed
        if (lcId.HasValue)
        {
            DataLoadOptions options = new DataLoadOptions();
            options.AssociateWith<Product>(p => p.ProductNames.Where<ProductName>(pn => pn.CultureId == lcId));
            context.LoadOptions = options;
        }

        Product pFromDb = context.Products.Single<Product>(p => p.ProductId == prodId);

        return new Product()
        {
            Amount = pFromDb.Amount,
            ProductId = pFromDb.ProductId,
            Size = pFromDb.Size,
            Name = pFromDb.ProductNames.First<ProductName>().Name,
            ProductNames = pFromDb.ProductNames
        };
    }
}

Now that we are done with the original post, let's go beyond the bar and implement inserts & updates too. With inserts, there are 2 cases I am going to handle: 1 - it is a brand new insert; 2 - it is just an insert of a new product name in another language.

For the first one, here is the test:

/// <summary>
///A test for InsertProduct
///</summary>
[TestMethod()]
public void Should_Insert_for_Completely_New_Prod()
{
    ProductsRepository target = new ProductsRepository(); // TODO: Initialize to an appropriate value
    Product p = new Product()
    {
        Amount = 31,
        Name = "English Name",
        ProductId = 0,
        Size = 36,
    };
    int lcId = 7;
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress))
    {
        target.InsertProduct(p, lcId);
        Assert.IsTrue(p.ProductId > 0);
        Assert.IsTrue(p.ProductNames.Count > 0);
    }
}

And for the second one:

/// <summary>
///A test for InsertProduct
///</summary>
[TestMethod()]
public void Should_Insert_Name_for_Existing_Prod()
{
    ProductsRepository target = new ProductsRepository(); // TODO: Initialize to an appropriate value
    Product p = target.GetProduct(1, null); // no culture filter here
    int firstCount = p.ProductNames.Count;
    p.Name = "Kurdish Name";
    int lcId = 9;
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress))
    {
        target.InsertProduct(p, lcId);
        Product prAfterInsert = target.GetProduct(p.ProductId, null);
        Assert.AreEqual(firstCount + 1, prAfterInsert.ProductNames.Count);
    }
}

So, making the tests pass is straightforward: I need to do an extra insert into the Products table if the product is new, and that's it:

/// <summary>
/// Inserts the product.
/// </summary>
/// <param name="p">The p.</param>
/// <param name="lcId">The lc id.</param>
public void InsertProduct(Product p, int lcId)
{
    using (ProductsDataContext context = new ProductsDataContext())
    {
        if (p.ProductId == 0)
        {
            // insert only if it is new
            context.Products.InsertOnSubmit(p);
        }

        InsertProductNameForProduct(context, p, lcId);
        context.SubmitChanges();
    }
}

/// <summary>
/// Inserts the product name for product.
/// </summary>
/// <param name="context">The context.</param>
/// <param name="p">The p.</param>
/// <param name="lcId">The lc id.</param>
private void InsertProductNameForProduct(ProductsDataContext context, Product p, int lcId)
{
    context.ProductNames.InsertOnSubmit(new ProductName()
    {
        CultureId = lcId,
        Name = p.Name,
        ProductId = p.ProductId,
        Product = p,
    });
}

And last, the update. Apart from the obvious part, there is one situation we need to handle: if the name of the product has changed, then we need to update it as well. For the other fields, we go on with the regular update. Here is the test that codifies this:

/// <summary>
///A test for UpdateProduct
///</summary>
[TestMethod()]
public void should_update_product_and_its_current_name()
{
    ProductsRepository target = new ProductsRepository(); // TODO: Initialize to an appropriate value
    Product p = target.GetProduct(1, 2);
    p.Name = "French Name";
    p.Amount = 40;
    p.Size = 55;
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress))
    {
        target.UpdateProduct(p);
        Assert.AreEqual("French Name", p.Name);
        Assert.AreEqual(40, p.Amount);
        Assert.AreEqual(55, p.Size);
    }
}

After writing the test, the implementation below becomes obvious:

public void UpdateProduct(Product p)
{
    // since we don't load more than one product name, we can assume that the one is updated
    using (ProductsDataContext context = new ProductsDataContext())
    {
        context.Products.Attach(p, true);
        ProductName currentName = p.ProductNames.Single<ProductName>();
        if (p.Name != currentName.Name)
        {
            // it is updated, update it
            currentName.Name = p.Name;
        }
        context.SubmitChanges();
    }
}

I showed a possible strategy for localizing Linq to SQL entities in this post. Of course, more complex scenarios, such as child entities and lazy loading issues, could be thought through thoroughly, but I hope this gives some impetus to attack the whole idea.

Comments and critiques are well appreciated, as always.
