Tuesday, December 16, 2008

Host [Name] failed while starting monitoring asynchronous operations queue.

I've started working on some Microsoft Dynamics CRM development lately, and came across this issue on one of our servers when the CRM Asynchronous Service stopped:

Host [Name] failed while starting monitoring asynchronous operations queue.

If you have the ApplicationServer and PlatformServer installed on the same machine, you may receive this error (in the event log) when you try to start the service:

Host SGCORPCRM1: failed while starting monitoring asynchronous operations queue. Exception: System.InvalidOperationException: The requested Performance Counter is not a custom counter, it has to be initialized as ReadOnly.
at System.Diagnostics.PerformanceCounter.Initialize()
at System.Diagnostics.PerformanceCounter..ctor(String categoryName, String counterName, String instanceName, Boolean readOnly)
at Microsoft.Crm.Asynchronous.PerformanceCounters..ctor(String instanceName)
at Microsoft.Crm.Asynchronous.AsyncService.OnStart(String[] args)


I'm not sure how this happened. I had a look for answers and found a Microsoft KB article on a similar problem, but it did not seem to be applicable.

After some more searching, I finally found a post in the eggheadcafe forums, which came up with the goods (thanks Deepak!). Of course, before applying this fix, I checked a working server first, and confirmed that the RoleNames and Roles registry entries on the broken server were in fact incorrect.

Below I've repeated the steps, hopefully making it easier to find for others with the same issue.
Open the registry and go to the path HKLM\SOFTWARE\Microsoft\MSCRM

  1. Check the 'RoleNames' value and the 'Roles' value - they should be set to "ApplicationServer,PlatformServer" and "2392107" respectively (do not include the quotes)
  2. If the values do not match these, correct them
  3. Repair the CRM installation
  4. Restart IIS

The service should start successfully now.
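
If you want to double-check these values (before or after the fix) without opening regedit, here's a quick sketch - a small .NET console app run on the CRM server, using the key path and value names from the steps above:

using System;
using Microsoft.Win32;

class CheckCrmRoles
{
    static void Main()
    {
        // Open the MSCRM key read-only and print the two values from the steps above.
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\Microsoft\MSCRM"))
        {
            if (key == null)
            {
                Console.WriteLine("MSCRM key not found - is CRM installed on this box?");
                return;
            }

            Console.WriteLine("RoleNames = {0}", key.GetValue("RoleNames"));
            Console.WriteLine("Roles     = {0}", key.GetValue("Roles"));
        }
    }
}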

Friday, December 5, 2008

LINQ to SQL Entity Base Version 1.0 Final

Hi everyone,

Now that all the bugs seem to be ironed out in the LINQ to SQL Entity Base class, it's time to release it to the world as a final version.

If you have been using RC3, there is no need to update your source code, as nothing has changed between RC3 and the final release.

As with the RC3 release, I will stress again that you need Visual Studio 2008 SP1 and .NET 3.5 SP1 to run this final release version, as it uses an attribute for serialization not found in the original .NET 3.5 release.

Find the final version here:


Anyway, have fun!

Cheers

Matt.


Wednesday, October 22, 2008

Linq to SQL Entity Base Release Candidate 3

Hi everyone,

I've released a new version of the LINQ to SQL Entity Base class which can be found here, along with details of the changes:

http://www.codeplex.com/LINQ2SQLEB/Release/ProjectReleases.aspx?ReleaseId=18662

I'm hoping this is the final release candidate that will become version 1.0 gold.

Cheers

Matt.

Friday, October 10, 2008

TFS - Fixing Work Item Updates made in Excel

One of the cool things about TFS is its integration with Excel. The integration allows you to link work item data and publish it back to TFS, which is really handy for BAs, team leaders and project managers.

Unfortunately, with this feature comes the ability to make a huge mistake as well if you are not careful.

Recently, I had to find a way to revert 400 work items which were unintentionally updated by one of the business team. They were using Excel, copying and pasting from one TFS-linked document into another, and accidentally answered "yes" to the publish prompt on the target Excel document, which resulted in every work item's title and description being overwritten.

Unfortunately, there was no quick and easy way to revert this, so I had to figure out some way of doing it.

What I came up with was this: rather than trying to roll back the changes by fiddling with the TFS database (which is a very risky thing to do), I would just override the mistaken records with the previous data (ironically with Excel - the same thing that made the mess!) and leave the invalid data in the work item, so it would simply disappear into the history.

Here's what I did:

  1. Queried the "workitemsare" table and the "workitemswere" table to find those records which were accidentally updated (looking at the changed date field and the changed by field to find the right time of the incident and the user who published the records).
  2. Found the previous version of the records by comparing the records found in 1. above to the "workitemswere" table and finding the most recent update that occurred before the incident.
  3. From the results, built a CSV file that could be opened in Excel and pasted over the top of a TFS-linked document containing those records which were affected by the incident. It's important that both Excel documents (the extract and the linked document) are in the same work item order and have the same columns that need to be updated.
  4. Pasted the correct values over the top of the bad records in the linked Excel spreadsheet.
  5. Published the correct values back to TFS from Excel, resolving any issues where the new "correct" values broke any TFS work item state rules.

If you ever need to do this, here's a sample of the scripts which should get you started:

-----


-- Create a temporary table to put the data in
-- (the WHERE clause below never matches, so this just creates an empty
-- table with the right schema)
SELECT 1 AS Id,
fld10010 AS Priority,
State,
[Fld10094] ExternalSystemId,
[Assigned To],
Title,
fld10005 As [Resolved Date],
[Changed Date]
INTO #workingtable
FROM workitemsare
WHERE id = null


-- Grab the data that we are after and throw it into the temp table,
-- filtering by user and approximate time.
INSERT INTO #workingtable
(
id,
Priority,
State,
ExternalSystemId,
[Assigned To],
Title,
[Resolved Date],
[Changed Date]
)
SELECT id,
fld10010 AS Priority,
State,
[Fld10094] ExternalSystemId,
[Assigned To],
Title,
fld10005 As [Resolved Date],
[Changed Date]
FROM workitemsare
WHERE [changed by] = '[user name]'
AND [changed date] BETWEEN '2008-08-26 08:41:00'
AND '2008-08-26 08:42:00'
UNION
SELECT id,
fld10010 AS Priority,
State,
[Fld10094] ExternalSystemId,
[Assigned To],
Title,
fld10005 As [Resolved Date],
[Changed Date]
FROM workitemswere
WHERE [changed by] = '[user name]'
AND [changed date] BETWEEN '2008-08-26 08:41:00'
AND '2008-08-26 08:42:00'

-- select the correct records (finding the most recent change before the incident),
-- making sure that the column order and record order match those of the Excel
-- spreadsheet you will use to paste over the top
-- (export this to CSV)

SELECT WIW.id,
ISNULL(CAST(WIW.fld10010 AS VARCHAR(10)), '') AS Priority,
WIW.State,
ISNULL(WIW.[Fld10094], '') AS ExternalSystemId,
WIW.[Assigned To],
WIW.Title,
WIW.fld10005 As [Resolved Date],
WIW.[Changed Date]
FROM #workingtable MWT
INNER JOIN [WorkItemsWere] WIW ON MWT.Id = WIW.Id
WHERE WIW.[Changed Date] = ( SELECT MAX([Changed Date])
FROM [WorkItemsWere] AS WIW2
WHERE WIW.Id = WIW2.Id
AND WIW2.[Changed Date] < '2008-08-26 08:41:00'
)
GROUP BY WIW.id,
WIW.fld10010,
WIW.State,
WIW.[Fld10094],
WIW.[Assigned To],
WIW.Title,
WIW.fld10005,
WIW.[Changed Date]
ORDER BY WIW.Id

----

Notes:

- Some of the fields have column names that aren't well named (they're auto-generated by TFS); to find the right ones, look in the Fields table for the correct field/column mapping.

- The WorkItemsWere, WorkItemsAre and Fields tables can be found in the "TfsWorkItemTracking" database.

- The WorkItemsWere table contains the previous states of each work item

- The WorkItemsAre table contains the latest values for each work item

- DO NOT modify the TFS database directly!!!!


Cheers

Matt

Friday, August 8, 2008

Run Sun Java Application Server 9.1 on Windows Server 2003 as Service

I don't usually dabble in the Java world too much, but had to recently because a client required us to use a Java solution which we needed to integrate with as part of our Windows Workflow Foundation work. This solution runs under Sun Java Application Server (SJAS).

Installation of the JAR file in SJAS as a service worked fine and everything seemed to be going great. Right up until I logged off the server - this is where the service stopped responding to calls, even though the service itself still believed everything was working fine. This was weird because if you choose to install as a service, you don't expect it to stop when you log out (i.e. that's the whole reason for having the "windows service" concept).

There seemed to be a lot of people out there looking for a solution to this issue, but no one seemed to have the answer (at least not for the Windows 2003/SJAS 9.1 combination). Not even the vendor of the solution we were integrating with had a fix for this problem.

It turns out that the information is actually in the documentation for SJAS after all - which tripped me up because my searches kept taking me to the documentation for the old 8.x version of SJAS, which doesn't include the extra special setting for Windows 2003, and following the 8.x instructions doesn't work.

Anyway, here's the information and a link to the Sun documentation - it's a bit of fiddling, but it works fine once you make these changes.

http://docs.sun.com/app/docs/doc/819-3671/ablwz?l=en&a=view&q=Restarting+Automatically


Preventing the Service From Shutting Down When a User Logs Out
By default, the Java VM catches signals from Windows that indicate that the operating system is shutting down, or that a user is logging out, and shuts itself down cleanly. This behavior causes the Application Server service to shut down when a user logs out of Windows. To prevent the service from shutting down when a user logs out, set the -Xrs Java VM option.

To set the -Xrs Java VM option, add the following line to the section of the as-install\domains\domain-name\config\domain.xml file that defines Java VM options:

-Xrs

If the Application Server service is running, stop and restart the service for your changes to become effective.
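
For reference, the JVM options in domain.xml live inside the java-config element, so the addition ends up looking something like this (a sketch only - the surrounding attributes and options will vary with your install):

<java-config>
    <!-- existing jvm-options elements ... -->
    <jvm-options>-Xrs</jvm-options>
</java-config>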


And the all important bit...


Note –
In some Windows 2003 Server installations, adding the -Xrs option to the domain.xml file fails to prevent the service from shutting down. In this situation, add the option to the as-install\lib\processLauncher.xml file as follows:

<process name="as-service-name">
...
<sysproperty key="-Xrs"/>
...
</process>


Thursday, July 3, 2008

Debugging Windows Services in Visual Studio 2002/3/5/8

Just a quick tip on something I've been doing for years.

If you want to debug your Windows Service without having to install it as a service on your development machine first, you can use the "DEBUG" compiler constant to direct the compiler to run your service as a plain executable instead of spinning up a service.

To do this in C# (the steps are basically the same in VB as well), open the Program.cs file and wrap the Main method contents in #if(!DEBUG), #else and #endif directives. Between the #else and #endif, simply put in the code that invokes the application logic.


You can see my example below. I've simply created a class called "ProcessFiles" and added static Start and Stop methods that start/stop the processing as it normally would run in a service. In the Program.cs file, I've then simply called the Start method, put in a message box to stop the process from falling through and exiting until the developer is ready to finish debugging, and added a call to the Stop method to cease processing.

From there it was just a simple matter of hooking the OnStart and OnStop overrides of the actual service class up to these methods as well.



using System.ServiceProcess;
using System.Windows.Forms;

namespace ExampleService
{
    static class Program
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        static void Main()
        {
#if(!DEBUG)
            // Normal service start-up - this is what runs in a RELEASE build.
            ServiceBase[] ServicesToRun;
            ServicesToRun = new ServiceBase[]
            {
                new Service()
            };
            ServiceBase.Run(ServicesToRun);
#else
            // DEBUG build: run the processing directly so it can be debugged
            // in Visual Studio without installing the service.
            ProcessFiles.Start();
            MessageBox.Show("Debugging has started. To stop debugging click OK");
            ProcessFiles.Stop();
#endif
        }
    }
}
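
For completeness, here's a minimal sketch of what the ProcessFiles class might look like - the class name and the Start/Stop methods are from the post, but the body is just illustrative (my real one obviously does actual work):

using System;
using System.Timers;

namespace ExampleService
{
    static class ProcessFiles
    {
        private static Timer _timer;

        public static void Start()
        {
            // Kick off the processing loop - in a real service this would begin
            // polling for files (or whatever work the service actually does).
            _timer = new Timer(5000);
            _timer.Elapsed += (sender, e) => Console.WriteLine("Processing...");
            _timer.Start();
        }

        public static void Stop()
        {
            // Cease processing and clean up.
            if (_timer != null)
            {
                _timer.Stop();
                _timer.Dispose();
            }
        }
    }
}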


The "DEBUG" compliler constant is linked to the Build Properties Page on a C# project. By defualt this is ON for the "DEBUG" build configuration, but OFF for releases. You'll notice if you are looking at the code, that when your build configuration is set to "DEBUG:, the generated service code will be greyed out and the custom code comes to life. Switching to "RELEASE" config reverses this situation. Nice touch I think ;)

Now I can use the "RELEASE" config to build an exe so my solution can be deployed as a service for testing/production OR simply run it in the Visual Studio debugger under the "DEBUG" config without having to install it as a service on my dev machine.


Cheers

Matt.

Wednesday, June 25, 2008

Hyper-V - Great for Developers

I've been using Microsoft's Hyper-V software for the last month and I gotta say I'm impressed.

At work, we've set up a box with 1 terabyte of hard drive space (RAID 10), 16GB of RAM and two quad-core Xeons running Windows Server 2008 x64 and the hypervisor. This is being used for our test and user acceptance platform, allowing us to create our environments very rapidly (with some help from Sysprep), and performance is amazing considering we are running 10 VMs, some even running BizTalk + SQL Server instances inside the virtuals as well. I have not yet seen even a hint of a drop in performance, and there's still plenty of room for expansion with this hardware.

The difference between Hyper-V and other Microsoft virtual technologies is that the Windows 2008 OS actually allows Hyper-V to sit a hell of a lot closer to the metal (i.e. fewer layers between the virtual and the physical hardware) while still doing a great job of distributing the load of the VMs across the machine. It totally rocks compared to Virtual Server, which isn't even close to Hyper-V on performance.


It has definitely made our development lives much easier, being able to spin up a new Sysprepped OS, drop it on, hook it up to the network and then install whatever we need. The other thing we've been able to do is port physical machines to virtual (this comes as an easy step-by-step wizard) - although I have come across some issues and couldn't get some machines to move across so easily, it has let us move our current environments on separate physical machines quickly into the virtual fold. I was further impressed that in most cases you can port a machine from physical to virtual while the physical machine you're porting is still online.

Just some things to note if you are interested when using the release candidate...

1. It only runs on a Windows Server 2008 x64 OS; no other host OS is supported.

2. It's currently only at Release Candidate stage.

3. Although Windows Server 2008 x64 contains the Integration Services components that are required for it to be a guest, these are only compatible with Beta 2 of Hyper-V. When using the release candidate you will need to install a patch in the guest OS to allow these drivers to install correctly (see http://support.microsoft.com/kb/949219).

4. When migrating from a previous Virtual PC VHD to Hyper-V, it may not detect the VMBUS and other things correctly, leaving you without some of the comforts of the Integration Services components. To fix this, use Start-->Run MSCONFIG-->Boot Tab-->Advanced Options, click Detect HAL and reboot.

5. Also when migrating a previous Virtual PC VHD to Hyper-V, if you can't initially get any network access, set the VM up in the Hyper-V admin tool so it's using a Legacy Network Adapter - this should fix it.

6. If you are stuck wondering why you can't log into the Hyper-V Administration tool, you'll need to get the user that originally installed Hyper-V on the machine to log into it first and add you as an Administrator before you can use it yourself.

Anyway, check out this for more info

http://www.microsoft.com/windowsserver2008/en/us/virtualization-consolidation.aspx


Cheers

Matt

Tuesday, May 20, 2008

LINQ2SQL "IS" the DAL

When I first started off with LINQ to SQL, my first instincts were to wrap it in a data access layer just like we had on many previous projects with ADO.NET, but after using it for a short while I realised one important thing - LINQ to SQL is the Data Access Layer.

The generally accepted method of writing applications in .NET was to have 3 tiers - presentation, business and data. Basically, you had Winforms or ASP.NET in your presentation tier, .NET in the business tier and ADO.NET/SQL Server in your data tier.

Usually in the data tier, stored procedures would be created for CRUD operations and other business logic. ADO.NET code would then be written to wrap these stored procedures and service the business tier.

This often ended up being quite a lot of T-SQL and .NET code, and because of this it could take quite some time to get just a single vertical slice of functionality running. It would also mean a lot of code to maintain after the product was shipped and bugs/enhancements were made.

With LINQ to SQL, a lot is simplified (although there's still plenty of room for improvement). There's no need to write tedious CRUD stored procedures which also means that there's no need to write the code to wrap the stored procedures.

In my opinion, if you go down the route of putting a layer around LINQ to SQL (a DAL around the DAL), you are making a mistake: you are adding extra code where it's not needed, reducing productivity, and adding to the maintenance burden later on.

The way I see it, there's no point - there is nothing to gain by wrapping LINQ to SQL up in layers. Instead, embrace its elegance by exposing it directly to the business layer and reduce the amount of code you write significantly. Mix it with your business logic - allow it to save you time that is much better spent in areas other than plumbing.

Either use the LINQ to SQL entities as your data transfer objects or use them as your business entities - either way you're still saving on code.

Of course LINQ to SQL is not perfect, but it's definitely a step in the right direction.
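
To make that concrete, here's the sort of thing I mean - business logic talking to LINQ to SQL directly, with no hand-rolled DAL in between (a sketch only: NorthwindDataContext, Customer and the property names are illustrative):

// A business-layer method using the data context directly - no wrapper layer.
public void RewardCustomers(string country)
{
    using (NorthwindDataContext db = new NorthwindDataContext())
    {
        var customers = from c in db.Customers
                        where c.Country == country
                        select c;

        foreach (Customer customer in customers)
        {
            customer.ContactTitle = "Valued Customer";   // business rule applied to the entity
        }

        db.SubmitChanges();   // LINQ to SQL generates the UPDATE statements
    }
}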

In Summary, this:

Presentation -> Business -> DAL --> ADO.NET --> TSQL --> DATA

Becomes this:

Presentation -> Business -> Linq2SQL --> DATA

Cheers

Matt.

Friday, May 9, 2008

Version 1.0 Release Candidate 1 of LINQ to SQL Entity Base released!!!

Hi people,

I've finally made a release candidate for the LINQ to SQL Entity base which can be found here

A number of things have been done recently to it, namely:
- Added two static helper methods for serialization/de-serialization of entities.
- Now automatically returns KnownTypes if Entity Base class is in the same assembly as your Entity classes.
- You can now set the initial state of the root entity (e.g. New or Deleted)
- Demo is now in the form of a Client/Server architecture, with WCF used for communication.
- Added "LINQEntityState" property which returns an enum indicating the state of the entity.

Oh, and I've re-written the home page to include a list of "How To's" covering a lot of the questions I've been asked. Find it here


Enjoy!

Cheers

Matt.

Wednesday, April 30, 2008

Another LINQ to SQL Entity Base Improvement

Just a quick note that the latest source code includes a WCF serialization enhancement, being that you no longer have to explicitly specify known types - this is done automagically now.

A new release will follow in the next day or so.

Cheers

Matt.

Wednesday, April 23, 2008

LINQ to SQL Entity Base Improvement

Well,

I've been at it again... This time I've added some requested features that will come in handy for some of you.

These changes are:

1. There is no longer a need to have a timestamp field
In earlier versions you had to have a timestamp column, primarily because it was the easiest way for me to figure out if an object was new or not. If the column was NULL I could tell that the object was brand new; if it wasn't null it meant that the object had been retrieved from the database.

This is no longer needed, as now when you invoke "SetAsChangeTrackingRoot()" it will go through the entity tree and mark all entities as IsNew = false. Then, when an entity is added somewhere on the entity tree, I detect that it's a new entity (the FK changes from NULL to the parent's ID) and mark IsNew = true on the object.

2. Option to keep original entity values.
When a timestamp (version) column is available on a table, by default it is used by LINQ to SQL to perform concurrency checks when submitting updates and deletes to the database.

Because I have removed the requirement to have a timestamp (version) column on a table, I had to allow for the other method of concurrency checking which is to use the UpdateCheck property that is available on every column in the dbml model.

When the UpdateCheck property is set to "Always" on a column, LINQ to SQL compares the original value of this column against the current value in the database before it updates or deletes the record. This is intended to make sure another process has not come in and changed the data since you last retrieved it.

Of course, it's up to the developer to choose the best column(s) for concurrency checks (usually a date/timestamp or update counter of some sort). By default, if there is no timestamp column on the table, LINQ to SQL will set all columns to UpdateCheck = "Always" so that every column in the record is checked for changes before an update is done. This is a very safe way to go, but is a little more expensive than just checking a single column.

In order for the UpdateCheck property to work, however, the original value has to be available, and when working in a disconnected model this is not the case. My first thought was to use the MemberwiseClone() method, but I needed to shallow copy the original version of the record somehow without including its references (which MemberwiseClone copies). To get around it, I created the following method:




        /// <summary>
        /// Make a shallow copy of column values without copying references of the source entity
        /// </summary>
        /// <param name="source">the source entity that will have its values copied</param>
        /// <returns>a new instance containing the copied column values</returns>
        private LINQEntityBase ShallowCopy(LINQEntityBase source)
        {
            // note: requires System.Reflection, System.Linq and System.Data.Linq.Mapping
            PropertyInfo[] sourcePropInfos = source.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance);
            PropertyInfo[] destinationPropInfos = source.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance);

            // create an object to copy values into
            Type entityType = source.GetType();
            LINQEntityBase destination;
            destination = Activator.CreateInstance(entityType) as LINQEntityBase;

            foreach (PropertyInfo sourcePropInfo in sourcePropInfos)
            {
                // only copy properties that map to database columns
                if (Attribute.GetCustomAttribute(sourcePropInfo, typeof(ColumnAttribute), false) != null)
                {
                    PropertyInfo destPropInfo = destinationPropInfos.Where(pi => pi.Name == sourcePropInfo.Name).First();
                    destPropInfo.SetValue(destination, sourcePropInfo.GetValue(source, null), null);
                }
            }

            return destination;
        }




The ShallowCopy method creates a new instance of the source entity, copies all the values from the original to the new instance and returns the new instance.

You can specify that you want to keep original values when calling the "SetAsChangeTrackingRoot()" method.
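
As a usage sketch (the bool parameter is my shorthand for whatever the actual overload looks like - check the source on CodePlex; GetCustomerFromServer is a hypothetical fetch):

// Mark the entity as the change tracking root, asking the Entity Base to keep
// a copy of the original values so UpdateCheck-based concurrency checks work.
Customer customer = GetCustomerFromServer(customerId);   // hypothetical fetch
customer.SetAsChangeTrackingRoot(true);                  // true = keep original values (assumed overload)

customer.ContactName = "New Contact";   // later, only changed columns end up in the UPDATE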

Some things to note:
1. LINQ to SQL will throw an exception if you specify that you want update checks on columns and you have not indicated that you want to keep original values. This is reasonable, because you need the original values to do update checks in the first place...

2. I've updated the demo to include a tick box; ticking this will keep the original values as described above. I've removed the timestamp fields from the dbml model, so to try out concurrency using the UpdateCheck property, simply modify the Customer table and change one or more of the columns' UpdateCheck properties to "Always".

3. An advantage of keeping the original information is that LINQ to SQL can detect just the columns that have changed and generate an UPDATE T-SQL statement which updates only those particular fields. This is good for those tables that have a lot of columns, because the T-SQL generated is smaller and there's less work for the database to do, as it's potentially updating fewer columns.

For example,

// Running without the original values when updating the freight value on an order
// will create this (note every column is updated)

UPDATE [dbo].[Orders]
SET [CustomerID] = @p1, [EmployeeID] = @p2, [OrderDate] = @p3, [RequiredDate] = @p4, [ShippedDate] = @p5, [ShipVia] = @p6, [Freight] = @p7, [ShipName] = @p8, [ShipAddress] = @p9, [ShipCity] = @p10, [ShipRegion] = @p11, [ShipPostalCode] = @p12, [ShipCountry] = @p13
WHERE [OrderID] = @p0

// Running with original values will create this (note it only updates the freight value)

UPDATE [dbo].[Orders]
SET [Freight] = @p1
WHERE [OrderID] = @p0

The Source Code

I'll be publishing the source code later today; when it's ready, grab it here - I'll do a release (Beta 4) once I'm satisfied it's stable.

Other notes

I would still recommend using a timestamp (rowversion) column if possible because it is the absolute best and easiest way to detect changes in a record.

The reason it is a good choice is that the timestamp (rowversion) column value is changed for every update made on a record.

Unlike other databases, the SQL Server version of a timestamp is not related to time, instead it's actually an 8 byte binary value that is unique within the entire database (not just unique within the table).

This is commonly misunderstood, especially by people coming from other database systems which use a time-based timestamp value. The issue with time-based timestamps is that you can get duplicates.

See books online for more information.

Monday, April 14, 2008

New Version of LINQ to SQL Entity Base Class (Beta 3.0)

Hello there!

Due to popular demand (well, at least one person wanted it!) the LINQ to SQL Entity Base now supports WCF serialization... (in theory!). Find V1.0 Beta 3.0 here.

In the example that comes with the LINQ to SQL Entity Base, I've used the data contract serializer (which is what WCF uses to serialize/deserialize objects) to demonstrate this. I've also improved the example so you can scroll through the results, by converting it to a simple Windows Form.

Don't forget the following when using it in your own projects:
1. You need to set serialization on your data context to unidirectional.
2. You need to use KnownTypes for your entities (this is because of the inheritance of entities from the LINQ to SQL Entity Base - for more info, see Sowmy Srinivasan's blog).

Cheers

Matt

Friday, March 28, 2008

Implementing Disconnected Deletion Change Tracking

In one of my previous blog posts, I described some of the difficulties with change tracking entities which have been removed (i.e. deleted).

The main problem was that once you remove an entity whilst "disconnected", it's no longer referenced by anything, and so the object disappears and hence the entity is no longer available when re-attaching to a new data context.

In the short term, I added a property called "IsDeleted" to the entity base which people could use instead of the remove method (or setting a reference to a child property to null), but this had its disadvantages - mainly that the user would have to set this themselves (i.e. it wouldn't get picked up automatically on remove) and would unnaturally need to keep the object around.

So the obvious thing to do was to keep a reference (somewhere!) to the entity when it's deleted (removed), so it can be re-attached and deleted later on. But where would this entity be kept? In the parent that deleted it? In the root object perhaps? In an external change tracking object?

To keep the Entity Base consistent, I decided to keep all the functionality in the Entity Base class, which ruled out having an external object tracking the changes.

Then I went through a lot of options regarding where to store the detached objects and came up with the simplest solution possible - I used the existing infrastructure provided by my Entity Base class - the ToEntityTree() method - as this was the option which seemed the least troublesome for the developer to use.

So, what I have done is implement a "SetAsChangeTrackingRoot()" method which the developer can call before making any changes to the entity objects.

The developer would use this method to mark the section of the Entity Tree (the Entity branch) that would be change tracked.

When this method is invoked on an entity, the following would happen:

1. A snapshot of the entity branch would be taken, starting from the entity the method was invoked on.

2. Each entity in the branch would be told that it is being change tracked.

This meant a snapshot of the entity branch would be kept locally with the root of the branch, and also that this root entity would be the one used for synchronisation with the data context later on.

From there, it was just a matter of waiting for the property changed event to fire on an entity (exposed by INotifyPropertyChanged), and looking to see if the property being changed was a foreign key reference (meaning a child to parent relationship) and whether the value was being set to NULL (i.e. the entity was being detached from its parent). Once these conditions were met, I set the IsDeleted flag, automatically marking the object for deletion.
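
In code terms, the detection looks roughly like this (a sketch only - IsChangeTracked and the two helper methods are illustrative names, not the actual Entity Base source):

// Handler attached to each change-tracked entity's PropertyChanged event
// (uses System.ComponentModel).
private void OnEntityPropertyChanged(object sender, PropertyChangedEventArgs e)
{
    LINQEntityBase entity = (LINQEntityBase)sender;

    // The property must be an association back to the parent (an FK reference)
    // and its new value must be null (i.e. the entity was detached from its parent).
    if (entity.IsChangeTracked
        && IsForeignKeyReference(entity, e.PropertyName)
        && GetPropertyValue(entity, e.PropertyName) == null)
    {
        entity.IsDeleted = true;   // automatically mark the object for deletion
    }
}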

Next, I modified the ToEntityTree() method to include these "deleted" entities (as they would otherwise not be picked up in the traversal of the entity tree), so it returns a complete list of all entities including the deleted objects.

The SyncroniseWithDataContext() method then used the information returned from the ToEntityTree() method to figure out what to attach, insert and delete.

One issue I came across was the deletion of child entities under the entity that was marked for deletion. If you simply removed an entity that already had children, the submit changes would fail, because LINQ to SQL doesn't support cascading deletes unless they are specified in the database schema, so any foreign key constraints linking to the record being deleted would cause SQL Server to throw an exception.

I also couldn't rely on the developer to delete the child entities first and then delete the topmost entity, as they could do it in any order, and the order of the deletions is crucial - the child objects must be deleted first.

Instead, I decided by default to have my own cascade delete functionality, so when an object is removed, I automatically remove any child objects, starting with the child leaves of the branch first. This was achieved by calling the ToEntityTree() method internally and reversing the result, so that the deletion order runs from the child leaves all the way back to the root of the change tracking.

Even though by default calling the SyncroniseWithDataContext() will perform cascading deletes, I have added an optional parameter so that it can be disabled if need be - which is handy if you didn't expect there to be children of the object you are deleting OR you are handling cascading deletes in the database anyway.

So that's how I've achieved automatic deletion tracking :).

Some more thoughts

After building the LINQ Entity base class in this way, I realised it would be reasonably easy to move all the logic into an external object (not an entity) which was similar to the standard DataContext but performed the tasks in an offline way.

Some people would feel more comfortable with this perhaps, because of the similarities with the existing data context.

I may investigate this further in the near future, and perhaps we'll have an alternative for people who want it for change tracking whilst disconnected.

Thursday, March 27, 2008

New Version of LINQ to SQL Entity Base Class (Beta 2.0)

Hi there!

Just wanted everyone to know I've released a new version of the LINQ to SQL Entity Base class.

It now supports change tracking for deletes, as well as the ability to cascade delete.

I guess from my point of view it's feature complete for ASP.NET use, assuming you store the entities in the session.

Still have to work on serialization for WCF and other uses.

LINQ to SQL Entity Base Class Version 1.0 Beta 2.0

Anyhow, check it out and let me know what you think.


Cheers

Matthew Hunter

Friday, March 14, 2008

Implementing Change Tracking when disconnected.

If you have a look at the LINQ to SQL Entity Base source code on CodePlex, you'll see that the way I've implemented change tracking is by putting a few flags on the base class: IsNew, IsModified and IsDeleted.

I've been able to get IsNew & IsModified to set automatically; here's how they work:

IsNew
This can be established by checking the entity's RowVersion (TimeStamp) field (which, BTW, is a requirement for this to work). If the RowVersion is null, it's never been applied to the database (as the database sets this value, not the developer) and hence we can tell with absolute certainty that it's a new object.
But it's in the child class - so how did I accomplish this?
Since the RowVersion property is in the child class, there's a few options we can use to achieve this:

(1) Write extra code in the child entity class
Nope, this is out of the question! We are trying to avoid coding here!

(2) Create an interface (OK)
This is probably the best option for performance, but you need to make sure that all entities use the same column name for the RowVersion. If you use this method, you can cast the current object to the interface and get the value that way. Of course, to get the entity to implement it, you'll need to force it to implement the interface by adding it to the DataContext dbml file (just like described here).
However, as I'm writing something to share with one and all, and no doubt everyone's gonna want to name it differently, this isn't the appropriate option (however it's still a damn good one!).

(3) Use a virtual property and override it in the entity (OK)
This is also good for performance; however, it also means that you need to set every RowVersion property to have its inheritance modifier set to "override", which is a bit annoying. Personally I'm impressed that you can do this in the DBML model viewer, but it's still a little hard to maintain when you are adding tables - just another thing to remember, and again everything has to be set to the same name for it to work.

(4) Use reflection (OK)
This is not so bad: we can simply get the properties using reflection and find which property is marked with ColumnAttribute.IsVersion = true. Seeing there can only be one of these per table (enforced by SQL Server), this is pretty safe. It also means I can throw a custom exception with a message if I discover that there is no RowVersion field, and let the developer know.

So, after considering the options, I went with the last option, reflection, mainly because it's the most flexible for this situation. But all things being equal, I think the best option if you can control it is to use an interface as in (2) above.
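
Here's roughly what that reflection check looks like (a sketch - the real code in the Entity Base source differs in the details):

// Uses System.Reflection and System.Data.Linq.Mapping.
// Finds the property marked as the row version and checks whether it's null.
private static bool IsEntityNew(object entity)
{
    foreach (PropertyInfo propInfo in entity.GetType().GetProperties())
    {
        ColumnAttribute column = (ColumnAttribute)Attribute.GetCustomAttribute(
            propInfo, typeof(ColumnAttribute), false);

        // IsVersion marks the timestamp/rowversion column (at most one per table).
        if (column != null && column.IsVersion)
            return propInfo.GetValue(entity, null) == null;
    }

    // No RowVersion column found - the real code throws a custom exception here
    // so the developer knows the entity doesn't meet the requirement.
    throw new InvalidOperationException("Entity has no RowVersion (timestamp) column.");
}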



IsModified
This one's easy - there's already an interface supplied called INotifyPropertyChanged that each entity implements, which you can use by attaching to the child's PropertyChanged event in your parent class. Whenever this event is raised, we know a column has been changed and we know to set the IsModified flag to true.
Interestingly enough, if the event is raised and it's the RowVersion (TimeStamp) property that's being updated, we know that the data has just been applied to the database, and hence we can reset IsNew, IsModified and IsDeleted to false. This is something we definitely want to do if, after committing the data, we want to keep working with our entity tree.
One problem though: I noticed that this event is also raised for child entities and entity collections, not just columns. I need to avoid these non-column events because they are not the type of property changes I am looking for. So, with a little reflection, I can find out if the property has an AssociationAttribute applied to it and ignore the change events raised for these. So that's solved too.
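
A rough sketch of that wiring (the field and method names here are illustrative, not the exact Entity Base code):

// Attach in the base/parent class; uses System.ComponentModel,
// System.Reflection and System.Data.Linq.Mapping.
private bool _isModified; // illustrative backing field for the IsModified flag

private void HookChangeTracking(INotifyPropertyChanged entity)
{
    entity.PropertyChanged += (sender, e) =>
    {
        PropertyInfo propInfo = sender.GetType().GetProperty(e.PropertyName);

        // Ignore association properties (child entities/collections) -
        // only genuine column changes should flip the IsModified flag.
        if (propInfo != null &&
            Attribute.GetCustomAttribute(propInfo, typeof(AssociationAttribute), false) != null)
            return;

        _isModified = true;
    };
}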

IsDeleted
*** UPDATE --> I've come up with a solution to this problem, see this link for more details ***

I'm still looking for a good way to do this. Unfortunately, the one drawback with the way the entities are organised when disconnected is that there's no good place to handle this, because if you remove the entity, you remove the entity - it's gone - and there's not much use setting a flag if you can no longer find it!

One idea I have is to store the object in the parent, but I haven't got around to working this one out. It's definitely the trickiest of the lot.

So for now, there's just a simple flag indicating that the object needs to be deleted, which is not ideal at this stage, but it mostly works - apart from where you have a single child entity (not a collection of entities) and you want to delete it and replace it with a different object. Currently you have to commit to the database in between, otherwise, again, you'll lose the original object.

Cheers

Matt.

Entity Base Project Added to Codeplex

For anyone that's interested, I've added an example project of some of the things I've found to CodePlex and called it the LINQ to SQL Entity Base.

Check it out at the following Address:

http://www.codeplex.com/LINQ2SQLEB

It just demonstrates some change tracking, the entity tree enumeration feature and auto-syncing to the database.

Anyway, go check it out if you want to see how some of these things I've found out can be put to use!

Cheers

Matt.

Thursday, March 13, 2008

Disconnected LINQ To SQL Tips Part 2



How to Enumerate the Entity Tree Graph


I really, really like the way that the standard IEnumerator interface works, and in conjunction with the "yield" keyword and a little reflection, I came up with a cunning plan that would help me with my disconnected model.

Basically, I needed a way to find all objects that were change tracked in an entity tree; otherwise re-attaching entities would be very manual and require an awful lot of code. It's not so much of a problem when you have just a parent and some children, like so:

Customer1
-> Order1
-> Order2
-> Order3


Here you could just re-attach your objects, first the customer and then, in a nice little loop, attach the orders.


This will be fairly light and not a lot of code, but I thought that in a lot of circumstances the tree would be a lot more complicated like this:


Customer1
-> Order1
->->OrderDetails1
->->OrderDetails2
-> Order2
->->OrderDetails3
->->OrderDetails4
->Order3
->Order4


This starts to look a little complicated, because now you have to write a lot of code that loops through each level of the tree attaching as necessary... and you can start to imagine trees that may be 20 entities deep and all over the place. Yuk.


So, basically, with a little help from reflection and the use of the base class, I could put together a nice little function that would traverse the entire tree and return it as one flat list. This is useful for a lot of reasons - not just for finding changed objects, but you now have the ability to find an object or objects in the tree without hardcoding paths in your LINQ statements.


Of course, I've only traversed parent->child relationships and ignored foreign key ones (Child <- Parent) to avoid overflowing the stack.

This code goes in the base class for all entities and will allow you to enumerate a tree of entities. Please note the following:

  • Yes, it will work in connected mode as well, i.e. when a data context is attached.

  • You can query it with LINQ, e.g. var temp = from c in customer.GetEntityHierarchy().OfType<Order>() select c;

  • If you're wondering why I've put the enumerator in a private class instead of just exposing IEnumerator/IEnumerable on the entity itself - it's because at runtime it seems to fail; I think Microsoft have put some sort of runtime check on it, perhaps because they want to reserve IEnumerator for implementation later on.




using System;
using System.Collections;
using System.Collections.Generic;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;
using System.Text;
using System.ComponentModel;
using System.Runtime.Serialization;
using System.Reflection;
 
namespace LINQEntityBaseExample1
{
    public abstract class LINQEntityBase
    {
        // stores the property info for associations
        private Dictionary<string, PropertyInfo> _entityAssociationProperties 
                    = new Dictionary<string, PropertyInfo>(); 
        //used to hold the private class that allows entity hierarchy to be enumerated
        private EntityHierarchy _entityHierarchy; 
        
        /// <summary>
        /// Constructor!
        /// </summary>
        protected LINQEntityBase()
        {
            // Note: FindAssociations() finds association property info's 
            // using reflection (where IsForeignKey !=true)
            // Have left this function out just to keep this short.
            _entityAssociationProperties = FindAssociations();
            // pass in the current object and it's property associations
            _entityHierarchy = new EntityHierarchy(this, _entityAssociationProperties);
        }
 
        /// <summary>
        /// This method flattens the hierarchy of objects into a single list that can be queried by LINQ
        /// </summary>
        /// <returns></returns>
        public IEnumerable<LINQEntityBase> GetEntityHierarchy()
        {
            return (from t in _entityHierarchy
                    select t);
        }
 
        /// <summary>
        /// This class is used internally to implement IEnumerable, so that the hierarchy can
        /// be enumerated by LINQ queries.
        /// </summary>
        private class EntityHierarchy : IEnumerable<LINQEntityBase>
        {
            private Dictionary<string, PropertyInfo> _entityAssociationProperties;
            private LINQEntityBase _entityRoot;
 
            public EntityHierarchy(LINQEntityBase EntityRoot, Dictionary<string, PropertyInfo> EntityAssociationProperties)
            {
                _entityRoot = EntityRoot;
                _entityAssociationProperties = EntityAssociationProperties;
            }
 
            // implement the GetEnumerator Type
            public IEnumerator<LINQEntityBase> GetEnumerator()
            {
                // return the current object
                yield return _entityRoot;
 
                // return the children (using reflection)
                foreach (PropertyInfo propInfo in _entityAssociationProperties.Values)
                {
                    // Is it an EntitySet<> ?
                    if (propInfo.PropertyType.IsGenericType && propInfo.PropertyType.GetGenericTypeDefinition() == typeof(EntitySet<>))
                    {
                        // It's an EntitySet<> so lets grab the value, loop through each value and
                        // return each value as an EntityBase.
                        IEnumerator entityList = (propInfo.GetValue(_entityRoot, null) as IEnumerable).GetEnumerator();
 
                        while (entityList.MoveNext() == true)
                        {
                            if (entityList.Current.GetType().IsSubclassOf(typeof(LINQEntityBase)))
                            {
                                LINQEntityBase currentEntity = (LINQEntityBase)entityList.Current;
                                foreach (LINQEntityBase subEntity in currentEntity.GetEntityHierarchy())
                                {
                                    yield return subEntity;
                                }
                            }
                        }
                    }
                    else if (propInfo.PropertyType.IsSubclassOf(typeof(LINQEntityBase)))
                    {
                        //Ask for these children for their section of the tree.
                        foreach (LINQEntityBase subEntity in (propInfo.GetValue(_entityRoot, null) as LINQEntityBase).GetEntityHierarchy())
                        {
                            yield return subEntity;
                        }
                    }
                }
            }
 
            // implement the GetEnumerator type
            IEnumerator IEnumerable.GetEnumerator()
            {
                return this.GetEnumerator();
            }
        }
 
    }
 
}

Tuesday, March 4, 2008

Superclass your entities without using SQLMetal

Often I look through posts and find that people are repeating a lot of code in partial classes, when some of this work could be done in a parent class. The only documented way to do this seems to be the SQLMetal.exe command line tool. However, it's entirely possible to do this without SQLMetal.exe. It's just a simple matter of using Notepad to edit your existing dbml file and adding the following to the "Database" element:

EntityBase="[EntityBase]"

Where [EntityBase] can be replaced with the name of your superclass.

Next, save the file and go back to your project. Right click on the DBML file in the VS project and select "Run custom tool"... the next thing you know, all your LINQ to SQL objects will be subclasses of what you specified.

You can update this anytime without screwing anything up... and the way it does it is pretty lazy... There's no checking of any sort, so you can prefix your class with a namespace, add multiple interfaces, or just have your entities implement an interface without a base class.

E.g.

EntityBase="Sample.EntityBase"
EntityBase="Sample.EntityBase, IMyInterface1, IMyInterface2"
EntityBase="IMyInterface"

Anyway, I've already used this for a number of reasons - quite useful.

Cheers

Matt.

Monday, March 3, 2008

Disconnected LINQ to SQL Tips Part 1

Intro!
In my research on LINQ to SQL and trying to work around the limitation of no "out of the box" disconnected (n-tier) mode, I've come across a lot of things that others may find useful.

I was somewhat disappointed by this drawback of LINQ to SQL - seeing that it was only intended to be used in a "connected" scenario - but I saw it as a challenge to figure out ways in which a "disconnected" scenario could be done.

Hence, here I am, a first-time virgin blogger, who felt compelled to reduce the sweat and tears of others while dealing with this double-edged sword.

I'll be blogging how "Disconnected LINQ" can be achieved in a later post, but first up here's some tips for some of you out there that might be struggling with "Disconnected LINQ".

The 'How to do Disconnected' Rules
First up, here are the rules for allowing disconnected LINQ to SQL. To successfully disconnect a LINQ to SQL entity from a Data Context and allow it to re-connect to a different Data Context, you must do the following:

1. Enable Concurrency Tracking
You can enable concurrency tracking by adding a timestamp (rowversion) field to your database table and including this in your LINQ to SQL model.

Alternatively, if you don't care about concurrency tracking, you can set all columns so that Update Check is set to Never.

2. Disable Deferred Loading, Load Everything or Serialize the Objects
Disabling lazy loading is fairly straightforward; it's just a simple matter of setting DeferredLoadingEnabled = false, like so:



using (EntitiesDataContext db = new EntitiesDataContext())
{
    db.DeferredLoadingEnabled = false;
 
    var customers = from c in db.Customers
                    where c.CustomerId == CustomerId
                    select c;
}

This tells LINQ to SQL not to query the database when an association (another object or collection) is referenced; instead it will simply return null.

Alternatively, you can load all related objects up front so there is nothing to lazy load. This can be done by using the data context's load options, like so:


using (EntitiesDataContext db = new EntitiesDataContext())
{
    DataLoadOptions lo = new DataLoadOptions();
    lo.LoadWith<Customer>(c => c.Dependants);
    db.LoadOptions = lo;
 
    var customers = from c in db.Customers
                    where c.CustomerId == CustomerId
                    select c;
}

Basically, this is telling the data context that whenever it loads a Customer, it should also load the Dependants for that customer. If all the related objects or collections are covered in this way, LINQ to SQL won't even consider deferred loading because it believes it has all the possible connected objects.

As for serialization, this automatically disables deferred loading as above ... and how to serialize is covered next....

Serializing and copying the objects
LINQ to SQL entities cannot be serialized using standard serialization techniques; this means you can't just pop them in the view state or ASP.NET state server without running into some trouble.

In order to serialize a LINQ to SQL object graph (a root object and its child entities) you'll need to use the WCF data contract serializer instead.

But wait! Even before you do that, you'll need to set your data context's serialization mode to "Unidirectional" (which is available in the model properties). This means that only references in the object graph from parent to child will be serialized; child to parent references will be ignored. This is good because it means that circular references won't cause problems - the standard serializer can't be used for exactly this reason, because it would end up failing when it tried to serialize circular references.

Here's a couple of functions which can be used to serialize/deserialize a LINQ to SQL object graph, plus a handy copy function which will make a completely separate copy of an object graph.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.Text;
using System.IO;
using System.Xml;
 
namespace SampleFramework
{
    public static class LINQHelper
    {       
 
        /// <summary>
        /// Makes a copy of an existing LINQ to SQL entity and it's children.
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="entitySource">The LINQ to SQL entity to copy</param>
        /// <returns></returns>
        public static T CopyEntityDeep<T>(T entitySource)
        {
            if (entitySource == null)
                return default(T);
 
            return (T)DeserializeEntity(SerializeEntity(entitySource), entitySource.GetType());
        }
 
        /// <summary>
        /// Makes a copy of a list of existing LINQ to SQL entities and their children.
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="source">The LIST of SQL entities to copy
        /// </param>
        /// <returns></returns>
        public static List<T> CopyEntityListDeep<T>(List<T> entitySourceList)
        {
            List<T> result = new List<T>();
 
            if (entitySourceList == null)
                return null;
 
 
            foreach (T entitySource in entitySourceList)
            {
                T entityTarget = CopyEntityDeep(entitySource);
 
                result.Add(entityTarget);
            }
 
            return result;
 
        }
     
        public static string SerializeEntity<T>(T entitySource)
        {
            // check for null before trying to inspect the object's type
            if (entitySource == null)
                return null;

            DataContractSerializer dcs = new DataContractSerializer(entitySource.GetType());

            StringBuilder sb = new StringBuilder();
            XmlWriter xmlw = XmlWriter.Create(sb);
            dcs.WriteObject(xmlw, entitySource);
            xmlw.Close();

            return sb.ToString();
        }
 
        public static object DeserializeEntity(string entitySource, Type entityType)
        {
            object entityTarget;

            // guard against null input as well as a null type
            if (entitySource == null || entityType == null)
                return null;

            DataContractSerializer dcs = new DataContractSerializer(entityType);
 
            StringReader sr = new StringReader(entitySource);
            XmlTextReader xmltr = new XmlTextReader(sr);
            entityTarget = (object)dcs.ReadObject(xmltr);
            xmltr.Close();
 
            return entityTarget;
        }
    }
}
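
As a quick usage example (Customer is a hypothetical entity from your own model, and GetCustomer a hypothetical fetch method):

// Deep-copy a single entity, and round-trip one through a string - handy
// for stashing an object graph in session state, for example.
Customer original = GetCustomer(customerId);
Customer copy = LINQHelper.CopyEntityDeep(original);

string xml = LINQHelper.SerializeEntity(original);
Customer restored = (Customer)LINQHelper.DeserializeEntity(xml, typeof(Customer));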

Wrapping up
Next post, I hope to move further into the disconnected model that I've come up with that uses some of the above techniques to track changes.

Cheers

Matt.