Posts archived in .NET

I’d been trying to do a little reporting in RavenDB and needed to group several items.

The secret is that you can group into a new anonymous type, which I found out about on David Klein’s blog post a few years ago.

So, my Reduce statement looks something like:

Reduce = results => from result in results
                    group result by result.Category // illustrative key – use your own field(s) here
                    into g
                    select new
                    {
                        Category = g.Key,
                        Count = g.Sum(x => x.Count)
                    };

I was surprised it worked so easily once I knew the secret ingredient.
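For context, here’s a hedged sketch of what the whole index might look like – the document type, the Category key, and the result shape are illustrative stand-ins, not the actual index:

```csharp
using System.Linq;
using Raven.Client.Indexes;

// Illustrative stand-in document type.
public class Product
{
    public string Category { get; set; }
}

public class Products_CountByCategory
    : AbstractIndexCreationTask<Product, Products_CountByCategory.Result>
{
    public class Result
    {
        public string Category { get; set; }
        public int Count { get; set; }
    }

    public Products_CountByCategory()
    {
        Map = products => from p in products
                          select new { Category = p.Category, Count = 1 };

        // Group into a new anonymous type – the "secret ingredient".
        Reduce = results => from result in results
                            group result by result.Category into g
                            select new
                            {
                                Category = g.Key,
                                Count = g.Sum(x => x.Count)
                            };
    }
}
```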

So, today I discovered an issue related to making two calls, something a little like this:

– Execute dc.sp_Proc1
– If some condition exists, execute dc.sp_Proc2, and then Execute dc.sp_Proc1 again with the same parameters.
– Insert some records into the database.

The problem is that the first time you execute the sproc, LINQ to SQL caches the result. That would be fine in most cases, but in mine I’m actually after the updated result.

A quick bit of googling revealed this post by Chris Rock. His “turn off object tracking” approach works OK if you don’t need to insert records on that Data Context.

My quick, dirty, and (possibly) really wrong approach was just to spin up a new Data Context, and re-execute that sproc.
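Sketched out, the workaround looks something like this – DatabaseContext, sp_Proc1/sp_Proc2, and ProcResult are all placeholder names for the real generated context and stored procedures:

```csharp
using System.Collections.Generic;
using System.Linq;

public partial class ReportRunner
{
    // Placeholder names throughout – a sketch of the workaround, not the real code.
    public List<ProcResult> GetFreshResult(DatabaseContext dc, int someId, bool conditionExists)
    {
        var result = dc.sp_Proc1(someId).ToList();

        if (conditionExists)
        {
            dc.sp_Proc2(someId);

            // Re-executing on dc would hand back the cached rows, so spin up
            // a fresh context – a new context means a new identity map.
            using (var freshDc = new DatabaseContext())
            {
                result = freshDc.sp_Proc1(someId).ToList();
            }
        }

        return result;
    }
}
```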

I promise I’ll find a more sane way of fixing this :)

With many web 2.0 applications there’s a basic three-tier architecture. In our case the client is a Flex 3/Cairngorm application, the services are WCF/ASP.NET Web Services, and the database is SQL Server 2005.

One of the typical approaches to creating Web Services for this type of system is to use a CRUD type pattern. That is: all methods are based around Creating, Retrieving, Updating, or Deleting records. It’s usually done on a per-table basis, which means you’re effectively making the Web Services an HTTP-enabled SQL client.
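As a rough illustration (the service and method names here are invented), a per-table CRUD contract tends to look like this:

```csharp
using System.ServiceModel;

// Invented names – one contract like this per table is what makes the
// Web Services an HTTP-enabled SQL client.
public class Product { /* one property per column */ }

[ServiceContract]
public interface IProductCrudService
{
    [OperationContract] int CreateProduct(Product product);
    [OperationContract] Product RetrieveProduct(int productId);
    [OperationContract] void UpdateProduct(Product product);
    [OperationContract] void DeleteProduct(int productId);
}
```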

For our situation, this wasn’t really appropriate for a number of reasons, including complex relationships between tables, and a need to reduce the amount of network traffic.

Another concern, although relatively minor, is to reduce the amount of work needed by the Flex team to implement the Web Services. 

Ideally, we wanted to be able to share business objects as widely as possible, to reduce the amount of rework needed by everyone involved in implementing the interfaces.

Therefore we chose to go with task-based (or semantic) methods, using the objects as needed by the Flex front-end. The work of validation, and of mapping to the appropriate tables, would be done by the Web Services.
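By contrast, a task-based contract (again, the names here are invented) exposes operations in the client’s vocabulary and leaves the table mapping to the service:

```csharp
using System.ServiceModel;

// Invented names – the operation is a task, not a table verb. Validation
// and the mapping to however many tables are involved happen server-side.
public class Document { /* media items, tags, authors, ... */ }

[ServiceContract]
public interface IDocumentService
{
    [OperationContract]
    void PublishDocument(Document document);
}
```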

An example of this might be that a Document has many properties, such as Media Items (pictures, video, etc.), Tags, Authors, and so on. However, within the database there might be a need to track Document Versions, which versions are live, and the relationships between Documents, Document Versions, and Media Items.

Because the objects that I needed to send/receive didn’t match the objects that needed to be saved in the database, I needed to write a lot of “left hand/right hand code”: ServiceDocument.Property = SQLDocument.Property. Most of this was fairly simple code to write, but tracking the places where it happens can become quite a challenge as the solution grows to dozens of tables.
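The mapping code itself is trivial – it’s the sheer volume of it that hurts. A representative (made-up) fragment:

```csharp
// Made-up types standing in for a service type and its LINQ to SQL twin.
public class SqlDocument { public string Title; public string Author; }
public class ServiceDocument { public string Title; public string Author; }

public static class DocumentMapper
{
    // One assignment per property – the "left hand/right hand" code –
    // repeated in both directions, for every table.
    public static ServiceDocument ToService(SqlDocument sqlDocument)
    {
        return new ServiceDocument
        {
            Title = sqlDocument.Title,
            Author = sqlDocument.Author,
        };
    }
}
```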

This is an approximate list of what I need to do to add a property to one table:

  • Add the Property to the Service Types
  • Add conversion pieces to transpose the Service Type to/from the LINQ to SQL Object equivalents.
  • Add the column to the Table in the Database Model for LINQ to SQL
  • Add the column to all Stored Procedures in the Database Model which reference this, removing and re-adding them if this means new properties too.  Don’t forget to ensure the return types on the re-added Stored Procedures are set correctly.
  • Add the columns to the actual Stored Procedures, update parameters, etc
  • Add the column to the actual Table

I can only imagine the Version Control conflict chaos that would ensue if you had several people making these changes concurrently.

I highly recommend grouping changes on a per-table basis, because it can take a while to go through all the additional pieces referencing the LINQ to SQL and Service Type object equivalents.

This is the first in (hopefully) a series of quick things I’ve picked up whilst tackling the previously mentioned project.

So, I have a table something like this:

CREATE TABLE [dbo].[Product](
    [ProductID] [int] IDENTITY(1,1) NOT NULL,
    [Name] [nvarchar](100) NOT NULL,
    [Price] [int] NOT NULL,
    [LastSaveTimestamp] [datetime] NOT NULL CONSTRAINT [DF_Product_SaveTimestamp] DEFAULT (getutcdate())
)

The key here is the default value on the column: LastSaveTimestamp.

If I then try to, say, insert a new row into this table, for example using this code:

  DatabaseContext dc = new DatabaseContext();
  Product product = new Product();
  product.Name = "test product";
  product.Price = 50;
  // LastSaveTimestamp is never set, so it stays at DateTime.MinValue...
  dc.Products.InsertOnSubmit(product);
  dc.SubmitChanges(); // ...and the overflow exception is thrown here

Then I’d get an exception like:

System.Data.SqlTypes.SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.

The fix is actually really simple – In the table designer / DBML, you need to tell it that the column is auto-generated. Unfortunately this doesn’t seem to be automatically detected. It’s one of a few ‘just plain weird’ situations. 

AzamSharp has the fix details, with a handy-dandy screenshot over on his blog.
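For reference, setting “Auto Generated Value” to True in the designer ends up emitting an IsDbGenerated flag on the column mapping – roughly like this (the storage field name and AutoSync value will vary with your model):

```csharp
using System;
using System.Data.Linq.Mapping;

public partial class Product
{
    private DateTime _LastSaveTimestamp;

    // IsDbGenerated = true tells LINQ to SQL to leave the value to the
    // database default; AutoSync pulls the generated value back on insert.
    [Column(Storage = "_LastSaveTimestamp", AutoSync = AutoSync.OnInsert,
            DbType = "DateTime NOT NULL", IsDbGenerated = true)]
    public DateTime LastSaveTimestamp
    {
        get { return _LastSaveTimestamp; }
        set { _LastSaveTimestamp = value; }
    }
}
```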

Today I spent about half an hour banging my head against this problem:
Whenever I would try to return a business object, I’d simply get no response from my WCF service. Literally nothing.

The problem turned out to be that I had accidentally specified the DataMember Name of a property in a sub object twice.

So, I had my broken class set up like:

[DataContract(Name = "MyClass", Namespace = "Example")]
public partial class MyClass
{
    [DataMember(Name = "property1")]
    public int Property1 { get; set; }

    [DataMember(Name = "property1")] // oops – same Name twice
    public string Property2 { get; set; }
}

An instance of this class was used as a property in another object, which was being returned from WCF.

.NET didn’t throw any sort of error unless I tried to return just “MyClass”.

Sure, it was my fault, but with a complex data structure this could get awfully difficult to find without some sort of message from WCF.

Yes, this is part of that ultra nifty WCF JSON .NET 3.5 Flex project at work. :)

(A note to readers: This is all pure geek/coder content – Please skip this if nothing in the subject line makes sense)

I started on a new project at work for a client a bit over a week ago. By virtue of the requirements, we decided to investigate using the new version of Microsoft’s .NET Framework, version 3.5, for all of the server-side services.

Microsoft have been quite strongly pushing the benefits of the new features of .NET 3.5. There are a few key features which are particularly interesting, and if all goes according to the marketing hype, they should end up saving a huge amount of time and effort, whilst ensuring that we use well-known and standardised interfaces.

What features?

Windows Communication Foundation (WCF) is particularly interesting because it promises to let you (mostly) remove the whole ‘how’ and ‘where’ portion of communications between tiers, and let you focus on the ‘what’ and ‘when’.

In essence, WCF should let me state that I want to create (say) a Web Service, that accepts information in format X, and outputs responses in some other format. It doesn’t have to be a Web Service either, it could be a Peer to Peer network speaking in straight binary streams.
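In code, that separation looks something like this – the contract below (with invented names) says what the service does; the binding and endpoint in configuration decide how and where it’s exposed:

```csharp
using System.ServiceModel;

// Invented names. The same contract could be exposed as a Web Service,
// over TCP, or peer-to-peer – that choice lives in the endpoint
// configuration, not in this code.
[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}
```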

For this project the client functionality is all in Flex, so we need to ensure that the Flex guys can quickly decode all the responses and turn them into ActionScript objects. Through a bit of experimenting and applying prior experience, Web Services speaking JSON appeared to be the easiest and most lightweight way of doing this.

Language Integrated Queries (LINQ) is another particularly interesting technology, particularly because it lets me focus on what I want to do with the data I have, rather than spending time transforming it from the Database tables, rows, and procedures, into .NET objects and methods.

There are a number of implementations of LINQ, which enable you to query a variety of sources – the one I’m most likely to use is LINQ to SQL (talking to SQL Server). Regardless of what I’m accessing, however, the syntax is identical – again removing the need to modify my code if I need to query an XML file, an Oracle or MySQL database, or even native .NET objects.
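A quick illustration of that identical syntax, here against plain in-memory objects (the data is made up):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    class Product
    {
        public string Name;
        public int Price;
    }

    static void Main()
    {
        // Made-up data; the same query shape works against LINQ to SQL,
        // LINQ to XML, or any other provider.
        var products = new List<Product>
        {
            new Product { Name = "Widget", Price = 50 },
            new Product { Name = "Gadget", Price = 120 },
        };

        var cheapNames = from p in products
                         where p.Price < 100
                         orderby p.Name
                         select p.Name;

        foreach (var name in cheapNames)
            Console.WriteLine(name); // prints "Widget"
    }
}
```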

You can probably see a common theme here – WCF lets me focus on communication with the outside world without needing to write that interface or conversion functionality, and LINQ lets me access and manipulate data, without needing to write that interface either.

So, it’s all plug and play?

Well, that depends entirely on what you’re doing with your data. If you’ve got something like a CRM application where the client is responsible for managing (most of) the data, then yes it can quite possibly be almost plug and play if you’re going with a “CRUD” interface.

If your data structure is more complex, then you need to determine exactly where the split is. In this specific project, I’m presenting an abstracted view of the data that the client needs, and doing all of the business logic and data management in the SQL and Web Services layer.

So far, the whole WCF and LINQ combination looks good. I’m hoping to post some more detailed posts later on.

Further Reading

I highly recommend Scott Guthrie’s LINQ to SQL series of posts. Start with Part 1: Introduction to LINQ to SQL

These resources have also been of great help in getting my head around the whole LINQ thing: