Simple.Data 0.5

There wasn’t going to be a 0.5 release of Simple.Data, but it started picking up a head of steam and I decided to push an extra release to help some people build some adapters and providers.


No more Reactive Extensions

The main change in this release is that I took out the dependency on the Reactive Extensions. It’s a bit of a shame, but the Rx assemblies are strongly named, which means that when I build a Simple.Data release against what’s currently on NuGet and the Rx team then pushes a new build, everything breaks the next time somebody installs the package. As Seb Lambla says in his OpenWrap presentation, strong naming is anathema to package management, as well as just generally evil. I understand why they do it, but the actual implementation needs mending.

The only Rx functionality I was using was a trick for buffering data so that connections are closed as early as possible. I was doing this by pushing data from DataReaders through an IObservable and then using the Rx ToEnumerable method to cache the results. I’ve replaced this with a BufferedEnumerable type, which was interesting to write, and involved creating a Maybe<T> type to support it. .NET really, really needs a Maybe in the BCL.
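The post doesn’t show the Maybe<T> type, but the idea is straightforward: a buffering enumerator needs a way to say “there is no next item” that is distinct from yielding a null or default value. A minimal sketch of what such a type might look like (the member names here are illustrative, not Simple.Data’s actual API):

```csharp
using System;

// Minimal Maybe<T> sketch: distinguishes "no value" from "a value that
// happens to be null/default", which is exactly what a buffered enumerator
// needs in order to signal end-of-stream safely.
public struct Maybe<T>
{
    private readonly T _value;
    public bool HasValue { get; }

    private Maybe(T value) { _value = value; HasValue = true; }

    // default(Maybe<T>) has HasValue == false, so None is just the default.
    public static Maybe<T> None => default(Maybe<T>);
    public static Maybe<T> Some(T value) => new Maybe<T>(value);

    public T Value => HasValue
        ? _value
        : throw new InvalidOperationException("Maybe has no value.");
}
```

A BufferedEnumerable built on this can repeatedly call a `Func<Maybe<T>>` that reads from the DataReader, caching each `Some` it sees and stopping (and closing the connection) as soon as it gets `None`.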

NoSQL compatibility

A guy called Craig Wilson is creating a MongoDB adapter, and he ran into quite a few issues with the dynamic property name resolution. The code was using a special dictionary which “homogenized” keys as their values were set; essentially all non-alphanumeric characters were removed and what was left was down-shifted. This was fine for SQL Server, where the column names for CUD operations were resolved by interrogating the schema, but completely failed when used against a data store which has no schema. So the dictionary has been replaced with normal dictionaries using a custom IEqualityComparer implementation. While I was in there, I also optimised the Homogenize method, and created a new custom Dictionary implementation which only holds one copy of the keys for an arbitrary number of values; this saves quite a lot of memory when returning lots of rows.
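To make the homogenization concrete, here is a sketch of a comparer along those lines. It assumes the behaviour described above (strip non-alphanumerics, down-shift the rest); it isn’t Simple.Data’s actual implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of a "homogenizing" key comparer: "CustomerId", "customer_id" and
// "CUSTOMER ID" all homogenize to "customerid" and so compare equal. A plain
// Dictionary<string, object> constructed with this comparer behaves like the
// old special-purpose homogenizing dictionary, without rewriting the keys.
public class HomogenizedEqualityComparer : IEqualityComparer<string>
{
    // Remove non-alphanumeric characters and down-shift what's left.
    public static string Homogenize(string source) =>
        new string(source.Where(char.IsLetterOrDigit).ToArray()).ToLowerInvariant();

    public bool Equals(string x, string y) => Homogenize(x) == Homogenize(y);

    public int GetHashCode(string obj) => Homogenize(obj).GetHashCode();
}
```

With this, `new Dictionary<string, object>(new HomogenizedEqualityComparer())` will treat `dict["CustomerId"]` and `dict["customer_id"]` as the same entry, and because the comparer works on lookup rather than mutating keys on insert, the stored keys stay exactly as the data store supplied them, which is what a schema-less store like MongoDB needs.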

Fewer internal types

In previous releases, I followed the minimal public API approach, and marked as internal anything I could. In order to facilitate testing, I added InternalsVisibleTo attributes to expose some stuff specifically to the SqlServer and SqlCe40 test projects. However, another guy is building a MySQL provider, and he rightly pointed out that all these internals made it impossible for him to copy the tests to use as a start point for his project. So I’ve made those things public.

It’s made me ponder the nature of internal and private and protected and so forth, and I might even manage a blog post on it at some point.

Roadmap update

So that’s where things are at for 0.5. The next feature release, 0.6, will appear at some point in March, bringing support for lovely complex queries with explicit joins, cross-table column lists, aggregates and so on. And hopefully there’ll be even more adapters and providers coming from the community (CouchDB and Redis have been mentioned); I’ll release minor updates to the 0.5 branch as and when necessary to support those things.


Mad props to Paul Stack for setting up a Continuous Integration and NuGet-deploying project on his TeamCity server.


  1. James Chaldecott says:

    I once attended a talk about API design by a guy who was lead developer on some popular Java open source project [1] and who’d had to deal with versioning issues across several releases.

    His opinion was that from an API design and support point of view, you should make the absolute minimum public, and should only consider making something public if AT LEAST two “customers” asked for it, as the support burden was so high. There was then some discussion about what to do about the fact that this was in such opposition to the ability to do unit testing.

    He said that what he really wanted in a language was not an enforced “public” and “private”, but just the semantic equivalent of “published” and “unpublished”. Anything not marked as “published” was fair game to be broken when the library was updated, but you could still use it if you wanted.

    Always sounded reasonable to me. I guess you could use a custom attribute and write an FxCop rule (or MSBuild task?) that would add warnings to client code if it used any “unpublished” functionality.

    Just a thought…

    [1] I’ll be damned if I can remember either his or the project’s name. I think it was something to do with XSLT.

  2. Simple.Data is great. I am using it on a project after I heard about it on Herding Code. I am looking forward to version 0.6.

    I have a flat POCO whose data comes from two tables joined by a foreign key; not a big deal, but I have about 10 of these flat objects. A way to smash them together would be great. I tried using a view, but since a view has no primary key, it would not work. The solution I am implementing is to make two calls to the database, cast the larger object, and then fill in the last few properties by hand.

    • 0.6, which is in progress, adds support for full queries, which will let you specify column lists, explicit joins, and table and column aliasing, as well as aggregates, ordering and so on. That should satisfy your requirement. Looking to be ready in the first half of April at the moment.


