No further development on Simple.Web

I’ve let this little fact out on Twitter, so here’s a little more detail on why I’m not going to be working on Simple.Web any more.

Original motivation

Whenever people asked why I wrote Simple.Web, I would reply “because Microsoft broke Web API”. Eventually, at an MVP Summit, Scott Hanselman had me stand in front of the Web API development team and explain what it was that they had broken, which was… awkward.

I started writing Zudio when Web API was still “WCF Web API”, being developed by Glenn Block. I think I was using version 0.6. The project had gone quiet, then suddenly re-emerged as ASP.NET Web API 1.0 beta. I switched to the new framework, and everything broke. They’d changed the way IoC/DI worked, they’d added an ApiController base class, and they’d taken out the attribute-based routing, which I’d been a huge fan of.

Here’s the thing about routing. When I create an API, web or otherwise, it is rarely of the type popularised by Ruby on Rails, ASP.NET MVC and most other MVC frameworks. These tend to think of an API as simple resources with HTTP methods – two GETs, a POST, a PUT and a DELETE. That’s not an API. That’s just a basic Data Access Layer over HTTP. Sure, I’ve written code where that is a valid pattern for certain resources, but it’s not an API. An API provides high-level commands and operations, such as “activate this user” or “complete this order”.

The ASP.NET Web API 1.0 release offered the same routing as ASP.NET MVC – “{controller}/{action}/{id}” – and if you wanted anything more complicated, you had to manipulate the routing table directly.

So, I took all the things that I had liked about WCF Web API and put them into Simple.Web. I also added a feature of my own, inspired by a talk I’d seen that week about “Hypermedia As The Engine Of Application State”, where the serialization process would include collections of links so that API consumers didn’t have to hard-code URLs.

Simple.Web also removed the distinction between Views and other types of result, honoured Accept headers, promoted a “Single-Action Controller” pattern, and had a completely asynchronous pipeline.

What’s changed?

Well, for a start, Web API 2 has attribute-based routing. And it’s got a class-level RoutePrefix attribute that lets you apply routing to a controller; if you then just have a single method (say, “Get”) with an empty Route parameter, you’ve essentially got Simple.Web’s Single-Action Controller pattern.
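For illustration, here’s roughly what that looks like (the class, route and method body are made up for this example, not taken from Zudio or the official Web API samples):

```csharp
using System.Web.Http;

// One operation, one class: Simple.Web's Single-Action Controller
// pattern expressed with Web API 2 attribute routing.
[RoutePrefix("api/users/{id}/activate")]
public class ActivateUserController : ApiController
{
    [Route("")] // empty route: the RoutePrefix is the whole route
    public IHttpActionResult Post(int id)
    {
        // ...activate the user here...
        return Ok();
    }
}
```

One high-level operation per class, routed by attribute; no “{controller}/{action}/{id}” DAL-over-HTTP in sight.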

Web API also has a comprehensive set of hooks and providers and what-have-you that mean you can extend or modify any functionality you like (or don’t like). So the couple of tricks that Simple.Web did, like the link generation, are easy to add in.

And then there’s OWIN, which allows you to integrate functionality into your Web API application at the pipeline level without faffing around with System.Web HttpModules and the like.

Also, Microsoft have changed. Certainly DevDiv. There’s a bunch of awesome people working there, who have been pushing hard for open source ideals across the entire stack, and it’s working. Most of the ASP.NET stack – certainly all the bits I care about – are now developed in the open, and ASP.NET vNext is on GitHub, and accepting pull requests.

ASP.NET vNext was the real trigger, in the end. I am incredibly excited about what’s going on there. A completely new, very lightweight CLR, a modularised base class library distributed via NuGet (just like I asked for), and MVC and Web API combined into a single, cohesive framework. With, incidentally, a fully asynchronous pipeline.

(David Fowler and Damien Edwards were in my Simple.Web talk at NDC London last year. Just sayin’.)

Simple.Web scratched an itch that I had. There is fundamentally no point continuing to scratch an itch that doesn’t itch any more, especially when there’s an over-the-counter cream [metaphor terminated due to potential taste boundary violation].

Seriously, though, I’ve got limited time, and when a mainstream product does everything I need, the way I want it, then I should spend that time creating value instead of replicating other people’s efforts*.

What now, then?

Well, I’m migrating Zudio over to MVC 5.2 & Web API 2.2, which is actually much easier than I expected, and I’ll upgrade that to vNext as soon as there’s a stable release with good performance that I can confidently run on Azure Web Sites.

My AngularJS client-side code relies heavily on the hypermedia links that Simple.Web generates, so I’ve implemented that part of Simple.Web as a Web API extension called Linky, which is on GitHub now and NuGet soon. As and when I find other gaps in Web API or MVC, I will fill them with more extensions. It is my fond hope that simple extensions to the canonical frameworks might be more useful to the wider community than full-blown alternatives.

I will also be doing some blogging (on my coming-soon new blog) about Web API extensions I’ve written that are specific to my project, but where the patterns are broadly applicable.

Finally, I can get back to working on Simple.Data v2 (assuming that Entity Framework v7 isn’t going to add dynamic method-name-based operations).


* Just to be clear, this doesn’t mean that I think all alternative web frameworks are a waste of time. Those that offer a completely different approach and philosophy, such as Nancy, obviously bring much-needed innovation and value to the .NET ecosystem. It’s just that Simple.Web doesn’t any more.

Disable the Git source control add-in in VS2013, permanently

Update: 27th June 2014

The registry hack stopped working for me for some reason, possibly after I installed VS2013 Update 2. In the little movie world in my head (where I have a speaking part and an end credit instead of being a CG head in a crowd, or possibly a Class 5 droid) there’s a minor war going on between me and the Microsoft guys who wrote this oddly insistent Source Control Provider. I found a way to disable it; they found a way to re-enable it. So I’ve escalated my efforts. Screw those guys.

You can now install my first ever published Visual Studio extension, NoGit, from the Extensions dialog in Visual Studio, or download it from the Visual Studio Gallery. Every time you load a Solution, it checks to see if the current Source Control Provider is Microsoft’s Git provider, and if it is, it deactivates it.

The original text of this post has been preserved for historical context.

This has been driving me nuts. I don’t know what particular combination of things contributes to it, but the Git source control provider that’s built into Visual Studio 2013 just hangs the whole system. I change it to “None” in Tools/Options, but every now and then it comes back. I never, ever use it – CLI all the way – so its very existence was offending me to the core.

Now, I think I’ve found out how to kill it, via that old nasty-hack-atron, regedit.exe.


If you do this and it kills Visual Studio, or Windows, or your PC, or a kitten, I will not accept responsibility. I hate the VS Git plug-in enough to take this chance, but it is a chance. It seems to have worked for now, but your mileage may vary.


Note: I have Visual Studio 2013 Ultimate Edition. It is possible that some of these keys may be different for other editions.

Right, close all instances of VS2013, open up regedit and navigate to the following entry:


There should be three keys below that node (unless you’ve installed other providers, I guess, you dirty Subversion user, you). On my machine, the Git provider had this GUID:


I’m guessing that’ll be the same on yours, but click on it and check the (Default) value within it, which should be GitSccProvider. If it’s not, find the one that is.

Got it?

Good. Now delete that whole node.

Now, just for good measure, go back up to the root of the regedit tree and do a search on that GUID – I left off the braces. You should find it in a couple more places: an “AutoloadPackages” node and also in the HKEY_USERS section. Delete all of them.

Eventually, the only place you’ll find it is in a section which seems to hold the last thing you looked at in regedit, which is deliciously recursive.

Here’s a shot of my Visual Studio 2013 Ultimate Edition Source Control options after I did this:


If it comes back again, I’ll delete this post. And probably become a hipster Hack programmer using Vim on a Linux box (with dual-boot to Windows for the sole purpose of playing Titanfall).

Book Review: Dependency Injection with AngularJS


Dependency Injection with AngularJS, by Alex Knol

Disclosure: I was provided with a free review e-book copy of this book by Packt Publishing. Which was nice. I am a big fan of AngularJS, and have been using it to build Zudio for a year and a half now. I’ve also read two of Packt’s other Angular books, Mastering Web Application Development with AngularJS and AngularJS Directives, which are both excellent.

TL;DR: 3 out of 5 stars

Well-written, accurate, plenty of AngularJS for beginners information, but not as much Dependency Injection as you might expect. Not a bad e-book purchase if you’re new to AngularJS, otherwise there are better options.


This is a short book (63 pages in my PDF copy) which, according to its subtitle, aims to teach you to “Design, control, and manage your dependencies with AngularJS dependency injection.”

It starts off by introducing Angular in a fairly standard way in chapter 1. The next chapter introduces some concepts around clean code, the SOLID principles and Dependency Injection itself, then chapter 3 shows how DI works in AngularJS. Chapter 4 covers testing, and finally chapter 5 discusses management of large applications and code-bases.

The book is well-written and edited, clear and concise, and I found it easy to read and follow. The examples are mostly simple and get the point across, but this is not an in-depth or exhaustive tutorial. In the Testing chapter, in particular, the author covers a wide range of Angular testing practices, including Jasmine, Karma, and Protractor, but I would have liked to see more detail in the unit-testing pages, which is where Angular’s dependency injection facilities really shine. The examples given show the use of the inject function from Angular Mocks, and the use of a Scope and a Service stub to test a controller, but no mention is made of other Angular features designed to make testing easy, such as the mock-able $window and $timeout services provided in the core framework. The mention of Karma as a test runner is pertinent, but I question the inclusion of Protractor in the chapter since this is a top-level, end-to-end, integration testing tool, and so has no relevance to Dependency Injection.

In the end, my only real criticism of this book is its title (yes, I am judging it by its cover). It spends more time covering other aspects of AngularJS than it does on Dependency Injection, and it doesn’t go into that aspect to the level I was hoping for.

What it does do is provide a pretty good introduction to AngularJS, with reference to its Dependency Injection features, which are a major feature of the framework.

It’s also a little overpriced, at least for the print format.

If you’re considering using AngularJS, or are interested in learning about its distinguishing features as part of your evaluation of JavaScript MV* frameworks, then getting this in e-book form is not a bad place to start. If you have been using AngularJS for any length of time, do not buy this book expecting detailed insights into the framework’s Dependency Injection features.


If you can write good code when those about you,
Are breaking builds and blaming it on you;
If you can trust your tests when others doubt you,
And write more tests to cover their code too;
If you can patch and not be bored by patching,
And never prematurely optimise;
Or understand Haskell’s pattern-matching,
And not expect a coding Nobel Prize:

If you can branch – and not commit to master;
Or write terse code – and not make golf your aim;
If your app can recover from disaster
And restart so the state is just the same;
If you can bear to see your OAuth token,
Rejected by an API of fools,
Or find a legacy application, broken,
And fix it up with twenty-year-old tools:

If you can make one heap of all your objects,
And risk them on one garbage-collector pass,
And leak memory because of runtime defects,
And create a workaround within your class;
If you can force a knackered, ancient server,
To run your site although it’s obsolete;
If you can keep on learning things with fervour,
Or answer a C# question before Jon Skeet:

If you know it’s OK sometimes to goto,
Or unwind loops – to speed up just a touch;
If you don’t let your language choice denote you;
If all platforms count with you, but none too much;
If you can fill the unforgiving git clone,
With 60K of SOLID code (compiled),
Yours is the desktop, laptop, tablet, smartphone,
And – which is more – you’ll be a Dev, my child.

With apologies to Rudyard Kipling.

License: CC Attribution 3.0

(This was originally posted to the Zudio blog by accident (blame Windows Live Writer). I’m migrating that blog to a different back-end, so I’ve put this here, where it was supposed to be.)

JavaScript is not C#

Sharing this because it had me confused for half an hour.

I was just working on the new file upload dialog for Zudio 1.1 (I’m replacing Plupload with a custom-written solution which works better with Azure Blob Storage on the server side). The new code is all AngularJS directives with some Angular-UI Bootstrap, including their spiffing progress directive. Except, when I uploaded multiple files, the only progress bar that moved was the last one in the list, and it jumped all over the place. My immediate suspicion, obviously, was “there’s something wrong with Angular-UI”, but then I remembered similar issues when I first started writing LINQ code in C# 3.0 and got caught by the “modified closure trap”.

If you don’t know what that is, consider this code, which is supposed to activate a whole list of things when a button is clicked:

foreach (var thing in things)
    button.Click += (o,e) => thing.Activate();

What actually happens here is that every click calls the Activate method on the last thing in the list, once for each handler that was subscribed, and the other things’ Activate methods are never called at all. That’s because the “thing” variable gets wrapped in a single closure object which is reused each time round the loop, and it ends up pointing to whatever the last thing was.

This was such a common cause of confusion, head-scratching and keyboard abuse that in C# 5 they changed the specification to say that a separate variable should be created each time around the loop, but before that the way to avoid this problem was to do that manually, like so:

foreach (var thing in things)
{
    var thisThing = thing;
    button.Click += (o,e) => thisThing.Activate();
}

Now, when I write this kind of code, I always assign the loop variable to a separate variable, even in C# 5… and also in JavaScript:

for (var i = 0; i < things.length; i++) {
    var thing = things[i];
    button.addEventListener("click", function() {
        thing.activate();
    });
}

Except it doesn’t work in JavaScript, because in JavaScript the var keyword won’t create a new variable within the same scope; var declarations are hoisted to the top of the enclosing function scope by the runtime. That code above is semantically equivalent to:

var i, thing;
for (i = 0; i < things.length; i++) {
    thing = things[i];
    button.addEventListener("click", function() {
        thing.activate();
    });
}

In JavaScript, the way I work around this is to refactor the body of the loop into a separate function and pass the variable in; that creates a separate copy of the variable each time round the loop.

function setupActivateOnClick(thing) {
    button.addEventListener("click", function() { thing.activate(); });
}
for (var i = 0; i < things.length; i++) {
    setupActivateOnClick(things[i]);
}

If you’re the kind of person who likes nested code that’s hard to read 6 days after you wrote it, let alone 6 months, you can do this as an Immediately-Invoked Function Expression:

for (var i = 0; i < things.length; i++) {
    (function (thing) {
        button.addEventListener("click", function() { thing.activate(); });
    })(things[i]);
}

which has the same effect, but with the added excitement of knowing that I’ll kill you if you do shit like that in any of my code. It’s not “functional programming”, it’s just bloody horrible*.

* Yes, IIFEs are a perfectly valid construct in JavaScript, and pretty much all code is and should be contained in them at the top level to prevent random declarations from hitting the global context, and it’s how JavaScript does modules, and all that, but that doesn’t mean you have to use them everywhere, you damn hipster.

Fix and OWIN and Simple.Web

In readiness for my new course on building SPAs with AngularJS, TypeScript and Simple.Web, I published some updates to Simple.Web and Fix last night. The way Fix works has changed a bit, so I thought a quick post was in order.

The old Fix

I wrote Fix while the denizens of the OWIN mailing list were still hammering out the specification. (Its original name was actually CRack, as in “C# Rack” but also as in “This server is running on CRack”, but it was pointed out to me that that name might hamper adoption.) Fix was the original home of the stupefyingly complicated Func/Action signature that was proposed (by me) as a way of not taking a dependency on the Task type, so we could maintain compatibility with .NET 3.5. In case you missed them at the time, here’s a sample of the Request and Response delegates from an early version:

// (Abridged; the real signatures nested several type parameters deeper.)
using RequestHandler = System.Action<
    System.Collections.Generic.IDictionary<string, string>,
    System.Collections.Generic.KeyValuePair<string, string>
    /* ...and so on... */ >;

using ResponseHandler = System.Action<int,
    System.Collections.Generic.KeyValuePair<string, string>
    /* ...and so on... */ >;

As time passed and .NET 4.0 became more widespread, we decided that we should forget 3.5 in the name of keeping it reasonably simple, so that the OWIN delegate signature became:

Func<IDictionary<string,object>, Task>

(I know: boring, right?)
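Boring, but that one delegate is the whole contract: an application is anything that takes the environment dictionary and returns a Task. Here’s a minimal, server-free sketch (the owin.* keys are from the OWIN spec; the class and everything else are made up):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;

public class MinimalOwinApp
{
    // The whole application: one Func<IDictionary<string, object>, Task>.
    public static readonly Func<IDictionary<string, object>, Task> App = async env =>
    {
        env["owin.ResponseStatusCode"] = 200;
        var body = (Stream) env["owin.ResponseBody"];
        var bytes = Encoding.UTF8.GetBytes("Hello, OWIN");
        await body.WriteAsync(bytes, 0, bytes.Length);
    };

    static void Main()
    {
        // No server here: hand the app an environment dictionary directly.
        var env = new Dictionary<string, object> { { "owin.ResponseBody", new MemoryStream() } };
        App(env).Wait();
        var stream = (MemoryStream) env["owin.ResponseBody"];
        Console.WriteLine(Encoding.UTF8.GetString(stream.ToArray())); // Hello, OWIN
    }
}
```

A server’s only job is to build that dictionary from a real request and call the delegate.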

Based on this new standard, a small team at Microsoft began work in earnest on Katana, their implementation of OWIN “glue”; that is, the meta-code that plumbs servers, applications and middleware together. Support for that implementation is included in-the-box with Visual Studio 2013 and the latest One ASP.NET bits, and it’s one of the first Microsoft-published assemblies to have the Windows-only restriction removed from the license.

I do have some issues with Katana though. One of the things that project has embraced enthusiastically is the Owin.dll and its IAppBuilder interface, which some people think makes it easier to set things up. I don’t like that assembly dependency, and I don’t really like the IAppBuilder.Use method’s signature:

public interface IAppBuilder {
    IAppBuilder Use(object middleware, params object[] args);
    // (plus New, Build and a Properties dictionary)
}

You could literally call that with anything. The most basic valid argument is an OWIN delegate, i.e. a Func<IDictionary<string,object>, Task>. But that introduces an interesting problem as to how a component in an OWIN pipeline signifies that processing should stop or continue; as I understand it, it should return 404 to indicate that processing may continue, but I may be wrong.

My other issue is that Katana is full-on Microsoft, enterprise-ready, all-things-to-all-consumers, and complicated in ways I don’t quite understand, such as in the way it’s interacting with IIS’s integrated pipeline. That’s what Microsoft do, and I’m not exactly criticising them for it, but sometimes you don’t need a Swiss Army knife. Sometimes you just need a knife.

Because of these issues, and also because OWIN should not be reduced to a single implementation, Fix has moved to its new home on GitHub and I’m continuing to work on it, and hoping the community will pitch in (which is why it’s not under my name anymore).

The new Fix

Fix 0.4 has been simplified from the previous version, which used to use MEF to do auto-discovery of apps and middleware and stick them together any which way it wanted. That’s a bloody stupid approach, since the order in which middleware runs is hugely important and something you should have total control over. So now, Fix uses a setup class to configure the pipeline. That class should be called OwinAppSetup, and should have a public method (instance or static) that takes – you guessed it – an Action delegate. Here’s a sample:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.IO;

namespace FixExample
{
    using System.Text;
    using UseAction = Action<Func<
        IDictionary<string, object>, // OWIN Environment
        Func<Task>,                  // Next component in pipeline
        Task>>;                      // Return

    public static class OwinAppSetup
    {
        public static void Setup(UseAction use)
        {
            use(async (env, next) =>
            {
                var stream = (Stream) env["owin.ResponseBody"];
                await stream.WriteAsync("<h1>OWIN!</h1>");
                env["owin.ResponseStatusCode"] = 200;
            });
        }

        public static Task WriteAsync(this Stream stream, string text)
        {
            var bytes = Encoding.Default.GetBytes(text);
            return stream.WriteAsync(bytes, 0, bytes.Length);
        }
    }
}

If that delegate looks complicated, it shouldn’t; it’s just a slight variation on the OWIN delegate signature, wrapped in an Action. (If it still looks complicated, you can take a dependency on Fix.dll and write a method that takes a single Fixer parameter.) ((If you really like IAppBuilder, there is a Fix.AppBuilder package that will let your Setup method take an instance of IAppBuilder, although it will currently throw a NotSupportedException if you call Use with anything but Fix’s preferred delegate signature.))

Fix’s variant OWIN delegate (inspired by the Connect package for Node.js) differs in that it takes an additional parameter of type Func<Task>. Fix expects any middleware or application to call that function in order to continue processing; if it wants to terminate the request itself, it just doesn’t call it.

This introduces a lot of additional power and flexibility. OWIN components, whether middleware or applications, can do what they do to the environment dictionary, and then do one of three things:

  • Ignore the next delegate to complete processing, as in the example above;
  • Call the next delegate and return its return value directly;
  • Await the next delegate and do some more processing.

Now middleware components can very easily modify the Request parts of the environment dictionary before an application is called, but can also modify the Response parts after the application has returned; for example, to compress the output in the “owin.ResponseBody” stream.
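As a sketch of that last option, here’s a middleware component that awaits the rest of the pipeline and then stamps a response header (the class name and header key are mine; only the owin.* environment keys are standard):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;

public static class TimingMiddleware
{
    // A Fix-style component: do some work, call next() to continue the
    // pipeline, then touch the response after the application has run.
    public static async Task Invoke(IDictionary<string, object> env, Func<Task> next)
    {
        var timer = Stopwatch.StartNew();

        await next(); // continue the pipeline; skip this call to terminate instead

        var headers = (IDictionary<string, string[]>) env["owin.ResponseHeaders"];
        headers["X-Elapsed-Ms"] = new[] { timer.ElapsedMilliseconds.ToString() };
    }
}
```

The same shape works for compression: await next(), then rewrite the “owin.ResponseBody” stream.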


Presently, the OwinAppSetup class is only required if you are using the Fix.AspNet package to host your OWIN application on IIS. If you want a self-hosted application, you can use a lightweight HTTP server like Nowin or Flux, and consume the Fix.Fixer class directly to configure an application and build an OWIN delegate for the server. There’s an example in the Fix repository using the Nowin server in a console application. Here’s the Main method from that sample, using Simple.Web as the application framework:

using System;
using Fix;
using Nowin;
using Simple.Web;

namespace SimpleNowinDemo
{
    class Program
    {
        static void Main()
        {
            // Build the OWIN app
            var app = new Fixer()
                .Use(Application.Run) // Simple.Web
                .Build(); // builds the OWIN delegate (see the Fix repository sample for the exact call)

            // Set up the Nowin server
            var builder = ServerBuilder.New()
                .SetPort(1337)
                .SetOwinApp(app);

            // Run
            using (builder.Start())
            {
                Console.WriteLine("Listening on port 1337. Enter to exit.");
                Console.ReadLine();
            }
        }
    }
}
(That’s why there’s no Simple.Web.Hosting.Self package; it’s just not necessary.)

But… Simple.Web.AspNet?

Yes, there is a Simple.Web.AspNet package, but it’s basically a meta-package that adds Fix and Fix.AspNet. It also creates the OwinAppSetup class for you, and adds the Application.Run method to it. In fact, it’s currently Simple.Web.AspNet that adds the FixHttpHandler to web.config, which is wrong and I’m going to sort that out once I’ve posted this.

What’s next?

Fix 0.4 works, and I am using it in production with Simple.Web. My next job is to move the static file handling from Simple.Web into a Simple.Owin.Statics middleware, and create Simple.Owin.Cors and Simple.Owin.Xsrf.

There are also a few areas where I’m really hoping the community will engage with discussion and pull requests:

Fix’s alternative delegate

I’m going to add some kind of wrapper for the original OWIN delegate, but it would be good if middleware and framework authors considered adding support for the Fix alternative. It really does make life easier (and applications more performant). Yes, I know, variants are bad because standards, but sometimes variants can improve standards and that’s a good thing.
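I haven’t written that wrapper yet, but it might look something like this sketch, using the Katana 404-means-continue convention mentioned earlier (OwinAdapter and Wrap are hypothetical names, not part of Fix today):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

public static class OwinAdapter
{
    // Adapt a standard OWIN AppFunc to Fix's (env, next) signature.
    public static Func<IDictionary<string, object>, Func<Task>, Task> Wrap(AppFunc app)
    {
        return async (env, next) =>
        {
            await app(env);

            // The standard delegate has no explicit way to say "keep going",
            // so fall back on the 404 convention: not handled, try the next component.
            object status;
            if (env.TryGetValue("owin.ResponseStatusCode", out status) && (int) status == 404)
            {
                await next();
            }
        };
    }
}
```

Middleware written natively against the Fix signature avoids that guesswork entirely, which is rather the point.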


Fix.AppBuilder

As mentioned, this is currently just a very simple wrapper around Fixer’s Use method, but I’d like it to support all the same signatures as the Katana implementation (well, most of them).

Standardising server start-up

I’d really like to come up with some kind of standard for configuring and starting OWIN-compliant HTTP servers. I’m thinking of something similar to the OwinAppSetup class, maybe have a convention where there is a class called OwinServerSetup with a standardised Start method. Here’s how that would look with Nowin:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Net;
using System.Security.Cryptography.X509Certificates;

namespace Nowin
{
  using AppFunc = Func<IDictionary<string, object>, Task>;

  public static class OwinServerSetup
  {
    public static IDisposable Start(AppFunc app,
                  IPAddress address,
                  int? port,
                  X509Certificate certificate)
    {
      var builder = ServerBuilder.New().SetOwinApp(app);
      if (address != null) builder.SetAddress(address);
      if (port.HasValue) builder.SetPort(port.Value);
      if (certificate != null) builder.SetCertificate(certificate);

      return builder.Start();
    }
  }
}

This would enable a FixUp.exe app that you could just run from the command line, something like:

D:\Code\MyApp> fixup nowin.dll myapp.dll

If you maintain a .NET HTTP server and think this is an interesting idea, please get in touch via Twitter or something.

I’m intrigued by SteamOS

You’ve probably seen the announcement about SteamOS; if you haven’t, go Google it for yourself. I’m not your mum.

This is interesting to me because I do like PC gaming. First-person shooters are better with a mouse and keyboard for a start, but also my PC monitor is the only screen in my house that I seem to have control of these days. But also, it reminds me of a time in the dim and distant past, when programmers were programmers and phones were immobile and Doom was the most advanced game ever.

In those days, your standard autoexec.bat file, the one on your 30MB hard drive, would probably load a bunch of TSR processes and drivers for you. But if you wanted to play Doom, you didn’t want all that cruft taking up precious 66MHz CPU cycles, or chunks of your single megabyte of RAM. So you’d create a boot floppy specifically for playing games, which skipped all the unnecessary cruft.

These days, we just install everything and probably leave it running all the time. Most developers who also use their PC for gaming will have at least one database server running as a service, probably more; plus all the usual system tray cruft. Ever done that thing where you’re starting a new AAA game that’s pushing the limits of your system, and you right click everything in the system tray and shut it down? Forgetting that you’ve also got the 64-bit Developer Edition of SQL Server 2008 R2 humming away in the background?

Keeping your gaming environment entirely separate from your productivity environment means you’re getting the best performance from your hardware (as long as the historical Linux driver issues are resolved).

So the idea of having a dedicated gaming OS makes a lot of sense to me, and I’d rather it were Valve doing it with Steam than any of the other options. And if I never have to dick around with Origin servers or Games for Windows Live again, I shall die a marginally less bitter and angry man.

For game developers, this new platform should make a lot of sense, too. I have no idea what the overhead of doing Windows, Mac and Linux versions of a game is, but even if the code is mostly shared, the QA burden must scale linearly. But these three platforms they’re targeting are pretty much identical in terms of hardware: x86-64 CPU; NVidia/AMD/Intel GPU; interchangeable PC components. If SteamOS is freely available, and runs well on Mac hardware as well as ordinary computers, there’s no reason why SteamOS Linux shouldn’t be the only platform you need to target.

Even the faff of switching these days could be ameliorated with a BootCamp type of thing which sleeps the current OS and wakes the other. (Would that work?)

So yeah, bring it on. And I’ll probably buy one of those controllers, too. But not for first-person shooters.

Reality is an illusion – .NET OSS is hard

So lots of posts over the last few days about that old chestnut, OSS and .NET:

Those are all good, well-written posts, and they all make very good points, and I don’t want to debate any of them, but I had a tangential thought on the subject last night while travelling home from the Dot Net Rocks UK Tour event in London. I may be repeating things other people have said, in which case you should definitely tweet me a link to their post with a rude comment about how they said it better.


  • Ruby, Node, etc. are text-based; text-based is easier to extend.
  • .NET is Visual Studio-based; Visual Studio is harder to extend.

.NET is an unusual ecosystem

When groups of developers come together to talk about the state of OSS in .NET, the comparison has traditionally been drawn with Ruby; more recently, Node.js has also become a focus for analogy. Both those ecosystems are almost entirely open-source, with lots of healthy competition and innovation, and a genuine sense that if somebody builds a better mousetrap, people will switch to it. “Why isn’t the .NET world more like that?” people ask, and they talk about whether it’s Microsoft’s fault and what they could do to help. What I rarely hear mentioned is the humungous elephant in the room: Visual Studio.

Ruby, Node, Python and most of the other OSS darlings are predominantly CLI-based. There are excellent tools for scaffolding MVC sites, running builds and tests, managing version control or migrating databases; and the thing they all have in common is that they are command-line tools. That is what developers in that ecosystem expect (and, incidentally, is why they prefer Linux and OSX, because they have bash and decent Terminal applications built-in). Text editors are commonplace, from Vim to Sublime Text, and although excellent IDEs are available (thanks mostly to JetBrains) they are luxury accessories rather than fundamental components. I have no idea what the ratio is of Vim users to, say, WebStorm users in the Node.js world, but I wouldn’t be surprised if it came down fairly heavily on the side of Vim, or at least Sublime Text (with or without the excellent Vintageous plug-in).

In the land of .NET, Visual Studio is king. It scaffolds, it builds, it runs tests, it manages version control, and it does this really bizarre thing that it offers instead of migrating databases. You don’t need a CLI, which is lucky, because even though there’s PowerShell now, the Console window it runs in is still a 20-year-old piece of shit.

Disclaimer: I don’t really know about Java, and I don’t really want to, but I suspect it’s different because there are, like, nine different IDEs and even though only one of them doesn’t suck it’s probably still easier to write command line stuff (and also code completion in Java is really just a tiny floating window into a yawning abyss of despair).

The Visual Studio experience

So the majority of .NET developers are, first and foremost, Visual Studio users. And Visual Studio with a decent productivity add-on is the best IDE ever, by a considerable margin; yes, it is, and if you try to argue with that in the comments I won’t approve it so don’t even bother. It holds your hand and guides you through things, and it doesn’t demand that you actually remember a whole bunch of stuff, like, in your head. Scaffolding is done with wizards; builds and tests are run at a key-press with colourful graphical output; version control is integrated into the Solution explorer; the database design stuff is all drop-downs and pop-ups.

People complain that Microsoft built Entity Framework instead of embracing NHibernate, but NHibernate never had an Entity Model Designer integrated into Visual Studio (correct me if I’m wrong (Update: I was wrong; MindscapeHQ do an NHibernate designer, albeit a commercial product)). The OSS section of the stands may boo and hiss at the very idea of the Entity Model Designer, let alone the horrible, horrible code that it generated, but it made the ORM concept approachable for a vast swathe of developers who would otherwise have carried on blithely binding web forms to DataSets.

I don’t know whether it would have been more realistic for Microsoft to add tooling for NHibernate to Visual Studio than to write their own ORM from scratch, but I suspect writing their own and creating the tooling as they went along was easier.

The same can’t really be said for MSTest, which was a hazardous by-product of the NIH mentality of the time, but it did allow integration with Visual Studio and various frameworks (think “Generate Unit Tests”) in a way that blessing NUnit probably wouldn’t have, at least not without Microsoft developers contributing to that project on company time.

And so on and so on. ASP.NET MVC quietly adds a bunch of features to the IDE, such as “Strongly-typed Razor View with Layout” in the New Items dialog and the “Add Controller…” option on the Controllers folder context-menu. NuGet gets more and more integrated and powerful with each point-release.

How does this affect OSS?

It makes it difficult. It raises the bar, and demands skills and investments of time that most OSS framework or library projects, especially fledgling ones, are unlikely to be willing or able to muster.

In a text-based world where pretty much every tool is a script, it’s easy to compete, because scripts are (relatively) easy to write.

In an IDE world, especially where the IDE’s SDK is baroque and complex and poorly-documented with few good examples available, it’s hard to compete, especially when the people you’re competing with are the people who make that IDE.

Does this mean that .NET OSS is doomed to edge status, an underground movement of which the majority of surface-dwellers are blissfully unaware? Maybe. But there are things that OSS projects could do to improve their chances; things which some are starting to do, in fact.

Things your OSS project should bring to the party

If you’re an application framework, you should at the very least provide Project templates and maybe Item templates. Bonus points for injecting context-specific items into the Solution Explorer menus. If you’re a web framework that supports Razor, find a way to make the IntelliSense work properly in .cshtml files, or use a .razor extension and write your own IntelliSense provider for it. Related: if you’ve invented your own View Engine, Item templates and IntelliSense are expected.

BTW, framework projects: you still shouldn’t expect to gain a huge amount of traction as a percentage of the whole. The overwhelming majority of .NET web development in medium-to-large enterprises still uses ASP.NET Web Forms, and always will, because drag-and-drop and Infrabloodygistics.

If you’re a testing framework, you need to integrate with the built-in Unit Test explorer, as well as ReSharper and other third-party tools. If you can find more ways to provide a lovely hand-holding experience, like SpecFlow’s editor integration and test class generator, then even better.

If you’re a database access library that uses dynamic trickery to do your thing, you still need to make it work with bloody IntelliSense.

And so on.

(Also: documentation, but I’m going to be announcing something about that soon.)

Things Microsoft could do to help

Improve the Visual Studio extensibility story, both in terms of the SDK and the documentation of the SDK. Provide more, and more relevant, examples. And open source the Visual Studio components from your own frameworks for OSS authors to use as a reference. (A good example here is TypeScript; the language, compiler and so on is completely open-source, but the Visual Studio package is closed-source, and I don’t understand why.)

Kudos at this point to Mads Kristensen, a PM on the Microsoft Web Platform team and author of the Web Essentials extension, for open-sourcing that project. The source code for that is an excellent reference for anyone looking to extend Visual Studio at the editor and Solution Explorer levels.

Also, provide MSDN Ultimate subscriptions for established OSS projects, applying the same criteria and license-restrictions as companies like JetBrains and Telerik do for their OSS-supporting licenses. And when the creators of a good OSS project do provide a decent level of integration with Visual Studio, make them Visual Studio MVPs and Insiders so they can access more information about the thing they’re trying to work with.


Update: given some of the comments, I think I should clearly state that I am an MVP (though not for any of my open source stuff) and also a member of the BizSpark+ program, and thus have more MSDN Ultimate subscriptions than I can actually use. I am not advocating this for my own personal benefit, but for the benefit of other open source project maintainers who are not so well-resourced as I am.

Simple.Data 2.0

Today I started work on the next version of Simple.Data. Even though there hasn’t been an actual, official release of 1.0 yet. That’s something I hope to rectify in the next week or so, by fixing the few remaining issues and hoping the documentation is sufficient, although not exhaustive.

The initial work on 2.0 involves addressing some of the issues that have appeared as a result of a code-base growing organically from something that started out much less ambitious. These changes will clean the code up to provide a better base to build on, and hopefully to make it easier to write adapters for a wider range of back-end data platforms. I’ll also be building a RavenDB adapter, not because RavenDB’s own API needs improving upon, but because I’d like RavenDB to be one of the platforms that can be swapped in to an application built using Simple.Data.

Once that’s done, there are a few headline features for the 2.0 release:

Async support

Async’s strong point is non-blocking I/O operations, so getting it into Simple.Data is a no-brainer. I’ve got to work out the details, but you should be able to await everything.
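As a purely speculative sketch of what that might look like (the 2.0 API hadn’t been designed at this point, so every `*Async` method name here is invented for illustration; only `Database.Open()` exists in 1.0):

```csharp
// Purely speculative sketch: Simple.Data 2.0's async API had not been
// designed at the time of writing, so the *Async method names below
// are hypothetical.
public async Task<dynamic> CompleteOrderAsync(int orderId)
{
    var db = Database.Open();

    // Non-blocking I/O: the calling thread is released while the
    // database round-trip is in flight.
    var order = await db.Orders.FindByIdAsync(orderId);

    order.Status = "Complete";
    await db.Orders.UpdateAsync(order);

    return order;
}
```

The point being that the familiar dynamic method-name style would carry over unchanged; only the awaiting is new.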

Batch operations

You’ll be able to create a batch of operations and execute it with a single connect-run-disconnect process. This will also support adding arbitrary Actions and Funcs into the chain of operations to be run between database calls, using previously-returned data.
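A speculative sketch of how such a batch might read (none of these names existed yet; `CreateBatch`, `Add` and `ExecuteAsync` are all invented for illustration):

```csharp
// Purely speculative sketch: the batch API described above hadn't been
// built, so every name here is hypothetical.
var db = Database.Open();
var batch = db.CreateBatch();

batch.Add(b => b.Orders.Insert(UserId: 42, Total: 9.99m));

// An arbitrary Func can run between database calls, using the data
// returned by the previous operation in the chain.
batch.Add((b, previous) => b.AuditLog.Insert(OrderId: previous.Id));

// The whole chain runs in a single connect-run-disconnect cycle.
await batch.ExecuteAsync();
```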

Metadata exposed

You’ll be able to access all the database schema information from the underlying database, without worrying about differing INFORMATION_SCHEMA or sys.* or what-have-you. Where appropriate, you’ll also be able to connect to a server and get database information too.


There are lots of things that could be added to Simple.Data that would probably stop it being quite so Simple, so I’m going to add various hooks and extension points to allow either me or other people to add this stuff. Adapter authors will be able to add custom “methods” to the Database or Table objects, and you’ll be able to check whether those methods are available at runtime.

I might also refactor out some of the more esoteric functionality that Simple.Data currently has into plug-ins. We’ll see.

Better WithStuff

The With operations on queries will be getting much more powerful, allowing complex object graphs to be populated in a single call. I’m also looking at allowing object graphs to be saved with a single call, if that’s actually possible.

Oh, and IntelliSense

It’s not possible to glom onto the built-in IntelliSense in Visual Studio and augment it. Believe me, I tried. All you can do is completely replace it. Which, obviously, is a nightmare. But with the upcoming release of Roslyn (I am assured there will be a new preview release shortly after VS2013’s full release), I suspect replacing IntelliSense is going to be a lot easier.

It’s still a sufficiently big undertaking that I’m not going to just build it for Simple.Data. My plan is to create an open source IntelliSense replacement that is designed from line one to be extensible by anyone who wants to, and then to build a Simple.Data extension for it. I’m hoping I can make the engine for that extension modular enough to be able to create ReSharper and CodeRush completion plug-ins, too.

If you’re interested in contributing to that project, particularly on the UI/UX side of things, I’d be grateful for any help.

.NET 4.5

The other big change is that I’m pretty sure 2.0 is going to be .NET 4.5 (and up) only. With all the async/await stuff, maintaining compatibility with 4.0 will likely require too much time and effort, and will probably compromise the code and performance.


If you’re the author of an adapter or provider for Simple.Data and you don’t want to continue to maintain that package through the 2.0 release, please let me know and we can bring it into the central fold. You’ll continue to be credited as the original author for the life of the package.


So, if you’ve got any comments or anything about any of that, please leave them below or reach out to me on Twitter.

Developer PC 2013

For the last 3 years (since Mrs Rendle offered me a MacBook Pro in exchange for turning my “study” into a spare room) I’ve been doing all my development work on laptops of one sort or another. They’ve all been reasonably well-specced machines, from that 2010 MBP which I upgraded to 8GB of memory and stuck an SSD in, to the Dell XPS 14 UltraBook that killed two SSDs in the first week but has served me well since, but you just can’t spec a laptop up like a desktop. You can try, but the result is not very portable and will probably give you first-degree burns if you actually try to use it on your lap.

For a proper workstation, you still can’t beat a desktop, and since I’ve got a lot of work to do and I’ll be working at home on most of it, I decided to invest in one. Thanks to the intricacies of UK VAT regulations, I needed to spend upward of £2,000 on it, which task I set about with vim and vigour.

I didn’t want an off-the-shelf PC from Dell or HP or similar; the only component you can guarantee the provenance of in those things is the CPU, which is but a single part. In the past, I have bought components and built my own, but I didn’t really fancy that this time around, so I investigated the custom build companies in the UK and found QuietPC. Having had some very noisy self-builds in the past, the idea of a quiet PC rather appealed, and their Serenity Pro Gamer looked like an excellent starting point.

With a minimum price to aim for, I started turning up the dials until I’d passed the magic number and created my ideal PC.

The spec

Here’s what I ended up with (sparing the non-essential details):

  • Core i7 4770K 3.5GHz CPU
  • 32GB (4x8GB) Corsair XMS DDR3 memory
  • 256GB Samsung 840 Pro SSD
  • 2TB WD Caviar Green HDD
  • Nvidia GTX770 OC 2GB graphics card
  • Gigabyte GA-Z87X-UD4H motherboard
  • Zalman LQ320 liquid CPU cooler

I had to throw on a 27” monitor to push it past £2k. Oh, and the reason for switching from the AMD GPU to the Nvidia is CUDA, obviously.

Once the thing arrived and I’d got Windows 8 Pro installed and running happily, I added a 512GB Crucial m4 SSD that I salvaged from the MacBook before it was passed down to my daughter. Now I’ve got a SATA 3 SSD system+apps drive, a SATA 3 SSD working drive, and an HDD for, you know, iTunes and Dropbox and stuff.

I also overclocked the CPU, which is why I got the liquid cooler. That’s always been a bit challenging in the past, but this new motherboard has a “CPU Upgrade” dropdown in the BIOS where you can choose from 4.2GHz up to 4.8GHz, and that’s it, it handles the details for you. I thought I’d got it stable at 4.6GHz, but then I got a BSoD running a build in VS2012 so I bumped it down to 4.5GHz to be on the safe side.

The results

This really is primarily a work PC. I mean to play games, I really do, but I sit down and think “I’ll just hack on this code for five minutes, then I’ll play Far Cry 3.” Before you know it it’s 3am and you’ve got work in 5 hours.

So the key thing here is how does Visual Studio run? Well, it loads pretty quickly, even with R# 8 and various other plug-ins and add-ons, even when you open it by right-clicking and choosing a solution from the task-bar menu. Less than 10 seconds to load Simple.Web and start typing.

But the first time I built Simple.Data after a fresh clone from GitHub, I swear I thought I’d hit the wrong key. It took just over 2 seconds to build the whole solution. Using the R# test runner, it runs all 840 tests – including the 180 full SQL Server 2012 integration tests which recreate the database for each fixture – in under 7 seconds. In the past I’ve had NCrunch ignore those integration tests, but now it can run them in the background, too. 9 seconds to build and run tests. It takes 71 seconds on the Dell UltraBook.

Simple.Web builds in ~3.5 seconds in Visual Studio, ~8 seconds using the Rake build. R# runs all the tests in ~6 seconds, including 30 Razor rendering tests. That takes 59 seconds on the Dell.

I can’t wait to see what it’s like with multi-threaded Roslyn builds.

Oh, and yes, it is very quiet. I can barely hear it as I type this, although when the GPU ramps up it’s the same old noisy story.

Why am I telling you this?

Now, this is all very nice for me, and of course, you’re pleased that I’m happy, but the reason I’m posting this is to once again bang on about not skimping on developer workstations. Without the expensive GPU and the monitor, this thing comes in under £1,300 (not counting VAT, which businesses can reclaim). Given a 2-year lifespan, that’s (hopefully a lot) less than a week’s salary for a developer, and you don’t have to be a genius to see that a 600-800% performance increase in compiling code and running tests is going to result in more than a 2% increase in productivity. And sadly, I’m willing to bet that the Dell XPS 14 UltraBook, with its 3rd-gen Core i7 mobile CPU and budget SSD, is still better than what a lot of you reading this are given to work with.
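The back-of-the-envelope arithmetic behind that claim goes roughly like this (the £52,000 annual salary is my assumption, purely for illustration):

```csharp
// Rough payback calculation for the ~£1,300 workstation
// (ex-VAT, without the GPU and monitor).
// The £52,000 annual salary is an assumed figure for illustration only.
const decimal machineCost = 1300m;
const decimal annualSalary = 52000m;

decimal weeklySalary = annualSalary / 52;            // £1,000 per week
decimal weeklyMachineCost = machineCost / (2 * 52);  // £12.50 per week over 2 years

// Productivity gain needed just to break even:
decimal breakEvenGain = weeklyMachineCost / weeklySalary;  // 0.0125, i.e. 1.25%
```

On those assumptions, any productivity gain above about 1.25% pays for the machine, so even a modest 2% gain comfortably clears the bar.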

So maybe forward this post to whoever is in charge of IT procurement, as an example of some concrete numbers. Here they are again for reference:

Task                          Slow PC      Fast PC
Build & Test Simple.Data      71 seconds   9 seconds
Build & Test Simple.Web       59 seconds   9.5 seconds

P.S.  32GB? Really?

Two words: virtual machines. VMs for running Linux for testing Mono builds. VMs for running databases (and other things) that you don’t want installed on the main PC. VMs for testing web sites in old versions of Internet Explorer. Azure emulators, mobile device emulators. Yes, 32GB, really.


I have not received any consideration from QuietPC in exchange for writing this post, and I won’t profit in any way if you buy a system from them.