Disable the Git source control add-in in VS2013, permanently

Update: 27th June 2014

The registry hack stopped working for me for some reason, possibly after I installed VS2013 Update 2. In the little movie world in my head (where I have a speaking part and an end credit instead of being a CG head in a crowd, or possibly a Class 5 droid) there’s a minor war going on between me and the Microsoft guys who wrote this oddly insistent Source Control Provider. I found a way to disable it; they found a way to re-enable it. So I’ve escalated my efforts. Screw those guys.

You can now install my first ever published Visual Studio extension, NoGit, from the Extensions dialog in Visual Studio, or download it from the Visual Studio Gallery. Every time you load a Solution, it checks to see if the current Source Control Provider is Microsoft’s Git provider, and if it is, it deactivates it.

The original text of this post has been preserved for historical context.

This has been driving me nuts. I don’t know what particular combination of things contributes to it, but the Git source control provider that’s built into Visual Studio 2013 just hangs the whole system. I change it to “None” in Tools/Options, but every now and then it comes back. I never, ever use it – CLI all the way – so its very existence was offending me to the core.

Now, I think I’ve found out how to kill it, via that old nasty-hack-atron, regedit.exe.


If you do this and it kills Visual Studio, or Windows, or your PC, or a kitten, I will not accept responsibility. I hate the VS Git plug-in enough to take this chance, but it is a chance. It seems to have worked for now, but your mileage may vary.


Note: I have Visual Studio 2013 Ultimate Edition. It is possible that some of these keys may be different for other editions.

Right, close all instances of VS2013, open up regedit and navigate to the following entry:


There should be three keys below that node (unless you’ve installed other providers, I guess, you dirty Subversion user, you). On my machine, the Git provider had this GUID:


I’m guessing that’ll be the same on yours, but click on it and check the value of the (Default) key within it. Should be GitSccProvider. If it’s not, find the one that is.

Got it?

Good. Now delete that whole node.

Now, just for good measure, go back up to the root of the regedit tree and do a search on that GUID – I left off the braces. You should find it in a couple more places: an “AutoloadPackages” node and also in the HKEY_USERS section. Delete all of them.

Eventually, the only place you’ll find it is in a section which seems to hold the last thing you looked at in regedit, which is deliciously recursive.

Here’s a shot of my Visual Studio 2013 Ultimate Edition Source Control options after I did this:


If it comes back again, I’ll delete this post. And probably become a hipster Hack programmer using Vim on a Linux box (with dual-boot to Windows for the sole purpose of playing Titanfall).

Book Review: Dependency Injection with AngularJS


Dependency Injection with AngularJS, by Alex Knol

Disclosure: I was provided with a free review e-book copy of this book by Packt Publishing. Which was nice. I am a big fan of AngularJS, and have been using it to build Zudio for a year and a half now. I’ve also read two of Packt’s other Angular books, Mastering Web Application Development with AngularJS and AngularJS Directives, which are both excellent.

TL;DR: 3 out of 5 stars

Well-written, accurate, plenty of AngularJS for beginners information, but not as much Dependency Injection as you might expect. Not a bad e-book purchase if you’re new to AngularJS, otherwise there are better options.


This is a short book (63 pages in my PDF copy) which, according to its subtitle, aims to teach you to “Design, control, and manage your dependencies with AngularJS dependency injection.”

It starts off by introducing Angular in a fairly standard way in chapter 1. The next chapter introduces some concepts around clean code, the SOLID principles and Dependency Injection itself, then chapter 3 shows how DI works in AngularJS. Chapter 4 covers testing, and finally chapter 5 discusses management of large applications and code-bases.

The book is well-written and edited, clear and concise, and I found it easy to read and follow. The examples are mostly simple and get the point across, but this is not an in-depth or exhaustive tutorial. In the Testing chapter, in particular, the author covers a wide range of Angular testing practices, including Jasmine, Karma, and Protractor, but I would have liked to see more detail in the unit-testing pages, which is where Angular’s dependency injection facilities really shine. The examples given show the use of the inject function from Angular Mocks, and the use of a Scope and a Service stub to test a controller, but no mention is made of other Angular features designed to make testing easy, such as the mock-able $window and $timeout services provided in the core framework. The mention of Karma as a test runner is pertinent, but I question the inclusion of Protractor in the chapter since this is a top-level, end-to-end, integration testing tool, and so has no relevance to Dependency Injection.

In the end, my only real criticism of this book is its title (yes, I am judging it by its cover). It spends more time covering other aspects of AngularJS than it does on Dependency Injection, and it doesn’t go into that aspect to the level I was hoping for.

What it does do is provide a pretty good introduction to AngularJS, with reference to its Dependency Injection features, which are a major feature of the framework.

It’s also a little overpriced, at least for the print format.

If you’re considering using AngularJS, or are interested in learning about its distinguishing features as part of your evaluation of JavaScript MV* frameworks, then getting this in e-book form is not a bad place to start. If you have been using AngularJS for any length of time, do not buy this book expecting detailed insights into the framework’s Dependency Injection features.


If you can write good code when those about you,
Are breaking builds and blaming it on you;
If you can trust your tests when others doubt you,
And write more tests to cover their code too;
If you can patch and not be bored by patching,
And never prematurely optimise;
Or understand Haskell’s pattern-matching,
And not expect a coding Nobel Prize:

If you can branch – and not commit to master;
Or write terse code – and not make golf your aim;
If your app can recover from disaster
And restart so the state is just the same;
If you can bear to see your OAuth token,
Rejected by an API of fools,
Or find a legacy application, broken,
And fix it up with twenty-year-old tools:

If you can make one heap of all your objects,
And risk them on one garbage-collector pass,
And leak memory because of runtime defects,
And create a workaround within your class;
If you can force a knackered, ancient server,
To run your site although it’s obsolete;
If you can keep on learning things with fervour,
Or answer a C# question before Jon Skeet:

If you know it’s OK sometimes to goto,
Or unwind loops – to speed up just a touch;
If you don’t let your language choice denote you;
If all platforms count with you, but none too much;
If you can fill the unforgiving git clone,
With 60K of SOLID code (compiled),
Yours is the desktop, laptop, tablet, smartphone,
And – which is more – you’ll be a Dev, my child.

With apologies to Rudyard Kipling.

License: CC Attribution 3.0

(This was originally posted to the Zudio blog by accident (blame Windows Live Writer). I’m migrating that blog to a different back-end, so I’ve put this here, where it was supposed to be.)

JavaScript is not C#

Sharing this because it had me confused for half an hour.

I was just working on the new file upload dialog for Zudio 1.1 (I’m replacing Plupload with a custom-written solution which works better with Azure Blob Storage on the server side). The new code is all AngularJS directives with some Angular-UI Bootstrap, including their spiffing progress directive. Except, when I uploaded multiple files, the only progress bar that moved was the last one in the list, and it jumped all over the place. My immediate suspicion, obviously, was “there’s something wrong with Angular-UI”, but then I remembered similar issues when I first started writing LINQ code in C# 3.0 and got caught by the “modified closure trap”.

If you don’t know what that is, consider this code, which is supposed to activate a whole list of things when a button is clicked:

foreach (var thing in things)
    button.Click += (o,e) => thing.Activate();

What actually happens here is that every click calls the Activate method on the last thing in the list, once for each handler that was attached, and the other things’ Activate methods are never called at all. That’s because the “thing” variable gets wrapped in a single closure object which is reused each time round the loop, and by the time anything is clicked it points to whatever the last thing was.

This was such a common cause of confusion, head-scratching and keyboard abuse that in C# 5 they changed the specification to say that a separate variable should be created each time around the loop, but before that the way to avoid this problem was to do that manually, like so:

foreach (var thing in things)
{
    var thisThing = thing;
    button.Click += (o, e) => thisThing.Activate();
}

Now, when I write this kind of code, I always assign the loop variable to a separate variable, even in C# 5… and also in JavaScript:

for (var i = 0; i < things.length; i++) {
    var thing = things[i];
    button.addEventListener("click", function() {
        thing.activate();
    });
}

Except it doesn’t work in JavaScript, because in JavaScript the var keyword won’t create a new variable within the same scope; var declarations are hoisted to the top of the enclosing function scope. The code above is semantically equivalent to:

var i, thing;
for (i = 0; i < things.length; i++) {
    thing = things[i];
    button.addEventListener("click", function() {
        thing.activate();
    });
}

In JavaScript, the way I work around this is to refactor the body of the loop into a separate function and pass the variable in; that creates a separate copy of the variable each time round the loop.

function setupActivateOnClick(thing) {
    button.addEventListener("click", function() {
        thing.activate();
    });
}

for (var i = 0; i < things.length; i++) {
    setupActivateOnClick(things[i]);
}
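If you want to see the difference without wiring up any buttons, here’s a self-contained sketch of the same trap and fix using plain arrays of callbacks. (The makeHandlers* names are mine, purely for illustration.)

```javascript
// Broken: `var thing` is hoisted to the top of the function, so every
// callback closes over the same single variable.
function makeHandlersBroken(things) {
    var handlers = [];
    for (var i = 0; i < things.length; i++) {
        var thing = things[i];
        handlers.push(function () { return thing; });
    }
    return handlers;
}

// Fixed: a helper function gives each callback its own copy of `thing`.
function makeHandlersFixed(things) {
    function wrap(thing) {
        return function () { return thing; };
    }
    var handlers = [];
    for (var i = 0; i < things.length; i++) {
        handlers.push(wrap(things[i]));
    }
    return handlers;
}

var broken = makeHandlersBroken(["a", "b", "c"]);
console.log(broken.map(function (h) { return h(); })); // [ 'c', 'c', 'c' ]

var fixed = makeHandlersFixed(["a", "b", "c"]);
console.log(fixed.map(function (h) { return h(); })); // [ 'a', 'b', 'c' ]
```

Every handler from the broken version reports the last element, which is exactly the only-the-last-progress-bar-moves symptom described above.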

If you’re the kind of person who likes nested code that’s hard to read 6 days after you wrote it, let alone 6 months, you can do this as an Immediately-Invoked Function Expression:

for (var i = 0; i < things.length; i++) {
    (function (thing) {
        button.addEventListener("click", function() {
            thing.activate();
        });
    })(things[i]);
}

which has the same effect, but with the added excitement of knowing that I’ll kill you if you do shit like that in any of my code. It’s not “functional programming”, it’s just bloody horrible*.

* Yes, IIFEs are a perfectly valid construct in JavaScript, and pretty much all code is and should be contained in them at the top level to prevent random declarations from hitting the global context, and it’s how JavaScript does modules, and all that, but that doesn’t mean you have to use them everywhere, you damn hipster.

Fix and OWIN and Simple.Web

In readiness for my new course on building SPAs with AngularJS, TypeScript and Simple.Web, I published some updates to Simple.Web and Fix last night. The way Fix works has changed a bit, so I thought a quick post was in order.

The old Fix

I wrote Fix while the denizens of the OWIN mailing list were still hammering out the specification. (Its original name was actually CRack, as in “C# Rack” but also as in “This server is running on CRack”, but it was pointed out to me that that name might hamper adoption.) Fix was the original home of the stupefyingly complicated Func/Action signature that was proposed (by me) as a way of not taking a dependency on the Task type, so we could maintain compatibility with .NET 3.5. In case you missed them at the time, here’s a sample of the Request and Response delegates from an early version:

using RequestHandler = System.Action<
    System.Collections.Generic.IDictionary<string, string>,
    System.Collections.Generic.KeyValuePair<string, string>>;
using ResponseHandler = System.Action<int,
    System.Collections.Generic.KeyValuePair<string, string>>;

As time passed and .NET 4.0 became more widespread, we decided that we should forget 3.5 in the name of keeping it reasonably simple, so that the OWIN delegate signature became:

Func<IDictionary<string,object>, Task>

(I know: boring, right?)

Based on this new standard, a small team at Microsoft began work in earnest on Katana, their implementation of OWIN “glue”; that is, the meta-code that plumbs servers, applications and middleware together. Support for that implementation is included in-the-box with Visual Studio 2013 and the latest One ASP.NET bits, and it’s one of the first Microsoft-published assemblies to have the Windows-only restriction removed from the license.

I do have some issues with Katana though. One of the things that project has embraced enthusiastically is the Owin.dll and its IAppBuilder interface, which some people think makes it easier to set things up. I don’t like that assembly dependency, and I don’t really like the IAppBuilder.Use method’s signature:

public interface IAppBuilder {
    IAppBuilder Use(object middleware, params object[] args);
    // ...
}
You could literally call that with anything. The most basic valid argument is an OWIN delegate, i.e. a Func<IDictionary<string,object>, Task>. But that introduces an interesting problem as to how a component in an OWIN pipeline signifies that processing should stop or continue; as I understand it, it should return 404 to indicate that processing may continue, but I may be wrong.

My other issue is that Katana is full-on Microsoft, enterprise-ready, all-things-to-all-consumers, and complicated in ways I don’t quite understand, such as in the way it’s interacting with IIS’s integrated pipeline. That’s what Microsoft do, and I’m not exactly criticising them for it, but sometimes you don’t need a Swiss Army knife. Sometimes you just need a knife.

Because of these issues, and also because OWIN should not be reduced to a single implementation, Fix has moved to its new home on GitHub and I’m continuing to work on it, hoping the community will pitch in (which is why it’s not under my name any more).

The new Fix

Fix 0.4 has been simplified from the previous version, which used to use MEF to do auto-discovery of apps and middleware and stick them together any which way it wanted. That’s a bloody stupid approach, since the order in which middleware runs is hugely important and something you should have total control over. So now, Fix uses a setup class to configure the pipeline. That class should be called OwinAppSetup, and should have a public method (instance or static) that takes – you guessed it – an Action delegate. Here’s a sample:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.IO;

namespace FixExample
{
    using System.Text;
    using UseAction = Action<Func<
        IDictionary<string, object>, // OWIN Environment
        Func<Task>,                  // Next component in pipeline
        Task>>;                      // Return

    public static class OwinAppSetup
    {
        public static void Setup(UseAction use)
        {
            use(async (env, next) =>
            {
                var stream = (Stream) env["owin.ResponseBody"];
                await stream.WriteAsync("<h1>OWIN!</h1>");
                env["owin.ResponseStatusCode"] = 200;
            });
        }

        public static Task WriteAsync(this Stream stream, string text)
        {
            var bytes = Encoding.Default.GetBytes(text);
            return stream.WriteAsync(bytes, 0, bytes.Length);
        }
    }
}
If that delegate looks complicated, it shouldn’t; it’s just a slight variation on the OWIN delegate signature, wrapped in an Action. (If it still looks complicated, you can take a dependency on Fix.dll and write a method that takes a single Fixer parameter.) ((If you really like IAppBuilder, there is a Fix.AppBuilder package that will let your Setup method take an instance of IAppBuilder, although it will currently throw a NotSupportedException if you call Use with anything but Fix’s preferred delegate signature.))

Fix’s variant OWIN delegate (inspired by the Connect package for Node.js) differs in that it takes an additional parameter of type Func<Task>. Fix expects any middleware or application to call that function in order to continue processing; if it wants to terminate the request itself, it just doesn’t call it.

This introduces a lot of additional power and flexibility. OWIN components, whether middleware or applications, can do what they do to the environment dictionary, and then do one of three things:

  • Ignore the next delegate to complete processing, as in the example above;
  • Call the next delegate and return its return value directly; or
  • Await the next delegate and do some more processing.

Now middleware components can very easily modify the Request parts of the environment dictionary before an application is called, but can also modify the Response parts after the application has returned; for example, to compress the output in the “owin.ResponseBody” stream.
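Since Fix’s variant delegate was inspired by Connect, the mechanics are easy to model in a few lines of Connect-style JavaScript. This is purely an illustrative sketch, not Fix’s actual C# API: compose, timer and the “x.*” environment keys are my own names, invented for the example.

```javascript
// Illustrative model of a Fix-style pipeline: each component receives the
// environment dictionary and a next() function, and continues processing
// by calling next (or terminates the request by not calling it).
function compose(components) {
    return function run(env) {
        function next(i) {
            if (i >= components.length) return Promise.resolve();
            return components[i](env, function () { return next(i + 1); });
        }
        return next(0);
    };
}

// Middleware: touch the environment before the application runs,
// then again after it has returned.
function timer(env, next) {
    env["x.start"] = Date.now();
    return next().then(function () {
        env["x.elapsed"] = Date.now() - env["x.start"];
    });
}

// Application: terminate the request by NOT calling next.
function app(env, next) {
    env["owin.ResponseStatusCode"] = 200;
    return Promise.resolve();
}

var pipeline = compose([timer, app]);
```

Running `pipeline({})` resolves once the chain completes; any component that never calls next short-circuits everything registered after it, which is the whole trick.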


Presently, the OwinAppSetup class is only required if you are using the Fix.AspNet package to host your OWIN application on IIS. If you want a self-hosted application, you can use a lightweight HTTP server like Nowin or Flux, and consume the Fix.Fixer class directly to configure an application and build an OWIN delegate for the server. There’s an example in the Fix repository using the Nowin server in a console application. Here’s the Main method from that sample, using Simple.Web as the application framework:

using System;
using Fix;
using Nowin;
using Simple.Web;

namespace SimpleNowinDemo
{
    class Program
    {
        static void Main()
        {
            // Build the OWIN app
            var app = new Fixer()
                .Use(Application.Run); // Simple.Web

            // Set up the Nowin server
            var builder = ServerBuilder.New()
                .SetPort(1337)
                .SetOwinApp(app);

            // Run
            using (builder.Start())
            {
                Console.WriteLine("Listening on port 1337. Enter to exit.");
                Console.ReadLine();
            }
        }
    }
}
(That’s why there’s no Simple.Web.Hosting.Self package; it’s just not necessary.)

But… Simple.Web.AspNet?

Yes, there is a Simple.Web.AspNet package, but it’s basically a meta-package that adds Fix and Fix.AspNet. It also creates the OwinAppSetup class for you, and adds the Application.Run method to it. In fact, it’s currently Simple.Web.AspNet that adds the FixHttpHandler to web.config, which is wrong and I’m going to sort that out once I’ve posted this.

What’s next?

Fix 0.4 works, and I am using it in production with Simple.Web. My next job is to move the static file handling from Simple.Web into a Simple.Owin.Statics middleware, and create Simple.Owin.Cors and Simple.Owin.Xsrf.

There are also a few areas where I’m really hoping the community will engage with discussion and pull requests:

Fix’s alternative delegate

I’m going to add some kind of wrapper for the original OWIN delegate, but it would be good if middleware and framework authors considered adding support for the Fix alternative. It really does make life easier (and applications more performant). Yes, I know, variants are bad because standards, but sometimes variants can improve standards and that’s a good thing.


Fix.AppBuilder

As mentioned, this is currently just a very simple wrapper around Fixer’s Use method, but I’d like it to support all the same signatures as the Katana implementation (well, most of them).

Standardising server start-up

I’d really like to come up with some kind of standard for configuring and starting OWIN-compliant HTTP servers. I’m thinking of something similar to the OwinAppSetup class, maybe have a convention where there is a class called OwinServerSetup with a standardised Start method. Here’s how that would look with Nowin:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Net;
using System.Security.Cryptography.X509Certificates;

namespace Nowin
{
  using AppFunc = Func<IDictionary<string, object>, Task>;

  public static class OwinServerSetup
  {
    public static IDisposable Start(AppFunc app,
                  IPAddress address,
                  int? port,
                  X509Certificate certificate)
    {
      var builder = ServerBuilder.New().SetOwinApp(app);
      if (address != null) builder.SetAddress(address);
      if (port.HasValue) builder.SetPort(port.Value);
      if (certificate != null) builder.SetCertificate(certificate);

      return builder.Start();
    }
  }
}

This would enable a FixUp.exe app that you could just run from the command line, something like:

D:\Code\MyApp> fixup nowin.dll myapp.dll

If you maintain a .NET HTTP server and think this is an interesting idea, please get in touch via Twitter or something.

I’m intrigued by SteamOS

You’ve probably seen the announcement about SteamOS; if you haven’t, go Google it for yourself. I’m not your mum.

This is interesting to me because I do like PC gaming. First-person shooters are better with a mouse and keyboard for a start, and my PC monitor is the only screen in my house that I seem to have control of these days. It also reminds me of a time in the dim and distant past, when programmers were programmers and phones were immobile and Doom was the most advanced game ever.

In those days, your standard autoexec.bat file, the one on your 30MB hard drive, would probably load a bunch of TSR processes and run win.com for you. But if you wanted to play Doom, you didn’t want all that cruft taking up precious 66MHz CPU cycles, or chunks of your single megabyte of RAM. So you’d create a boot floppy specifically for playing games, one which skipped all of that.

These days, we just install everything and probably leave it running all the time. Most developers who also use their PC for gaming will have at least one database server running as a service, probably more; plus all the usual system tray cruft. Ever done that thing where you’re starting a new AAA game that’s pushing the limits of your system, and you right click everything in the system tray and shut it down? Forgetting that you’ve also got the 64-bit Developer Edition of SQL Server 2008 R2 humming away in the background?

Keeping your gaming environment entirely separate from your productivity environment means you’re getting the best performance from your hardware (as long as the historical Linux driver issues are resolved).

So the idea of having a dedicated gaming OS makes a lot of sense to me, and I’d rather it were Valve doing it with Steam than any of the other options. And if I never have to dick around with Origin servers or Games for Windows Live again, I shall die a marginally less bitter and angry man.

For game developers, this new platform should make a lot of sense, too. I have no idea what the overhead of doing Windows, Mac and Linux versions of a game is, but even if the code is mostly shared, the QA burden must scale linearly. But these three platforms they’re targeting are pretty much identical in terms of hardware: x86-64 CPU; NVidia/AMD/Intel GPU; interchangeable PC components. If SteamOS is freely available, and runs well on Mac hardware as well as ordinary computers, there’s no reason why SteamOS Linux shouldn’t be the only platform you need to target.

Even the faff of switching these days could be ameliorated with a BootCamp type of thing which sleeps the current OS and wakes the other. (Would that work?)

So yeah, bring it on. And I’ll probably buy one of those controllers, too. But not for first-person shooters.

Reality is an illusion – .NET OSS is hard

So lots of posts over the last few days about that old chestnut, OSS and .NET:

Those are all good, well-written posts, and they all make very good points, and I don’t want to debate any of them, but I had a tangential thought on the subject last night while travelling home from the Dot Net Rocks UK Tour event in London. I may be repeating things other people have said, in which case you should definitely tweet me a link to their post with a rude comment about how they said it better.


  • Ruby, Node, etc. are text-based; text-based is easier to extend.
  • .NET is Visual Studio-based; Visual Studio is harder to extend.

.NET is an unusual ecosystem

When groups of developers come together to talk about the state of OSS in .NET, the comparisons have traditionally been drawn with Ruby; more recently, Node.js has also become a focus for analogy. Both those ecosystems are almost entirely open-source, with lots of healthy competition and innovation, and a genuine sense that if somebody builds a better mousetrap, people will switch to it. “Why isn’t the .NET world more like that?” people ask, and they talk about whether it’s Microsoft’s fault and what they could do to help. What I rarely hear mentioned is the humungous elephant in the room: Visual Studio.

Ruby, Node, Python and most of the other OSS darlings are predominantly CLI-based. There are excellent tools for scaffolding MVC sites, running builds and tests, managing version control or migrating databases; and the thing they all have in common is that they are command-line tools. That is what developers in that ecosystem expect (and, incidentally, is why they prefer Linux and OSX, because they have bash and decent Terminal applications built-in). Text editors are commonplace, from Vim to Sublime Text, and although excellent IDEs are available (thanks mostly to JetBrains) they are luxury accessories rather than fundamental components. I have no idea what the ratio is of Vim users to, say, WebStorm users in the Node.js world, but I wouldn’t be surprised if it came down fairly heavily on the side of Vim, or at least Sublime Text (with or without the excellent Vintageous plug-in).

In the land of .NET, Visual Studio is king. It scaffolds, it builds, it runs tests, it manages version control, and it does this really bizarre thing instead of migrating databases. You don’t need a CLI, which is lucky, because even though there’s PowerShell now, the Console window it runs in is still a 20-year-old piece of shit.

Disclaimer: I don’t really know about Java, and I don’t really want to, but I suspect it’s different because there are, like, nine different IDEs and even though only one of them doesn’t suck it’s probably still easier to write command line stuff (and also code completion in Java is really just a tiny floating window into a yawning abyss of despair).

The Visual Studio experience

So the majority of .NET developers are, first and foremost, Visual Studio users. And Visual Studio with a decent productivity add-on is the best IDE ever, by a considerable margin; yes, it is, and if you try to argue with that in the comments I won’t approve it so don’t even bother. It holds your hand and guides you through things, and it doesn’t demand that you actually remember a whole bunch of stuff, like, in your head. Scaffolding is done with wizards; builds and tests are run at a key-press with colourful graphical output; version control is integrated into the Solution explorer; the database design stuff is all drop-downs and pop-ups.

People complain that Microsoft built Entity Framework instead of embracing NHibernate, but NHibernate never had an Entity Model Designer integrated into Visual Studio (correct me if I’m wrong (Update: I was wrong; MindscapeHQ do an NHibernate designer, albeit a commercial product)). The OSS section of the stands may boo and hiss at the very idea of the Entity Model Designer, let alone the horrible, horrible code that it generated, but it made the ORM concept approachable for a vast swathe of developers who would otherwise have carried on blithely binding web forms to DataSets.

I don’t know whether it would have been more realistic for Microsoft to add tooling for NHibernate to Visual Studio than to write their own ORM from scratch, but I suspect writing their own and creating the tooling as they went along was easier.

The same can’t really be said for MSTest, which was a hazardous by-product of the NIH mentality of the time, but it did allow integration with Visual Studio and various frameworks (think “Generate Unit Tests”) in a way that blessing NUnit probably wouldn’t have, at least not without Microsoft developers contributing to that project on company time.

And so on and so on. ASP.NET MVC quietly adds a bunch of features to the IDE, such as “Strongly-typed Razor View with Layout” in the New Items dialog and the “Add Controller…” option on the Controllers folder context-menu. NuGet gets more and more integrated and powerful with each point-release.

How does this affect OSS?

It makes it difficult. It raises the bar, and demands skills and investments of time that most OSS framework or library projects, especially fledgling ones, are unlikely to be willing or able to muster.

In a text-based world where pretty much every tool is a script, it’s easy to compete, because scripts are (relatively) easy to write.

In an IDE world, especially where the IDE’s SDK is baroque and complex and poorly-documented with few good examples available, it’s hard to compete, especially when the people you’re competing with are the people who make that IDE.

Does this mean that .NET OSS is doomed to edge status, an underground movement of which the majority of surface-dwellers are blissfully unaware? Maybe. But there are things that OSS projects could do to improve their chances; things which some are starting to do, in fact.

Things your OSS project should bring to the party

If you’re an application framework, you should at the very least provide Project templates and maybe Item templates. Bonus points for injecting context-specific items into the Solution Explorer menus. If you’re a web framework that supports Razor, find a way to make the IntelliSense work properly in .cshtml files, or use a .razor extension and write your own IntelliSense provider for it. Related: if you’ve invented your own View Engine, Item templates and IntelliSense are expected.

BTW, framework projects: you still shouldn’t expect to gain a huge amount of traction as a percentage of the whole. The overwhelming majority of .NET web development in medium-to-large enterprises still uses ASP.NET Web Forms, and always will, because drag-and-drop and Infrabloodygistics.

If you’re a testing framework, you need to integrate with the built-in Unit Test explorer, as well as ReSharper and other third-party tools. If you can find more ways to provide a lovely hand-holding experience, like SpecFlow’s editor integration and test class generator, then even better.

If you’re a database access library that uses dynamic trickery to do your thing, you still need to make it work with bloody IntelliSense.

And so on.

(Also: documentation, but I’m going to be announcing something about that soon.)

Things Microsoft could do to help

Improve the Visual Studio extensibility story, both in terms of the SDK and the documentation of the SDK. Provide more, and more relevant, examples. And open source the Visual Studio components from your own frameworks for OSS authors to use as a reference. (A good example here is TypeScript; the language, compiler and so on is completely open-source, but the Visual Studio package is closed-source, and I don’t understand why.)

Kudos at this point to Mads Kristensen, a PM on the Microsoft Web Platform team and author of the Web Essentials extension, for open-sourcing that project. The source code for that is an excellent reference for anyone looking to extend Visual Studio at the editor and Solution Explorer levels.

Also, provide MSDN Ultimate subscriptions for established OSS projects, applying the same criteria and license-restrictions as companies like JetBrains and Telerik do for their OSS-supporting licenses. And when the creators of a good OSS project do provide a decent level of integration with Visual Studio, make them Visual Studio MVPs and Insiders so they can access more information about the thing they’re trying to work with.


Update: given some of the comments, I think I should clearly state that I am an MVP (though not for any of my open source stuff) and also a member of the BizSpark+ program, and thus have more MSDN Ultimate subscriptions than I can actually use. I am not advocating this for my own personal benefit, but for the benefit of other open source project maintainers who are not so well-resourced as I am.

Developer PC 2013

For the last 3 years (since Mrs Rendle offered me a MacBook Pro in exchange for turning my “study” into a spare room) I’ve been doing all my development work on laptops of one sort or another. They’ve all been reasonably well-specced machines, from that 2010 MBP (which I upgraded to 8GB of memory and fitted with an SSD) to the Dell XPS 14 UltraBook that killed two SSDs in the first week but has served me well since. Still, you just can’t spec a laptop up like a desktop. You can try, but the result is not very portable and will probably give you first-degree burns if you actually try to use it on your lap.

For a proper workstation, you still can’t beat a desktop, and since I’ve got a lot of work to do and I’ll be working at home on most of it, I decided to invest in one. Thanks to the intricacies of UK VAT regulations, I needed to spend upward of £2,000 on it, which task I set about with vim and vigour.

I didn’t want an off-the-shelf PC from Dell or HP or similar; the only component you can guarantee the provenance of in those things is the CPU, which is but a single part. In the past, I have bought components and built my own, but I didn’t really fancy that this time around, so I investigated the custom build companies in the UK and found QuietPC. Having had some very noisy self-builds in the past, the idea of a quiet PC rather appealed, and their Serenity Pro Gamer looked like an excellent starting point.

With a minimum price to aim for, I started turning up the dials until I’d passed the magic number and created my ideal PC.

The spec

Here’s what I ended up with (sparing the non-essential details):

  • Core i7 4770K 3.5GHz CPU
  • 32GB (4x8GB) Corsair XMS DDR3 memory
  • 256GB Samsung 840 Pro SSD
  • 2TB WD Caviar Green HDD
  • Nvidia GTX770 OC 2GB graphics card
  • Gigabyte GA-Z87X-UD4H motherboard
  • Zalman LQ320 liquid CPU cooler

I had to throw on a 27” monitor to push it past £2k. Oh, and the reason for switching from the AMD GPU to the Nvidia is CUDA, obviously.

Once the thing arrived and I’d got Windows 8 Pro installed and running happily, I added a 512GB Crucial m4 SSD that I salvaged from the MacBook before it was passed down to my daughter. Now I’ve got a SATA 3 SSD system+apps drive, a SATA 3 SSD working drive, and an HDD for, you know, iTunes and Dropbox and stuff.

I also overclocked the CPU, which is why I got the liquid cooler. That’s always been a bit challenging in the past, but this new motherboard has a “CPU Upgrade” dropdown in the BIOS where you can choose from 4.2GHz up to 4.8GHz, and that’s it; it handles the details for you. I thought I’d got it stable at 4.6GHz, but then I got a BSoD running a build in VS2012 so I bumped it down to 4.5GHz to be on the safe side.

The results

This really is primarily a work PC. I mean to play games, I really do, but I sit down and think “I’ll just hack on this code for five minutes, then I’ll play Far Cry 3.” Before you know it, it’s 3am and you’ve got work in 5 hours.

So the key thing here is how does Visual Studio run? Well, it loads pretty quickly, even with R# 8 and various other plug-ins and add-ons, even when you open it by right-clicking and choosing a solution from the taskbar menu. Less than 10 seconds to load Simple.Web and start typing.

But the first time I built Simple.Data after a fresh clone from GitHub, I swear I thought I’d hit the wrong key. It took just over 2 seconds to build the whole solution. Using the R# test runner, it runs all 840 tests – including the 180 full SQL Server 2012 integration tests which recreate the database for each fixture – in under 7 seconds. In the past I’ve had NCrunch ignore those integration tests, but now it can run them in background, too. 9 seconds to build and run tests. It takes 71 seconds on the Dell UltraBook.

Simple.Web builds in ~3.5 seconds in Visual Studio, ~8 seconds using the Rake build. R# runs all the tests in ~6 seconds, including 30 Razor rendering tests. That takes 59 seconds on the Dell.

I can’t wait to see what it’s like with multi-threaded Roslyn builds.

Oh, and yes, it is very quiet. I can barely hear it as I type this, although when the GPU ramps up it’s the same old noisy story.

Why am I telling you this?

Now, this is all very nice for me, and of course, you’re pleased that I’m happy, but the reason I’m posting this is to once again bang on about not skimping on developer workstations. Without the expensive GPU and the monitor, this thing comes in under £1,300 (not counting VAT, which businesses can reclaim). Given a 2-year lifespan, that’s (hopefully a lot) less than a week’s salary for a developer, and you don’t have to be a genius to see that a 600-800% performance increase in compiling code and running tests is going to result in more than a 2% increase in productivity. And sadly, I’m willing to bet that the Dell XPS 14 UltraBook, with its 3rd-gen Core i7 mobile CPU and budget SSD, is still better than what a lot of you reading this are given to work with.

So maybe forward this post to whoever is in charge of IT procurement, as an example of some concrete numbers. Here they are again for reference:

Task                        Slow PC       Fast PC
Build & Test Simple.Data    71 seconds    9 seconds
Build & Test Simple.Web     59 seconds    9.5 seconds

P.S. 32GB? Really?

Two words: virtual machines. VMs for running Linux for testing Mono builds. VMs for running databases (and other things) that you don’t want installed on the main PC. VMs for testing web sites in old versions of Internet Explorer. Azure emulators, mobile device emulators. Yes, 32GB, really.


I have not received any consideration from QuietPC in exchange for writing this post, and I won’t profit in any way if you buy a system from them.

The new MSDN, and airline surveys

Microsoft have just launched a new look for MSDN. For as long as I can remember, MSDN has been the gateway to all things Microsoft and development, so it’s always interesting to see how they’ve improved it with their semi-regular updates.

Now, when you land on the MSDN home page, there is a row of buttons inviting you to “use your skills”:


I saw this, and I thought “awesome, this is more of the new Microsoft I know and increasingly love; the Microsoft that is embracing open source and other platforms with their Azure SDKs and support for alternative languages and frameworks, and is capturing some of the iOS and Android market with the frankly awesome Windows Azure Mobile Services.”

These days, I identify as a web developer, what with Simple.Web and Node.js and TypeScript and AngularJS and all the other awesome stuff that’s going on in browsers and on web servers. So I clicked that button, expecting to be welcomed into a world of Azure (yay) and ASP.NET MVC and Web API (eh, OK). Instead, I get a page telling me to use my web development skills to build apps for the Windows 8 store. I cannot adequately express my disappointment at this. OK, maybe I can…

I tweeted a couple of choice remarks on the matter, then, out of curiosity, went back to that homepage and clicked the other buttons, one after the other. Whichever button I clicked, I ended up on a slightly different page telling me I should be developing Windows 8 store apps. iOS developer? If I know Objective-C, C# will be easy. Android? If I know Java, C# will be like an upgrade. .NET or Windows apps? You could monetize those skills by building Windows 8 store apps. Designer? Look at Expression Blend for… you know the rest.

I know Microsoft are struggling to compete against iPhone, iPad and Android in its myriad forms, and I believe that a new player in that marketplace with strong vision can make a difference and an impact. I really like Windows 8, on traditional machines and on tablets. I want a Lumia 1020, I really do. But this is not the way to win the hearts and minds of developers. We’re mostly smart people, and we’re wary of marketing at any level, let alone when it’s this facile. This kind of stunt only serves to alienate us, and it reeks of desperation.

The bit about airlines

It reminded me of another experience I had recently. I flew to Chicago for the Monkey Space conference, and for reasons beyond my control I flew via Warsaw with LOT, the Polish airline. Bit of a detour, but I got to fly on the new Boeing 787 DreamLiner (which did not catch fire at any point). The plane was brilliant, with slightly more comfortable seats in economy and cool electronically-tinted windows, but otherwise the experience lacked something. There was a choice of 12 movies, the most recent of which was Argo (worth watching). The cabin crew could have been more attentive, and maybe not thrown sandwiches right at your head. So when I saw a “Survey” option on the in-flight entertainment screen on the way home (and bearing in mind that I’d watched Argo on the outgoing flight) I thought I’d fill it in. I was so amazed by this survey that I took pictures, which tell the story better than I can:



Excellent, they want to continuously improve their service. I have a couple of suggestions that might help.

Question 1:


Well, you know, it was nice and all, but…

Question 2:


Actually, I’d describe it as “spoiled by the lack of entertainment and the low quality service”.

Question 3:


Surely! But not with LOT.

And that was all the questions. Thank you for taking part in the survey, your feedback is important to us.

Obviously this is a laughable survey, and the idea that somebody somewhere thought it was a passable idea is deserving of maximum derision. It would have made me angry if it wasn’t so very funny in its own, special, utterly asinine way.

The point is, those “I am a…” buttons on the new MSDN portal are no better than this ridiculous “survey”. They’re asking you questions and they don’t give a damn about your answer. There are genuinely useful sections of the site that those buttons could lead to. iOS or Android developer? Take a look at Azure Mobile Services. Web developer? We’ve got some really rather good frameworks and a best-in-class hosting platform. .NET or Windows desktop? Visual Studio 2013 is going to rock your world. Instead, regardless of what you choose, you end up in the same “PLEASE MAKE APPS FOR OUR PLATFORM” marketing campaign, which you weren’t expecting and don’t really care about at that point.

Like I say, I know there’s a war on and Microsoft are recruiting, but there are good ways and bad ways, and taking one of the most respected technical resources on the internet and turning it into a ham-fisted “WE WANT YOU” poster is one of the worst ways.

What they should do, IMHO

I’m not a fan of posting acres of negativity with nothing constructive, so here are my thoughts on what Microsoft need to do to compete better in the phone and tablet space:

  • It should be possible to develop for Windows 8 and Windows Phone 8 with a single solution, if not a single Visual Studio project, with maximal code-sharing between the two. The advantage that iOS and Android have here is that the phones and the tablets are running the same OS, and you can build and publish apps that run on both. Microsoft have made much of the fact that every Windows OS with an 8 in its name is running on the same kernel, so why are the app frameworks and stores segregated? Also, HTML+JS development on Windows Phone, obviously.
  • Compete directly with Google’s Nexus 4 phone, and 7 and 10 tablets. The Android tablet app story has improved immeasurably since Google stepped up and provided a pair of quality tablets at competitive prices, and the new model of the Nexus 7 is getting rave reviews right now; I’ll be snapping one up as soon as Google see fit to let me buy one here in the UK. With Windows 8.1’s new support for smaller screens, I want to see a Surface 7 that matches or exceeds the specs, performance and build quality of the 2013 Nexus 7. And we should hope the rumours about a Surface phone come to something, because right now it comes down to Nokia or HTC.
  • Take a hit on Windows RT and Windows Phone 8 licenses. Apple don’t license iOS, because only Apple is allowed to make iThings, but for Google, Android is not a cash cow; their win comes from getting people on Google’s services, and the income from the Play store. For the exact same hardware specs, an Android device is always going to be cheaper than a Windows device because the OEM doesn’t have to pay for the Android license. There’s plenty of money to be made from the Windows Store and from selling Azure services to app developers.
  • Please, for the love of all the gods, sort out the consumer marketing. Ben Kuchera at Penny Arcade summed this up perfectly in this article, so go read that.

AngularJS & TypeScript at Progressive .NET 2013

If you’re coming to my half-day tutorial on building Single Page Applications with AngularJS and TypeScript and Simple.Web, there are some things you’re going to need that you might not already have. I can’t promise this list is exhaustive, but it’s a good start.

The basics:

Visual Studio 2012

You’ll need the TypeScript plug-in, and Web Essentials, which you should have already, but if you don’t, install it from the Extensions dialog and thank me later.

You’re also going to need something that can run Jasmine tests. I’ll be using ReSharper, but you can also use the Chutzpah extension. Either way, you’ll be wanting PhantomJS installed.

SQL Server

We’re going to build an application, and it’s going to need a database, so you’re going to need one installed. If you really hate SQL Server for some reason, you can use any database that’s compatible with Simple.Data (e.g. MySQL, PostgreSQL, SQLite). SQL Express is fine, as long as you’ve got Management Studio so you can create tables and stuff.


Node.js

Wait, what?

Don’t worry, we’re not going to be writing any server code in Node.js. But we are going to be building JavaScript, and that works better with a Node.js-based toolchain than with Visual Studio.

You could install Node.js using the installer from the web site, but I recommend using Chocolatey. Install that using the command line on the web site, then run

cinst nodejs.install

to install the full Node.js package. Make sure you remember the .install part, otherwise you won’t get the npm package manager, and we’re going to need that.
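
Once it’s installed, a one-line script is enough to confirm the toolchain actually works. This is just my own sanity check, not part of the tutorial materials:

```javascript
// check.js — confirm Node.js runs and report which version you've got.
// Run with: node check.js
// (Node.js version strings always start with "v", e.g. "v0.10.15".)
console.log('Node.js version:', process.version);
```

Running `npm --version` from the same prompt will confirm you got the package manager too; if that command isn’t found, you probably forgot the .install suffix above.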

I think that’s everything, but I totally reserve the right to remember something else at 9:29 on Friday morning.