Maarten Balliauw {blog}

Web development, NuGet, Microsoft Azure, PHP, ...


Writing and distributing Roslyn analyzers with MyGet

Pretty sweet: MyGet just announced VSIX support has been enabled for all MyGet customers! I wanted to work on a fun example for this new feature and came up with this: how can we use MyGet to build and distribute a Roslyn analyzer and code fix? Let’s see.

Developing a Roslyn analyzer and code fix

Roslyn analyzers and code fixes allow development teams and individuals to enforce certain rules within a code base. Using code fixes, it’s also possible to provide automated “fixes” for issues found in code. When writing code that uses DateTime, it’s often best to use DateTime.UtcNow instead of DateTime.Now. The former uses the UTC time zone, while the latter uses the local time zone of the computer the code runs on, often introducing nasty time-related bugs. Let’s write an analyzer that detects usage of DateTime.Now!
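To illustrate the difference:

// DateTime.Now depends on the time zone of the machine running the code
var localTime = DateTime.Now;    // e.g. 14:00 on a server in Brussels (UTC+2 in summer)
var utcTime = DateTime.UtcNow;   // 12:00, regardless of where the code runs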

You will need Visual Studio 2015 RC and the Visual Studio 2015 RC SDK installed. You’ll also need the SDK Templates VSIX package to get the Visual Studio project templates. Once you have those, we can create a new Analyzer with Code Fix.


A solution with 3 projects will be created: the analyzer and code fix, unit tests, and a VSIX project. Let’s start with the first: detecting DateTime.Now in code and showing a diagnostic for it. It’s actually quite easy to do: we tell Roslyn we want to analyze IdentifierName nodes and it will pass them to our code. We can then see if the identifier is “Now” and the parent node is “System.DateTime”. If that’s the case, return a diagnostic:

private void AnalyzeIdentifierName(SyntaxNodeAnalysisContext context)
{
    var identifierName = context.Node as IdentifierNameSyntax;
    if (identifierName != null)
    {
        // Find usages of "DateTime.Now"
        if (identifierName.Identifier.ValueText == "Now")
        {
            var expression = ((MemberAccessExpressionSyntax)identifierName.Parent).Expression;
            var memberSymbol = context.SemanticModel.GetSymbolInfo(expression).Symbol;

            if (!memberSymbol?.ToString().StartsWith("System.DateTime") ?? true)
            {
                return;
            }
            else
            {
                // Produce a diagnostic.
                var diagnostic = Diagnostic.Create(Rule, identifierName.Identifier.GetLocation(), identifierName);
                context.ReportDiagnostic(diagnostic);
            }
        }
    }
}
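For reference, asking Roslyn to hand IdentifierName nodes to this method happens in the analyzer’s Initialize override; a minimal version could look like this:

public override void Initialize(AnalysisContext context)
{
    // Have Roslyn invoke AnalyzeIdentifierName for every identifier it encounters
    context.RegisterSyntaxNodeAction(AnalyzeIdentifierName, SyntaxKind.IdentifierName);
}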

If we compile our solution and add the generated NuGet package to another project, DateTime.Now code will be flagged. But let’s implement the code fix as well. We want to provide a code fix for the syntax node we just detected, and when we invoke it, we want to replace the “Now” node with “UtcNow”. A bit of Roslyn syntax tree fiddling:

public sealed override async Task RegisterCodeFixesAsync(CodeFixContext context)
{
    var root = await context.Document.GetSyntaxRootAsync(context.CancellationToken).ConfigureAwait(false);

    var diagnostic = context.Diagnostics.First();
    var diagnosticSpan = diagnostic.Location.SourceSpan;

    // Find "Now"
    var identifierNode = root.FindNode(diagnosticSpan);

    // Register a code action that will invoke the fix.
    context.RegisterCodeFix(
        CodeAction.Create("Replace with DateTime.UtcNow",
            c => ReplaceWithDateTimeUtcNow(context.Document, identifierNode, c)),
        diagnostic);
}

private async Task<Document> ReplaceWithDateTimeUtcNow(Document document, SyntaxNode identifierNode, CancellationToken cancellationToken)
{
    var root = await document.GetSyntaxRootAsync(cancellationToken);
    var newRoot = root.ReplaceNode(identifierNode, SyntaxFactory.IdentifierName("UtcNow"));
    return document.WithSyntaxRoot(newRoot);
}

That’s it. We now have an analyzer and a code fix. If we try it (again, by adding the generated NuGet package to another project), we can see both in action:


Now let’s distribute it to our team!

Distributing a Roslyn analyzer and code fix using MyGet

Roslyn analyzers can be distributed in two formats: as NuGet packages, so they can be enabled for individual projects, and as a Visual Studio extension so that all projects we work with have the analyzer and code fix enabled. You can build on a developer machine, a CI server or using MyGet Build Services. Let’s pick the latter as it’s the easiest way to achieve our goal: compile and distribute.

Create a new feed on MyGet. Next, from the Build Services tab, we can add a GitHub repository as the source. We’ve open-sourced our example on GitHub, so feel free to add it to your feed as a test. Once added, you can start a build. Just like that. MyGet will figure out it’s a Roslyn analyzer and build both the NuGet package as well as the Visual Studio extension.


Sweet! You can now add the Roslyn analyzer and code fix per project by installing the NuGet package from the feed. And after registering the feed in Visual Studio (by opening the Tools | Options... menu and the Environment | Extensions and Updates pane), you can also install the full extension.


Domain Routing and resolving current tenant with ASP.NET MVC 6 / ASP.NET 5

So you’re building a multi-tenant application. And just like many multi-tenant applications out there, it will use a single (sub)domain per tenant and use that to select the correct database connection, render the correct stylesheet and so on. Great! But how do we do this with ASP.NET MVC 6?

A few years back, I wrote about ASP.NET MVC Domain Routing. It seems that post was more popular than I thought, as people have been asking me how to do this with the new ASP.NET MVC 6. In this blog post, I’ll do exactly that, as well as provide an alternative way of resolving the current tenant based on the current request URL.

Disclaimer: ASP.NET MVC 6 is still evolving, and chances are this blog post will be outdated by the time you read it. I’ve used the following dependencies to develop this against:

You’re on your own if you are using other dependencies.

Domain routing – what do we want to do?

The premise for domain routing is simple. Ideally, we want to be able to register a route like this:
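Something along these lines, with MapDomainRoute being a helper extension method we’ll provide ourselves and the domain just a placeholder:

app.UseMvc(routes =>
{
    routes.MapDomainRoute(
        domain: "{tenant}.example.com",
        template: "{controller}/{action}/{id?}",
        defaults: new { controller = "Home", action = "Index" });
});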

The route would match any request whose hostname fits the domain template, where the {tenant} placeholder is recognized as the current tenant and provided to controllers as a route value. And of course we can define the path route template as well, so we can recognize which controller and action to route into.

Domain routing – let’s do it!

Just like in my old post on ASP.NET MVC Domain Routing, we will be using the existing magic of ASP.NET routing and extend it a bit with what we need. In ASP.NET MVC 6, this means we’ll be creating a new IRouter implementation that encapsulates a TemplateRoute. Let’s call ours DomainTemplateRoute.

The DomainTemplateRoute has a similar constructor to MVC’s TemplateRoute, with one exception: we also want a domainTemplate parameter in which we can define the template that the host name should match. We will create a new TemplateRoute that’s held in a private field, so we can easily match the request path against that if the domain matches. This means we only need some logic to match the incoming domain, something we need a TemplateMatcher for. This guy will parse {tenant} into a dictionary that contains the actual value of the tenant placeholder. Not ideal, as the TemplateMatcher usually does its thing on a path, but since it treats a dot (.) as a separator we should be good there.

Having that infrastructure in place, we will need to build out the Task RouteAsync(RouteContext context) method that handles the routing. Simplified, it would look like this:
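A minimal sketch of the idea, against the beta-era routing APIs (MatchDomainTemplate is a hypothetical helper built around the TemplateMatcher described above, and _innerRoute is the wrapped TemplateRoute):

public async Task RouteAsync(RouteContext context)
{
    // First match the incoming host name against the domain template
    var hostName = context.HttpContext.Request.Host.Value;
    var domainValues = MatchDomainTemplate(hostName);
    if (domainValues == null)
    {
        // The domain does not match, so this route does not apply
        return;
    }

    // Expose the values captured from the domain template (e.g. {tenant}) as route values
    foreach (var domainValue in domainValues)
    {
        context.RouteData.Values[domainValue.Key] = domainValue.Value;
    }

    // Delegate path matching, constraints processing, ... to the inner TemplateRoute
    await _innerRoute.RouteAsync(context);
}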

We match the hostname against our domain template. If it does not match, then the route does not match. If it does match, we call the inner TemplateRoute’s RouteAsync method and let that one handle the path template matching, constraints processing and so on. Lazy, but convenient!

We’re not there yet. We also want to be able to build URLs using the various HtmlHelpers that are around. If we pass them route data that is only needed for the domain part of the route, we want to strip it off the virtual path context so we don’t end up with URLs like /Home/About?tenant=tenant1 but instead with a normal /Home/About.
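A sketch of that idea, again against the beta-era IRouter signature (GetDomainParameterNames is a hypothetical helper that returns the parameter names used in the domain template):

public string GetVirtualPath(VirtualPathContext context)
{
    // Values like "tenant" are only needed to match the domain template;
    // remove them so they don't end up as query string parameters in generated URLs
    foreach (var parameterName in GetDomainParameterNames())
    {
        context.Values.Remove(parameterName);
    }

    // Let the inner TemplateRoute build the path part of the URL
    return _innerRoute.GetVirtualPath(context);
}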

Fitting it all together, the full DomainTemplateRoute class, as well as the helpers for registering these routes, can be found in the source accompanying this post.

But there’s another approach!

One might say the only reason we would want domain routing is to know the current tenant in a multi-tenant application. This will often be the case, and there’s probably a more convenient way of doing it: building a middleware. Ideally, in our application startup we want to add app.UseTenantResolver(); and that should ensure we always know the desired tenant for the current request. Let’s do this!

OWIN taught us that we can simply create our own request pipeline and decide which steps the current request is routed through. So if we create such a step, a middleware, that sets the current tenant on the current request context, we’re good. And that’s exactly what this middleware does:
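A minimal sketch of such a middleware and its registration helper (the actual sample may differ; Tenant, TenantFeature and ITenantFeature are simple types defined in the sample, and SetFeature is the beta-era feature API):

public class TenantResolverMiddleware
{
    private readonly RequestDelegate _next;

    public TenantResolverMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // Derive the tenant from the host name, e.g. "tenant1.example.com" becomes "tenant1"
        var hostName = context.Request.Host.Value;
        var tenantName = hostName.Split('.')[0];

        // Make the tenant available to the rest of the pipeline as a feature
        context.SetFeature<ITenantFeature>(new TenantFeature(new Tenant(tenantName)));

        await _next(context);
    }
}

public static class TenantResolverExtensions
{
    // Enables app.UseTenantResolver(); in Startup
    public static IApplicationBuilder UseTenantResolver(this IApplicationBuilder app)
    {
        return app.UseMiddleware<TenantResolverMiddleware>();
    }
}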

We check the current request and, based on the request or one of its properties, create a Tenant and a TenantFeature instance and set it as a feature on the current HttpContext. In our controllers we can now get the tenant by using that feature: Context.GetFeature&lt;ITenantFeature&gt;().
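In a controller, resolving the tenant then becomes a one-liner (a sketch against the beta-era APIs, assuming ITenantFeature exposes a Tenant property):

public class HomeController : Controller
{
    public IActionResult Index()
    {
        // The middleware ran earlier in the pipeline, so the feature is available here
        var tenant = Context.GetFeature<ITenantFeature>().Tenant;
        return View(tenant);
    }
}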

There you go: two ways of detecting tenants based on the incoming URL. The full source for both solutions is available online (requires some assembling).

Building future .NET projects is quite pleasant

You may remember my ranty post from a couple of months back. If not, read about how building .NET projects is a world of pain and how we should solve it. With Project K (a.k.a. ASP.NET vNext, now ASP.NET 5) around the corner, I thought I had to look into it again and see if things will actually get better… So here goes!

Setting up a build agent is no longer a world of pain

There, the title says it all. For all .NET development we currently do, that world of pain will still be there. No way around it: you will want to commit random murders if you want to do builds targeting .NET 2.0 through .NET 4.5. A billion SDKs, all packaged in MSIs that come with weird silent installs so you cannot really script their setup, will still be there. The reason is that the dependencies we have are all informal: we build against some SDK and assume it will be there. Our application does not define what it needs, so we have to provide the whole world on our build machines…

But if we forget all that and focus just on ASP.NET 5 and the new runtime, this new world is bliss. What do we need on the build agent? A few things, still.

  • An operating system (Windows, Linux and even Mac OS X will do the job)
  • PowerShell, or any other shell like Bash
  • Some form of .NET installed, for example Mono

Sounds pretty standard out-of-the-box to me. So that’s all good! What else do we need installed permanently on the machine? Nothing! That’s right: NOTHING! Builds for ASP.NET 5 are self-contained and will make sure they can run anytime, anywhere. Every project specifies its dependencies, that will all be downloaded when needed so they are available to the build. Let’s see how builds now work…

How ASP.NET 5 projects work…

As an example for this post, I will use the Entity Framework repository on GitHub, which is built against ASP.NET 5. When building a project in Visual Studio 2015, there will be .sln files that represent the solution, as well as new .kproj files that represent our project. For Visual Studio. That’s right: you can ignore these files; they are just there so Visual Studio can figure out how it all fits together. “But that .kproj file is like a project file, it’s MSBuild-like and I can add custom tasks in there!” – Crack! That was the sound of a whip on your fingers. Yes, you can, and the new project system actually adds some things in there to make building the project in Visual Studio work, but stay away from the .kproj files. Don’t touch them.

The real project files are these: global.json and project.json. The first one, global.json, may look like this:

{ "sources": [ "src" ] }

It defines the structure of our project, where we say that source code is in the folder named src. Multiple folders could be there, for example src and test so we can distinguish where which type of project is stored. For every project we want to make, we can create a folder under the sources folder and in there, add a project.json file. It could look like this:

{ "version": "7.0.0-*", "description": "Entity Framework is Microsoft's recommended data access technology for new applications.", "compilationOptions": { "warningsAsErrors": true }, "dependencies": { "Ix-Async": "1.2.3-beta", "Microsoft.Framework.Logging": "1.0.0-*", "Microsoft.Framework.OptionsModel": "1.0.0-*", "Remotion.Linq": "1.15.15", "System.Collections.Immutable": "1.1.32-beta" }, "code": [ "**\\*.cs", "..\\Shared\\*.cs" ], "frameworks": { "net45": { "frameworkAssemblies": { "System.Collections": { "version": "", "type": "build" }, "System.Diagnostics.Debug": { "version": "", "type": "build" }, "System.Diagnostics.Tools": { "version": "", "type": "build" }, "System.Globalization": { "version": "", "type": "build" }, "System.Linq": { "version": "", "type": "build" }, "System.Linq.Expressions": { "version": "", "type": "build" }, "System.Linq.Queryable": { "version": "", "type": "build" }, "System.ObjectModel": { "version": "", "type": "build" }, "System.Reflection": { "version": "", "type": "build" }, "System.Reflection.Extensions": { "version": "", "type": "build" }, "System.Resources.ResourceManager": { "version": "", "type": "build" }, "System.Runtime": { "version": "", "type": "build" }, "System.Runtime.Extensions": { "version": "", "type": "build" }, "System.Runtime.InteropServices": { "version": "", "type": "build" }, "System.Threading": { "version": "", "type": "build" } } }, "aspnet50": { "frameworkAssemblies": { "System.Collections": "", "System.Diagnostics.Debug": "", "System.Diagnostics.Tools": "", "System.Globalization": "", "System.Linq": "", "System.Linq.Expressions": "", "System.Linq.Queryable": "", "System.ObjectModel": "", "System.Reflection": "", "System.Reflection.Extensions": "", "System.Resources.ResourceManager": "", "System.Runtime": "", "System.Runtime.Extensions": "", "System.Runtime.InteropServices": "", "System.Threading": "" } }, "aspnetcore50": { "dependencies": { "System.Diagnostics.Contracts": "4.0.0-beta-*", "System.Linq.Queryable": "4.0.0-beta-*", "System.ObjectModel": "4.0.10-beta-*", "System.Reflection.Extensions": "4.0.0-beta-*" } } } }

Whoa! My eyes! Well, it’s not so bad. A couple of things are in here:

  • The version of our project (yes, we have to version properly, woohoo!)
  • A description (as I have been preaching a long time: every project is now a package!)
  • Where is our source code stored? In this case, all .cs files in all folders and some in a shared folder one level up.
  • Dependencies of our project. These are identifiers of other packages that will either be searched for on NuGet or on the filesystem. Since every project is a package, there is no difference between a project and a NuGet package. During development, you can depend on a project. When released, you can depend on a package. Convenient!
  • The frameworks supported and the framework components we require.

That’s the project system. These are not all supported elements; there are more. But generally speaking: our project now defines what it needs. One thing I like is the option to run scripts at various stages of the project’s lifecycle and build lifecycle, such as restoring npm or bower packages. A slight thorn in my eye there is that the examples out there all assume npm and bower are on the build machine. Yes, that’s a hidden dependency right there…

The good things?

  • Everything is a package
  • Everything specifies its dependencies explicitly (well, almost everything)
  • It’s human readable and machine readable

So let’s see what we would have to do if we want to automate a build of, say, the Entity Framework repository on GitHub.

Automated building of ASP.NET 5 projects

This is going to be so disappointing when you read it: to build Entity Framework, you run build.cmd (or the equivalent shell script on non-Windows operating systems). That’s it. It will compile everything into assemblies and NuGet packages, run tests, and that’s it. But what does this build.cmd do, exactly? Let’s dissect it! Here’s the source code that’s in there at the time of writing this blog post:

@echo off
cd %~dp0

SETLOCAL
SET CACHED_NUGET=%LocalAppData%\NuGet\NuGet.exe

IF EXIST %CACHED_NUGET% goto copynuget
echo Downloading latest version of NuGet.exe...
IF NOT EXIST %LocalAppData%\NuGet md %LocalAppData%\NuGet
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "$ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest '' -OutFile '%CACHED_NUGET%'"

:copynuget
IF EXIST .nuget\nuget.exe goto restore
md .nuget
copy %CACHED_NUGET% .nuget\nuget.exe > nul

:restore
IF EXIST packages\KoreBuild goto run
.nuget\NuGet.exe install KoreBuild -ExcludeVersion -o packages -nocache -pre
.nuget\NuGet.exe install Sake -version 0.2 -o packages -ExcludeVersion

IF "%SKIP_KRE_INSTALL%"=="1" goto run
CALL packages\KoreBuild\build\kvm upgrade -runtime CLR -x86
CALL packages\KoreBuild\build\kvm install default -runtime CoreCLR -x86

:run
CALL packages\KoreBuild\build\kvm use default -runtime CLR -x86
packages\Sake\tools\Sake.exe -I packages\KoreBuild\build -f makefile.shade %*

Did I ever mention my dream was to have fully self-contained builds? This is one. Here’s what happens:

  • A NuGet.exe is required; if one is found it is reused, if not, it’s downloaded on the fly.
  • Using NuGet, 2 packages are installed (currently from the alpha feed the ASP.NET team has on MyGet, but I assume these will end up on the official NuGet gallery someday)
    • KoreBuild
    • Sake
  • The KoreBuild package contains a few things (go on, use NuGet Package Explorer and see, I’ll wait)
    • A kvm.ps1, which is the bootstrapper for the ASP.NET 5 runtime that installs a specific runtime version and kpm, the package manager.
    • A bunch of .shade files
  • Using that kvm.ps1, the latest CoreCLR runtime is installed and activated
  • Sake.exe is run from the Sake package

Disappointment, I can feel it! This file does bootstrap having the CoreCLR runtime on the build machine, but how is the actual build performed? The answer lies in the .shade files from that KoreBuild package. A lot of information is there, but distilling it all, here’s how a build is done using Sake:

  • All bin folders underneath the current directory are removed. Consider this the old-fashioned “clean” target in msbuild.
  • The kpm restore command is run from the folder where the global.json file is. This will ensure that all dependencies for all project files are downloaded and made available on the machine the build is running on.
  • In every folder containing a project.json file, the kpm build command is run, which compiles it all and generates a NuGet package for every project.
  • In every folder containing a project.json file where a command element is found that is named test, the k test command is run to execute unit tests

This is a simplified version, as it also cleans and restores npm and bower, but you get the idea. A build is pretty easy now. KoreBuild and Sake do this, but we could also just run all steps in the same order to achieve a fully working build. So that’s what I did…

Automated building of ASP.NET 5 projects with TeamCity

To see if it all was true, I decided to try and automate things using TeamCity. Entity Framework would be too easy as that’s just calling build.cmd. Which is awesome!

I crafted a little project on GitHub that has a website, a library project and a test project. The goal I set out was automating a build of all this using TeamCity, and then making sure tests are run and reported. On a clean build agent with no .NET SDK’s installed at all. I also decided to not use the Sake approach, to see if my theory about the build process was right.

So… Installing the runtime, running a clean, build and test, right? Here goes:

@echo off
cd
SETLOCAL

IF EXIST packages\KoreBuild goto run
%teamcity.tool.NuGet.CommandLine.DEFAULT.nupkg%\tools\nuget.exe install KoreBuild -ExcludeVersion -o packages -nocache -pre -Source

:run
CALL packages\KoreBuild\build\kvm upgrade -runtime CLR -x86
CALL packages\KoreBuild\build\kvm install default -runtime CoreCLR -x86
CALL packages\KoreBuild\build\kvm use default -runtime CLR -x86

:clean
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% "bin" -Directory -rec -erroraction 'silentlycontinue' | Remove-Item -Recurse; exit $Lastexitcode"

:restore
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% global.json -rec -erroraction 'silentlycontinue' | Select-Object -Expand DirectoryName | Foreach { cmd /C cd $_ `&`& CALL kpm restore }; exit $Lastexitcode"

:buildall
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% project.json -rec -erroraction 'silentlycontinue' | Foreach { kpm build $_.FullName --configuration %mr.Configuration% }; exit $Lastexitcode"
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% *.nupkg -rec -erroraction 'silentlycontinue' | Where-Object {$_.FullName -match 'bin'} | Select-Object -Expand FullName | Foreach { Write-Host `#`#teamcity`[publishArtifacts `'$_`'`] }; exit $Lastexitcode"

:testall
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% project.json -rec -erroraction 'silentlycontinue' | Where-Object { $_.FullName -like '*test*' } | Select-Object -Expand DirectoryName | Foreach { cmd /C cd $_ `&`& k test -teamcity }; exit $Lastexitcode"

(note: this may not be optimal, it’s as experimental as it gets, but it does the job – feel free to rewrite this in Ant or Maven to make it cross platform on TeamCity agents, too)

TeamCity will now run the build and provide us with the artifacts generated during build (all the NuGet packages), and expose them in the UI after each build:

Project K ASP.NET vNext TeamCity

Even better: since TeamCity has a built-in NuGet server, these packages now show up on that feed as well, allowing me to consume these in other projects:

NuGet feed in TeamCity for ASP.NET vNext

Running tests came as a pleasant surprise: it seems the ASP.NET 5 xUnit runner already emits TeamCity service messages and reports results back to the server:

Test results from xUnit vNext in TeamCity

But how to set the build number, you ask? Well, it turns out this comes from the project.json. The build number in there is leading, but we can add a suffix by creating a K_VERSION_NUMBER environment variable. On TeamCity, we could use our build counter for it, or run GitVersion and use that as the version suffix.

TeamCity set ASP.NET 5 version number

Going a step further, running kpm pack even allows us to build our web applications and add the entire generated artifact to our build, ready for deployment:

ASP.NET 5 application build on TeamCity

Very, very nice! I’m liking where ASP.NET 5 is going, and forgetting everything that came before gives me high hopes for this incarnation.


This is really nice, and the way I dreamt it would all work. Everything is a package, and builds are self-contained. It’s all still in beta state, but it gives a great view of what we’ll soon all be doing. I hope a lot of projects will set up their builds like the Entity Framework one, having one or two build scripts in there that do the entire thing. But even if not and you have a boilerplate VS2015 project, using the steps outlined in this blog post gets the job done. In fact, I created some TeamCity meta-runners for you to enjoy (contributions welcome). How about adding one build step to your ASP.NET 5 builds in TeamCity…

TeamCity ASP.NET build by convention

Go grab these meta runners now! I have created quite a few:

  • Install KRE
  • Convention-based build
  • Clean sources
  • Restore packages
  • Build one project
  • Build all projects
  • Test one project
  • Test all projects
  • Package application

PS: Thanks Mike for helping me out with some PowerShell goodness!

Automatically strong name signing NuGet packages

Some developers prefer to strong name sign their assemblies. Signing them also means that the dependencies that are consumed must be signed. Not all third-party dependencies are signed, though, for example when consuming packages from NuGet. Some are signed, some are unsigned, and the only way to know is at compile time, when we see this:

Referenced assembly does not have a strong name

That’s right: a signed assembly cannot consume an unsigned one. Now what if we really need that dependency but don’t want to lose the fact that we can easily update it using NuGet… Turns out there is a NuGet package for that!

The Assembly Strong Naming Toolkit can be installed into our project, after which we can use the NuGet Package Manager Console to sign referenced assemblies. There is also the .NET Assembly Strong-Name Signer by Werner van Deventer, which provides us with a nice UI as well.

The problem is that the above tools only sign the assemblies once we already consumed the NuGet package. With package restore enabled, that’s pretty annoying as the assemblies will be restored when we don’t have them on our system, thus restoring unsigned assemblies…

NuGet Signature

Based on the .NET Assembly Strong-Name Signer, I decided to create a small utility that can sign all assemblies in a NuGet package and creates a new package out of those. This “signed package” can then be used instead of the original, making sure we can simply consume the package in Visual Studio and be done with it. Here’s some sample code which signs the package “MyPackage” and creates “MyPackage.Signed”:

var signer = new PackageSigner();
signer.SignPackage("MyPackage.1.0.0.nupkg", "MyPackage.Signed.1.0.0.nupkg",
    "SampleKey.pfx", "password", "MyPackage.Signed");

This is pretty neat, if I may say so, but still requires manual intervention for the packages we consume. Unless the NuGet feed we’re consuming could sign the assemblies in the packages for us?

NuGet Signature meets MyGet Webhooks

Earlier this week, MyGet announced their webhooks feature. After enabling the feature on our feed, we could pipe events, such as “package added”, into software of our own and perform an action based on this event. Such as signing a package.

MyGet automatically sign NuGet package

Sweet! From the GitHub repository, download the Web API project and deploy it to Microsoft Azure Websites. I wrote the Web API project with some configuration options, which we can specify either before deploying or through the management dashboard. The application needs these:

  • Signature:KeyFile - path to the PFX file to use when signing (defaults to a sample key file)
  • Signature:KeyFilePassword - private key/password for using the PFX file
  • Signature:PackageIdSuffix - suffix for signed package id's. Can be empty or something like ".Signed"
  • Signature:NuGetFeedUrl - NuGet feed to push signed packages to
  • Signature:NuGetFeedApiKey - API key for pushing packages to the above feed

The configuration in the Microsoft Azure Management Dashboard could look like this:

Azure Websites

Once that runs, we can configure the web hook on the MyGet side. Be sure to add an HTTP POST hook that posts to <url to your deployed app>/api/sign, and only with the package added event.

MyGet Webhook configuration

From now on, all packages that are added to our feed will be signed when the webhook is triggered. Here’s an example where I pushed several packages to the feed and the NuGet Signature application signed them for us.

NuGet list signed packages

The nice thing is that in Visual Studio we can now consume the “.Signed” packages and no longer have to worry about strong name signing.

Thanks to Werner for the .NET Assembly Strong-Name Signer I was able to base this on.


Developing for the Tessel with WebStorm

In a previous post, I mentioned that (finally) my Tessel arrived, “an internet-connected microcontroller programmable in JavaScript”. I like WebStorm a lot as an IDE, and since the Tessel runs on JavaScript code (via node), why not see if WebStorm can be more than just an editor for Tessel development…

Developing JavaScript

The Tessel runs JavaScript, so naturally a JavaScript IDE like WebStorm will be splendid at that part. It provides a project system, code completion, navigation, inspections to check whether my code is as it should be (which from the screenshot below, it is not, yet ;-)) and so on.

WebStorm Tessel Node JavaScript

What I like a lot is that everything related to the device-side of my project (a thermometer thing that posts data to the Internet), is in one place. The project system ensures the IDE can be intelligent about code completion and navigation, I can see the npm modules I have installed, I can use version control and directly push my changes back to a GitHub repository. The Terminal tool window lets me run the Tessel command line to run scripts and so on. No fiddling with additional tools so far!

Tessel Command Line Tools

As I explained in a previous blog post, the Tessel comes with a command line that is used to connect the thing to WiFi, run and deploy scripts and read logs off it (and more). I admit it: I am bad at command line things. After a long time, commands get engraved in my memory and I’m quite fast at using them, but new command line tools, like Tessel’s, are something I always struggle with at the start.

To help me learn, I thought I’d add the Tessel command line to WebStorm’s Command Line Tools. Through Project Settings | Command Line Tool Support, I added the path to Tessel’s tool (%APPDATA%\npm\tessel.cmd). Note that you may have to install the Command Line Tools Plugin into WebStorm; I’m unsure if it’s bundled.

Tessel Command Line 

This helps in getting the Tessel commands available in the Command Line Tools after pressing Ctrl+Shift+X (or Tools | Run Command…), but it still does not help me in learning this new command’s syntax, right? Copy this gist into C:\Users\<your username>\.WebStorm8\config\commandlinetools\Custom_tessel.xml and behold: completion for these commands!

Tessel command line auto completion

Again, I consider them training wheels until I start memorizing the commands. I can remember tessel run, but it’s all the ones that I’m not using continuously that I tend to forget…

Running Code on the Tessel

Running code on the Tessel can be done using the tessel run <script.js> command. However, I dislike having to always jump into a console or even the command line tools mentioned earlier to just run and see if things work. WebStorm has the concept of Run/Debug Configurations, where using a simple keystroke (Shift+F10) I can run the active configuration without having to switch my brain context to a console.

I created two configurations: one that runs nodejs on my own computer so I can test some things, and one that invokes tessel run. Provide the path to node, set the working directory to the current project folder, specify the Tessel command line script as the file to execute and provide run somescript.js as the parameters.

Tessel Run

Quick note here: after a few massive errors coming from Tessel’s command line tool mentioning the device only supports one connection, it’s best to check the Single instance only box for the run configuration. This ensures the process is killed and restarted whenever the script is run.

Save, Shift+F10 and we’re deploying and running whenever we want to test our code.

Run code on Tessel from WebStorm

Debugging does not work, as the Tessel does not support this. I hope support for it will be added, ideally using the V8 debugger so WebStorm can hook into it, too. Currently I’m doing “poor man’s debugging”: dumping variables using console.log() mostly…

External Tools

When I first added Tessel to WebStorm, I figured it would be nice to have some menu entries to invoke commands like updating the firmware (a weekly task, Tessel is being actively developed it seems!) or showing the device’s WiFi status. So I did!

Tessel External Tools

External Tools can be added under the IDE Settings | External Tools and added to groups and so on. Here’s what I entered for the “Update firmware” command:

Update Tessel from WebStorm

It’s basically just running node, passing it the path to the Tessel command line script and appending the correct parameter.

Now, I don’t use my newly created menu too much, I must say. Using the command line tools directly is more straightforward. But adding these external tools does give an additional advantage: since I have to re-connect to the WiFi every now and then (Tessel’s WiFi chip is a bit flaky when further away from the access point), I added an external tool for connecting it to WiFi and assigned a shortcut to it (IDE Settings | Keymaps, search for whatever label you gave the command and use the context menu to assign a keyboard shortcut). On my machine, Ctrl+Alt+W resets the Tessel’s WiFi now!

Installing npm Packages

This one may be overkill, but I found searching npm for Tessel-related packages quite handy through the IDE. From Project Settings | Node.JS and NPM, searching packages is pretty simple. And installing them, too! Careful, Tessel’s 32 MB of storage may not like too many modules…

NPM webstorm

Fun fact: writing this blog post, I noticed the grunt-tessel package which contains tasks that run or deploy scripts to the device. If you prefer using Grunt for doing that, know WebStorm comes with a Grunt runner, too.

That’s it for now. I do hope to tinker away on the Tessel in the next weeks and finish my thermometer and the app, so I can see the (historical) temperature in my house.

Getting Started with the Tessel

Somewhere last year (I honestly no longer remember when), I saw a few tweets that piqued my interest: a crowdfunding project for the Tessel, “an internet-connected microcontroller programmable in JavaScript”. Since everyone was doing Arduino and Netduino and JavaScript is not the worst language ever, I thought: let’s give these guys a bit of money! A few months later, they reached their goal and it seemed Tessel was going to production. Technical Machine, the company behind the device, sent status e-mails on their production process every couple of weeks and eventually after some delays, there it was!

Plug, install (a little), play!

After unpacking the Tessel, I was happy to see it was delivered with a micro-USB cable to power it, a couple of stickers and the climate module I ordered with it (a temperature and humidity sensor). The one-line manual pointed me to the getting started page, so that’s where I went.

The setup is pretty easy: plug it in a USB port so that Windows installs the drivers, install the tessel package using NPM and update the device to the latest firmware.

npm install -g tessel
tessel update

Very straightforward! Next, connecting it to my WiFi:

tessel wifi -n <ssid> -p <password> -s wpa2 -t 120

And as a test, I managed to deploy “blinky”, a simple script that blinks the leds on the Tessel.

tessel blinky

Now how do I develop for this thing…

My first script (with the climate module)

One of the very cool things about Tessel is that all additional modules have something printed on them… The climate module, for example, has the text “climate-si7005” printed on it.


Now what does that mean? Well, it’s also the name of the npm package to install to work with it! In a new directory, I can now simply initialize my project and install the climate module dependency.

npm init
npm install climate-si7005

All modules have their npm package name printed on them so finding the correct package to work with the Tessel module is quite easy. All it takes is the ability to read. The next thing to do is write some code that can be deployed to the Tessel. Here goes:

The code below uses the climate module and prints the current temperature (in Celsius, metric system for the win!) on the console every second. Here’s the sample, climate.js:

var tessel = require('tessel');
var climatelib = require('climate-si7005');
var climate = climatelib.use(tessel.port['A']);

climate.on('ready', function () {
  setImmediate(function loop () {
    climate.readTemperature('c', function (err, temp) {
      console.log('Degrees:', temp.toFixed(4) + 'C');
      setTimeout(loop, 1000);
    });
  });
});

The Tessel takes two commands that run a script: tessel run climate.js, which copies the script and node modules onto the Tessel and runs it, and tessel push climate.js, which does the same but deploys the script as the startup script so that whenever the Tessel is powered, this script will run.

Here’s what happens when climate.js is run:

tessel run climate.js

The output of the console.log() statement is there. And yes, it’s summer in Belgium!

What’s next?

When I purchased the Tessel, I had the idea of building a thermometer that I can read from my smartphone, complete with history, min/max temperatures and all that. I’ve been coding on it on and off in the past weeks (not there yet). Since I’m a heavy user of PhpStorm and WebStorm for doing non-.NET development, I thought: why not also see what those IDE’s can do for me in terms of developing for the Tessel… I’ll tell you in a next blog post!

Microsoft Azure cloud plugin for TeamCity (dabbling in Java code)

NOTE: While the content in this blog post will still work, JetBrains now has a plugin that is the recommended way of working with TeamCity and build agents on Azure. Please check this blog post to learn more about it.

If you follow me on Twitter, you may have seen me in several stages of anger at Java. After two weeks of learning, experimenting, coding and even getting it all to compile, I’m proud to announce an initial, very early preview of my Microsoft Azure cloud plugin for TeamCity.

This plugin provides Microsoft Azure cloud support for TeamCity. By configuring a Microsoft Azure cloud in TeamCity, a set of known virtual build agents can be started and stopped on demand by the TeamCity server so we can benefit from Microsoft Azure’s cost model (a stopped VM is almost free) and scaling model (only start new instances when we need them).

Curious to try it? Make sure you know it is all still very early alpha version software so use with caution. I wanted to get an early preview out to gather some comments on it. Here are the installation steps:

  • Download the plugin ZIP file from the latest GitHub release.
  • Copy it to the TeamCity plugins folder
  • Restart TeamCity server and verify the plugin was installed from Administration | Plugins List

Creating a new cloud profile

From TeamCity’s Administration | Agent Cloud, we can create a new cloud configuration making use of the Microsoft Azure plugin for TeamCity. All we have to do is select “Microsoft Azure” as the cloud type and enter the requested details.

TeamCity agent on Azure VM

Once we enter some preconfigured and pre-provisioned VM names, we’re good to save and profit.

Known issue: only one Microsoft Azure cloud configuration can be created per TeamCity server because the KeyStore being configured by the plugin only stores one management certificate. Contribute a fix if you feel up for it!

What’s up?

From Agents | Cloud, we can now see which VM instances are stopped/running on Microsoft Azure.

Start stop TeamCity agent on Azure

Known issue: status of the VM displayed is not always current. The VM status is read from TeamCity's last known status, not from Microsoft Azure. Again, contribute a fix if you feel up for it.

What is there to come?

That’s pretty much it for now. I told you, it’s early. In my ideal world, there should also be a possibility to launch VM instances from a predefined image and destroy them when no longer needed. I also would love to convert it all to Kotlin, as I still don’t like Java as a language and Kotlin looks really nice. And ideally, the crude UI I did for the plugin should be much nicer too.

Happy building in the cloud!

Building .NET projects is a world of pain and here’s how we should solve it

During the past few weeks, I’ve been working on and off on setting up a build agent that can build as many open-source .NET projects as possible in an effort to learn how hard it is to do. Allow me to open this blog post with a rant… One which will feel very familiar if you’ve recently installed a build agent yourself.

Setting up a .NET build machine is insane

As the minimal installation of components I started with installing the .NET framework 2.0, 3.0, 3.5, 4.0, 4.0.1 (yes, that exists), 4.0.2, 4.0.3, 4.5, 4.5.1 and their multi-targeting packs on the build agent. Next, I took 100 random C# projects from GitHub that had activity in the last year or so and started building and reading build logs. Great news! There are a lot of self-contained open source projects out there that build happily on this minimal install. Most of these seem to be class libraries, often depending on some NuGet packages that are installed using NuGet package restore.

Unfortunately, there are a great number of projects that do not build on this minimal setup: those that require specific SDK’s and components installed. So I started delving deeper into build logs and tackled project by project with the necessary “headless installs” of SDK’s. In practice, this sometimes means running an installer with specific commands to only install what is required to build projects on it. In other cases it means copying .targets and reference assemblies from my Windows 8.1 machine to the Windows Server 2012 R2 machine that was my build agent (yes, you can build Windows Store apps on Windows Server if you are persistent…). And in other cases (looking at you, Windows Phone SDK!) it meant running the installer in compatibility mode with some registry keys changed to overcome installer checks that do not allow installing that SDK on Windows Server.

In the end, I had to install pretty much the entire world on the build agent, or at least all SDK’s and tools that have been released between Visual Studio 2010 and the latest 2013. Here’s 17.6 GB of sh… dependencies for you.

Installed programs and features

What is the issue?

Well, there isn’t just “one” issue. There are several. Here’s a quick list of issues and questions:

  • There is no way to clearly specify dependencies on SDK’s and tooling in .NET projects. The only way to know what is required is to build, read the build log, build, read the build log and someday succeed in finding the right SDK for the job. These dependencies are all implicit and there is no good way of finding out what they are, except trial and error.
  • The fact that I need this amount of SDK’s installed is crazy in itself. Why is this? Most builds simply need a .targets file and some DLLs, not all the other stuff that is in the download of such SDK.
  • Some SDK’s don’t install on every platform. Why is that? Why can’t SDK X install on platform Y?
  • Will I be able to install future versions of the SDK side-by-side so “older” projects build on my machine? Or will I need a machine for every Visual Studio version separately? How to isolate these things?

This is not only Microsoft tooling and SDK’s. Various other SDKs also require installs, prerequisites, configuration, … If only that picture above would allow scrolling so you could see Amazon, Xamarin and many others in that list.

How should we solve this?

Let’s look at the Node.js community and how they manage to do things. Every project, whether an actual application, a library or component, contains an important file: package.json. It contains a description of the project itself, as well as the dependencies it requires, both id and version. All you need to build or run most of such projects is a node executable and an Internet connection to download dependencies on the fly. Sounds familiar? It does!

We’ve been using NuGet for quite a while now in the .NET space (if you haven’t, look into it now, even for in-house frameworks hosted on private feeds!). We’re distributing open source projects as NuGet packages that we can depend on in our own software. We can publish our own software as a NuGet package so others can depend on it. Awesome! Then why aren’t we doing this with the 17.6 GB of SDK madness we have to install on a build machine?

I do not think we can solve this quickly and change history. But I do think from now on we have to start building SDK’s differently. Most projects only require an MSBuild .targets file and some assemblies, either containing MSBuild tasks or reference assemblies, to do their compilation work. What if… we shipped the minimum files required to successfully build a project as NuGet packages? The NuGet gallery contains some examples of this, but there are only a few. Another example is the ReSharper SDK which is shipped as a NuGet package. Need a test runner? Wrap the executable in a NuGet package and I’ll bring it down and run it during build. My takeaway: if you have a .targets file and are wrapping it in an MSI, you are doing it wrong.

Does that mean MSI's should disappear? No! They can exist and add tooling or whatever they need to add to a developer machine. All I want is the .targets file and supporting assemblies to be distributed separately as a self-contained package which I can reference explicitly, rather than the implicit way it is done now.

In my ideal world, all .NET projects would have a packages.config file in their root folder in which library dependencies as well as MSBuild dependencies can be described. My build machine would contain the .NET framework and Mono. And during build, all dependencies would be magically brought down for just that build.

P.S.: A lot of the new packages like ASP.NET MVC and WebApi, the OData packages and such are being shipped as NuGet packages, which is awesome. The ones that I am missing are those that require additional build targets that are typically shipped in SDK's. Examples are the Windows Azure SDK, database tools and targets, ... I would like those to come aboard the NuGet train and ship their Visual Studio tooling separately from the artifacts required to run a build.

Pro NuGet second edition is out

Pfew! Around February 2013, Xavier and I started planning work on an update of our book. Eight months later, we’re proud to present you with Pro NuGet (second edition). It’s been a tough couple of months writing this: Xavier has become a father for the second time (congratulations!), we’ve had two massive updates to NuGet we had to work into our book, … But here it is!

What’s new?

  • A number of workflows with NuGet have changed and have been added. Expect all of these, including NuGet’s old and new package restore functionality.
  • Want to work with NuGet and Windows Azure Websites, TeamCity, Visual Studio Online, OctopusDeploy, NuGet Gallery, ProGet or MyGet? We have a bunch of recipes for you!
  • Pitfalls of package versioning
  • Building a plugin system based on NuGet

Next to that there is a lot more meat in there!

  • Understand how NuGet fits into the big picture of your software development process to save you time and money.
  • How to keep your team working when your project depends on an external resource (such as a web service or cloud) which suddenly becomes unavailable.
  • Whether or not to auto-update NuGet packages within a continuous integration process for maximum reliability and speed.
  • How to combine NuGet with PowerShell to create your own Cmdlets and extend the base toolset in an extremely powerful manner.
  • Evaluate the pros-and-cons of hosting your own NuGet repository.
  • How to incorporate NuGet seamlessly within your continuous integration process.
  • Much much more!

We would love to get your feedback! E-mail us or write a review on your blog or Amazon. Enjoy the read!

PS: Thanks to our excellent reviewers (the NuGet team) and everyone at Apress! There are a lot of people involved in getting a quality book out there. Thanks!

A new year's present: introducing Glimpse plugins for Windows Azure

Have you tried Glimpse before? It shows you server-side information like execution times, server configuration, request data and such in your browser. At the February MVP Summit this year, Anthony, Nik and I had a chat about what would be useful information to be displayed in Glimpse when working on Windows Azure. Some beers and a bit of coding later, we had a proof-of-concept showing Windows Azure runtime configuration data in a Glimpse tab.

Today, we are happy to announce a first public preview of two Windows Azure tabs in Glimpse: the Glimpse.WindowsAzure package displaying runtime information, and Glimpse.WindowsAzure.Storage collecting information about traffic from and to storage.

Want to give it a try? You can install these two NuGet packages (prerelease packages for now). Sources can be found on GitHub. And all comments, remarks and suggestions can go in the comments to this blog post.

Now let’s have a look at what these packages have to offer!


The Glimpse.WindowsAzure package adds a new tab to Glimpse, displaying environment information when the web application is hosted on Windows Azure. It does this for Cloud Services as well as for Windows Azure Web Sites.

Installation is easy: simply add the Glimpse.WindowsAzure package to your project and you’re done. If you are running on .NET 4.5, you will have to add the following setting to your Web.config:

  <add key="Glimpse:DisableAsyncSupport" value="true"/>

When hosting in a Windows Azure Cloud Service (or the full emulator available in the Windows Azure SDK), the Azure Environment tab will provide information gathered from the RoleEnvironment class. You can see the deployment ID, current role instance information, a list of configured endpoints, which fault and update domain our application is running in, and so on.

Windows Azure Role Environment

When the web application is hosted on Windows Azure Web Sites, we get information like Compute Mode (Shared or Reserved) as well as Site Mode (Limited in the screenshot below means the application is running on a Free web site).

Glimpse Windows Azure Web Sites

The Azure Environment tab will also provide a link to the Kudu Remote Console, a feature in Windows Azure Web Sites where you can run commands on the box hosting the web site.

Kudu Console

Pretty handy if you ask me!


The Glimpse.WindowsAzure.Storage package adds an “Azure Storage” tab to Glimpse, displaying all sorts of information about traffic from and to Windows Azure storage. It will also estimate the cost for loading the current page depending on the number of transactions and traffic to blobs, tables and/or queues. Note that this package can also be used in ASP.NET web sites that are not hosted on Windows Azure but do make use of Windows Azure Storage.

Once the package is installed into your project, you can almost start inspecting all this information. Almost? Well, see the caveat further down…


Number of transactions and a cost estimate

The first type of data displayed in the Azure Storage tab is the total number of transactions, traffic consumed and a cost estimate for 10.000 pageviews. This information can be used for several scenarios:

  • Know how many calls are made to storage. Maybe you can reduce the number of calls to reduce the total number of transactions, one of the billing metrics for Windows Azure.
  • Another billing metric is the amount of traffic consumed. When running in the same datacenter as the storage account, it’s less important for cost but still, reducing the traffic can reduce the page load time.

Windows Azure Storage Transactions and bandwidth consumed

Now where do we get the price per 10.000 pageviews? Well, this is a very rough estimate, based on the pay-per-use pricing in Windows Azure. It is very likely that the actual price will be lower if you are running on an MSDN subscription, a pre-paid plan or an Enterprise Agreement.

Warnings and analysis of requests

One feature we’re particularly proud of is this one: warnings and analysis of requests to Windows Azure Storage. First of all, we’ll analyse the settings for communicating over the network. In the screenshot below, you can see several general hints to optimize throughput by disabling the Nagle algorithm or disabling HTTP 100 Continue.
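Glimpse only surfaces these hints; applying them is up to us. As an illustration (not part of the Glimpse packages), here’s one way to disable the Nagle algorithm and HTTP 100 Continue for a storage account’s table endpoint, typically done once at application startup:

// using System.Net;
// using Microsoft.WindowsAzure.Storage;
var account = CloudStorageAccount.DevelopmentStorageAccount;

// Tune the service point for the table endpoint before the first request is made
var tableServicePoint = ServicePointManager.FindServicePoint(account.TableEndpoint);
tableServicePoint.UseNagleAlgorithm = false;
tableServicePoint.Expect100Continue = false;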

Another analysis we’ll do is verifying the requests themselves. In the example below, Glimpse is giving a warning about the fact that I’m querying table storage on properties that are not indexed, potentially causing timeouts in my application.

There are several more inspections in there, if you have suggestions for others feel free to let us know!

Analysis of requests

List of requests and Timeline

When using Windows Azure Storage, Glimpse will show you all requests that have been made together with the status code and total duration of the request.


Since a plain list is often not that easy to analyze, the Timeline tab is extended with this information as well. It shows you a summary of when calls to Windows Azure Storage have been made, as well as full details of the requests:

Timeline tracing Windows Azure Storage

One caveat

Because of a current limitation in the Windows Azure Storage SDK, you will have to explicitly add one parameter to every call that is made to Windows Azure Storage.

The idea is that the OperationContext parameter for calls to storage has to be a special Glimpse OperationContext obtained by calling OperationContextFactory.Current.Create(). This Glimpse-specific implementation provides us with all the information required to display information in the Azure Storage tab. Here’s an example of how to wire it in for a call to create a blob storage container:

var account = CloudStorageAccount.DevelopmentStorageAccount;
var blobclient = account.CreateCloudBlobClient();
var container1 = blobclient.GetContainerReference("glimpse1");
container1.CreateIfNotExists(operationContext: OperationContextFactory.Current.Create());

We are talking with Microsoft about this and are pretty sure this shortcoming will be addressed in the future.

What’s next?

It would be great if you could give these two packages a try! The NuGet packages are available (prerelease packages for now). Sources can be found on GitHub. And all comments, remarks and suggestions can go in the comments to this blog post.

We’re still looking at load balanced environments. You can implement Glimpse’s IPersistenceStore but we would like to have a zero-configuration setup.

Once we’re confident Glimpse.WindowsAzure and Glimpse.WindowsAzure.Storage are working properly, we’ll have a look at Windows Azure Caching and Service Bus.