Maarten Balliauw {blog}

ASP.NET MVC, Microsoft Azure, PHP, web development ...


Developing for the Tessel with WebStorm

In a previous post, I mentioned that (finally) my Tessel arrived, “an internet-connected microcontroller programmable in JavaScript”. I like WebStorm a lot as an IDE, and since the Tessel runs on JavaScript code (via node), why not see if WebStorm can be more than just an editor for Tessel development…

Developing JavaScript

The Tessel runs JavaScript, so naturally a JavaScript IDE like WebStorm will be splendid at that part. It provides a project system, code completion, navigation, inspections to check whether my code is as it should be (which from the screenshot below, it is not, yet ;-)) and so on.

WebStorm Tessel Node JavaScript

What I like a lot is that everything related to the device-side of my project (a thermometer thing that posts data to the Internet) is in one place. The project system ensures the IDE can be intelligent about code completion and navigation, I can see the npm modules I have installed, I can use version control and directly push my changes back to a GitHub repository. The Terminal tool window lets me run the Tessel command line to run scripts and so on. No fiddling with additional tools so far!

Tessel Command Line Tools

As I explained in a previous blog post, the Tessel comes with a command line that is used to connect the thing to WiFi, run and deploy scripts and read logs off it (and more). I admit it: I am bad at command line things. After a long time, commands get engraved in my memory and I’m quite fast at using them, but new command line tools, like Tessel’s, are something that I always struggle with at the start.

To help me learn, I thought I’d add the Tessel command line to WebStorm’s Command Line Tools. Through Project Settings | Command Line Tool Support, I added the path to Tessel’s tool (%APPDATA%\npm\tessel.cmd). Note that you may have to install the Command Line Tools Plugin into WebStorm; I’m unsure if it’s bundled.

Tessel Command Line 

This helps in getting the Tessel commands available in the Command Line Tools after pressing Ctrl+Shift+X (or Tools | Run Command…), but it still does not help me in learning this new command’s syntax, right? Copy this gist into C:\Users\<your username>\.WebStorm8\config\commandlinetools\Custom_tessel.xml and behold: completion for these commands!

Tessel command line auto completion

Again, I consider them training wheels until I start memorizing the commands. I can remember tessel run, but it’s all the ones that I’m not using continuously that I tend to forget…

Running Code on the Tessel

Running code on the Tessel can be done using the tessel run <script.js> command. However, I dislike having to always jump into a console or even the command line tools mentioned earlier to just run and see if things work. WebStorm has the concept of Run/Debug Configurations, where using a simple keystroke (Shift+F10) I can run the active configuration without having to switch my brain context to a console.

I created two configurations: one that runs nodejs on my own computer so I can test some things, and one that invokes tessel run. Provide the path to node, set the working directory to the current project folder, specify the Tessel command line script as the file to execute and provide run somescript.js as the parameters.

Tessel Run

Quick note here: after a few massive errors coming from Tessel’s command line tool that mentioned the device only supports one connection, it’s best to check the Single instance only box for the run configuration. This ensures the process is killed and restarted whenever the script is run.

Save, Shift+F10 and we’re deploying and running whenever we want to test our code.

Run code on Tessel from WebStorm

Debugging does not work, as the Tessel does not support this. I hope support for it will be added, ideally using the V8 debugger so WebStorm can hook into it, too. Currently I’m doing “poor man’s debugging”: dumping variables using console.log() mostly…

External Tools

When I first added Tessel to WebStorm, I figured it would be nice to have some menu entries to invoke commands like updating the firmware (a weekly task, Tessel is being actively developed it seems!) or showing the device’s WiFi status. So I did!

Tessel External Tools

External Tools can be added under the IDE Settings | External Tools and organized into groups and so on. Here’s what I entered for the “Update firmware” command:

Update Tessel from WebStorm

It’s basically just running node, passing it the path to the Tessel command line script and appending the correct parameter.

Now, I don’t use my newly created menu too much, I must say. Using the command line tools directly is more straightforward. But adding these external tools does give an additional advantage: since I have to re-connect to the WiFi every now and then (Tessel’s WiFi chip is a bit flaky when further away from the access point), I added an external tool for connecting it to WiFi and assigned a shortcut to it (IDE Settings | Keymaps, search for whatever label you gave the command and use the context menu to assign a keyboard shortcut). On my machine, Ctrl+Alt+W resets the Tessel’s WiFi now!

Installing npm Packages

This one may be overkill, but I found searching npm for Tessel-related packages quite handy through the IDE. From Project Settings | Node.JS and NPM, searching packages is pretty simple. And installing them, too! Careful, Tessel’s 32 MB of storage may not like too many modules…

NPM webstorm

Fun fact: writing this blog post, I noticed the grunt-tessel package which contains tasks that run or deploy scripts to the device. If you prefer using Grunt for doing that, know WebStorm comes with a Grunt runner, too.

That’s it for now. I do hope to tinker away on the Tessel in the next weeks and finish my thermometer and the app so I can see the (historical) temperature in my house.

Getting Started with the Tessel

Somewhere last year (I honestly no longer remember when), I saw a few tweets that piqued my interest: a crowdfunding project for the Tessel, “an internet-connected microcontroller programmable in JavaScript”. Since everyone was doing Arduino and Netduino and JavaScript is not the worst language ever, I thought: let’s give these guys a bit of money! A few months later, they reached their goal and it seemed Tessel was going to production. Technical Machine, the company behind the device, sent status e-mails on their production process every couple of weeks and eventually, after some delays, there it was!

Plug, install (a little), play!

After unpacking the Tessel, I was happy to see it was delivered with a micro-USB cable to power it, a couple of stickers and the climate module I ordered with it (a temperature and humidity sensor). The one-line manual said “http://tessel.io/start”, so that’s where I went.

The setup is pretty easy: plug it in a USB port so that Windows installs the drivers, install the tessel package using NPM and update the device to the latest firmware.

npm install -g tessel
tessel update

Very straightforward! Next, connecting it to my WiFi:

tessel wifi -n <ssid> -p <password> -s wpa2 -t 120

And as a test, I managed to deploy “blinky”, a simple script that blinks the LEDs on the Tessel.

tessel blinky

Now how do I develop for this thing…

My first script (with the climate module)

One of the very cool things about Tessel is that all additional modules have something printed on them… The climate module, for example, has the text “climate-si7005” printed on it.

climate-si7005

Now what does that mean? Well, it’s also the name of the npm package to install to work with it! In a new directory, I can now simply initialize my project and install the climate module dependency.

npm init
npm install climate-si7005

All modules have their npm package name printed on them so finding the correct package to work with the Tessel module is quite easy. All it takes is the ability to read. The next thing to do is write some code that can be deployed to the Tessel. Here goes:

var tessel = require('tessel');
var climatelib = require('climate-si7005');
var climate = climatelib.use(tessel.port['A']);

climate.on('ready', function () {
  setImmediate(function loop () {
    climate.readTemperature('c', function (err, temp) {
      console.log('Degrees:', temp.toFixed(4) + 'C');
      setTimeout(loop, 1000);
    });
  });
});

The above sample, climate.js, uses the climate module and prints the current temperature (in Celsius, metric system for the win!) on the console every second.

The Tessel takes two commands that run a script: tessel run climate.js, which will copy the script and node modules onto the Tessel and run it, and tessel push climate.js, which does the same but deploys the script as the startup script so that whenever the Tessel is powered, this script will run.

Here’s what happens when climate.js is run:

tessel run climate.js

The output of the console.log() statement is there. And yes, it’s summer in Belgium!

What’s next?

When I purchased the Tessel, I had the idea of building a thermometer that I can read from my smartphone, complete with history, min/max temperatures and all that. I’ve been coding on it on and off in the past weeks (not there yet). Since I’m a heavy user of PhpStorm and WebStorm for doing non-.NET development, I thought: why not also see what those IDE’s can do for me in terms of developing for the Tessel… I’ll tell you in a next blog post!

Welcome to BlogEngine.NET 3.0

If you see this post it means that BlogEngine.NET is running and the hard part of creating your own blog is done. There are only a few things left to do.

Write Permissions

To be able to log in, write posts and customize the blog, you need to enable write permissions on the App_Data and Custom folders. If your blog is hosted at a hosting provider, you can either log into your account’s admin page or contact support.

If you wish to use a database to store your blog data, we still encourage you to enable this write access for any images you may wish to store for your blog posts. If you are interested in using Microsoft SQL Server, MySQL, SQL CE, or other databases, please see the BlogEngine docs to get started.

Security

When you've got write permissions set, you need to change the username and password. Find the sign-in link located either at the bottom or top of the page depending on your current theme and click it. Now enter "admin" in both the username and password fields and click the button. You will now see an admin menu appear. It has a link to the "Users" admin page. From there you can change the password, create new users and set roles and permissions. Passwords are hashed by default, so you'd better configure email in the settings for password recovery to work, or learn how to do it manually.

Configuration and Profile

Now that you have your blog secured, take a look through the settings and give your new blog a title.  BlogEngine.NET is set up to take full advantage of many semantic formats and technologies such as FOAF, SIOC and APML. It means that the content stored in your BlogEngine.NET installation will be fully portable and auto-discoverable.  Be sure to fill in your author profile to take better advantage of this.

Themes, Widgets & Extensions

One last thing to consider is customizing the look and behavior of your blog. We have themes, widgets and extensions available right out of the box. You can install more right from the admin panel under Custom/Gallery.

On the web

You can find news about BlogEngine.NET on the official website. For tutorials, documentation, tips and tricks visit our docs site. The ongoing development of BlogEngine.NET can be followed at CodePlex where the daily builds will be published for anyone to download.

Good luck and happy writing.

The BlogEngine.NET team

What happened to Code Spaces could happen to you. On Amazon, Azure and any host out there.

Earlier this week, a sad thing happened to the version control hosting service Code Spaces. A malicious person gained access to their Amazon control panel and, after demanding a ransom from the owners of Code Spaces, started deleting data and EC2 instances. After a couple of failed attempts from Code Spaces to stop this from happening, the impossible happened: the hacker rendered Code Spaces dead. Everything that was their business is gone. As they state themselves:

Code Spaces will not be able to operate beyond this point, the cost of resolving this issue to date and the expected cost of refunding customers who have been left without the service they paid for will put Code Spaces in a irreversible position both financially and in terms of on going credibility.

That’s sad. Sad for users, sad for employees and sad for the business owners. Some nutcase destroyed a flourishing business over the course of 12 hours. Horrible! But the most horrible thing? It can happen to you! Or as Jeff Atwood stated:

Jeff Atwood - they are everywhere!

The fact that this could happen is bad. But security is what it is: there is always this chance of something happening, whatever we do to mitigate as much of this as possible. Any service out there, whether Amazon, Microsoft Azure or your hosting control panel, is open to everyone with a username and password. Being a Microsoft Azure fan, I’ll use this post to scare everyone using the service and tools about what can happen. Knowing about what can happen is the first step towards mitigating it.

Disclaimer and setting the stage

What I do NOT want to do in this post is go into the technical details of every potential mishap that can happen. We’re all developers, there’s a myriad of search engines out there that can present us with all the details. I also do not want to give people the tools to do these mishaps. I’ll give you some theory on what could happen but I don’t want to be the guy who told people to be evil. Don’t. I deny any responsibility for potential consequences of this post.

Microsoft Account

Every Microsoft Azure subscription is linked to either an organizational account or a Microsoft Account. Earlier this week, I saw someone tweet that they had 32 Microsoft Azure subscriptions linked to their Microsoft Account. If I were looking to do bad things there, I’d try and get access to that account using any of the approaches available. Trying to gain access, some social engineering, anything! 32 subscriptions is a lot of ransom I could ask for. And with potentially 20 cores of CPU available in all of them, it’s also an ideal target to go and host some spam bots or some machines to perform a DDoS.

What can we do with our Microsoft Account to make it all a bit more secure?

  • Enable 2-factor authentication on your Microsoft Account. Do it!
  • Partition. Have one Microsoft Account for every subscription. With a different, complex password.
  • Managing this many subscriptions with this many accounts is hard. Don’t be tempted to make all the accounts “Administrators” on all of the subscriptions. It’s convenient and you will have one single logon to manage it all, but it broadens the potential attack surface again.

Certificates, PowerShell, the Command Line, NuGet and Visual Studio

The Microsoft Azure Management API’s can be used to do virtually anything you can do through the management portal. And more! Access to the management API is secured using a certificate that you have to upload to the portal. Great! Unless that management certificate was generated on your end without any security in mind. Not having a passphrase to use it or storing that passphrase on your system means that anyone with access to your computer could, in theory, use the management API with that certificate. But this is probably unlikely since as an attacker I’d have to have access to your computer. There are more clever ways!

Those PowerShell and cross-platform tools are great! Using them, we can script against the management API to create storage accounts, provision and deprovision resources, add co-administrators and so forth. What if an attacker got some software on your system? Malware. A piece of sample code. Anything! If you’re using the PowerShell or cross-platform tools, you’ve probably used them before and set the active subscription. All an attacker would have to do is run the command to create a co-admin or delete or provision something. No. Credentials. Needed.

Not possible, you say? You never install any software that is out there? And you’re especially wary when getting something through e-mail? Good for you! “But that NuGet thing is so damn tempting. I installed half of NuGet.org so far!” – sounds familiar? Did you know NuGet packages can run PowerShell code when installed in Visual Studio? What if… an attacker put a package named “jQeury” out there? And other potential spelling mistakes? They could ship the contents of the real jQuery package in them so you don’t see anything unusual. In that package, someone could put some call to the Azure PowerShell CmdLets and a fallback using the cross-platform tools to create a storage account, mirror a couple of TB of illegal content and host it on your account. Or delete all your precious VMs.

Not using any of the PowerShell or cross-platform tools? No worries: attackers could also leverage the $dte object and invoke stuff inside Visual Studio and trigger any of the ample commands available in there. You may notice something in the activity log when this happens, but still.

What can we do to use these tools but make it a bit more secure?

  • Think about good certificate management. Give them a shorter lifetime, replace them every now and then. Don’t store passphrases.
  • Using the PowerShell or cross-platform tools? Make sure that after every use you invalidate the credentials used. Don’t just set the active subscription in these tools to null: there’s a list command with which an attacker could simply set the current subscription id again.
  • That publish settings file? It contains the management certificate. Don't distribute it.
  • Automate using all the tools! But not on all developer machines, do it on the build server.

All these tools are very useful and handy to work with, but use them with some common sense. If you have other tips for locking it all down, leave them in the comments.

Enjoy your night rest.

Microsoft Azure cloud plugin for TeamCity (dabbling in Java code)

If you follow me on Twitter, you may have seen me in several stages of anger at Java. After two weeks of learning, experimenting, coding and even getting it all to compile, I’m proud to announce an initial, very early preview of my Microsoft Azure cloud plugin for TeamCity.

This plugin provides Microsoft Azure cloud support for TeamCity. By configuring a Microsoft Azure cloud in TeamCity, a set of known virtual build agents can be started and stopped on demand by the TeamCity server so we can benefit from Microsoft Azure’s cost model (a stopped VM is almost free) and scaling model (only start new instances when we need them).

Curious to try it? Make sure you know it is all still very early alpha version software so use with caution. I wanted to get an early preview out to gather some comments on it. Here are the installation steps:

  • Download the plugin ZIP file from the latest GitHub release.
  • Copy it to the TeamCity plugins folder
  • Restart TeamCity server and verify the plugin was installed from Administration | Plugins List

Creating a new cloud profile

From TeamCity’s Administration | Agent Cloud, we can create a new cloud configuration making use of the Microsoft Azure plugin for TeamCity. All we have to do is select “Microsoft Azure” as the cloud type and enter the requested details.

TeamCity agent on Azure VM

Once we enter some preconfigured and pre-provisioned VM names, we’re good to save and profit.

Known issue: only one Microsoft Azure cloud configuration can be created per TeamCity server because the KeyStore being configured by the plugin only stores one management certificate. Contribute a fix if you feel up for it!

What’s up?

From Agents | Cloud, we can now see which VM instances are stopped/running on Microsoft Azure.

Start stop TeamCity agent on Azure

Known issue: status of the VM displayed is not always current. The VM status is read from TeamCity's last known status, not from Microsoft Azure. Again, contribute a fix if you feel up for it.

What is there to come?

That’s pretty much it for now. I told you, it’s early. In my ideal world, there should also be a possibility to launch VM instances from a predefined image and destroy them when no longer needed. I also would love to convert it all to Kotlin as I still don’t like Java as a language and Kotlin looks really nice. And ideally, the crude UI I did for the plugin should be much nicer too.

Happy building in the cloud!

Optimizing calls to Azure storage using Fiddler

Last week, Xavier and I were really happy about achieving a milestone. After having spent quite some evenings on bringing Visual Studio Online integration to MyGet, we were happy to be mentioned in the TechEd keynote and even pop up in quite some sessions. We also learned ASP.NET vNext was coming and that it would leverage NuGet as an important part of it. What we did not know, however, is that the ASP.NET team would host all vNext preview packages from MyGet. But we soon noticed and found our evening hours were going to be very focused for another few days…

On May 12th, we all of a sudden saw usage of our service double in an instant. Ouch! Here’s what Google Analytics told us:

Google Analytics traffic statistics

Luckily for us, we are hosted on Azure and could just pull the slider to the right and add more servers. Scale out for the win! Apart from some hiccups when we enabled auto scaling (we thought traffic would go down at some points during the day), MyGet handled traffic pretty well. But still, we had to double our server capacity for being able to host one high-traffic NuGet feed. And even though we doubled server capacity, response times went up as well.

MyGet response times

Time for action! But what…

Some background on our application

When we started MyGet, our idea was to leverage table storage and blob storage, and avoid SQL completely. The reason for that is that back then MyGet was a simple proof-of-concept and we wanted to play with new technology. Growing, getting traction and onboarding users, we found out that what we had in place to work on this back-end was very nice to work with, and we’ve since evolved to a more CQRS-ish and event-driven(-ish) architecture.

But with all good things come some bad things as well. Adding features, improving code, implementing quota so we could actually meter what our users were doing and put a price tag on it had added quite some calls to table storage. And while table storage is blazingly fast if you know what you are doing, these are still calls to an external system that have to open up a TCP connection, do an SSL handshake and so on. Not so many milliseconds, but summing them all together they do add up. So how do you find out what is happening? Let’s see…

Analyzing Azure storage traffic

There is no profiler out there that currently allows you to easily hook into what traffic is going over the wire to Azure storage. Fortunately for us, the Azure team added a way of hooking a proxy server between your application and storage itself. Using the development storage emulator, we can simply change our storage connection string to the following and hook Fiddler in:

UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://ipv4.fiddler

Great! Now we have Fiddler to analyze our traffic to Azure storage. All requests going to blob, table or queue storage services are now visible: URL, result, timing and so forth.
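If you are worried about accidentally shipping that proxy connection string, one option is to only switch to it in local debug builds. Here is a minimal sketch of the idea (my own code, not MyGet’s, assuming the Microsoft.WindowsAzure.Storage client library; productionConnectionString is a placeholder for your real connection string):

using Microsoft.WindowsAzure.Storage;

#if DEBUG
// Local debugging: use the storage emulator and route all traffic through Fiddler.
var account = CloudStorageAccount.Parse(
    "UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://ipv4.fiddler");
#else
// Everywhere else: use the real storage account.
var account = CloudStorageAccount.Parse(productionConnectionString);
#endif

// Every blob, table or queue call made through these clients now shows up in Fiddler.
var blobClient = account.CreateCloudBlobClient();
var tableClient = account.CreateCloudTableClient();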

Fiddler tracing requests to Azure storage

The picture above is not coming from MyGet but just to illustrate the idea. You can clear the list of requests, load a specific page or action in your application and see the calls going out to storage. And we had some critical paths where we did over 7 requests. If each of them is 30ms on average, that is 210ms just to grab some data. And then you’ve not even done anything with it… So we decided to tackle that in our code.

Another thing we noticed by looking at URLs here is that we had some of those requests filtering only using the table storage RowKey, resulting in a +/- 2 second roundtrip on some requests. That is bad. And so we also fixed that (on some occasions by adding some caching, on others by restructuring the way data is stored and moving our filter to PartitionKey or a combination of PartitionKey and RowKey as you should).
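To make the PartitionKey point concrete, here is an illustrative sketch (my own, not MyGet’s actual code) using the Azure storage table SDK. PackageEntity and the "packages" table are hypothetical, and storageConnectionString is a placeholder:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class PackageEntity : TableEntity
{
    // Hypothetical layout: PartitionKey = package id, RowKey = package version.
    public string Title { get; set; }
}

// ...

var account = CloudStorageAccount.Parse(storageConnectionString);
var table = account.CreateCloudTableClient().GetTableReference("packages");

// Slow: a RowKey-only filter forces a scan across all partitions.
var slowQuery = new TableQuery<PackageEntity>().Where(
    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.Equal, "1.0.0"));

// Fast: PartitionKey + RowKey narrows this down to a single-entity lookup.
var fastQuery = new TableQuery<PackageEntity>().Where(
    TableQuery.CombineFilters(
        TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "sample.package"),
        TableOperators.And,
        TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.Equal, "1.0.0")));

var results = table.ExecuteQuery(fastQuery);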

The result of this? Well, have a look at that picture above where our response times are shown: we managed to drastically reduce response times, making ourselves happy (we could kill some VM’s) and our users happy as well, because everything is faster.

A simple trick with great results!

Speeding up ASP.NET vNext package restore

TL;DR: If you have multiple NuGet feeds configured on your machine, it may be worth doing some tweaking in the NuGet.config file shipping with your project.

Last week, the ASP.NET team released a preview of “ASP.NET vNext”, a first step in the right direction for solving the pain that building .NET projects is, but more than that a great step towards having an open and cross-platform ASP.NET that is super developer friendly. If you haven’t checked it out yet, do so now.

One of the things ASP.NET vNext leans on heavily is NuGet. In fact, every application comes with a project.json file that describes an application’s dependencies. Only when running kpm restore are these dependencies downloaded and the application can be run. Running this package restore (it’s NuGet after all) is usually pretty fast, but if you, like me, are a heavy NuGet user, chances are the restore is not happening in the most optimal way. Have a look at the output of my kpm restore command right after I installed ASP.NET vNext on my system:

Project K package restore

It’s not easy to capture a screenshot that proves the point I'm about to make, but if you do this yourself and you have multiple NuGet feeds configured on your system, you’ll see that ASP.NET vNext is trying to restore packages from all configured feeds. In my case, I’m using a personal feed on MyGet, a feed hosted on my TeamCity server, a feed on my local filesystem (testing purposes) and then the ASP.NET vNext MyGet feed as well as NuGet.org. That’s 5 feeds being checked over and over again for the dependencies listed in my project.json… Let’s see if we can reduce this a bit.

If we look at the samples shipped in ASP.NET vNext, we can find a NuGet.config file in there. And as we know, NuGet has this thing called configuration file inheritance. This means that the feeds defined in here will be enriched with the feeds configured at the machine level, in my case 5 of them. But that also means we can easily fix this: adding a <clear /> element under the <packageSources> element will do the trick of removing all previously defined feeds and using just the ones defined for the project I’m working on:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="AspNetVNext" value="https://www.myget.org/F/aspnetvnext/" />
    <add key="NuGet.org" value="https://nuget.org/api/v2/" />
  </packageSources>
  <!-- ... -->
</configuration>

Use this trick for your own ASP.NET vNext projects as well: specify the feeds you want to use explicitly and everything will be faster for you and other developers working with your code. It ensures that kpm or NuGet for that matter only check the feeds that are relevant to your project and not every feed that is configured on your system.

Building .NET projects is a world of pain and here’s how we should solve it

During the past few weeks, I’ve been working on and off on setting up a build agent that can build as many open-source .NET projects as possible in an effort to learn how hard it is to do. Allow me to open this blog post with a rant… One which will feel very familiar if you’ve recently installed a build agent yourself.

Setting up a .NET build machine is insane

As a minimal installation of components, I started with installing the .NET framework 2.0, 3.0, 3.5, 4.0, 4.0.1 (yes, that exists), 4.0.2, 4.0.3, 4.5, 4.5.1 and their multi-targeting packs on the build agent. Next, I took 100 random C# projects from GitHub that had activity in the last year or so and started building and reading build logs. Great news! There are a lot of self-contained open source projects out there that build happily on this minimal install. Most of these seem to be class libraries, often depending on some NuGet packages that are installed using NuGet package restore.

Unfortunately, there are a great number of projects that do not build on this minimal setup: those that require specific SDK’s and components installed. So I started delving deeper into build logs and tackled project by project with the necessary “headless installs” of SDK’s. In practice, this sometimes means running an installer with specific commands to only install what is required to build projects on it. In other cases it means copying .targets and reference assemblies from my Windows 8.1 machine to the Windows Server 2012 R2 machine that was my build agent (yes, you can build Windows Store apps on Windows Server if you are persistent…). And in other cases (looking at you, Windows Phone SDK!) it meant running the installer in compatibility mode with some registry keys changed to overcome installer checks that do not allow installing that SDK on Windows Server.

In the end, I had to install pretty much the entire world on the build agent, or at least all SDK’s and tools that have been released between Visual Studio 2010 and the latest 2013. Here’s 17.6 GB of sh… dependencies for you.

Installed programs and features

What is the issue?

Well, there isn’t just "one" issue. There are several. Here’s a quick list of issues and questions:

  • There is no way to clearly specify dependencies on SDK’s and tooling in .NET projects. The only way to know what is required is to build, read the build log, build, read the build log and someday succeed in finding the right SDK for the job. These dependencies are all implicit and there is no good way of finding out what they are, except trial and error.
  • The fact that I need this amount of SDK’s installed is crazy in itself. Why is this? Most builds simply need a .targets file and some DLLs, not all the other stuff that is in the download of such SDK.
  • Some SDK’s don’t install on every platform. Why is that? Why can’t SDK X install on platform Y?
  • Will I be able to install future versions of the SDK side-by-side so “older” projects build on my machine? Or will I need a machine for every Visual Studio version separately? How to isolate these things?

This is not only Microsoft tooling and SDK’s. Various other SDKs also require installs, prerequisites, configuration, … If only that picture above would allow scrolling so you could see Amazon, Xamarin and many others in that list.

How should we solve this?

Let’s look at the Node.js community and how they manage to do things. Every project, whether an actual application, a library or component, contains an important file: package.json. It contains a description of the project itself, as well as the dependencies it requires, both id and version. All you need to build or run most of these projects is a node executable and an Internet connection to download dependencies on the fly. Sounds familiar? It does!

We’ve been using NuGet for quite a while now in the .NET space (if you haven’t, look into it now, even for in-house frameworks hosted on private feeds!). We’re distributing open source projects as NuGet packages that we can depend on in our own software. We can publish our own software as a NuGet package so others can depend on it. Awesome! Then why aren’t we doing this with the 17.6 GB of SDK madness we have to install on a build machine?

I do not think we can solve this quickly and change history. But I do think from now on we have to start building SDK’s differently. Most projects only require an MSBuild .targets file and some assemblies, either containing MSBuild tasks or reference assemblies, to do their compilation work. What if… we shipped the minimum files required to successfully build a project as NuGet packages? The NuGet gallery contains some examples of this, but there are only a few. Another example is the ReSharper SDK which is shipped as a NuGet package. Need a test runner? Wrap the executable in a NuGet package and I’ll bring it down and run it during build. My takeaway: if you have a .targets file and are wrapping it in an MSI, you are doing it wrong.

Does that mean MSI's should disappear? No! They can exist and add tooling or whatever they need to add to a developer machine. All I want is the .targets file and supporting assemblies to be distributed separately as a self-contained package which I can reference explicitly, rather than the implicit way it is done now.

In my ideal world, all .NET projects would have a packages.config file in their root folder in which library dependencies as well as MSBuild dependencies can be described. My build machine would contain the .NET framework and Mono. And during build, all dependencies would be magically brought down for just that build.

P.S.: A lot of the new packages like ASP.NET MVC and WebApi, the OData packages and such are being shipped as NuGet packages which is awesome. The ones that I am missing are those that require additional build targets that are typically shipped in SDK's. Examples are the Windows Azure SDK, database tools and targets, ... I would like those to come aboard the NuGet train and ship their Visual Studio tooling separately from the artifacts required to run a build.

NuGet Configuration File inheritance is awesome

One way to remove friction from using NuGet in multiple projects is by making use of NuGet Configuration File inheritance, probably the awesomest unknown feature in there.

By default, all NuGet clients (the command-line tool, the Visual Studio extension and the Package Manager Console) make use of the default NuGet configuration file which lives under %AppData%\NuGet\NuGet.config. NuGet can make use of other configuration files as well! In fact, NuGet can walk an entire tree of configuration files and fetch settings from those.

Which configuration file will be used?

Good question and happy you asked! The standard answer I always give to any question is: it depends. In this case on the client you are using. But ignoring that fact, here’s a generalized version of the tree that is walked for building the configuration the client will work with.

  • The current directory and all its parents
  • The user-specific config file located under %AppData%\NuGet\NuGet.config
  • IDE-specific configuration files, for example:
        %ProgramData%\NuGet\Config\{IDE}\{Version}\{SKU}\*.config (e.g. %ProgramData%\NuGet\Config\VisualStudio\12.0\Pro\NuGet.config)
        %ProgramData%\NuGet\Config\{IDE}\{Version}\*.config
        %ProgramData%\NuGet\Config\{IDE}\*.config
        %ProgramData%\NuGet\Config\*.config
  • The machine-wide config file located under %ProgramData%\NuGet\NuGetDefaults.config (which, as a sysadmin, is a good one to put default configuration options in using an Active Directory Group Policy, just saying)

Full details can be found in the NuGet docs, just keep in mind that first item of the list: all clients start with a NuGet.config in the current directory and then walk up to the drive root, and only then are the standard files checked. Wow. Just WOW! This means every parent folder of a project or solution can contain additional configuration details that will be applied (note: the file that is first consulted wins).

So in short, if I have a solution file C:\Projects\CustomerA\AwesomeSolution\AwesomeSolution.sln, all NuGet clients will load configuration values from:

  • C:\Projects\CustomerA\AwesomeSolution\NuGet.config
  • C:\Projects\CustomerA\NuGet.config
  • C:\Projects\NuGet.config
  • C:\NuGet.config
  • All the other locations mentioned above
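If it helps to see that lookup order as code, here is a rough sketch of the walking behaviour (my own illustration, not NuGet’s actual implementation; the IDE-specific and machine-wide files are left out):

using System;
using System.Collections.Generic;
using System.IO;

static class NuGetConfigWalker
{
    // Walk from the start directory up to the drive root, then fall back to the
    // user-specific configuration file, mirroring the order described above.
    public static IEnumerable<string> GetConfigFileCandidates(string startDirectory)
    {
        var directory = new DirectoryInfo(startDirectory);
        while (directory != null)
        {
            yield return Path.Combine(directory.FullName, "NuGet.config");
            directory = directory.Parent;
        }

        yield return Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
            "NuGet", "NuGet.config");
    }
}

// GetConfigFileCandidates(@"C:\Projects\CustomerA\AwesomeSolution") yields the four
// paths listed above, followed by %AppData%\NuGet\NuGet.config.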

This gives some pretty interesting scenarios! Let’s cover a few. But again, check the NuGet docs for more information on possible entries in a NuGet.config.

Example 1: a project-specific configuration

So you are using a private feed? That’s a good thing! (I do hope it’s on MyGet ;-)). It’s the default for your current project? Even better! But why do all your developers have to add this feed to their NuGet configuration if a NuGet.config can be shipped in source control? Simply putting the following file right next to your .sln file will do the job:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="Chuck Norris Feed" value="https://www.myget.org/F/chucknorris" />
  </packageSources>
</configuration>

Want to block access to NuGet.org and simply use the private feed all the time? Here’s some more:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="Chuck Norris Feed" value="https://www.myget.org/F/chucknorris" />
  </packageSources>
  <disabledPackageSources>
    <add key="nuget.org" value="true" />
  </disabledPackageSources>
  <activePackageSource>
    <add key="Chuck Norris Feed" value="https://www.myget.org/F/chucknorris" />
  </activePackageSource>
</configuration>

Example 2: help, my devs are pushing our internal framework to NuGet.org!

Good one, good one. We don’t want that to happen. Probably they forgot the -Source parameter to NuGet.exe, but still. Accidental pushes are not fun! Place this one next to the .sln file and you should be good:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <add key="DefaultPushSource" value="https://www.myget.org/F/chucknorris/api/v2/package" />
  </config>
</configuration>

Feel free to combine it with example 1, it may make sense!

Example 3: NuGet.exe always asks me for proxy credentials

That is not funny. Proxies are like printers: the idea is great but when you need them things don’t always go well. Good thing is we can configure default proxy credentials. While possible to put this one in a project, it’s probably better to do this in the default %AppData%\NuGet\NuGet.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <add key="http_proxy" value="host" />
    <add key="http_proxy.user" value="username" />
    <add key="http_proxy.password" value="encrypted_password" />
  </config>
</configuration>

Example 4: feed inheritance and package restore

We have multiple customers, each with a specific feed they can use. Awesome! Every customer project can contain the following NuGet.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="Customer X" value="https://www.myget.org/F/customerx" />
  </packageSources>
</configuration>

In the C:\Projects folder, we can add another configuration file which adds in another feed for every project located under C:\Projects. All customer projects use both of these feeds, typically. Customer specific components as well as that framework built in-house, each on their own feed. But help! All of a sudden, package restore started complaining no package named X can be found!

The reason for that is probably the active package source is set to one specific feed and not the “aggregate” of all configured feeds. Here’s a solution to that which can go in C:\Projects\NuGet.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="Our Cool Framework" value="https://www.myget.org/F/ourcoolframework" />
  </packageSources>
  <activePackageSource>
    <add key="All" value="(Aggregate source)" />
  </activePackageSource>
</configuration>

All sorts of fancy combinations are possible, the only thing you have to do is find an approach that works for you.

Enjoy!

Windows Azure Storage magic with Shared Access Signatures

When building cloud applications on Windows Azure, it’s always a good thing to delegate as much work to specialized services as possible. File downloads would be one good example: these can be streamed directly from Windows Azure blob storage to your client, without having to pass through a web application hosted on Windows Azure Cloud Services or Web Sites. Why occupy the web server with copying data from a request stream to a response stream? Let blob storage handle it!

When thinking this through, there may be some issues that come to mind. Here are a few:

  • How can I keep this blob secure? I don’t want to give everyone access to it!
  • How can I keep the download URL on my web server, track the number of downloads (or enforce other security rules) and still benefit from offloading the download to blob storage?
  • How can the blob be stored in a way that is clear to my application (e.g. a customer ID or something), yet give it a friendly name when downloading?

Let’s answer these!

Meet Shared Access Signatures

Keeping blobs secure is pretty easy on Windows Azure Blob Storage, but it’s also sort of an all-or-nothing story… Either you make all blobs in a container private, or you make them public.

Not to worry though! Using Shared Access Signatures it is possible to grant temporary privileges on a blob, for read and write access. Here’s a code snippet that will grant read access to the blob named helloworld.txt, residing in a private container named files, during the next minute:

CloudStorageAccount account = // your storage account connection here
var client = account.CreateCloudBlobClient();
var container = client.GetContainerReference("files");
var blob = container.GetBlockBlobReference("helloworld.txt");

var builder = new UriBuilder(blob.Uri);
builder.Query = blob.GetSharedAccessSignature(
    new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessStartTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(-5)),
        SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(1))
    }).TrimStart('?');

var signedBlobUrl = builder.Uri;

Note I’m giving access starting 5 minutes ago, just to make sure any clock skew along the way is ignored within a reasonable time window.

There we go: our blob is secured and by passing along the signedBlobUrl to our user, he or she can start downloading our blob without having access to any other blobs at all.

Meet HTTP redirects

Shared Access Signatures are really cool, but the generated URLs are… “fugly”, they are not pretty or easy to remember. Well, there is this thing called HTTP redirects, right? Here’s an ASP.NET MVC action method that checks if the user is authenticated, queries a repository for the correct filename, generates the signed access signature and redirects us to the actual download.

[Authorize]
[EnsureInvoiceAccessibleForUser]
public ActionResult DownloadInvoice(string invoiceId)
{
    // Fetch invoice
    var invoice = InvoiceService.RetrieveInvoice(invoiceId);
    if (invoice == null)
    {
        return new HttpNotFoundResult();
    }

    // We can do other things here: track # downloads, ...

    // Build shared access signature
    CloudStorageAccount account = // your storage account connection here
    var client = account.CreateCloudBlobClient();
    var container = client.GetContainerReference("invoices");
    var blob = container.GetBlockBlobReference(invoice.CustomerId + "-" + invoice.InvoiceId);

    var builder = new UriBuilder(blob.Uri);
    builder.Query = blob.GetSharedAccessSignature(
        new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessStartTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(-5)),
            SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(1))
        }).TrimStart('?');
    var signedBlobUrl = builder.Uri;

    // Redirect
    return Redirect(signedBlobUrl);
}

This gives us the best of both worlds: our web application can still verify access and run some business logic on it, yet we can offload the file download to blob storage.

Meet Shared Access Signatures content disposition header

Often, storage is a technical thing where we choose technical filenames for the things we store, instead of human-readable or human-friendly file names. In the example above, users will get a very strange filename to be downloaded: the customer id + invoice id, concatenated. No .pdf file extension, nothing else either. Users may get confused by this, or have problems opening the file because their browser will not recognize this is a PDF.

Last November, a feature was added to blob storage which enables us to let a blob be whatever we want it to be: support for setting some additional headers on a blob through the Shared Access Signature.

The following headers can be specified on-the-fly, through the shared access signature:

  • Cache-Control
  • Content-Disposition
  • Content-Encoding
  • Content-Language
  • Content-Type

Here’s how to generate a meaningful Shared Access Signature in the previous example, where we specify a human-readable filename for the resulting download, as well as a custom content type:

builder.Query = blob.GetSharedAccessSignature(
    new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessStartTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(-5)),
        SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(1))
    },
    new SharedAccessBlobHeaders
    {
        ContentDisposition = "attachment; filename=" + customer.DisplayName + "-invoice-" + invoice.InvoiceId + ".pdf",
        ContentType = "application/pdf"
    }).TrimStart('?');

Note: for this feature to work, the service version for the storage account must be set to the latest one, using the DefaultServiceVersion on the blob client. Here’s an example:

CloudStorageAccount account = // your storage account connection here
var client = account.CreateCloudBlobClient();

var serviceProperties = client.GetServiceProperties();
serviceProperties.DefaultServiceVersion = "2013-08-15";
client.SetServiceProperties(serviceProperties);

Combining all these techniques, we can do some analytics and business logic in our web application and offload the boring file and network I/O to blob storage.

Enjoy!