Maarten Balliauw {blog}

Web development, NuGet, Microsoft Azure, PHP, ...


Working with a private npm registry in Azure Web Apps

Using Azure Web Apps, we can deploy and host Node applications quite easily. But what to do with the packages the site depends on? Do we have to upload them manually to Azure Web Apps? Include them in our Git repository? None of that: we just have to make sure our app’s package.json is checked in so that Azure Web Apps can install them during deployment. Let’s see how.

Installing node modules during deployment

In this blog post, we’ll create a simple application using Express. In its simplest form, an Express application will map incoming request paths to a function that generates the response. This makes Express quite interesting to work with: we can return a simple string or delegate work to a full-fledged MVC component if we want to. Here’s the simplest application I could think of, returning “Hello world!” whenever the root URL is requested. We can save it as server.js so we can deploy it later on.

var express = require("express");
var app = express();

app.get("/", function(req, res) {
    res.send("Hello world!");
});

console.log("Web application starting...");
app.listen(process.env.PORT);
console.log("Web application started on port " + process.env.PORT);

Of course, this will not work as-is. We need to ensure Express (and its dependencies) are installed as well. We can do this using npm, running the following commands:

# create package.json describing our project
npm init

# install and save express as a dependency
npm install express --save

That’s pretty much it. Running the application is as simple as setting the PORT environment variable and starting it with node.

set PORT=1234
node server.js

We can now commit our code, excluding the node_modules folder, to our Azure Web App’s git repository. Ideally, we create a .gitignore file that excludes this folder once and for all (see the example below). Once committed, Azure Web Apps starts a convention-based deployment process. One of those conventions is that for a Node application, all dependencies from package.json are installed. We can see this convention in action from the Azure portal.
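
A minimal .gitignore for this scenario needs just one entry; extend it with other patterns as your project requires:

node_modules/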

Azure Deploy Node.JS

Great! Seems we have to do nothing special to get this to work. Except… What if we are using our own, private npm modules? How can we tell Azure Web Apps to make use of a different npm registry? Let’s see…

Installing private node modules during deployment

When building applications, we may be splitting parts of the application into separate node modules to make the application more componentized, make it easier to develop individual components and so on. We can use a private npm registry to host these components, an example being MyGet. Using a private npm feed, we can give our development team access to these components using “good old npm” while not publishing these components to the public npm registry.

Imagine we have a module called demo-site-pages which contains some of the views our web application will be hosting. We can add a dependency to this module in our package.json:

{
  "name": "demo-site",
  "version": "1.0.0",
  "description": "Demo site",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "dependencies": {
    "express": "^4.13.3",
    "demo-site-pages": "*"
  }
}

Alternatively we could install this package using npm, specifying the registry directly:

npm install demo-site-pages --save --registry <your private registry URL>

But now comes the issue: if we push this out to Azure Web Apps, our private registry is not known!

Generating a .npmrc file to work with a private npm registry in Azure Web Apps

To be able to install node modules from a private npm registry during deployment on Azure Web Apps, we have to ship a .npmrc file with our code. Let’s see how we can do this.

Since our application uses both the public npm registry and our private registry, we want to make sure MyGet proxies packages from the public registry during installation. We can enable this from our feed’s Package Sources tab by editing the default package source. Ensure the Make all upstream packages available in clients option is checked.

Next, register your MyGet NPM feed (or another registry URL). The easiest way to do this is by running the following commands:

npm config set registry <your registry URL>
npm login --registry=<your registry URL>
npm config set always-auth true

This generates a .npmrc file under our user profile folder. On Windows that would be something like C:\Users\Username\.npmrc. Copy this file into the application’s root folder and open it in an editor. Depending on the version of npm being used, we may have to set the always-auth setting to true:

registry=<your registry URL>
//<your registry URL>:_password="BASE64ENCODEDPASSWORD"
//<your registry URL>:username=<username>
//<your registry URL>:email=<e-mail address>
//<your registry URL>:always-auth=true

If we now commit this file to our git repository, the next deployment on Azure Web Apps will install both packages from the public npm registry (in this case express) and packages from our private npm registry.

Installing node module from private npm registry


Not enough space on the disk - Azure Cloud Services

I have been using Microsoft Azure Cloud Services since PDC 2008 when it was first announced. Ever since, I’ve been a huge fan of “cloud services”, the cattle VMs in the cloud that are stateless. In all those years, I have never seen this error, until yesterday:

There is not enough space on the disk.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.WriteCore(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.BinaryWriter.Write(Byte[] buffer, Int32 index, Int32 count)

Help! Where did that come from! I decided to set up a remote desktop connection to one of my VMs and see if any of the disks were full or near being full. Nope!

Azure temp path full

The stack trace of the exception told me the exception originated from creating a temporary file, so I decided to check all the obvious Windows temp paths which were squeaky clean. The next thing I looked at were quotas: were any disk quotas enabled? No. But there are folder quotas enabled on an Azure Cloud Services Virtual Machine!

Azure temporary folder TEMP TMP quotas 100 MB

The one that has a hard quota of 100 MB caught my eye. The path was C:\Resources\temp\…. Putting one and one together, I deduced that Azure was redirecting my application’s temporary folder to this one. And indeed, a few searches later this was confirmed: cloud services do redirect the temporary folder and limit it with a hard quota. But I needed more temporary space…

Increasing temporary disk space on Azure Cloud Services

Turns out the System.IO namespace has several calls to get the temporary path (for example Path.GetTempPath()), which all use the Win32 APIs under the hood. The docs for those read:

The GetTempPath function checks for the existence of environment variables in the following order and uses the first path found:

  1. The path specified by the TMP environment variable.
  2. The path specified by the TEMP environment variable.
  3. The path specified by the USERPROFILE environment variable.
  4. The Windows directory.

Fantastic, so all we have to do is create a folder on the VM that has no quota (or larger quota) and set the TMP and/or TEMP environment variables to point to it.

Let’s start with the first step: creating a folder that will serve as the temporary folder on our VM. We can do this from our Visual Studio cloud service project. For each role, we can create a Local Resource that has a given quota, as sketched below (make sure not to exceed the local resource limit for the VM size you are using!).
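
For reference, the local resource ends up in the role’s ServiceDefinition.csdef roughly like this; the role name and size are placeholders, while the resource name matches the one used in the code further down (CustomTempPath):

<WorkerRole name="WorkerRole1">
  <LocalResources>
    <!-- placeholder size: pick a quota that fits within the VM size's local resource limit -->
    <LocalStorage name="CustomTempPath" cleanOnRoleRecycle="false" sizeInMB="10240" />
  </LocalResources>
</WorkerRole>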

Create local resource on Azure VM

The next step would be setting the TMP / TEMP environment variables. We can do this by adding the following code into the role’s RoleEntryPoint (pasting full class here for reference):

public class WorkerRole : RoleEntryPoint
{
    private const string _customTempPathResource = "CustomTempPath";
    private CancellationTokenSource _cancellationTokenSource;

    public override bool OnStart()
    {
        // Set TEMP path on current role
        string customTempPath = RoleEnvironment.GetLocalResource(_customTempPathResource).RootPath;
        Environment.SetEnvironmentVariable("TMP", customTempPath);
        Environment.SetEnvironmentVariable("TEMP", customTempPath);

        return base.OnStart();
    }
}

That’s it! A fresh deploy and our temporary files are now stored in a bigger folder.

What happened to Code Spaces could happen to you. On Amazon, Azure and any host out there.

Earlier this week, a sad thing happened to the version control hosting service Code Spaces. A malicious person gained access to their Amazon control panel and, after demanding a ransom from the owners of Code Spaces, started deleting data and EC2 instances. After a couple of failed attempts from Code Spaces to stop this from happening, the impossible happened: the hacker rendered Code Spaces dead. Everything that was their business is gone. As they state themselves:

Code Spaces will not be able to operate beyond this point, the cost of resolving this issue to date and the expected cost of refunding customers who have been left without the service they paid for will put Code Spaces in a irreversible position both financially and in terms of on going credibility.

That’s sad. Sad for users, sad for employees and sad for the business owners. Some nutcase destroyed a flourishing business over the course of 12 hours. Horrible! But the most horrible thing? It can happen to you! Or as Jeff Atwood stated:

Jeff Atwood - they are everywhere!

The fact that this could happen is bad. But security is what it is: there is always a chance of something happening, whatever we do to mitigate as much of it as possible. Any service out there, whether Amazon, Microsoft Azure or your hosting provider’s control panel, is open to everyone with a username and password. Being a Microsoft Azure fan, I’ll use this post to scare everyone using the service and tools about what can happen. Knowing what can happen is the first step towards mitigating it.

Disclaimer and setting the stage

What I do NOT want to do in this post is go into the technical details of every potential mishap that can happen. We’re all developers, there’s a myriad of search engines out there that can present us with all the details. I also do not want to give people the tools to do these mishaps. I’ll give you some theory on what could happen but I don’t want to be the guy who told people to be evil. Don’t. I deny any responsibility for potential consequences of this post.

Microsoft Account

Every Microsoft Azure subscription is linked to either an organizational account or a Microsoft Account. Earlier this week, I saw someone tweet that they had 32 Microsoft Azure subscriptions linked to their Microsoft Account. If I were looking to do bad things there, I’d try and get access to that account using any of the approaches available. Trying to gain access, some social engineering, anything! 32 subscriptions is a lot of ransom I could ask for. And with potentially 20 cores of CPU available in all of them, it’s also an ideal target to go and host some spam bots or some machines to perform a DDoS.

What can we do with our Microsoft Account to make it all a bit more secure?

  • Enable 2-factor authentication on your Microsoft Account. Do it!
  • Partition. Have one Microsoft Account for every subscription. With a different, complex password.
  • Managing this many subscriptions with this many accounts is hard. Don’t be tempted to make all the accounts “Administrators” on all of the subscriptions. It’s convenient and you will have one single logon to manage it all, but it broadens the potential attack surface again.

Certificates, PowerShell, the Command Line, NuGet and Visual Studio

The Microsoft Azure Management APIs can be used to do virtually anything you can do through the management portal. And more! Access to the management API is secured using a certificate that you have to upload to the portal. Great! Unless that management certificate was generated on your end without any security in mind. Not having a passphrase on it, or storing that passphrase on your system, means that anyone with access to your computer could, in theory, use the management API with that certificate. But this is probably unlikely since as an attacker I’d have to have access to your computer. There are more clever ways!

Those PowerShell and cross-platform tools are great! Using them, we can script against the management API to create storage accounts, provision and deprovision resources, add co-administrators and so forth. What if an attacker got some software on your system? Malware. A piece of sample code. Anything! If you’re using the PowerShell or cross-platform tools, you’ve probably used them before and set the active subscription. All an attacker would have to do is run the command to create a co-admin or delete or provision something. No. Credentials. Needed.

Not possible, you say? You never install any software that is out there? And you’re especially wary when getting something through e-mail? Good for you! “But that NuGet thing is so damn tempting. I installed half of what’s out there so far!” – sounds familiar? Did you know NuGet packages can run PowerShell code when installed in Visual Studio? What if… an attacker put a package named “jQeury” out there? And other potential spelling mistakes? They could ship the contents of the real jQuery package in them so you don’t see anything unusual. In that package, someone could put some call to the Azure PowerShell CmdLets and a fallback using the cross-platform tools to create a storage account, mirror a couple of TB of illegal content and host it on your account. Or delete all your precious VMs.

Not using any of the PowerShell or cross-platform tools? No worries: attackers could also leverage the $dte object and invoke stuff inside Visual Studio and trigger any of the ample commands available in there. You may notice something in the activity log when this happens, but still.

What can we do to use these tools but make it a bit more secure?

  • Think about good certificate management. Give them a shorter lifetime, replace them every now and then. Don’t store passphrases.
  • Using the PowerShell or cross-platform tools? Make sure that you invalidate the credential used after every use. Don’t just set the active subscription in these tools to null: there’s a list command with which an attacker could set the current subscription id again.
  • That publish settings file? It contains the management certificate. Don't distribute it.
  • Automate using all the tools! But not on all developer machines, do it on the build server.

All these tools are very useful and handy to work with, but use them with some common sense. If you have other tips for locking it all down, leave them in the comments.

Enjoy your night rest.

Microsoft Azure cloud plugin for TeamCity (dabbling in Java code)

NOTE: While the content in this blog post will still work, JetBrains now has a plugin that is the recommended way of working with TeamCity and build agents on Azure. Please check this blog post to learn more about it.

If you follow me on Twitter, you may have seen me in several stages of anger at Java. After two weeks of learning, experimenting, coding and even getting it all to compile, I’m proud to announce an initial, very early preview of my Microsoft Azure cloud plugin for TeamCity.

This plugin provides Microsoft Azure cloud support for TeamCity. By configuring a Microsoft Azure cloud in TeamCity, a set of known virtual build agents can be started and stopped on demand by the TeamCity server so we can benefit from Microsoft Azure’s cost model (a stopped VM is almost free) and scaling model (only start new instances when we need them).

Curious to try it? Make sure you know it is all still very early alpha version software so use with caution. I wanted to get an early preview out to gather some comments on it. Here are the installation steps:

  • Download the plugin ZIP file from the latest GitHub release.
  • Copy it to the TeamCity plugins folder
  • Restart TeamCity server and verify the plugin was installed from Administration | Plugins List

Creating a new cloud profile

From TeamCity’s Administration | Agent Cloud, we can create a new cloud configuration making use of the Microsoft Azure plugin for TeamCity. All we have to do is select “Microsoft Azure” as the cloud type and enter the requested details.

TeamCity agent on Azure VM

Once we enter some preconfigured and pre-provisioned VM names, we’re good to save and profit.

Known issue: only one Microsoft Azure cloud configuration can be created per TeamCity server because the KeyStore being configured by the plugin only stores one management certificate. Contribute a fix if you feel up for it!

What’s up?

From Agents | Cloud, we can now see which VM instances are stopped/running on Microsoft Azure.

Start stop TeamCity agent on Azure

Known issue: status of the VM displayed is not always current. The VM status is read from TeamCity's last known status, not from Microsoft Azure. Again, contribute a fix if you feel up for it.

What is there to come?

That’s pretty much it for now. I told you, it’s early. In my ideal world, there should also be a possibility to launch VM instances from a predefined image and destroy them when no longer needed. I also would love to convert it all to Kotlin as I still don’t like Java as a language and Kotlin looks really nice. And ideally, the crude UI I did for the plugin should be much nicer too.

Happy building in the cloud!

Optimizing calls to Azure storage using Fiddler

Last week, Xavier and I were really happy about reaching a milestone. After having spent quite some evenings on bringing Visual Studio Online integration to MyGet, we were happy to be mentioned in the TechEd keynote and even pop up in quite some sessions. We also learned ASP.NET vNext was coming and that it would leverage NuGet as an important part of it. What we did not know, however, is that the ASP.NET team would host all vNext preview packages on MyGet. But we soon noticed and found our evening hours were going to be very focused for another few days…

On May 12th, we all of a sudden saw usage of our service double in an instant. Ouch! Here’s what Google Analytics told us:


Luckily for us, we are hosted on Azure and could just pull the slider to the right and add more servers. Scale out for the win! Apart from some hiccups when we enabled auto scaling (we thought traffic would go down at some points during the day), MyGet handled traffic pretty well. But still, we had to double our server capacity to be able to host one high-traffic NuGet feed. And even though we doubled server capacity, response times went up as well.


Time for action! But what…

Some background on our application

When we started MyGet, our idea was to leverage table storage and blob storage, and avoid SQL completely. The reason for that is that back then MyGet was a simple proof-of-concept and we wanted to play with new technology. Growing, getting traction and onboarding users, we found out that what we had in place for this back-end was very nice to work with, and we’ve since evolved to a more CQRS-ish and event-driven(-ish) architecture.

But with all good things come some bad things as well. Adding features, improving code and implementing quotas so we could actually meter what our users were doing and put a price tag on it had added quite some calls to table storage. And while table storage is blazingly fast if you know what you are doing, these are still calls to an external system that have to open a TCP connection, do an SSL handshake and so on. Not so many milliseconds each, but summed together they do add up. So how do you find out what is happening? Let’s see…

Analyzing Azure storage traffic

There is no profiler out there that currently allows you to easily hook into what traffic is going over the wire to Azure storage. Fortunately for us, the Azure team added a way of hooking a proxy server between your application and storage itself. Using the development storage emulator, we can simply change our storage connection string to the following and hook Fiddler in:
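
The connection string boils down to pointing the development storage emulator’s proxy at Fiddler, along these lines (a sketch assuming the storage emulator and Fiddler’s default ipv4.fiddler host alias):

UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://ipv4.fiddler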


Great! Now we have Fiddler to analyze our traffic to Azure storage. All requests going to blob, table or queue storage services are now visible: URL, result, timing and so forth.


The picture above is not coming from MyGet but just to illustrate the idea. You can clear the list of requests, load a specific page or action in your application and see the calls going out to storage. And we had some critical paths where we did over 7 requests. If each of them is 30ms on average, that is 210ms just to grab some data. And then you’ve not even done anything with it… So we decided to tackle that in our code.

Another thing we noticed by looking at URLs here is that we had some of those requests filtering only using the table storage RowKey, resulting in a +/- 2 second roundtrip on some requests. That is bad. And so we also fixed that (on some occasions by adding some caching, on others by restructuring the way data is stored and moving our filter to PartitionKey or a combination of PartitionKey and RowKey as you should).

The result of this? Well, have a look at the picture above where our response times are shown: we managed to drastically reduce response times, making ourselves happy (we could kill some VMs), and our users as well, because everything is faster.

A simple trick with great results!

Windows Azure Storage magic with Shared Access Signatures

When building cloud applications on Windows Azure, it’s always a good thing to delegate as much work to specialized services as possible. File downloads would be one good example: these can be streamed directly from Windows Azure blob storage to your client, without having to pass a web application hosted on Windows Azure Cloud Services or Web Sites. Why occupy the web server with copying data from a request stream to a response stream? Let blob storage handle it!

When thinking this through, some questions may come to mind. Here are a few:

  • How can I keep this blob secure? I don’t want to give everyone access to it!
  • How can I keep the download URL on my web server, track the number of downloads (or enforce other security rules) and still benefit from offloading the download to blob storage?
  • How can the blob be stored in a way that is clear to my application (e.g. a customer ID or something), yet give it a friendly name when downloading?

Let’s answer these!

Meet Shared Access Signatures

Keeping blobs secure is pretty easy on Windows Azure Blob Storage, but it’s also sort of an all-or-nothing story… Either you make all blobs in a container private, or you make them public.

Not to worry though! Using Shared Access Signatures it is possible to grant temporary privileges on a blob, for read and write access. Here’s a code snippet that will grant read access to the blob named helloworld.txt, residing in a private container named files, during the next minute:

CloudStorageAccount account = // your storage account connection here
var client = account.CreateCloudBlobClient();
var container = client.GetContainerReference("files");
var blob = container.GetBlockBlobReference("helloworld.txt");

var builder = new UriBuilder(blob.Uri);
builder.Query = blob.GetSharedAccessSignature(
    new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessStartTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(-5)),
        SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(1))
    }).TrimStart('?');

var signedBlobUrl = builder.Uri;

Note I’m giving access starting 5 minutes ago, just to make sure any clock skew along the way is ignored within a reasonable time window.

There we go: our blob is secured and by passing along the signedBlobUrl to our user, he or she can start downloading our blob without having access to any other blobs at all.

Meet HTTP redirects

Shared Access Signatures are really cool, but the generated URLs are… “fugly”: they are not pretty or easy to remember. Well, there is this thing called HTTP redirects, right? Here’s an ASP.NET MVC action method that checks if the user is authenticated, queries a repository for the correct filename, generates the shared access signature and redirects us to the actual download.

[Authorize]
[EnsureInvoiceAccessibleForUser]
public ActionResult DownloadInvoice(string invoiceId)
{
    // Fetch invoice
    var invoice = InvoiceService.RetrieveInvoice(invoiceId);
    if (invoice == null)
    {
        return new HttpNotFoundResult();
    }

    // We can do other things here: track # downloads, ...

    // Build shared access signature
    CloudStorageAccount account = // your storage account connection here
    var client = account.CreateCloudBlobClient();
    var container = client.GetContainerReference("invoices");
    var blob = container.GetBlockBlobReference(invoice.CustomerId + "-" + invoice.InvoiceId);

    var builder = new UriBuilder(blob.Uri);
    builder.Query = blob.GetSharedAccessSignature(
        new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessStartTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(-5)),
            SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(1))
        }).TrimStart('?');

    var signedBlobUrl = builder.Uri;

    // Redirect
    return Redirect(signedBlobUrl.ToString());
}

This gives us the best of both worlds: our web application can still verify access and run some business logic on it, yet we can offload the file download to blob storage.

Meet Shared Access Signatures content disposition header

Often, storage is a technical thing where we choose technical filenames for the things we store, instead of human-readable or human-friendly file names. In the example above, users will get a very strange filename to download: the customer id and invoice id, concatenated. No .pdf file extension, nothing else either. Users may get confused by this, or have problems opening the file because their browser will not recognize that this is a PDF.

Last November, a feature was added to blob storage which enables us to let a blob be whatever we want it to be: support for setting some additional headers on a blob through the Shared Access Signature.

The following headers can be specified on-the-fly, through the shared access signature:

  • Cache-Control
  • Content-Disposition
  • Content-Encoding
  • Content-Language
  • Content-Type

Here’s how to generate a meaningful Shared Access Signature in the previous example, where we specify a human-readable filename for the resulting download, as well as a custom content type:

builder.Query = blob.GetSharedAccessSignature(
    new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessStartTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(-5)),
        SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(1))
    },
    new SharedAccessBlobHeaders
    {
        ContentDisposition = "attachment; filename=" + customer.DisplayName + "-invoice-" + invoice.InvoiceId + ".pdf",
        ContentType = "application/pdf"
    }).TrimStart('?');

Note: for this feature to work, the service version for the storage account must be set to the latest one, using the DefaultServiceVersion on the blob client. Here’s an example:

CloudStorageAccount account = // your storage account connection here
var client = account.CreateCloudBlobClient();

var serviceProperties = client.GetServiceProperties();
serviceProperties.DefaultServiceVersion = "2013-08-15";
client.SetServiceProperties(serviceProperties);

Combining all these techniques, we can do some analytics and business logic in our web application and offload the boring file and network I/O to blob storage.


Pro NuGet second edition is out

Pro NuGet will teach you all there is to know about NuGet

Phew! Around February 2013, Xavier and I started planning work on an update of our book. Eight months later, we’re proud to present you with Pro NuGet (second edition). It’s been a tough couple of months writing this: Xavier has become a father for the second time (congratulations!), we’ve had two massive updates to NuGet we had to work into our book, … But here it is!

What’s new?

  • A number of NuGet workflows have changed and new ones have been added. Expect coverage of all of these, including NuGet’s old and new package restore functionality.
  • Want to work with NuGet and Windows Azure Websites, TeamCity, Visual Studio Online, OctopusDeploy, NuGet Gallery, ProGet or MyGet? We have a bunch of recipes for you!
  • Pitfalls of package versioning
  • Building a plugin system based on NuGet

Next to that there is a lot more meat in there!

  • Understand how NuGet fits into the big picture of your software development process to save you time and money.
  • How to keep your team working when your project depends on an external resource (such as a web service or cloud) which suddenly becomes unavailable.
  • Whether or not to auto-update NuGet packages within a continuous integration process for maximum reliability and speed.
  • How to combine NuGet with PowerShell to create your own Cmdlets and extend the base toolset in an extremely powerful manner.
  • Evaluate the pros-and-cons of hosting your own NuGet repository.
  • How to incorporate NuGet seamlessly within your continuous integration process.
  • Much much more!

We would love to get your feedback! E-mail us or write a review on your blog or Amazon. Enjoy the read!

PS: Thanks to our excellent reviewers (the NuGet team) and everyone at Apress! There are a lot of people involved in getting a quality book out there. Thanks!

A new year's present: introducing Glimpse plugins for Windows Azure

Glimpse plugin for Windows Azure

Have you tried Glimpse before? It shows you server-side information like execution times, server configuration, request data and such in your browser. At the February MVP Summit this year, Anthony, Nik and I had a chat about what would be useful information to display in Glimpse when working on Windows Azure. Some beers and a bit of coding later, we had a proof-of-concept showing Windows Azure runtime configuration data in a Glimpse tab.

Today, we are happy to announce a first public preview of two Windows Azure tabs in Glimpse: the Glimpse.WindowsAzure package displaying runtime information, and Glimpse.WindowsAzure.Storage collecting information about traffic from and to storage.

Want to give it a try? You can install these two NuGet packages right now (prerelease packages for now). Sources can be found on GitHub. And all comments, remarks and suggestions can go in the comments to this blog post.

Now let’s have a look at what these packages have to offer!


The Glimpse.WindowsAzure package adds a new tab to Glimpse, displaying environment information when the web application is hosted on Windows Azure. It does this for Cloud Services as well as for Windows Azure Web Sites.

Installation is easy: simply add the Glimpse.WindowsAzure package to your project and you’re done. If you are running on .NET 4.5, you will have to add the following setting to your Web.config:

  <add key="Glimpse:DisableAsyncSupport" value="true"/>

When hosting in a Windows Azure Cloud Service (or the full emulator available in the Windows Azure SDK), the Azure Environment tab will provide information gathered from the RoleEnvironment class. You can see the deployment ID, current role instance information, a list of configured endpoints, which fault and update domain our application is running in and so on.

Windows Azure Role Environment

When the web application is hosted on Windows Azure Web Sites, we get information like Compute Mode (Shared or Reserved) as well as Site Mode (Limited in the screenshot below means the application is running on a Free web site).

Glimpse Windows Azure Web Sites

The Azure Environment tab will also provide a link to the Kudu Remote Console, a feature in Windows Azure Web Sites where you can run commands on the box hosting the web site.

Kudu Console

Pretty handy if you ask me!


The Glimpse.WindowsAzure.Storage package adds an “Azure Storage” tab to Glimpse, displaying all sorts of information about traffic from and to Windows Azure storage. It will also estimate the cost for loading the current page depending on the number of transactions and the traffic to blobs, tables and/or queues. Note that this package can also be used in ASP.NET web sites that are not hosted on Windows Azure but do make use of Windows Azure Storage.

Once the package is installed into your project, you can almost start inspecting all this information. Almost? Well, see the caveat further down…


Number of transactions and a cost estimate

The first type of data displayed in the Azure Storage tab is the total number of transactions, traffic consumed and a cost estimate for 10.000 pageviews. This information can be used for several scenarios:

  • Know how many calls are made to storage. Maybe you can reduce the number of calls to reduce the total number of transactions, one of the billing metrics for Windows Azure.
  • Another billing metric is the amount of traffic consumed. When running in the same datacenter as the storage account, it’s less important for cost but still, reducing the traffic can reduce the page load time.

Windows Azure Storage Transactions and bandwidth consumed

Now where do we get the price per 10.000 pageviews? Well, this is a very rough estimate, based on the pay-per-use pricing in Windows Azure. It is very likely that the actual price will be lower if you are running on an MSDN subscription, a pre-paid plan or an Enterprise Agreement.

Warnings and analysis of requests

One feature we’re particularly proud of is this one: warnings and analysis of requests to Windows Azure Storage. First of all, we’ll analyse the settings for communicating over the network. In the screenshot below, you can see several general hints to optimize throughput by disabling the Nagle algorithm or disabling HTTP 100 Continue.

Another analysis we’ll do is verifying the requests themselves. In the example below, Glimpse is giving a warning about the fact that I’m querying table storage on properties that are not indexed, potentially causing timeouts in my application.

There are several more inspections in there, if you have suggestions for others feel free to let us know!

Analysis of requests

List of requests and Timeline

When using Windows Azure Storage, Glimpse will show you all requests that have been made together with the status code and total duration of the request.


Since a plain list is often not that easy to analyze, the Timeline tab is extended with this information as well. It shows you a summary of when calls to Windows Azure Storage have been made, as well as full details of the requests:

Timeline tracing Windows Azure Storage

One caveat

Because of a current limitation in the Windows Azure Storage SDK, you will have to explicitly add one parameter to every call that is made to Windows Azure Storage.

The idea is that the OperationContext parameter for calls to storage has to be a special Glimpse OperationContext obtained by calling OperationContextFactory.Current.Create(). This Glimpse-specific implementation provides us with all the information required to display information in the Azure Storage tab. Here’s an example of how to wire it in for a call to create a blob storage container:

var account = CloudStorageAccount.DevelopmentStorageAccount;
var blobclient = account.CreateCloudBlobClient();
var container1 = blobclient.GetContainerReference("glimpse1");

container1.CreateIfNotExists(operationContext: OperationContextFactory.Current.Create());

We are talking with Microsoft about this and are pretty sure this shortcoming will be addressed in the future.

What’s next?

It would be great if you could give these two packages a try! The NuGet packages are available right now (prerelease packages for now). Sources can be found on GitHub. And all comments, remarks and suggestions can go in the comments to this blog post.

We’re still looking at load balanced environments. You can implement Glimpse’s IPersistenceStore but we would like to have a zero-configuration setup.

Once we’re confident Glimpse.WindowsAzure and Glimpse.WindowsAzure.Storage are working properly, we’ll have a look at Windows Azure Caching and Service Bus.


Visual Studio Online for Windows Azure Web Sites

Today’s official Visual Studio 2013 launch provides some interesting novelties, especially for Windows Azure Web Sites. We can now choose which pipeline to run in (classic or integrated), define separate applications in subfolders of our web site, and debug a web site right from within Visual Studio. But the most impressive one is this: how about… an in-browser editor for your application?

Editing Node.JS in browser

Let’s take a quick tour of it. After creating a web site, we can go to the web site’s configuration and enable the Visual Studio Online preview.

Edit in Visual Studio Online

Once enabled, simply navigate to https://<yoursitename> or click the link from the dashboard, provide your site credentials and be greeted with Visual Studio Online.

On the left-hand menu, we can select the feature to work with. Explore does as it says: it gives you the possibility to explore the files in your site, open them, save them, delete them and so on. We can enable Git integration, search for files and classes and so on. When working in the editor we get features like autocompletion, Find References, Peek Definition and so on. Apparently these don’t work for all languages yet; currently JavaScript and Node.js seem to work, while C# and PHP come with syntax highlighting but nothing more than that.

Peek definition

Most actions in the editor come with keyboard shortcuts, for example Ctrl+, opens navigation towards files in our application.


The console comes with things like npm and autocompletion on most commands as well.

Console in Visual Studio Online

I can see myself using this for some scenarios like on-the-road editing from a Git repository (yes, you can clone any repo you want in this tool) or making live modifications to some simple sites I have running. What would you use this for?

Developing Windows Azure Mobile Services server-side

Word of warning: This is a partial cross-post from the JetBrains WebStorm blog. The post you are currently reading adds some more information around Windows Azure Mobile Services, builds on a full example and is a bit more in-depth.

With Microsoft’s Windows Azure Mobile Services, we can build a back-end for iOS, Android, HTML, Windows Phone and Windows 8 apps that supports storing data, authentication, push notifications across all platforms and more. There are client libraries available for all these platforms which can be used when developing in an IDE of choice, e.g. AppCode, Google Android Studio or Visual Studio. In this post, let’s focus on what these different platforms have in common: the server-side code.

This post was sparked by my buddy Kristof Rennen’s session for our user group. During his session he mentioned a couple of times how he dislikes Node.js and the trial-and-error manner of building the server-side due to lack of good tooling. Working for a tooling vendor and intrigued by the quest of finding a better way, I decided to post the short article you are currently reading.

Do note that I will focus more on how to get your development environment set-up and less on the Windows Azure Mobile Services feature set. Yes, you will learn some of the very basics but there are way better resources available for getting in-depth knowledge on the topic.

Here’s what we will see in this post:

  • Setting up a Windows Azure Mobile Service
  • Creating a table and storing data
  • A simple HTML/JS client
  • Adding logic to our API
  • Working on server-side logic with WebStorm
  • Sending e-mail using a Node.js module
  • Putting our API to the test with the REST client
  • Unit testing our logic

The scenario

Doing some exploration is always more fun when we can do it based on a simple scenario. Whenever JetBrains goes to a conference and we have a booth, we like to do a raffle for licenses. The idea is simple: come to our booth for a chat, fill out a simple form and we will pick random names after the conference and send a free license.

For this post, I’ve created a very simple form in HTML and JavaScript, collecting visitor name and e-mail address.


Once someone participates in the raffle, the name and e-mail address are stored in a database and we send out an e-mail thanking that person for visiting the booth together with a link to download a product trial.

Setting up a Windows Azure Mobile Service

First things first: we will require a Windows Azure account to start developing. Next, we can create a new Mobile Service through the Windows Azure Management Portal.


Next, we can give our service a name and pick the datacenter location for it. We also have to provide the type of database we want to use: a free, 20 MB database, or a full-fledged SQL Database. While Windows Azure Mobile Services is always coupled to a database, we can build a custom API with it as well.


Once completed, we get several tabs to work with. There’s the initial welcome screen, displaying links to documentation and client libraries. The other tabs give access to monitoring, scaling, how we want to authenticate users, push notification settings and logs. Since we want to store data of booth visitors, let’s enter the Data tab.

Creating a table and storing data

From the Data tab, we can create a new table. Let’s call it Visitor. When creating a new table, we have to specify access rules for the API that will be available on top of it.


We can tell who can read (API GET request), insert (API POST request), update (API PATCH request) and delete (API DELETE request). Since our application will only insert new data and we don’t want to force booth visitors to log in with their social profiles, we can specify inserts can be done if an API key is provided. All other operations will be blocked for outside users: reading and deleting will only be available through the Windows Azure Management Portal with the above settings.

Do we have to create columns for storing booth visitor data? By default, Windows Azure Mobile Services has “dynamic schema” enabled, which means we can throw some JSON at our Mobile Service and it will store the data for us.

A simple HTML/JS client

As promised earlier in this post, let’s see how we can build a simple client for the service we have just created. We’ll go with an HTML and JavaScript based client as it’s fairly easy to demonstrate. Again, have a look at the other client SDKs for the platform you are developing for.

Our HTML page consists of nothing but two text boxes and a button, conveniently named name, email and send. There are two ways of sending data to our Mobile Service: calling the API directly or making use of the client library provided. Both are easy to do: the API lives at https://<servicename><tablename> and we can POST a JSON-serialized object to it, an approach we’ll take later in this blog post. There is also a JavaScript client library available from https://<servicename> which our client is using.
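
Roughly, the client-side script looks like this: a sketch written against the Mobile Services JavaScript client library, where the service URL and application key are placeholders for your own values.

var client = new WindowsAzure.MobileServiceClient(
    "https://<servicename>.azure-mobile.net/",   // placeholder: your Mobile Service URL
    "YOUR_APPLICATION_KEY"                       // placeholder: key from Manage Keys in the portal
);

document.getElementById("send").addEventListener("click", function () {
    var visitorTable = client.getTable("Visitor");

    // insert the form values as a JSON object; dynamic schema creates the columns for us
    visitorTable.insert({
        name: document.getElementById("name").value,
        email: document.getElementById("email").value
    }).done(function () {
        alert("Thank you for participating!");
    }, function (error) {
        alert("Something went wrong: " + error.message);
    });
});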


As we can see, a new MobileServiceClient is created on which we can get a table reference (getTable) and insert a JSON-formatted object. Do note that we have to pass in an API key in the client constructor, which can be obtained from the Windows Azure Management Portal under the Manage Keys toolbar button.

From the portal, we can now see the data we’re submitting from our simple application:


Adding logic to our API

Let’s make it a bit more exciting! What if we wanted to store a timestamp with every record? We may want to have some insight into when our booth was busiest. We can send a timestamp from the client but that would only add clutter to our client-side code. Also if we wanted to port the HTML/JS client to other platforms it would mean we have to make sure every client sends this data to our mobile service. In short: this calls for some server-side logic.

For every table created, we can make use of the Script tab to add custom logic to read, insert, update and delete operations which we can write in JavaScript. By default, this is what a script for insert may look like:
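
Give or take a comment, the default insert script is a simple pass-through:

function insert(item, user, request) {

    request.execute();

}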


The insert function will be called with 3 parameters: the item to be stored (our JSON-serialized object), the current user and the full request. By default, the request.execute() function is called which will make use of the other two parameters internally. Let’s enrich our item with a timestamp.
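
A sketch of the enriched script; the createdAt property name is my choice, and since dynamic schema is enabled a column will simply be added for it:

function insert(item, user, request) {

    // enrich the item with a server-side timestamp before storing it
    item.createdAt = new Date();

    request.execute();

}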


Hitting Save will deploy this script to our mobile service which from now on will store an inserted timestamp in our database as well.

This is a very trivial example. There are a lot of things that can be done server-side: enforcing validation, record filtering, storing data in other tables as well, sending e-mail or text messages, … Here’s a post with some common scenarios. Full reference to the server-side objects is also available.

Working on server-side logic with WebStorm

Unfortunately, the in-browser editor for server-side scripts is a bit limited. It features no autocompletion and all code has to go in one file. How would we create shared logic which can be re-used across different scripts? How would we unit test our code? This is where WebStorm comes in. We can access the complete server-side code through a Git repository and work on it in a full IDE!

The Git access to our mobile service is disabled by default. Through the portal’s right-hand side menu, we can enable it by clicking the Set up source control link. Next, we can find repository details from the Configure tab.


We can now use WebStorm’s VCS | Checkout From Version Control | Git menu to bring down the server-side code for our Windows Azure Mobile Service.


In our project, we can see several folders and files. The service/api folder can hold custom APIs (check the file for more info). service/scheduler can hold scripts that execute at a given time or interval, much like CRON jobs. service/shared can hold shared scripts that can be used inside table logic, custom APIs and scheduler scripts. In the service/table folder we can find the script we have created through the portal: visitor.insert.js. Also note the visitor.json file, which contains the access rules we configured through the portal earlier.


From now on, we can work inside WebStorm and push to the remote Git repository if we want to deploy our new code.

Sending e-mail using a Node.js module

Let’s go back to our initial requirements: whenever someone enters their name and e-mail address in our application, we want to send out an e-mail thanking them for participating. We can do this by making use of an NPM module, for example SendGrid.

Windows Azure Mobile Services comes with some NPM modules preinstalled, like SendGrid and Twilio. However we want to make sure we are always using the same version of the NPM package, so let’s install it into our project. WebStorm has a built-in package manager to do this, however Windows Azure Mobile Services requires us to install the module in a non-standard location (the service folder) hence we will use the Terminal tool window to install it.
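
From the Terminal tool window, that boils down to something like:

cd service
npm install sendgrid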


Once finished, we can start working on our e-mail logic. Since we may want to re-use the e-mail logic (and we want to unit test it later), it’s best to create our logic in the shared folder.


In our shared module, we can make use of the SendGrid module to create and send an e-mail. We can export our sendThankYouMessage function to consumers of our shared module. In the visitor.insert.js script we can require our shared module and make use of the functionality it exposes. And as an added bonus, WebStorm provides us with autocompletion, code analysis and so on.
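
Here’s a rough sketch of what that could look like. The module path, the SendGrid credentials and the message contents are placeholders, and the sendgrid calls are written from memory, so double-check them against the module’s own documentation.

// service/shared/emails.js - a hypothetical shared module wrapping the e-mail logic
var SendGrid = require('sendgrid').SendGrid;

exports.sendThankYouMessage = function (visitor) {
    var sendgrid = new SendGrid('your-sendgrid-user', 'your-sendgrid-password');

    sendgrid.send({
        to: visitor.email,
        from: 'noreply@example.com',
        subject: 'Thank you for visiting our booth!',
        text: 'Hi ' + visitor.name + ', thanks for participating in our raffle. ' +
              'Here is a link to download a product trial: http://example.com/trial'
    }, function (success, message) {
        if (!success) {
            console.error(message);
        }
    });
};

// service/table/visitor.insert.js - requiring the shared module from the table script
var emails = require('../shared/emails');

function insert(item, user, request) {
    item.createdAt = new Date();
    emails.sendThankYouMessage(item);

    request.execute();
}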


Once we’ve updated our code, we can transfer our server-side code to Windows Azure Mobile Services. Ctrl+K (or Cmd+K on Mac OS X) allows us to commit and push from within the IDE.


Putting our API to the test with the REST client

Once our changes have been deployed, we can test our API. This can be done using one of the client libraries or by making use of WebStorm’s built-in REST client. From the Tools | Test RESTful Web Service menu we can craft our API calls manually.

We can specify the HTTP method to use (POST since we want to insert) and the URL to our Windows Azure Mobile Services endpoint. In the headers section, we can add a Content-Type header and set it to application/json. We also have to specify an API key in the X-ZUMO-APPLICATION header. This API key can be found in the Windows Azure Management Portal. On the right-hand side we can provide the text to post, in this case a JSON-serialized object with some properties.
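
The raw request we are crafting looks roughly like this; the endpoint pattern, the API key and the payload values are placeholders for your own service:

POST https://<servicename>.azure-mobile.net/tables/Visitor HTTP/1.1
Content-Type: application/json
X-ZUMO-APPLICATION: your-api-key-from-the-portal

{ "name": "Maarten", "email": "maarten@example.com" }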


After running the request, we get back response headers and a response body:


No error message but an object is being returned? Great, that means our code works (and should also be sending out an e-mail). If something does go wrong, the Logs tab in the Windows Azure portal can be a tremendous help in finding out what went wrong.

Through the toolbar on the left, we can export/import requests, making it easy to create a number of predefined requests that can easily be run over and over for testing the REST API.

Unit testing our logic

With WebStorm we can easily test our JavaScript code and custom Node.js modules. Let’s first set up our IDE. Unit testing can be done using the Nodeunit testing framework, which we can install using the Node.js package manager.


Next, we can create a new Run Configuration from the toolbar selecting Nodeunit as the configuration type and entering all required configuration details. In our case, let’s run all tests from the test directory.


Next, we can create a folder that will hold our tests and mark it as a Test Source Root (open the context menu and use Mark Directory As | Test Source Root). Tests for Nodeunit are always considered modules and should export their test functions. Here’s a very basic example which tells Nodeunit to wait for one assertion, assert that a boolean is true and marks the test case completed.
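
In code, such a test module looks more or less like this:

// test/basics.js - a minimal Nodeunit test module
exports.testBoolean = function (test) {
    test.expect(1);                        // wait for exactly one assertion
    test.ok(true, 'this should be true');  // assert that a boolean is true
    test.done();                           // mark the test case completed
};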


Of course we can also test our business logic. It’s best to create separate modules under the shared folder as they will be easier to unit test. However if you do have to test the actual table scripts (like insert functionality), there is a little trick that allows doing just that. The following snippet exports the insert function outside of the table-specific module:
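
In essence, we keep the table script as-is and add one line at the bottom (the body of insert is shortened here):

function insert(item, user, request) {
    // ... the actual insert logic lives here ...
    request.execute();
}

// the trick: expose the insert function so unit tests can require() this module
exports.insert = insert;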


We can now test the complete visitor.insert.js module and even provide mocks to work with. The following example loads all our modules and sets up test expectations. We’re also overriding specific functionality, such as the sendThankYouMessage function, to just make sure it’s called by our table API logic.
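
A sketch of such a test, assuming the hypothetical module layout used earlier in this post (adjust the require paths to your own project):

// test/visitor.insert.test.js
var emails = require('../shared/emails');                // the shared e-mail module
var visitorInsert = require('../table/visitor.insert');  // the table script exporting insert

exports.testInsertSendsThankYouMessage = function (test) {
    test.expect(2);

    // override the e-mail logic so no real e-mail is sent during the test
    var mailSentTo = null;
    emails.sendThankYouMessage = function (visitor) {
        mailSentTo = visitor.email;
    };

    // mock the request object Windows Azure Mobile Services normally passes in
    var request = {
        execute: function () { /* no-op: we only care that insert gets this far */ }
    };

    var item = { name: 'Test visitor', email: 'test@example.com' };
    visitorInsert.insert(item, null, request);

    test.equal(mailSentTo, 'test@example.com', 'the shared e-mail logic was called');
    test.ok(item.createdAt, 'a timestamp was added to the item');
    test.done();
};

Because require() caches module instances, reassigning sendThankYouMessage in the test also affects the instance the table script uses, which is what makes this simple mocking approach work.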


The full source code for both the server-side and client-side application can be found on

If you would like to learn more about Windows Azure Mobile Services and work with authentication, push notifications or custom APIs, check out the getting started documentation. And if you haven’t already, give WebStorm a try.