Maarten Balliauw {blog}

Web development, NuGet, Microsoft Azure, PHP, ...


Working with a private npm registry in Azure Web Apps

Using Azure Web Apps, we can deploy and host Node applications quite easily. But what do we do with the packages the site depends on? Do we have to upload them manually to Azure Web Apps? Include them in our Git repository? None of that: we just have to make sure our app’s package.json is checked in so that Azure Web Apps can install the dependencies during deployment. Let’s see how.

Installing node modules during deployment

In this blog post, we’ll create a simple application using Express. In its simplest form, an Express application will map incoming request paths to a function that generates the response. This makes Express quite interesting to work with: we can return a simple string or delegate work to a full-fledged MVC component if we want to. Here’s the simplest application I could think of, returning “Hello world!” whenever the root URL is requested. We can save it as server.js so we can deploy it later on.

var express = require("express");
var app = express();

app.get("/", function(req, res) {
    res.send("Hello world!");
});

console.log("Web application starting...");
app.listen(process.env.PORT);
console.log("Web application started on port " + process.env.PORT);

Of course, this will not work as-is. We need to ensure Express (and its dependencies) are installed as well. We can do this using npm, running the following commands:

# create package.json describing our project
npm init

# install and save express as a dependency
npm install express --save

That’s pretty much it. Running this is as simple as setting the PORT environment variable and starting the application using node.

set PORT=1234
node server.js

We can now commit our code, excluding the node_modules folder, to our Azure Web App git repository. Ideally we create a .gitignore file that excludes this folder once and for all. Once committed, Azure Web Apps starts a convention-based deployment process. One of those conventions is that for a Node application, all dependencies from package.json are installed. We can see this convention in action from the Azure portal.
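A minimal .gitignore for this could look like the sketch below (npm-debug.log is optional, but handy to keep out of the repository as well):

# installed packages are restored from package.json during deployment
node_modules/
npm-debug.log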

Azure Deploy Node.JS

Great! Seems we have to do nothing special to get this to work. Except… What if we are using our own, private npm modules? How can we tell Azure Web Apps to make use of a different npm registry? Let’s see…

Installing private node modules during deployment

When building applications, we may be splitting parts of the application into separate node modules to make the application more componentized, make it easier to develop individual components, and so on. We can use a private npm registry to host these components, an example being MyGet. Using a private npm feed, we can give our development team access to these components using “good old npm” while not throwing these components out on the public npm registry.

Imagine we have a module called demo-site-pages which contains some of the views our web application will be hosting. We can add a dependency on this module in our package.json:

{ "name": "demo-site", "version": "1.0.0", "description": "Demo site", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "author": "", "dependencies": { "express": "^4.13.3", "demo-site-pages": "*" } }

Alternatively we could install this package using npm, specifying the registry directly:

npm install demo-site-pages --save --registry <private registry URL>

But now comes the issue: if we push this out to Azure Web Apps, our private registry is not known!

Generating a .npmrc file to work with a private npm registry in Azure Web Apps

To be able to install node modules from a private npm registry during deployment on Azure Web Apps, we have to ship a .npmrc file with our code. Let’s see how we can do this.

Since our application uses packages from the public npm registry as well as from our private registry, we want to make sure MyGet proxies the public packages during installation. We can enable this from our feed’s Package Sources tab by editing the default package source. Ensure the Make all upstream packages available in clients option is checked.

Next, register your MyGet NPM feed (or another registry URL). The easiest way to do this is by running the following commands:

npm config set registry <private registry URL>
npm login --registry=<private registry URL>
npm config set always-auth true

This generates a .npmrc file under our user profile folder. On Windows that would be something like C:\Users\Username\.npmrc. Copy this file into the application’s root folder and open it in an editor. Depending on the version of npm being used, we may have to set the always-auth setting to true:

registry= //"BASE64ENCODEDPASSWORD" // // //

If we now commit this file to our git repository, the next deployment on Azure Web Apps will install both the packages from the public npm registry (in this case express) and the packages from our private npm registry.

Installing node module from private npm registry


Not enough space on the disk - Azure Cloud Services

I have been using Microsoft Azure Cloud Services since PDC 2008 when it was first announced. Ever since, I’ve been a huge fan of “cloud services”, the cattle VMs in the cloud that are stateless. In all those years, I have never seen this error, until yesterday:

There is not enough space on the disk.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.WriteCore(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.BinaryWriter.Write(Byte[] buffer, Int32 index, Int32 count)

Help! Where did that come from? I decided to set up a remote desktop connection to one of my VMs and see if any of the disks were full or nearly full. Nope!

Azure temp path full

The stack trace told me the exception originated from creating a temporary file, so I decided to check all the obvious Windows temp paths, which were squeaky clean. The next thing I looked at was quotas: were any disk quotas enabled? No. But there are folder quotas enabled on an Azure Cloud Services virtual machine!

Azure temporary folder TEMP TMP quotas 100 MB

The one with a hard quota of 100 MB caught my eye. The path was C:\Resources\temp\…. Putting two and two together, I deduced that Azure was redirecting my application’s temporary folder to this one. And indeed, a few searches later this was confirmed: cloud services redirect the temporary folder and limit it with a hard quota. But I needed more temporary space…

Increasing temporary disk space on Azure Cloud Services

Turns out the System.IO namespace has several calls to get the temporary path (for example Path.GetTempPath()), which all use the Win32 GetTempPath API under the hood. Its docs read:

The GetTempPath function checks for the existence of environment variables in the following order and uses the first path found:

  1. The path specified by the TMP environment variable.
  2. The path specified by the TEMP environment variable.
  3. The path specified by the USERPROFILE environment variable.
  4. The Windows directory.

Fantastic, so all we have to do is create a folder on the VM that has no quota (or a larger one) and set the TMP and/or TEMP environment variables to point to it.
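As a quick illustration of that lookup order, here’s a minimal console sketch (not part of the cloud service itself; the folder path is just an example) showing that Path.GetTempPath() immediately reflects a TMP override for the current process:

using System;
using System.IO;

class TempPathDemo
{
    static void Main()
    {
        // whatever TMP/TEMP currently point to (on a cloud service VM: the quota-limited C:\Resources\temp\... folder)
        Console.WriteLine("Before: " + Path.GetTempPath());

        // override TMP for this process only; GetTempPath() resolves TMP first
        Environment.SetEnvironmentVariable("TMP", @"C:\MyBigTempFolder");
        Console.WriteLine("After:  " + Path.GetTempPath());
    }
}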

Let’s start with the first step: creating a folder that will serve as the temporary folder on our VM. We can do this from our Visual Studio cloud service project. For each role, we can create a Local Resource that has a given quota (make sure not to exceed the local resource limit for the VM size you are using!).

Create local resource on Azure VM
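Under the hood, this ends up in the ServiceDefinition.csdef file. Here’s a sketch of what the designer generates, assuming a worker role named WorkerRole1 and a 5 GB resource called CustomTempPath (names and size are examples):

<ServiceDefinition name="MyCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <LocalResources>
      <!-- local storage resource that will serve as the custom temp folder -->
      <LocalStorage name="CustomTempPath" sizeInMB="5120" cleanOnRoleRecycle="false" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>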

The next step is setting the TMP / TEMP environment variables. We can do this by adding the following code to the role’s RoleEntryPoint (pasting the full class here for reference):

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private const string _customTempPathResource = "CustomTempPath";
    private CancellationTokenSource _cancellationTokenSource;

    public override bool OnStart()
    {
        // Set TEMP path on current role
        string customTempPath = RoleEnvironment.GetLocalResource(_customTempPathResource).RootPath;
        Environment.SetEnvironmentVariable("TMP", customTempPath);
        Environment.SetEnvironmentVariable("TEMP", customTempPath);

        return base.OnStart();
    }
}

That’s it! A fresh deploy and our temporary files are now stored in a bigger folder.

Replaying IIS request logs using Apache JMeter

I don't always test code...

How would you validate that a new API is compatible with an old API? While upgrading frameworks in a web application we’re building, that was exactly the question we were asking ourselves. Sure, we could write synthetic tests against each endpoint, but would that be representative? Users typically find insanely better ways to exercise an API, so why not replay actual requests against the new API?

In this post, we’ll see how we can do exactly this using IIS and Apache JMeter. I’ve been using JMeter quite often in the past years doing web development, as it’s one of the most customizable load test and functional test tools for web applications. The interface is quite spartan, but don’t let that discourage you from using JMeter. After all, this is Sparta!

Collecting IIS logs

Of course, one of the first things to do before being able to replay actual request logs is collecting those logs. Depending on the server configuration, these can be stored in many locations, such as C:\inetpub\logs\LogFiles or C:\Windows\system32\LogFiles\W3SVC1. I will leave it up to you where to find them.

In our case, we’re using Azure Cloud Services to host our web applications. IIS logs are stored in a location similar to C:\Resources\Directory\<deploymentid>.<rolename>.DiagnosticStore\LogFiles\Web\W3SVC<number>. If you’re on Azure Web Sites, logs must be enabled first before they can be downloaded.

Converting IIS logs

By default, IIS logs are in the W3C-IIS format, which is great, as many tools are able to parse that format. Except for JMeter, which works with the NCSA Common Log Format. Not to worry though! Fetch the RConvLog tool and invoke it on the IIS log file that should be replayed later on.

Running RConvLog

We’re running RConvLog on our log, which converts it to the NCSA log format. We’re also telling RConvLog which time zone the logs were generated in. Since Azure runs in UTC, we can just tell it everything is UTC by passing in +0000.

Setting up a JMeter test plan

Time for action! After launching JMeter, we can start setting up our test plan. Using the context Add | Threads | Thread Group menu, we can add a thread group. A thread group in JMeter is what simulates users. We can decide how many users are active at the same time (number of threads), how dispersed the requests are (ramp-up period) and how many requests will be made in total (loop count * number of threads). The following configuration will simulate 10,000 requests with at most 100 concurrent users, for example 100 threads with a loop count of 100. Note that when replaying logs, having 10,000 requests means only the first 10,000 requests from the log will be replayed (if the OrderPreservingLogParser is selected in the next step). If the log file holds 40,000 entries and we want to replay them all, we’ll have to do the math (100 threads × 400 loops, say) and make sure we actually issue that number of requests.

JMeter test plan - users and threads

Yes, spartan. I already mentioned that. Next up, we can use the Add | Sampler | Access Log Sampler context menu to specify where the log file that is to be replayed lives. Since NCSA log files don’t hold the hostname or server port, we can configure them here: for example, if we want to replay logs against localhost, that’s what we’d enter under Server. We’ll also have to enter the log file location so that JMeter knows which file to read from.

An important thing to change here: the log parser class. JMeter comes with several of them and can be extended with custom classes as well. The three that come in the box are the TCLogParser, which processes the access log independently for each thread, and the SharedTCLogParser and OrderPreservingLogParser, which share access to the file so that each thread gets the next entry in the log. Let’s pick the OrderPreservingLogParser so that the access log is read and replayed line by line.

JMeter Access Logs Sampler

All that’s left is using the Add | Listener | Aggregate Report context menu so that we can have a look at the results. That’s pretty much it. We should now save our test plan so we can run JMeter.
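As a side note, a saved test plan can also be run without the GUI, which is handy for larger replays. A minimal sketch, assuming the plan was saved as replay.jmx:

jmeter -n -t replay.jmx -l results.jtl

Here -n runs JMeter in non-GUI mode, -t points at the test plan and -l writes the results to a file we can inspect afterwards.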

Replaying the logs with JMeter

Clicking the green Run button launches our virtual users and starts processing the logs. The Aggregate Report lists every request made and shows its timings and error rate.

Aggregate Report when replaying IIS request logs

That’s about it. But there are some considerations to make…


What have we tested so far? Not a lot, to be honest. We’ve replayed IIS request logs but have not validated that they return the expected results. So… how do we do that? Using the Add | Assertions context menu, we can add assertions on status codes and response contents. That’s great for functional tests, but replaying 100,000 entries is a bit harder to validate… For the test case we opened this blog post with, we created an Excel file with the HTTP status codes and response sizes (they are in the logs) and compared them with the results we see in JMeter.

Maybe we’re not interested in the actual results, but in what is going on inside our application? Turns out replaying IIS request logs can help us there, too. For the application we’re converting, we’re using Azure Application Insights to collect real-time telemetry from our application. And using a profiler like JetBrains dotMemory or dotTrace, we can subject our running application to close inspection while JMeter simulates a real load against it.

And of course, we're testing only anonymous GET requests here. While JMeter supports sending cookies and request types other than GET, there are other tools to test those scenarios as well, like Fiddler or Runscope.


Automatically strong name signing NuGet packages

Some developers prefer to strong name sign their assemblies. Signing them also means that the dependencies they consume must be signed. Not all third-party dependencies are signed, though, for example when consuming packages from NuGet. Some are signed, some are unsigned, and the only way to know is at compile time, when we see this:

Referenced assembly does not have a strong name

That’s right: a signed assembly cannot consume an unsigned one. Now what if we really need that dependency but don’t want to lose the fact that we can easily update it using NuGet… Turns out there is a NuGet package for that!

The Assembly Strong Naming Toolkit can be installed into our project, after which we can use the NuGet Package Manager Console to sign referenced assemblies. There is also the .NET Assembly Strong-Name Signer by Werner van Deventer, which provides us with a nice UI as well.

The problem is that the above tools only sign the assemblies once we have already consumed the NuGet package. With package restore enabled, that’s pretty annoying, as the assemblies will be restored on machines that don’t have them yet, thus restoring unsigned assemblies…

NuGet Signature

Based on the .NET Assembly Strong-Name Signer, I decided to create a small utility that signs all assemblies in a NuGet package and creates a new package out of those. This “signed package” can then be used instead of the original, making sure we can simply consume the package in Visual Studio and be done with it. Here’s some sample code which signs the package “MyPackage” and creates “MyPackage.Signed”:

var signer = new PackageSigner();
signer.SignPackage(
    "MyPackage.1.0.0.nupkg",
    "MyPackage.Signed.1.0.0.nupkg",
    "SampleKey.pfx",
    "password",
    "MyPackage.Signed");

This is pretty neat, if I may say so, but still requires manual intervention for the packages we consume. Unless the NuGet feed we’re consuming could sign the assemblies in the packages for us?

NuGet Signature meets MyGet Webhooks

Earlier this week, MyGet announced their webhooks feature. After enabling the feature on our feed, we can pipe events, such as “package added”, into software of our own and perform an action based on that event. Such as signing a package.

MyGet automatically sign NuGet package

Sweet! From the GitHub repository here, download the Web API project and deploy it to Microsoft Azure Websites. I wrote the Web API project with some configuration options, which we can either specify before deploying or through the management dashboard. The application needs the following settings (a sample configuration sketch follows the list):

  • Signature:KeyFile - path to the PFX file to use when signing (defaults to a sample key file)
  • Signature:KeyFilePassword - private key/password for using the PFX file
  • Signature:PackageIdSuffix - suffix for signed package IDs. Can be empty or something like ".Signed"
  • Signature:NuGetFeedUrl - NuGet feed to push signed packages to
  • Signature:NuGetFeedApiKey - API key for pushing packages to the above feed
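Assuming the project reads these through standard appSettings (a sketch with placeholder values; only the keys come from the list above), the web.config section could look like:

<appSettings>
  <!-- placeholder values: replace with your own key file, password and feed details -->
  <add key="Signature:KeyFile" value="App_Data\SampleKey.pfx" />
  <add key="Signature:KeyFilePassword" value="password" />
  <add key="Signature:PackageIdSuffix" value=".Signed" />
  <add key="Signature:NuGetFeedUrl" value="https://example.org/my-feed" />
  <add key="Signature:NuGetFeedApiKey" value="your-api-key" />
</appSettings>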

The configuration in the Microsoft Azure Management Dashboard could look like this:

Azure Websites

Once that runs, we can configure the webhook on the MyGet side. Be sure to add an HTTP POST hook that posts to <url to your deployed app>/api/sign, triggered only by the package added event.

MyGet Webhook configuration

From now on, all packages that are added to our feed will be signed when the webhook is triggered. Here’s an example where I pushed several packages to the feed and the NuGet Signature application signed the packages themselves.

NuGet list signed packages

The nice thing is that in Visual Studio we can now consume the “.Signed” packages and no longer have to worry about strong name signing.

Thanks to Werner for the .NET Assembly Strong-Name Signer I was able to base this on.