Maarten Balliauw {blog}

Web development, NuGet, Microsoft Azure, PHP, ...


Disabling session affinity in Azure App Service Web Apps (Websites)

In one of our production systems, we’re using Azure Websites to host a back-end web API. It runs on several machines and benefits from the automatic load balancing we get on Azure Websites. When going through request logs, however, we discovered that a few of these machines were getting a lot of traffic, some got less, and one received no traffic at all apart from our monitoring system. That sucks!

In our back-end web API we’re not using any session state or other techniques where we’d expect the same client to always end up on the same server. Ideally, we want round-robin load balancing, distributing traffic across machines as much as possible. How to do this with Azure Websites?

How load balancing in Azure Websites works

Flashback to 2013. Calvin Keaton did a TechEd session titled “Windows Azure Web Sites: An Architecture and Technical Deep Dive” (watch it here). In this session (around 51:18), he explains what Azure Websites architecture looks like. The interesting part is the load balancing: it seems there’s a boatload of reverse proxies that handle load balancing at the HTTP(S) level, using IIS Application Request Routing (ARR, like a pirate).

In short: when a request comes in, ARR forwards the request to the actual web server. Right before sending the response back to the client, ARR slaps a “session affinity cookie” on it, which it uses on subsequent requests to direct that specific user’s requests back to the same server. You may have seen this cookie in action when using Fiddler on an Azure Website – look for ARRAffinity in cookies.
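For illustration, the affinity cookie shows up as a Set-Cookie header on the first response, roughly like this (the value and exact attributes will differ per site):

Set-Cookie: ARRAffinity=<instance hash>; Path=/; HttpOnly; Domain=yoursite.azurewebsites.net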

Disabling Application Request Routing session affinity via a header

By default, it seems ARR does try to map a specific client to a specific server. That’s good for some web apps, but in our back-end web API we’d rather not have this feature enabled. Turns out this is possible: when Application Request Routing 3.0 was released, a magic header was added to achieve this.

From the release blog post:

The special response header is Arr-Disable-Session-Affinity and the application would set the value of the header to be either True or False. If the value of the header is true, ARR would not set the affinity cookie when responding to the client request. In such a situation, subsequent requests from the client would not have the affinity cookie in them, and so ARR would route that request to the backend servers based on the load balance algorithm.

Aha! And indeed: after adding the following to our Web.config, load balancing seems better for our scenario:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Arr-Disable-Session-Affinity" value="true" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>

Enjoy!

Disclaimer: I’m an Azure geezer and may have misnamed Azure App Service Web Apps as “Azure Websites” throughout this blog post.

Working with a private npm registry in Azure Web Apps

Using Azure Web Apps, we can deploy and host Node applications quite easily. But what to do with packages the site depends on? Do we have to upload them manually to Azure Web Apps? Include them in our Git repository? None of that: we just have to make sure our app’s package.json is checked in so that Azure Web Apps can install the dependencies during deployment. Let’s see how.

Installing node modules during deployment

In this blog post, we’ll create a simple application using Express. In its simplest form, an Express application will map incoming request paths to a function that generates the response. This makes Express quite interesting to work with: we can return a simple string or delegate work to a full-fledged MVC component if we want to. Here’s the simplest application I could think of, returning “Hello world!” whenever the root URL is requested. We can save it as server.js so we can deploy it later on.

var express = require("express");
var app = express();

app.get("/", function(req, res) {
  res.send("Hello world!");
});

console.log("Web application starting...");
app.listen(process.env.PORT);
console.log("Web application started on port " + process.env.PORT);

Of course, this will not work as-is. We need to ensure Express (and its dependencies) are installed as well. We can do this using npm, running the following commands:

# create package.json describing our project
npm init

# install and save express as a dependency
npm install express --save

That’s pretty much it, running this is as simple as setting the PORT environment variable and running it using node.

set PORT=1234
node server.js
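If all goes well, the console prints “Web application starting...” followed by “Web application started on port 1234”, and the site answers at http://localhost:1234/.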

We can now commit our code to our Azure Web App git repository, excluding the node_modules folder. Ideally we create a .gitignore file that excludes this folder once and for all. Once committed, Azure Web Apps starts a convention-based deployment process. One of the conventions is that for a Node application, all dependencies from package.json are installed. We can see this convention in action from the Azure portal.

Azure Deploy Node.JS
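As an aside, the .gitignore mentioned above can be as small as this:

# keep installed packages out of the repository
node_modules/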

Great! Seems we have to do nothing special to get this to work. Except… What if we are using our own, private npm modules? How can we tell Azure Web Apps to make use of a different npm registry? Let’s see…

Installing private node modules during deployment

When building applications, we may be splitting parts of the application into separate node modules to make the application more componentized, make it easier to develop individual components and so on. We can use a private npm registry to host these components, an example being MyGet. Using a private npm feed we can give our development team access to these components using “good old npm” while not throwing these components out on the public npmjs.org.

Imagine we have a module called demo-site-pages which contains some of the views our web application will be hosting. We can add a dependency to this module in our package.json:

{
  "name": "demo-site",
  "version": "1.0.0",
  "description": "Demo site",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "dependencies": {
    "express": "^4.13.3",
    "demo-site-pages": "*"
  }
}

Alternatively we could install this package using npm, specifying the registry directly:

npm install demo-site-pages --save --registry https://www.myget.org/F/demo-site/npm/

But now comes the issue: if we push this out to Azure Web Apps, our private registry is not known!

Generating a .npmrc file to work with a private npm registry in Azure Web Apps

To be able to install node modules from a private npm registry during deployment on Azure Web Apps, we have to ship a .npmrc file with our code. Let’s see how we can do this.

Since our application uses both npmjs.org as well as our private registry, we want to make sure MyGet proxies packages from npmjs.org during installation. We can enable this from our feed’s Package Sources tab: edit the default Npmjs.org package source and ensure the Make all upstream packages available in clients option is checked.

Next, register your MyGet NPM feed (or another registry URL). The easiest way to do this is by running the following commands:

npm config set registry https://www.myget.org/F/your-feed-name/npm
npm login --registry=https://www.myget.org/F/your-feed-name/npm
npm config set always-auth true

This generates a .npmrc file under our user profile folder. On Windows that would be something like C:\Users\Username\.npmrc. Copy this file into the application’s root folder and open it in an editor. Depending on the version of npm being used, we may have to set the always-auth setting to true:

registry=https://www.myget.org/F/demo-site/npm
//www.myget.org/F/demo-site/:_password="BASE64ENCODEDPASSWORD"
//www.myget.org/F/demo-site/:username=maartenba
//www.myget.org/F/demo-site/:email=maarten@myget.org
//www.myget.org/F/demo-site/:always-auth=true

If we now commit this file to our git repository, the next deployment on Azure Web Apps will install both packages from npmjs.org, in this case express, as well as packages from our private npm registry.

Installing node module from private npm registry

Enjoy!

Not enough space on the disk - Azure Cloud Services

I have been using Microsoft Azure Cloud Services since PDC 2008 when it was first announced. Ever since, I’ve been a huge fan of “cloud services”, the cattle VMs in the cloud that are stateless. In all those years, I have never seen this error, until yesterday:

There is not enough space on the disk.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.WriteCore(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.BinaryWriter.Write(Byte[] buffer, Int32 index, Int32 count)

Help! Where did that come from! I decided to set up a remote desktop connection to one of my VMs and see if any of the disks were full or near being full. Nope!

Azure temp path full

The stack trace of the exception told me the exception originated from creating a temporary file, so I decided to check all the obvious Windows temp paths which were squeaky clean. The next thing I looked at were quotas: were any disk quotas enabled? No. But there are folder quotas enabled on an Azure Cloud Services Virtual Machine!

Azure temporary folder TEMP TMP quotas 100 MB

The one that has a hard quota of 100 MB caught my eye. The path was C:\Resources\temp\…. Putting two and two together, I deduced that Azure was redirecting my application’s temporary folder to this one. And indeed, a few searches confirmed it: cloud services do redirect the temporary folder and limit it with a hard quota. But I needed more temporary space…

Increasing temporary disk space on Azure Cloud Services

Turns out the System.IO namespace has several calls to get the temporary path (for example Path.GetTempPath()), which all use the Win32 APIs under the hood. The docs for GetTempPath read:

The GetTempPath function checks for the existence of environment variables in the following order and uses the first path found:

  1. The path specified by the TMP environment variable.
  2. The path specified by the TEMP environment variable.
  3. The path specified by the USERPROFILE environment variable.
  4. The Windows directory.

Fantastic, so all we have to do is create a folder on the VM that has no quota (or larger quota) and set the TMP and/or TEMP environment variables to point to it.

Let’s start with the first step: creating a folder that will serve as the temporary folder on our VM. We can do this from our Visual Studio cloud service project. For each role, we can create a Local Resource that has a given quota (make sure not to exceed the local resource limit for the VM size you are using!), as sketched below.

Create local resource on Azure VM
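For reference, such a local resource ends up in the role’s definition in ServiceDefinition.csdef. A minimal sketch, where the name “CustomTempPath” and the 10 GB size are just example values:

<WorkerRole name="WorkerRole1" vmsize="Small">
  <LocalResources>
    <!-- example values: pick a name and size that fit your role -->
    <LocalStorage name="CustomTempPath" sizeInMB="10240" cleanOnRoleRecycle="false" />
  </LocalResources>
</WorkerRole>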

The next step would be setting the TMP / TEMP environment variables. We can do this by adding the following code into the role’s RoleEntryPoint (pasting full class here for reference):

public class WorkerRole : RoleEntryPoint
{
    private const string _customTempPathResource = "CustomTempPath";
    private CancellationTokenSource _cancellationTokenSource;

    public override bool OnStart()
    {
        // Set TEMP path on current role
        string customTempPath = RoleEnvironment.GetLocalResource(_customTempPathResource).RootPath;
        Environment.SetEnvironmentVariable("TMP", customTempPath);
        Environment.SetEnvironmentVariable("TEMP", customTempPath);

        return base.OnStart();
    }
}

That’s it! A fresh deploy and our temporary files are now stored in a bigger folder.

What happened to Code Spaces could happen to you. On Amazon, Azure and any host out there.

Earlier this week, a sad thing happened to the version control hosting service Code Spaces. A malicious person gained access to their Amazon control panel and, after demanding a ransom from the owners of Code Spaces, started deleting data and EC2 instances. After a couple of failed attempts by Code Spaces to stop this from happening, the impossible happened: the hacker rendered Code Spaces dead. Everything that was their business is gone. As they state themselves:

Code Spaces will not be able to operate beyond this point, the cost of resolving this issue to date and the expected cost of refunding customers who have been left without the service they paid for will put Code Spaces in a irreversible position both financially and in terms of on going credibility.

That’s sad. Sad for users, sad for employees and sad for the business owners. Some nutcase destroyed a flourishing business over the course of 12 hours. Horrible! But the most horrible thing? It can happen to you! Or as Jeff Atwood stated:

Jeff Atwood - they are everywhere!

The fact that this could happen is bad. But security is what it is: there is always a chance of something happening, whatever we do to mitigate as much of it as possible. Any service out there, whether Amazon, Microsoft Azure or your hosting control panel, is open to everyone with a username and password. Being a Microsoft Azure fan, I’ll use this post to scare everyone using the service and tools about what can happen. Knowing what can happen is the first step towards mitigating it.

Disclaimer and setting the stage

What I do NOT want to do in this post is go into the technical details of every potential mishap that can happen. We’re all developers, there’s a myriad of search engines out there that can present us with all the details. I also do not want to give people the tools to do these mishaps. I’ll give you some theory on what could happen but I don’t want to be the guy who told people to be evil. Don’t. I deny any responsibility for potential consequences of this post.

Microsoft Account

Every Microsoft Azure subscription is linked to either an organizational account or a Microsoft Account. Earlier this week, I saw someone tweet that they had 32 Microsoft Azure subscriptions linked to their Microsoft Account. If I were looking to do bad things there, I’d try and get access to that account using any of the approaches available. Trying to gain access, some social engineering, anything! 32 subscriptions is a lot of ransom I could ask for. And with potentially 20 cores of CPU available in all of them, it’s also an ideal target to go and host some spam bots or some machines to perform a DDoS.

What can we do with our Microsoft Account to make it all a bit more secure?

  • Enable 2-factor authentication on your Microsoft Account. Do it!
  • Partition. Have one Microsoft Account for every subscription. With a different, complex password.
  • Managing this many subscriptions with this many accounts is hard. Don’t be tempted to make all the accounts “Administrators” on all of the subscriptions. It’s convenient and you will have one single logon to manage it all, but it broadens the potential attack surface again.

Certificates, PowerShell, the Command Line, NuGet and Visual Studio

The Microsoft Azure Management APIs can be used to do virtually anything you can do through the management portal. And more! Access to the management API is secured using a certificate that you have to upload to the portal. Great! Unless that management certificate was generated on your end without any security in mind. Not having a passphrase on it, or storing that passphrase on your system, means that anyone with access to your computer could, in theory, use the management API with that certificate. But this is probably unlikely since as an attacker I’d have to have access to your computer. There are more clever ways!

Those PowerShell and cross-platform tools are great! Using them, we can script against the management API to create storage accounts, provision and deprovision resources, add co-administrators and so forth. What if an attacker got some software on your system? Malware. A piece of sample code. Anything! If you’re using the PowerShell or cross-platform tools, you’ve probably used them before and set the active subscription. All an attacker would have to do is run the command to create a co-admin or delete or provision something. No. Credentials. Needed.

Not possible, you say? You never install any software that is out there? And you’re especially wary when getting something through e-mail? Good for you! “But that NuGet thing is so damn tempting. I installed half of NuGet.org so far!” – sounds familiar? Did you know NuGet packages can run PowerShell code when installed in Visual Studio? What if… an attacker put a package named “jQeury” out there? And other potential spelling mistakes? They could ship the contents of the real jQuery package in them so you don’t see anything unusual. In that package, someone could put some call to the Azure PowerShell CmdLets and a fallback using the cross-platform tools to create a storage account, mirror a couple of TB of illegal content and host it on your account. Or delete all your precious VMs.

Not using any of the PowerShell or cross-platform tools? No worries: attackers could also leverage the $dte object and invoke stuff inside Visual Studio and trigger any of the ample commands available in there. You may notice something in the activity log when this happens, but still.

What can we do to use these tools but make it a bit more secure?

  • Think about good certificate management. Give them a shorter lifetime, replace them every now and then. Don’t store passphrases.
  • Using the PowerShell or cross-platform tools? Make sure that after every use you invalidate the credentials used. Don’t just set the active subscription in these tools to null: there’s a list command with which an attacker could set the current subscription id again.
  • That publish settings file? It contains the management certificate. Don't distribute it.
  • Automate using all the tools! But not on all developer machines, do it on the build server.

All these tools are very useful and handy to work with, but use them with some common sense. If you have other tips for locking it all down, leave them in the comments.

Enjoy your night rest.

Microsoft Azure cloud plugin for TeamCity (dabbling in Java code)

NOTE: While the content in this blog post will still work, JetBrains now has a plugin that is the recommended way of working with TeamCity and build agents on Azure. Please check this blog post to learn more about it.


If you follow me on Twitter, you may have seen me in several stages of anger at Java. After two weeks of learning, experimenting, coding and even getting it all to compile, I’m proud to announce an initial, very early preview of my Microsoft Azure cloud plugin for TeamCity.

This plugin provides Microsoft Azure cloud support for TeamCity. By configuring a Microsoft Azure cloud in TeamCity, a set of known virtual build agents can be started and stopped on demand by the TeamCity server so we can benefit from Microsoft Azure’s cost model (a stopped VM is almost free) and scaling model (only start new instances when we need them).

Curious to try it? Make sure you know it is all still very early alpha version software so use with caution. I wanted to get an early preview out to gather some comments on it. Here are the installation steps:

  • Download the plugin ZIP file from the latest GitHub release.
  • Copy it to the TeamCity plugins folder
  • Restart TeamCity server and verify the plugin was installed from Administration | Plugins List

Creating a new cloud profile

From TeamCity’s Administration | Agent Cloud, we can create a new cloud configuration making use of the Microsoft Azure plugin for TeamCity. All we have to do is select “Microsoft Azure” as the cloud type and enter the requested details.

TeamCity agent on Azure VM

Once we enter some preconfigured and pre-provisioned VM names, we’re good to save and profit.

Known issue: only one Microsoft Azure cloud configuration can be created per TeamCity server because the KeyStore being configured by the plugin only stores one management certificate. Contribute a fix if you feel up for it!

What’s up?

From Agents | Cloud, we can now see which VM instances are stopped/running on Microsoft Azure.

Start stop TeamCity agent on Azure

Known issue: status of the VM displayed is not always current. The VM status is read from TeamCity's last known status, not from Microsoft Azure. Again, contribute a fix if you feel up for it.

What is there to come?

That’s pretty much it for now. I told you, it’s early. In my ideal world, there should also be a possibility to launch VM instances from a predefined image and destroy them when no longer needed. I also would love to convert it all to Kotlin, as I still don’t like Java as a language and Kotlin looks really nice. And ideally, the crude UI I built for the plugin should be much nicer too.

Happy building in the cloud!

Optimizing calls to Azure storage using Fiddler

Last week, Xavier and I were really happy for achieving a milestone. After having spent quite some evenings on bringing Visual Studio Online integration to MyGet, we were happy to be mentioned in the TechEd keynote and even pop up in quite some sessions. We also learned ASP.NET vNext was coming and it would leverage NuGet as an important part of it. What we did not know, however, is that the ASP.NET team would host all vNext preview packages from MyGet. But we soon noticed and found our evening hours were going to be very focused for another few days…

On May 12th, we all of a sudden saw usage of our service double in an instant. Ouch! Here’s what Google Analytics told us:

image

Luckily for us, we are hosted on Azure and could just pull the slider to the right and add more servers. Scale out for the win! Apart from some hiccups when we enabled auto scaling (we thought traffic would go down at some points during the day), MyGet handled the traffic pretty well. But still, we had to double our server capacity to be able to host one high-traffic NuGet feed. And even though we doubled server capacity, response times went up as well.

image

Time for action! But what…

Some background on our application

When we started MyGet, our idea was to leverage table storage and blob storage, and avoid SQL completely. The reason: back then MyGet was a simple proof-of-concept and we wanted to play with new technology. As we grew, got traction and onboarded users, we found that the back-end we had in place was very nice to work with, and we’ve since evolved to a more CQRS-ish and event-driven(-ish) architecture.

But with all good things come some bad things as well. Adding features, improving code and implementing quotas (so we could actually meter what our users were doing and put a price tag on it) added quite a few calls to table storage. And while table storage is blazingly fast if you know what you are doing, these are still calls to an external system that have to open up a TCP connection, do an SSL handshake and so on. Each call is only a few milliseconds, but summed together they do add up. So how do you find out what is happening? Let’s see…

Analyzing Azure storage traffic

There is no profiler out there that currently allows you to easily hook into what traffic is going over the wire to Azure storage. Fortunately for us, the Azure team added a way of hooking a proxy server between your application and storage itself. Using the development storage emulator, we can simply change our storage connection string to the following and hook Fiddler in:

UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://ipv4.fiddler
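If the connection string lives in code rather than configuration, pointing the storage client at the emulator-plus-Fiddler combination could look like this (a minimal sketch using the Microsoft.WindowsAzure.Storage client library):

// requires the Microsoft.WindowsAzure.Storage NuGet package
var account = CloudStorageAccount.Parse(
    "UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://ipv4.fiddler");
var tableClient = account.CreateCloudTableClient();
var blobClient = account.CreateCloudBlobClient();
// every request made through these clients now passes through Fiddler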

Great! Now we have Fiddler to analyze our traffic to Azure storage. All requests going to blob, table or queue storage services are now visible: URL, result, timing and so forth.

image

The picture above is not coming from MyGet but just to illustrate the idea. You can clear the list of requests, load a specific page or action in your application and see the calls going out to storage. And we had some critical paths where we did over 7 requests. If each of them is 30ms on average, that is 210ms just to grab some data. And then you’ve not even done anything with it… So we decided to tackle that in our code.

Another thing we noticed by looking at the URLs here is that some of those requests were filtering only on the table storage RowKey, resulting in a +/- 2 second roundtrip on some requests. That is bad. And so we also fixed that (on some occasions by adding some caching, on others by restructuring the way data is stored and moving our filter to PartitionKey or a combination of PartitionKey and RowKey, as you should).
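To illustrate the difference, here’s a sketch of both query shapes using the storage client library (the table name and key values are made up for the example):

// requires Microsoft.WindowsAzure.Storage.Table
var table = tableClient.GetTableReference("packages");

// slow: filtering on RowKey only forces a scan across partitions
var slowQuery = new TableQuery<DynamicTableEntity>().Where(
    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.Equal, "my-package"));

// fast: a point query on PartitionKey + RowKey goes straight to the right entity
var fastQuery = new TableQuery<DynamicTableEntity>().Where(
    TableQuery.CombineFilters(
        TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "my-feed"),
        TableOperators.And,
        TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.Equal, "my-package")));

var results = table.ExecuteQuery(fastQuery);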

The result of this? Well, have a look at the picture above where our response times are shown: we managed to drastically reduce response times, making ourselves happy (we could kill some VMs), and our users as well because everything is faster.

A simple trick with great results!

Windows Azure Storage magic with Shared Access Signatures

When building cloud applications on Windows Azure, it’s always a good thing to delegate as much work to specialized services as possible. File downloads would be one good example: these can be streamed directly from Windows Azure blob storage to your client, without having to pass a web application hosted on Windows Azure Cloud Services or Web Sites. Why occupy the web server with copying data from a request stream to a response stream? Let blob storage handle it!

When thinking this through, a few questions may come to mind:

  • How can I keep this blob secure? I don’t want to give everyone access to it!
  • How can I keep the download URL on my web server, track the number of downloads (or enforce other security rules) and still benefit from offloading the download to blob storage?
  • How can the blob be stored in a way that is clear to my application (e.g. a customer ID or something), yet give it a friendly name when downloading?

Let’s answer these!

Meet Shared Access Signatures

Keeping blobs secure is pretty easy on Windows Azure Blob Storage, but it’s also sort of an all-or-nothing story… Either you make all blobs in a container private, or you make them public.

Not to worry though! Using Shared Access Signatures it is possible to grant temporary privileges on a blob, for read and write access. Here’s a code snippet that will grant read access to the blob named helloworld.txt, residing in a private container named files, during the next minute:

CloudStorageAccount account = // your storage account connection here
var client = account.CreateCloudBlobClient();
var container = client.GetContainerReference("files");
var blob = container.GetBlockBlobReference("helloworld.txt");

var builder = new UriBuilder(blob.Uri);
builder.Query = blob.GetSharedAccessSignature(
    new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessStartTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(-5)),
        SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(1))
    }).TrimStart('?');

var signedBlobUrl = builder.Uri;

Note I’m giving access starting 5 minutes ago, just to make sure any clock skew along the way is ignored within a reasonable time window.

There we go: our blob is secured and by passing along the signedBlobUrl to our user, he or she can start downloading our blob without having access to any other blobs at all.

Meet HTTP redirects

Shared Access Signatures are really cool, but the generated URLs are… “fugly”: they are not pretty or easy to remember. Well, there is this thing called HTTP redirects, right? Here’s an ASP.NET MVC action method that checks if the user is authenticated, queries a repository for the correct filename, generates the shared access signature and redirects us to the actual download.

[Authorize]
[EnsureInvoiceAccessibleForUser]
public ActionResult DownloadInvoice(string invoiceId)
{
    // Fetch invoice
    var invoice = InvoiceService.RetrieveInvoice(invoiceId);
    if (invoice == null)
    {
        return new HttpNotFoundResult();
    }

    // We can do other things here: track # downloads, ...

    // Build shared access signature
    CloudStorageAccount account = // your storage account connection here
    var client = account.CreateCloudBlobClient();
    var container = client.GetContainerReference("invoices");
    var blob = container.GetBlockBlobReference(invoice.CustomerId + "-" + invoice.InvoiceId);

    var builder = new UriBuilder(blob.Uri);
    builder.Query = blob.GetSharedAccessSignature(
        new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessStartTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(-5)),
            SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(1))
        }).TrimStart('?');
    var signedBlobUrl = builder.Uri;

    // Redirect
    return Redirect(signedBlobUrl);
}

This gives us the best of both worlds: our web application can still verify access and run some business logic on it, yet we can offload the file download to blob storage.

Meet Shared Access Signatures content disposition header

Often, storage is a technical thing where we choose technical filenames for the things we store, instead of human-readable or human-friendly file names. In the example above, users will get a very strange filename to be downloaded: the customer id and invoice id, concatenated. No .pdf file extension, nothing else either. Users may get confused by this, or have problems opening the file because their browser will not recognize it is a PDF.

Last November, a feature was added to blob storage which enables us to let a blob be whatever we want it to be: support for setting some additional headers on a blob through the Shared Access Signature.

The following headers can be specified on-the-fly, through the shared access signature:

  • Cache-Control
  • Content-Disposition
  • Content-Encoding
  • Content-Language
  • Content-Type

Here’s how to generate a meaningful Shared Access Signature in the previous example, where we specify a human-readable filename for the resulting download, as well as a custom content type:

builder.Query = blob.GetSharedAccessSignature(
    new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessStartTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(-5)),
        SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(1))
    },
    new SharedAccessBlobHeaders
    {
        ContentDisposition = "attachment; filename=" + customer.DisplayName + "-invoice-" + invoice.InvoiceId + ".pdf",
        ContentType = "application/pdf"
    }).TrimStart('?');

Note: for this feature to work, the service version for the storage account must be set to the latest one, using the DefaultServiceVersion on the blob client. Here’s an example:

CloudStorageAccount account = // your storage account connection here
var client = account.CreateCloudBlobClient();

var serviceProperties = client.GetServiceProperties();
serviceProperties.DefaultServiceVersion = "2013-08-15";
client.SetServiceProperties(serviceProperties);

Combining all these techniques, we can do some analytics and business logic in our web application and offload the boring file and network I/O to blob storage.

Enjoy!

Pro NuGet second edition is out

Pro NuGet will teach you all there is to know about NuGet

Pfew! Around February 2013, Xavier and I started planning work on an update of our book. Eight months later, we’re proud to present you with Pro NuGet (second edition). It’s been a tough couple of months writing this: Xavier has become a father for the second time (congratulations!), we’ve had two massive updates to NuGet that we had to work into our book, … But here it is!

What’s new?

  • A number of NuGet workflows have changed and new ones have been added. Expect coverage of all of these, including NuGet’s old and new package restore functionality.
  • Want to work with NuGet and Windows Azure Websites, TeamCity, Visual Studio Online, OctopusDeploy, NuGet Gallery, ProGet or MyGet? We have a bunch of recipes for you!
  • Pitfalls of package versioning
  • Building a plugin system based on NuGet

Next to that there is a lot more meat in there!

  • Understand how NuGet fits into the big picture of your software development process to save you time and money.
  • How to keep your team working when your project depends on an external resource (such as a web service or cloud) which suddenly becomes unavailable.
  • Whether or not to auto-update NuGet packages within a continuous integration process for maximum reliability and speed.
  • How to combine NuGet with PowerShell to create your own Cmdlets and extend the base toolset in an extremely powerful manner.
  • Evaluate the pros-and-cons of hosting your own NuGet repository.
  • How to incorporate NuGet seamlessly within your continuous integration process.
  • Much much more!

We would love to get your feedback! E-mail us or write a review on your blog or Amazon. Enjoy the read!

PS: Thanks to our excellent reviewers (the NuGet team) and everyone at Apress! There are a lot of people involved in getting a quality book out there. Thanks!

A new year's present: introducing Glimpse plugins for Windows Azure

Glimpse plugin for Windows Azure

Have you tried Glimpse before? It shows you server-side information like execution times, server configuration, request data and such in your browser. At the February MVP Summit this year, Anthony, Nik and I had a chat about what would be useful information to display in Glimpse when working on Windows Azure. Some beers and a bit of coding later, we had a proof-of-concept showing Windows Azure runtime configuration data in a Glimpse tab.

Today, we are happy to announce a first public preview of two Windows Azure tabs in Glimpse: the Glimpse.WindowsAzure package displaying runtime information, and Glimpse.WindowsAzure.Storage collecting information about traffic from and to storage.

Want to give it a try? You can install these two NuGet packages from NuGet.org (prerelease packages for now). Sources can be found on GitHub. And all comments, remarks and suggestions can go in the comments to this blog post.

Now let’s have a look at what these packages have to offer!

Glimpse.WindowsAzure

The Glimpse.WindowsAzure package adds a new tab to Glimpse, displaying environment information when the web application is hosted on Windows Azure. It does this for Cloud Services as well as for Windows Azure Web Sites.

Installation is easy: simply add the Glimpse.WindowsAzure package to your project and you’re done. If you are running on .NET 4.5, you will have to add the following setting to your Web.config:

<appSettings>
  <add key="Glimpse:DisableAsyncSupport" value="true"/>
</appSettings>

When hosting in a Windows Azure Cloud Service (or the full emulator available in the Windows Azure SDK), the Azure Environment tab will provide information gathered from the RoleEnvironment class. You can see the deployment ID, current role instance information, a list of configured endpoints, which fault and update domain our application is running in, and so on.

Windows Azure Role Environment
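For those who haven’t used it before, this is roughly the kind of data the RoleEnvironment API exposes (a sketch; requires the Microsoft.WindowsAzure.ServiceRuntime assembly):

var deploymentId = RoleEnvironment.DeploymentId;
var instance = RoleEnvironment.CurrentRoleInstance;
var instanceId = instance.Id;
var faultDomain = instance.FaultDomain;
var updateDomain = instance.UpdateDomain;
var endpoints = instance.InstanceEndpoints; // endpoint name -> IP address + port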

When the web application is hosted on Windows Azure Web Sites, we get information like Compute Mode (Shared or Reserved) as well as Site Mode (Limited in the screenshot below means the application is running on a Free web site).

Glimpse Windows Azure Web Sites

The Azure Environment tab will also provide a link to the Kudu Remote Console, a feature in Windows Azure Web Sites where you can run commands on the box hosting the web site.

Kudu Console

Pretty handy if you ask me!

Glimpse.WindowsAzure.Storage

The Glimpse.WindowsAzure.Storage package adds an “Azure Storage” tab to Glimpse, displaying all sorts of information about traffic from and to Windows Azure storage. It will also estimate the cost of loading the current page, based on the number of transactions and the traffic to blobs, tables and/or queues. Note that this package can also be used in ASP.NET web sites that are not hosted on Windows Azure but do make use of Windows Azure Storage.

Once the package is installed into your project, you can almost start inspecting all this information. Almost? Well, see the caveat further down…

 

Number of transactions and a cost estimate

The first type of data displayed in the Azure Storage tab is the total number of transactions, traffic consumed and a cost estimate for 10.000 pageviews. This information can be used for several scenarios:

  • Know how many calls are made to storage. Maybe you can reduce the number of calls to reduce the total number of transactions, one of the billing metrics for Windows Azure.
  • Another billing metric is the amount of traffic consumed. When running in the same datacenter as the storage account, it’s less important for cost but still, reducing the traffic can reduce the page load time.

Windows Azure Storage Transactions and bandwidth consumed

Now where do we get the price per 10.000 pageviews? Well, this is a very rough estimate, based on the pay-per-use pricing in Windows Azure. It is very likely that the actual price will be lower if you are running on an MSDN subscription, a pre-paid plan or an Enterprise Agreement.

Warnings and analysis of requests

One feature we’re particularly proud of is this one: warnings and analysis of requests to Windows Azure Storage. First of all, we’ll analyse the settings for communicating over the network. In the screenshot below, you can see several general hints to optimize throughput by disabling the Nagle algorithm or disabling HTTP 100 Continue.
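For reference, those two tweaks boil down to a couple of ServicePointManager settings that you would apply early on, for example in Application_Start (a sketch, not Glimpse-specific code):

// System.Net – apply before the first call to storage is made
ServicePointManager.UseNagleAlgorithm = false;
ServicePointManager.Expect100Continue = false;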

Another analysis we’ll do is verifying the requests themselves. In the example below, Glimpse is giving a warning about the fact that I’m querying table storage on properties that are not indexed, potentially causing timeouts in my application.

There are several more inspections in there, if you have suggestions for others feel free to let us know!

Analysis of requests

List of requests and Timeline

When using Windows Azure Storage, Glimpse will show you all requests that have been made together with the status code and total duration of the request.

image

Since a plain list is often not that easy to analyze, the Timeline tab is extended with this information as well. It shows you a summary of when calls to Windows Azure Storage have been made, as well as full details of the requests:

Timeline tracing Windows Azure Storage

One caveat

Because of a current limitation in the Windows Azure Storage SDK, you will have to explicitly add one parameter to every call that is made to Windows Azure Storage.

The idea is that the OperationContext parameter for calls to storage has to be a special Glimpse OperationContext obtained by calling OperationContextFactory.Current.Create(). This Glimpse-specific implementation provides us with all the information required to display data in the Azure Storage tab. Here’s an example of how to wire it in for a call to create a blob storage container:

var account = CloudStorageAccount.DevelopmentStorageAccount;
var blobclient = account.CreateCloudBlobClient();
var container1 = blobclient.GetContainerReference("glimpse1");

container1.CreateIfNotExists(operationContext: OperationContextFactory.Current.Create());

We are talking with Microsoft about this and are pretty sure this shortcoming will be addressed in the future.

What’s next?

It would be great if you could give these two packages a try! NuGet packages are available from NuGet.org (prerelease packages for now). Sources can be found on GitHub. And all comments, remarks and suggestions can go in the comments to this blog post.

We’re still looking at load balanced environments. You can implement Glimpse’s IPersistenceStore but we would like to have a zero-configuration setup.

Once we’re confident Glimpse.WindowsAzure and Glimpse.WindowsAzure.Storage are working properly, we’ll have a look at Windows Azure Caching and Service Bus.

Enjoy!

Visual Studio Online for Windows Azure Web Sites

Today’s official Visual Studio 2013 launch brings some interesting novelties, especially for Windows Azure Web Sites. We can now choose which pipeline to run in (classic or integrated), define separate applications in subfolders of our web site, and debug a web site right from within Visual Studio. But the most impressive one is this: how about… an in-browser editor for your application?

Editing Node.JS in browser

Let’s take a quick tour of it. After creating a web site, we can go to the web site’s configuration and enable the Visual Studio Online preview.

Edit in Visual Studio Online

Once enabled, simply navigate to https://<yoursitename>.scm.azurewebsites.net/dev or click the link from the dashboard, provide your site credentials and be greeted with Visual Studio Online.

On the left-hand menu, we can select the feature to work with. Explore does as it says: it gives you the possibility to explore the files in your site, open them, save them, delete them and so on. We can enable Git integration, search for files and classes and so on. When working in the editor we get features like autocompletion, Find References, Peek Definition and so on. Apparently these don’t work for all languages yet: currently JavaScript and Node.js seem to work, while C# and PHP come with syntax highlighting but nothing more than that.

Peek definition

Most actions in the editor come with keyboard shortcuts, for example Ctrl+, opens navigation towards files in our application.

Navigation

The console comes with things like npm and autocompletion on most commands as well.

Console in Visual Studio Online

I can see myself using this for some scenarios like on-the-road editing from a Git repository (yes, you can clone any repo you want in this tool) or making live modifications to some simple sites I have running. What would you use this for?