Maarten Balliauw {blog}

ASP.NET MVC, Microsoft Azure, PHP, web development ...

Speeding up ASP.NET vNext package restore

TL;DR: If you have multiple NuGet feeds configured on your machine, it may be worth doing some tweaking in the NuGet.config file that ships with your project.

Last week, the ASP.NET team released a preview of “ASP.NET vNext”, a first step in the right direction towards solving the pain of building .NET projects, but more than that a great step towards an open and cross-platform ASP.NET that is super developer friendly. If you haven’t checked it out yet, do so now.

One of the things ASP.NET vNext leans on heavily is NuGet. In fact, every application comes with a project.json file that describes its dependencies. These dependencies are only downloaded when you run kpm restore, after which the application can be run. Running this package restore (it’s NuGet after all) is usually pretty fast, but if you, like me, are a heavy NuGet user, chances are the restore is not happening in the most optimal way. Have a look at the output of my kpm restore command right after I installed ASP.NET vNext on my system:

Project K package restore

It’s not easy to capture a screenshot that proves the point I'm about to make, but if you do this yourself and you have multiple NuGet feeds configured on your system, you’ll see that ASP.NET vNext is trying to restore packages from all configured feeds. In my case, I’m using a personal feed on MyGet, a feed hosted on my TeamCity server, a feed on my local filesystem (testing purposes) and then the ASP.NET vNext MyGet feed as well as NuGet.org. That’s 5 feeds being checked over and over again for the dependencies listed in my project.json… Let’s see if we can reduce this a bit.

If we look at the samples shipped in ASP.NET vNext, we can find a NuGet.config file in there. And as we know, NuGet has this thing called configuration file inheritance. This means that the feeds defined in here will be enriched with the feeds configured at the machine level, in my case 5 of them. But that also means we can easily fix this: adding a <clear /> element under the <packageSources> element will do the trick of removing all previously defined feeds and using just the ones defined for the project I’m working on:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="AspNetVNext" value="https://www.myget.org/F/aspnetvnext/" />
    <add key="NuGet.org" value="https://nuget.org/api/v2/" />
  </packageSources>
  <!-- ... -->
</configuration>

Use this trick for your own ASP.NET vNext projects as well: specify the feeds you want to use explicitly and everything will be faster for you and other developers working with your code. It ensures that kpm (or NuGet, for that matter) only checks the feeds that are relevant to your project and not every feed that is configured on your system.

Pro NuGet second edition is out

Pro NuGet will teach you all there is to know about NuGet

Pfew! Around February 2013, Xavier and I started planning work on an update of our book. Eight months later, we’re proud to present you with Pro NuGet (second edition). It’s been a tough couple of months writing this: Xavier has become a father for the second time (congratulations!), we’ve had two massive updates to NuGet that we had to work into the book, … But here it is!

What’s new?

  • A number of NuGet workflows have changed and new ones have been added. Expect coverage of all of these, including NuGet’s old and new package restore functionality.
  • Want to work with NuGet and Windows Azure Websites, TeamCity, Visual Studio Online, OctopusDeploy, NuGet Gallery, ProGet or MyGet? We have a bunch of recipes for you!
  • Pitfalls of package versioning
  • Building a plugin system based on NuGet

Next to that there is a lot more meat in there!

  • Understand how NuGet fits into the big picture of your software development process to save you time and money.
  • How to keep your team working when your project depends on an external resource (such as a web service or cloud) which suddenly becomes unavailable.
  • Whether or not to auto-update NuGet packages within a continuous integration process for maximum reliability and speed.
  • How to combine NuGet with PowerShell to create your own Cmdlets and extend the base toolset in an extremely powerful manner.
  • Evaluate the pros and cons of hosting your own NuGet repository.
  • How to incorporate NuGet seamlessly within your continuous integration process.
  • Much much more!

We would love to get your feedback! E-mail us or write a review on your blog or Amazon. Enjoy the read!

PS: Thanks to our excellent reviewers (the NuGet team) and everyone at Apress! There are a lot of people involved in getting a quality book out there. Thanks!

A new year's present: introducing Glimpse plugins for Windows Azure

Glimpse plugin for Windows Azure

Have you tried Glimpse before? It shows you server-side information like execution times, server configuration, request data and such in your browser. At the February MVP Summit this year, Anthony, Nik and I had a chat about what would be useful information to be displayed in Glimpse when working on Windows Azure. Some beers and a bit of coding later, we had a proof-of-concept showing Windows Azure runtime configuration data in a Glimpse tab.

Today, we are happy to announce a first public preview of two Windows Azure tabs in Glimpse: the Glimpse.WindowsAzure package displaying runtime information, and Glimpse.WindowsAzure.Storage collecting information about traffic from and to storage.

Want to give it a try? You can install these two NuGet packages from NuGet.org (prerelease packages for now). Sources can be found on GitHub. And all comments, remarks and suggestions can go in the comments to this blog post.

Now let’s have a look at what these packages have to offer!

Glimpse.WindowsAzure

The Glimpse.WindowsAzure package adds a new tab to Glimpse, displaying environment information when the web application is hosted on Windows Azure. It does this for Cloud Services as well as for Windows Azure Web Sites.

Installation is easy: simply add the Glimpse.WindowsAzure package to your project and you’re done. If you are running on .NET 4.5, you will have to add the following setting to your Web.config:

<appSettings>
  <add key="Glimpse:DisableAsyncSupport" value="true"/>
</appSettings>

When hosting in a Windows Azure Cloud Service (or the full emulator available in the Windows Azure SDK), the Azure Environment tab will provide information gathered from the RoleEnvironment class. You can see the deployment ID, current role instance information, a list of configured endpoints, which fault and update domain our application is running in and so on.
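
To give an idea of the kind of data that ends up in this tab, here’s a minimal sketch (not part of the Glimpse package itself) of reading some of these values straight from the RoleEnvironment class:

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class RoleEnvironmentInfo
{
    public static void Dump()
    {
        // Only available when running inside a Cloud Service or the emulator.
        if (!RoleEnvironment.IsAvailable)
        {
            Console.WriteLine("Not running in a Windows Azure Cloud Service.");
            return;
        }

        var instance = RoleEnvironment.CurrentRoleInstance;

        Console.WriteLine("Deployment ID: " + RoleEnvironment.DeploymentId);
        Console.WriteLine("Instance ID:   " + instance.Id);
        Console.WriteLine("Fault domain:  " + instance.FaultDomain);
        Console.WriteLine("Update domain: " + instance.UpdateDomain);

        // Endpoints configured for this role instance.
        foreach (var endpoint in instance.InstanceEndpoints)
        {
            Console.WriteLine(endpoint.Key + " -> " + endpoint.Value.IPEndpoint);
        }
    }
}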

Windows Azure Role Environment

When the web application is hosted on Windows Azure Web Sites, we get information like Compute Mode (Shared or Reserved) as well as Site Mode (Limited in the screenshot below means the application is running on a Free web site).

Glimpse Windows Azure Web Sites

The Azure Environment tab will also provide a link to the Kudu Remote Console, a feature in Windows Azure Web Sites where you can run commands on the box hosting the web site.

Kudu Console

Pretty handy if you ask me!

Glimpse.WindowsAzure.Storage

The Glimpse.WindowsAzure.Storage package adds an “Azure Storage” tab to Glimpse, displaying all sorts of information about traffic from and to Windows Azure storage. It will also estimate the cost for loading the current page, depending on the number of transactions and traffic to blobs, tables and/or queues. Note that this package can also be used in ASP.NET web sites that are not hosted on Windows Azure but do make use of Windows Azure Storage.

Once the package is installed into your project, you can almost start inspecting all this information. Almost? Well, see the caveat further down…

 

Number of transactions and a cost estimate

The first type of data displayed in the Azure Storage tab is the total number of transactions, traffic consumed and a cost estimate for 10.000 pageviews. This information can be used for several scenarios:

  • Know how many calls are made to storage. Maybe you can reduce the number of calls to reduce the total number of transactions, one of the billing metrics for Windows Azure.
  • Another billing metric is the amount of traffic consumed. When running in the same datacenter as the storage account, it’s less important for cost but still, reducing the traffic can reduce the page load time.

Windows Azure Storage Transactions and bandwidth consumed

Now where do we get the price per 10.000 pageviews? Well, this is a very rough estimate, based on the pay-per-use pricing in Windows Azure. It is very likely that the actual price will be lower if you are running on an MSDN subscription, a pre-paid plan or an Enterprise Agreement.

Warnings and analysis of requests

One feature we’re particularly proud of is this one: warnings and analysis of requests to Windows Azure Storage. First of all, we’ll analyse the settings for communicating over the network. In the screenshot below, you can see several general hints to optimize throughput by disabling the Nagle algorithm or disabling HTTP 100 Continue.

Another analysis we’ll do is verifying the requests themselves. In the example below, Glimpse is giving a warning about the fact that I’m querying table storage on properties that are not indexed, potentially causing timeouts in my application.

There are several more inspections in there, if you have suggestions for others feel free to let us know!

Analysis of requests

List of requests and Timeline

When using Windows Azure Storage, Glimpse will show you all requests that have been made together with the status code and total duration of the request.

Since a plain list is often not that easy to analyze, the Timeline tab is extended with this information as well. It shows you a summary of when calls to Windows Azure Storage have been made, as well as full details of the requests:

Timeline tracing Windows Azure Storage

One caveat

Because of a current limitation in the Windows Azure Storage SDK, you will have to explicitly add one parameter to every call that is made to Windows Azure Storage.

The idea is that the OperationContext parameter for calls to storage has to be a special Glimpse OperationContext obtained by calling OperationContextFactory.Current.Create(). This Glimpse-specific implementation provides us with all the information required to display data in the Azure Storage tab. Here’s an example of how to wire it in for a call to create a blob storage container:

var account = CloudStorageAccount.DevelopmentStorageAccount;
var blobclient = account.CreateCloudBlobClient();
var container1 = blobclient.GetContainerReference("glimpse1");

container1.CreateIfNotExists(operationContext: OperationContextFactory.Current.Create());

We are talking with Microsoft about this and are pretty sure this shortcoming will be addressed in the future.

What’s next?

It would be great if you could give these two packages a try! NuGet packages are available from NuGet.org (prerelease packages for now). Sources can be found on GitHub. And all comments, remarks and suggestions can go in the comments to this blog post.

We’re still looking at load balanced environments. You can implement Glimpse’s IPersistenceStore but we would like to have a zero-configuration setup.

Once we’re confident Glimpse.WindowsAzure and Glimpse.WindowsAzure.Storage are working properly, we’ll have a look at Windows Azure Caching and Service Bus.

Enjoy!

Using the Windows Azure Content Delivery Network (CDN)

CDN

With the Windows Azure Content Delivery Network (CDN) released as a preview, I thought it was a good time to write up some details about how to work with it. The CDN can be used for offloading content to a globally distributed network of servers, ensuring faster throughput to your end users.

Note: this is a modified and updated version of my article at ACloudyPlace.com roughly two years ago. I have added information on how to work with ASP.NET MVC bundling and the Windows Azure CDN, updated screenshots and so on.

Reasons for using a CDN

There are a number of reasons to use a CDN. One of the obvious reasons lies in the nature of the CDN itself: a CDN is globally distributed and caches static content on edge nodes, closer to the end user. If a user accesses your web application and some of the files are cached on the CDN, the end user will download those files directly from the CDN, experiencing less latency in their request.

Windows Azure CDN graphically

Another reason for using the CDN is throughput. If you look at a typical webpage, about 20% of it is HTML which was dynamically rendered based on the user’s request. The other 80% goes to static files like images, CSS, JavaScript and so forth. Your server has to read those static files from disk and write them on the response stream, both actions which take away some of the resources available on your virtual machine. By moving static content to the CDN, your virtual machine will have more capacity available for generating dynamic content.

Enabling the Windows Azure CDN

The Windows Azure CDN is built for two services that are available in your subscription: storage and cloud services. The easiest way to get started with the CDN is by using the Windows Azure Management Portal. From the New menu at the bottom, select App Services | CDN | Quick Create.

Enabling Windows Azure CDN

From the dropdown that is shown, select either a storage account or a cloud service which will serve as the source of our CDN edge data. After clicking Create, the CDN will be initialized. This may take up to 60 minutes because the settings you’ve just applied may take that long to propagate to all CDN edge locations globally (over 24 was the last number I read). Your CDN will be assigned a URL in the form of http://<id>.vo.msecnd.net.

Once the CDN endpoint is created, there are some options that can be managed. Currently they are somewhat limited but I’m pretty sure this will expand. For now, you can for example assign a custom domain name to the CDN by clicking the “Manage Domains” button in the toolbar.

Manage the Windows Azure CDN - Add custom domain

Note that the CDN works using HTTP by default, but HTTPS is supported as well and can be enabled through the management portal. Unfortunately, SSL is using a certificate that Microsoft provides and there’s currently no option to use your own, making it hard to use a custom domain name and HTTPS.

Serving blob storage content through the CDN

Let’s start and offload our static content (CSS, images, JavaScript) to the Windows Azure CDN using a storage account as the source for CDN content. In an ASP.NET MVC project, edit the _Layout.cshtml view. Instead of using the bundles for CSS and scripts, let’s include them manually from a URL hosted on your newly created CDN:

<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    <link href="http://az172665.vo.msecnd.net/static/Content/Site.css" rel="stylesheet" type="text/css" />
    <script src="http://az172665.vo.msecnd.net/static/Scripts/jquery-1.8.2.min.js" type="text/javascript"></script>
</head>
<!-- more HTML -->
</html>

Note that the CDN URL includes a reference to a folder named “static”.

If you now run this application, you’ll find no CSS or JavaScript applied. The reason for this is obvious: we have specified the URL to our CDN but haven’t uploaded any files to our storage account backing the CDN.

Where are our styles?

Uploading files to the CDN is easy. All you need is a public blob container and some blobs hosted in there. You can use tools like Cerebrata’s Cloud Storage Studio or upload the files from code. For example, I’ve created an action method taking care of uploading static content for me:

[HttpPost, ActionName("Synchronize")]
public ActionResult Synchronize_Post()
{
    var account = CloudStorageAccount.Parse(
        ConfigurationManager.AppSettings["StorageConnectionString"]);
    var client = account.CreateCloudBlobClient();

    var container = client.GetContainerReference("static");
    container.CreateIfNotExist();
    container.SetPermissions(
        new BlobContainerPermissions {
            PublicAccess = BlobContainerPublicAccessType.Blob });

    var approot = HostingEnvironment.MapPath("~/");
    var files = new List<string>();
    files.AddRange(Directory.EnumerateFiles(
        HostingEnvironment.MapPath("~/Content"), "*", SearchOption.AllDirectories));
    files.AddRange(Directory.EnumerateFiles(
        HostingEnvironment.MapPath("~/Scripts"), "*", SearchOption.AllDirectories));

    foreach (var file in files)
    {
        // Note: Path.GetExtension returns the extension including the leading dot.
        var contentType = "application/octet-stream";
        switch (Path.GetExtension(file))
        {
            case ".png": contentType = "image/png"; break;
            case ".css": contentType = "text/css"; break;
            case ".js": contentType = "text/javascript"; break;
        }

        var blob = container.GetBlobReference(file.Replace(approot, ""));
        blob.Properties.ContentType = contentType;
        blob.Properties.CacheControl = "public, max-age=3600";
        blob.UploadFile(file);
        blob.SetProperties();
    }

    ViewBag.Message = "Contents have been synchronized with the CDN.";

    return View();
}

There are two very important lines of code in there. The first one, container.SetPermissions, ensures that the blob storage container we’re uploading to allows public access. The Windows Azure CDN can only cache blobs stored in public containers.

The second important line of code, blob.Properties.CacheControl, is more interesting. How does the Windows Azure CDN know how long a blob should be cached on each edge node? By default, each blob will be cached for roughly 72 hours. This has some important consequences. First, you cannot invalidate the cache and have to wait for content expiration to occur. Second, the CDN will possibly refresh your blob every 72 hours.

As a general best practice, make sure that you specify the Cache-Control HTTP header for every blob you want to have cached on the CDN. If you want to have the possibility to update content every hour, make sure you specify a low TTL of, say, 3600 seconds. If you want less traffic to occur between the CDN and your storage account, specify a longer TTL of a few days or even a few weeks.
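
In code, that is simply a different value for the CacheControl property already used in the synchronization snippet above; a quick sketch of both options:

// Low TTL: content can be refreshed roughly every hour,
// at the cost of more traffic between the CDN and your storage account.
blob.Properties.CacheControl = "public, max-age=3600";

// Long TTL: cache for a week; less origin traffic, but updates take longer to propagate.
// blob.Properties.CacheControl = "public, max-age=604800";

blob.SetProperties();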

Another best practice is to address CDN URLs using a version number. Since the CDN can create a separate cache of a blob based on the query string, appending a version number to the URL may make it easier to refresh contents in the CDN based on the version of your application. For example, main.css?v1 and main.css?v2 may return different versions of main.css cached on the CDN edge node. Do note that the query string support is opt-in and should be enabled through the management portal. Here’s a quick code snippet which appends the AssemblyVersion to the CDN URLs to version content based on the deployed application version:

@{
    var version = System.Reflection.Assembly.GetAssembly(
        typeof(WindowsAzureCdn.Web.Controllers.HomeController))
        .GetName().Version.ToString();
}
<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    <link href="http://az172729.vo.msecnd.net/static/Content/Site.css?@version" rel="stylesheet" type="text/css" />
    <script src="http://az172729.vo.msecnd.net/static/Scripts/jquery-1.8.2.min.js?@version" type="text/javascript"></script>
</head>
<!-- more HTML -->
</html>

Using cloud services with the CDN

So far we’ve seen how you can offload static content to the Windows Azure CDN. We can upload blobs to a storage account and have them cached on different edge nodes around the globe. Did you know you can also use your cloud service as a source for files cached on the CDN? The only thing to do is, again, go to the Windows Azure Management Portal and ensure the CDN is enabled for the cloud service you want to use.

Serving static content through the CDN

The main difference with using a storage account as the source for the CDN is that the CDN will look into the /cdn/* folder on your cloud service to retrieve its contents. There are two options for doing this: either moving static content to the /cdn folder, or using IIS URL rewriting to “fake” a /cdn folder.

When using ASP.NET MVC’s bundling features, we’ll have to modify the bundle configuration in BundleConfig.cs. First, we’ll have to set bundles.UseCdn to true. Next, we’ll have to provide the URL to the CDN version of our bundles. Here’s a snippet which does just that for the Content/css bundle. We’re still working with a version number to make sure we can update the CDN contents for every deployment of our application.

var version = System.Reflection.Assembly.GetAssembly(typeof(BundleConfig)).GetName().Version.ToString();
var cdnUrl = "http://az170459.vo.msecnd.net/{0}?" + version;

bundles.UseCdn = true;
bundles.Add(new StyleBundle("~/Content/css", string.Format(cdnUrl, "Content/css")).Include("~/Content/site.css"));

Note that this time, the CDN URL does not include any reference to a blob container.

Whether you are using bundling or not, the trick will be to request URLs straight from the CDN instead of from your server to be able to make use of the CDN.

Exposing static content to the CDN with IIS URL rewriting

The Windows Azure CDN only looks at the /cdn folder as a source of files to cache. This means that if you simply copy your static content into the /cdn folder, you’re finished. Your web application and the CDN will play happily together. But this means the static content really has to be static. In the previous example of using ASP.NET MVC bundling, our static “bundles” aren’t really static…

An alternative to copying static content to a /cdn folder explicitly is to use IIS URL rewriting. IIS URL rewriting is enabled on Windows Azure by default and can be configured to translate a /cdn URL to a / URL. For example, if the CDN requests the /cdn/Content/css bundle, IIS URL rewriting will simply serve the /Content/css bundle leaving you with no additional work.

To configure IIS URL rewriting, add a <rewrite> section under the <system.webServer> section in Web.config:

<system.webServer>
  <!-- More settings -->

  <rewrite>
    <rules>
      <rule name="RewriteIncomingCdnRequest" stopProcessing="true">
        <match url="^cdn/(.*)$" />
        <action type="Rewrite" url="{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>

As a side note, you can also configure an outbound rule in IIS URL rewriting to automatically modify your HTML into using the Windows Azure CDN. Do know that this option is only supported when not using dynamic content compression and adds additional workload to your web server due to having to parse and modify your outgoing HTML.

Serving dynamic content through the CDN

Some dynamic content is static in a sense. For example, generating an image on the server or generating a PDF report based on the same inputs. Why would you generate those files over and over again? This kind of content is a perfect candidate to cache on the CDN as well!

Imagine you have an ASP.NET MVC action method which generates an image based on a given string. For every different string the output would be different, however if someone uses the same input string the image being generated would be exactly the same.

As an example, we’ll be using this action method in a view to display the page title as an image. Here’s the view’s Razor code:

@{
    ViewBag.Title = "Home Page";
}

<h2><img src="/Home/GenerateImage/@ViewBag.Message" alt="@ViewBag.Message" /></h2>
<p>
    To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
</p>

In the previous section, we’ve seen how an IIS rewrite rule can map all incoming requests from the CDN. The same rule can be applied here: if the CDN requests /cdn/Home/GenerateImage/Welcome, IIS will rewrite this to /Home/GenerateImage/Welcome and render the image once and cache it on the CDN from then on.

As mentioned earlier, a best practice is to specify the Cache-Control HTTP header. This can be done in our action method by using the [OutputCache] attribute, specifying the time-to-live in seconds:

[OutputCache(VaryByParam = "*", Duration = 3600, Location = OutputCacheLocation.Downstream)]
public ActionResult GenerateImage(string id)
{
    // ... generate image ...

    return File(image, "image/png");
}

We would now only have to generate this image once for every different string requested. The Windows Azure CDN will take care of all intermediate caching.
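
The actual image generation is left out of the snippet above. As an illustration only (this is not the code from the original sample), a naive implementation could render the string to a PNG with System.Drawing, assuming that library is available on your server:

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Web.Mvc;
using System.Web.UI;

public class HomeController : Controller
{
    [OutputCache(VaryByParam = "*", Duration = 3600, Location = OutputCacheLocation.Downstream)]
    public ActionResult GenerateImage(string id)
    {
        using (var bitmap = new Bitmap(600, 75))
        using (var graphics = Graphics.FromImage(bitmap))
        using (var font = new Font("Segoe UI", 32))
        {
            // Render the incoming string onto a white canvas.
            graphics.Clear(Color.White);
            graphics.DrawString(id ?? string.Empty, font, Brushes.DarkBlue, 10, 10);

            // Write the bitmap to a memory stream and return it as a PNG.
            var stream = new MemoryStream();
            bitmap.Save(stream, ImageFormat.Png);
            stream.Position = 0;

            return File(stream, "image/png");
        }
    }
}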

Conclusion

The Windows Azure CDN is one of the building blocks to create fault-tolerant, reliable and fast applications running on Windows Azure. By caching static content on the CDN, the web server has more resources available to process other requests. Next to that, users will experience faster loading of your applications because content is delivered from a server closer to their location.

Enjoy!

Just released: MvcSiteMapProvider 4.0

MvcSiteMapProvider

After a beta version about a month ago, we are proud to release MvcSiteMapProvider 4.0 stable! (get it from NuGet, it’s fresh!) It took 6 months to complete this major version, but I think our GitHub contributors have done a great job. Thank you all and especially Shad for taking the lead on this release!

MvcSiteMapProvider is a tool targeted at ASP.NET MVC that provides menus, site maps, site map path functionality, and more. It provides the ability to configure a hierarchical navigation structure using a pluggable architecture that can be XML, database, or code driven. We have moved beyond a mere ASP.NET SiteMapProvider implementation to provide support for multi-tenant applications, flexible caching, dependency injection, and several interface-based extensibility points where virtually any part of the provider can be replaced with a custom implementation.

Based on areas, controller and action method names rather than hardcoded URL references, sitemap nodes are completely dynamic based on the routing engine used in an application. Search Engine Optimization support is also provided in the form of dynamic sitemaps XML, canonical URL tags, and meta robots tags to ensure you send the search engines consistent - rather than conflicting - information about your URLs.

What has changed?

What I originally intended to do in v2 (but decided against based on popular request) has now been done. The biggest change in this release is that we have stepped away from being an ASP.NET SiteMapProvider implementation. This means a lot of code had to be rewritten, making v4 a pretty clean release. We’re not completely there yet, as we want to have unit tests for everything (and some more changes will be required for that).

Next to stepping away from the ASP.NET provider model, we’ve improved support for dependency injection. If you don’t need it, no worries. If you do need it: every component of the MvcSiteMapProvider is now pluggable. A simple IoC container is used inside MvcSiteMapProvider but you can easily use your preferred one. We’ve created several NuGet packages for popular containers: Ninject, StructureMap, Unity, Autofac and Windsor. Note that we also have packages with the modules only so you can keep using your own container setup. Read more in the documentation.

The sitemap building pipeline has changed as well. A collection of sitemap builders is used to build the sitemap hierarchy from one or more sources. The default configuration of sitemap builders includes an XML parser builder, a reflection-based builder, and a builder that implements the visitor pattern which is used to resolve the URLs before they are cached. Both the builders and visitors can be replaced with one or more custom implementations, opening up the door to alternate data sources and alternate visitor actions. In other words, you can build the tree any way you see fit. The only limitation is that only one of the builders must decide which node is the root node of the tree (although subsequent builders may change that decision, if needed).

The Menu() helper has been rewritten to be more performant and reliable (thanks for the contribution, midishero!).

A great bunch of performance enhancements and stability fixes are in as well.

How do I upgrade?

Since MvcSiteMapProvider has had some significant updates going from v3 to v4, it is best to read the upgrade guide. The first part of the upgrade from v3 to v4 will be updating the NuGet package. Before, MvcSiteMapProvider only had one NuGet package. Today, it has been split into multiple packages, of which the following ones are good to know at this time:

  • MvcSiteMapProvider.Web containing all views and web.config changes
  • MvcSiteMapProvider.MVC<version>.Core containing the library itself

Upgrading from v3 to v4 consists of installing the correct packages for your ASP.NET MVC version:

  • For MVC 2, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC2
  • For MVC 3, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC3
  • For MVC 4, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC4
  • Note that for MVC 4 we have made it possible to upgrade MvcSiteMapProvider instead, which will pull in all required dependencies. Do know that this is not the recommended scenario and it is preferred to install MvcSiteMapProvider.MVC4 instead.

The MvcSiteMapProvider.Web update will add views and all required runtime dependencies to your project. This package is a dependency of each of the above options and generally will not need to be installed explicitly.

In .NET versions prior to .NET 4.0, one line of code should be added to the Application_Start() event of Global.asax:

MvcSiteMapProvider.DI.Composer.Compose();

Note that this code is automatically executed if using .NET 4.0 or higher by the use of WebActivator, so in most cases you will not need to call it manually.

More? Please read the upgrade guide.

What’s next?

NuGet all the things! Install the new MvcSiteMapProvider.MVCx package (replace x with your ASP.NET MVC version) and try it out! Leave your comments, ideas and pull requests on our GitHub page.

Enjoy!

Storing user uploads in Windows Azure blob storage

On one of the mailing lists I follow, an interesting question came up: “We want to write a VSTO plugin for Outlook which copies attachments to blob storage. What’s the best way to do this? What about security?”. Shortly thereafter, an answer came around: “That can be done directly from the client. And storage credentials can be encrypted for use in your VSTO plugin.”

While that’s certainly a solution to the problem, it’s not the best. Let’s try and answer…

What’s the best way to upload data to blob storage directly from the client?

The first solution that comes to mind is implementing the following flow: the client authenticates and uploads data to your service which then stores the upload on blob storage.

Upload data to blob storage

While that is in fact a valid solution, think about the following: you are creating an expensive layer in your application that just sits there copying data from one network connection to another. If you have to scale this solution, you will have to scale out the service layer in between. If you want redundancy, you need at least two machines for doing this simple copy operation… A better approach would be one where the client authenticates with your service and then uploads the data directly to blob storage.

Upload data to blob storage using shared access signature

This approach allows you to have a “cheap” service layer: it can even run on the free version of Windows Azure Web Sites if you have a low traffic volume. You don’t have to scale out the service layer once your number of clients grows (at least, not for the uploading scenario). But how would you handle uploading to blob storage from a security point of view…

What about security? Shared access signatures!

The first suggested answer on the mailing list was this: “(…) storage credentials can be encrypted for use in your VSTO plugin.” That’s true, but you only have 2 access keys to storage. It’s like giving the master key of your house to someone you don’t know. It’s encrypted, sure, but still, the master key is at the client and that’s a potential risk. The solution? Using a shared access signature!

Shared access signatures (SAS) allow us to separate the code that signs a request from the code that executes it. It basically is a set of query string parameters attached to a blob (or container!) URL that serves as the authentication ticket to blob storage. Of course, these parameters are signed using the real storage access key, so that no-one can change this signature without knowing the master key. And that’s the scenario we want to support…

On the service side, the place where you’ll be authenticating your user, you can create a Web API method (or ASMX or WCF or whatever you feel like) similar to this one:

public class UploadController : ApiController
{
    [Authorize]
    public string Put(string fileName)
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var blobClient = account.CreateCloudBlobClient();
        var blobContainer = blobClient.GetContainerReference("uploads");
        blobContainer.CreateIfNotExists();

        var blob = blobContainer.GetBlockBlobReference("customer1-" + fileName);

        var uriBuilder = new UriBuilder(blob.Uri);
        uriBuilder.Query = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,
            SharedAccessStartTime = DateTime.UtcNow,
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(5)
        }).Substring(1);

        return uriBuilder.ToString();
    }
}

This method does a couple of things:

  • Authenticate the client using your authentication mechanism
  • Create a blob reference (not the actual blob, just a URL)
  • Sign the blob URL with write access, valid from now until now + 5 minutes. That should give the client 5 minutes to start the upload.

On the client side, in our VSTO plugin, the only thing to do now is call this method with a filename. The web service will create a shared access signature to a non-existing blob and return it to the client. The VSTO plugin can then use this signed blob URL to perform the upload:

Uri url = new Uri("http://...../uploads/customer1-test.txt?sv=2012-02-12&st=2012-12-18T08%3A11%3A57Z&se=2012-12-18T08%3A16%3A57Z&sr=b&sp=w&sig=Rb5sHlwRAJp7mELGBiog%2F1t0qYcdA9glaJGryFocj88%3D");

var blob = new CloudBlockBlob(url);
blob.Properties.ContentType = "text/plain";

using (var data = new MemoryStream(
    Encoding.UTF8.GetBytes("Hello, world!")))
{
    blob.UploadFromStream(data);
}

Easy, secure and scalable. Enjoy!

Sending e-mail from Windows Azure

Note: this blog post used to be an article for the Windows Azure Roadtrip website. Since that one no longer exists, I decided to post the articles on my blog as well. Find the source code for this post here: 04 SendingEmailsFromTheCloud.zip (922.27 kb).

When a user subscribes, you send him a thank-you e-mail. When his account expires, you send him a warning message containing a link to purchase a new subscription. When he places an order, you send him an order confirmation. I think you get the picture: a fairly common scenario in almost any application is sending out e-mails.

Now, why would I spend a blog post on sending out an e-mail? Well, for two reasons. First, Windows Azure doesn’t have a built-in mail server. No reason to panic! I’ll explain why and how to overcome this in a minute. Second, I want to demonstrate a technique that will make your applications a lot more scalable and resilient to errors.

E-mail services for Windows Azure

Windows Azure doesn’t have a built-in mail server. And for good reasons: if you deploy your application to Windows Azure, it will be hosted on an IP address which previously belonged to someone else. It’s a shared platform, after all. Now, what if some obscure person used Windows Azure to send out a number of spam messages? Chances are your newly-acquired IP address has already been blacklisted, and any e-mail you send from it ends up in people’s spam filters.

All that is fine, but of course, you still want to send out e-mails. If you have your own SMTP server, you can simply configure your .NET application hosted on Windows Azure to make use of your own mail server. There are a number of so-called SMTP relay services out there as well. Even the Belgian hosters like Combell, Hostbasket or OVH offer this service. Microsoft has also partnered with SendGrid to have an officially-supported service for sending out e-mails too. Windows Azure customers receive a special offer of 25,000 free e-mails per month from them. It’s a great service to get started with sending e-mails from your applications: after all, you’ll be able to send 25,000 free e-mails every month. I’ll be using SendGrid in this blog post.

Asynchronous operations

I said earlier that I wanted to show you two things: sending e-mails and building scalable and fault-resilient applications. This can be done using asynchronous operations. No, I don’t mean AJAX. What I mean is that you should create loosely-coupled applications.

Imagine that I was to send out an e-mail whenever a user registers. If the mail server is not available for that millisecond when I want to use it, the send fails and I might have to show an error message to my user (or even worse: a YSOD). Why would that happen? Because my application logic expects that it can communicate with a mail server in a synchronous manner.

clip_image002

Now let’s remove that expectation. If we introduce a queue in between both services, the front-end can keep accepting registrations even when the mail server is down. And when it’s back up, the queue will be processed and e-mails will be sent out. Also, if you experience high loads, simply scale out the front-end and add more servers there. More e-mail messages will end up in the queue, but they are guaranteed to be processed in the future at the pace of the mail server. With synchronous communication, the mail service would probably experience high loads or even go down when a large number of front-end servers is added.

clip_image004

Show me the code!

Let’s combine the two approaches described earlier in this post: sending out e-mails over an asynchronous service. Before we start, make sure you have a SendGrid account (free!). Next, familiarise yourself with Windows Azure storage queues using this simple tutorial.

In a fresh Windows Azure web role, I’ve created a quick-and-dirty user registration form:

clip_image006

Nothing fancy, just a form that takes a post to an ASP.NET MVC action method. This action method stores the user in a database and adds a message to a queue named emailconfirmation. Here’s the code for this action method:

[HttpPost, ActionName("Register")]
public ActionResult Register_Post(RegistrationModel model)
{
    if (ModelState.IsValid)
    {
        // ... store the user in the database ...

        // serialize the model
        var serializer = new JavaScriptSerializer();
        var modelAsString = serializer.Serialize(model);

        // emailconfirmation queue
        var account = CloudStorageAccount.FromConfigurationSetting("StorageConnection");
        var queueClient = account.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference("emailconfirmation");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(modelAsString));

        return RedirectToAction("Thanks");
    }

    return View(model);
}

As you can see, it’s not difficult to work with queues. You just enter some data in a message and push it onto the queue. In the code above, I’ve serialized the registration model containing my newly-created user’s name and e-mail to the JSON format (using JavaScriptSerializer). A message can contain binary or textual data: as long as it’s less than 64 KB in data size, the message can be added to a queue.

Being cheap with Web Workers

When boning up on Windows Azure, you’ve probably read about so-called Worker Roles, virtual machines that are able to run your back-end code. The problem I see with Worker Roles is that they are expensive to start with. If your application has 100 users and your back-end load is low, why would you reserve an entire server to run that back-end code? The cloud and Windows Azure are all about scalability and using a “Web Worker” will be much more cost-efficient to start with - until you have a large user base, that is.

A Worker Role consists of a class that inherits the RoleEntryPoint class. It looks something along these lines:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // ...
        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            // ...
        }
    }
}

You can run this same code in a Web Role too! And that’s what I mean by a Web Worker: by simply adding this class which inherits RoleEntryPoint to your Web Role, it will act as both a Web and Worker role in one machine.

Call me cheap, but I think this is a nice hidden gem. The best part about this is that whenever your application’s load requires a separate virtual machine running the worker role code, you can simply drag and drop this file from the Web Role to the Worker Role and scale out your application as it grows.

Did you send that e-mail already?

Now that we have a pending e-mail message in our queue and we know we can reduce costs using a web worker, let’s get our e-mail across the wire. First of all, using SendGrid as our external e-mail service offers us a giant development speed advantage, since they are distributing their API client as a NuGet package. In Visual Studio, right-click your web role project and click the “Library Package Manager” menu. In the dialog (shown below), search for Sendgrid and install the package found. This will take a couple of seconds: it will download the SendGrid API client and will add an assembly reference to your project.

clip_image008

All that’s left to do is write the code that reads out the messages from the queue and sends the e-mails using SendGrid. Here’s the queue reading:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
        {
            string value = "";
            if (RoleEnvironment.IsAvailable)
            {
                value = RoleEnvironment.GetConfigurationSettingValue(configName);
            }
            else
            {
                value = ConfigurationManager.AppSettings[configName];
            }
            configSetter(value);
        });

        return base.OnStart();
    }

    public override void Run()
    {
        // emailconfirmation queue
        var account = CloudStorageAccount.FromConfigurationSetting("StorageConnection");
        var queueClient = account.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference("emailconfirmation");
        queue.CreateIfNotExist();

        while (true)
        {
            var message = queue.GetMessage();
            if (message != null)
            {
                // ...

                // mark the message as processed
                queue.DeleteMessage(message);
            }
            else
            {
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }
    }
}

As you can see, reading from the queue is very straightforward. You use a storage account, get a queue reference from it and then, in an infinite loop, you fetch a message from the queue. If a message is present, process it. If not, sleep for 30 seconds. On a side note: why wait 30 seconds for every poll? Well, Windows Azure will bill you per 100,000 requests to your storage account. It’s a small amount, around 0.01 cent, but it may add up quickly if this code is polling your queue continuously on an 8 core machine… Bottom line: on any cloud platform, try to architect for cost as well.
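
If you want to take that cost advice one step further, you could back off gradually while the queue stays empty instead of always sleeping a fixed 30 seconds. A rough sketch of the idea (my own variation, not from the original article), replacing the loop inside Run():

// Wait longer and longer while the queue is empty; reset as soon as work arrives.
var delay = TimeSpan.FromSeconds(5);
var maxDelay = TimeSpan.FromMinutes(2);

while (true)
{
    var message = queue.GetMessage();
    if (message != null)
    {
        delay = TimeSpan.FromSeconds(5); // reset the back-off

        // ... process the message and send the e-mail ...

        queue.DeleteMessage(message);
    }
    else
    {
        Thread.Sleep(delay);
        delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
    }
}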

Now that we have our message, we can deserialize it and create a new e-mail that can be sent out using SendGrid:

// deserialize the model
var serializer = new JavaScriptSerializer();
var model = serializer.Deserialize<RegistrationModel>(message.AsString);

// create a new email object using SendGrid
var email = SendGrid.GenerateInstance();
email.From = new MailAddress("maarten@example.com", "Maarten");
email.AddTo(model.Email);
email.Subject = "Welcome to Maarten's Awesome Service!";
email.Html = string.Format(
    "<html><p>Hello {0},</p><p>Welcome to Maarten's Awesome Service!</p>" +
    "<p>Best regards, <br />Maarten</p></html>", model.Name);

var transportInstance = REST.GetInstance(new NetworkCredential("username", "password"));
transportInstance.Deliver(email);

// mark the message as processed
queue.DeleteMessage(message);

Sending e-mail using SendGrid is in fact getting a new e-mail message instance from the SendGrid API client, passing the e-mail details (from, to, body, etc.) on to it and handing it your SendGrid username and password upon sending.

One last thing: you notice we’re only deleting the message from the queue after processing it has succeeded. This is to ensure the message is actually processed. If for some reason the worker role crashes during processing, the message will become visible again on the queue and will be processed by a new worker role which processes this specific queue. That way, messages are never lost and always guaranteed to be processed at least once.

From API key to user with ASP.NET Web API

ASP.NET Web API is a great tool to build an API with. Or as my buddy Kristof Rennen (and the French) always say: “it makes you ‘api”. One of the things I like a lot is the fact that you can do very powerful things that you know and love from the ASP.NET MVC stack, like, for example, using filter attributes. Action filters, result filters and… authorization filters.

Say you wanted to protect your API and make use of the controller’s User property to return user-specific information. You probably will add an [Authorize] attribute (to ensure the user is authenticated) to either the entire API controller or to one of its action methods, like this:

[Authorize]
public class SuperSecretController : ApiController
{
    public string Get()
    {
        return string.Format("Hello, {0}", User.Identity.Name);
    }
}

Great! But… How will your application know who’s calling? Forms authentication doesn’t really make sense for a lot of API’s. Configuring IIS and switching to Windows authentication or basic authentication may be an option. But not every ASP.NET Web API will live in IIS, right? And maybe you want to use some other form of authentication for your API, for example one that uses a custom HTTP header containing an API key? Let’s see how you can do that…

Our API authentication? An API key

API keys may make sense for your API. They provide an easy means of authenticating your API consumers based on a simple token that is passed around in a custom header. OAuth2 may make sense as well, but even that one boils down to a custom Authorization header at the HTTP level. (hint: the approach outlined in this post can be used for OAuth2 tokens as well)

Let’s build our API and require every API consumer to pass in a custom header, named “X-ApiKey”. Calls to our API will look like this:

GET http://localhost:60573/api/v1/SuperSecret HTTP/1.1
Host: localhost:60573
X-ApiKey: 12345

In our SuperSecretController above, we want to make sure that we’re working with a traditional IPrincipal which we can query for username, roles and possibly even claims if needed. How do we get that identity there?

Translating the API key using a DelegatingHandler

The title already gives you a pointer. We want to add a plugin into ASP.NET Web API’s pipeline which replaces the current thread’s IPrincipal with one that is mapped from the incoming API key. That plugin will come in the form of a DelegatingHandler, a class that’s plugged in really early in the ASP.NET Web API pipeline. I’m not going to elaborate on what DelegatingHandler does and where it fits; there’s a perfect post on that to be found here.

Our handler, which I’ll call AuthorizationHeaderHandler will be inheriting ASP.NET Web API’s DelegatingHandler. The method we’re interested in is SendAsync, which will be called on every request into our API.

public class AuthorizationHeaderHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // ...
    }
}

This method offers access to the HttpRequestMessage, which contains everything you’ll probably be needing such as… HTTP headers! Let’s read out our X-ApiKey header, convert it to a ClaimsIdentity (so we can add additional claims if needed) and assign it to the current thread:

public class AuthorizationHeaderHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        IEnumerable<string> apiKeyHeaderValues = null;
        if (request.Headers.TryGetValues("X-ApiKey", out apiKeyHeaderValues))
        {
            var apiKeyHeaderValue = apiKeyHeaderValues.First();

            // ... your authentication logic here ...
            var username = (apiKeyHeaderValue == "12345" ? "Maarten" : "OtherUser");
            var usernameClaim = new Claim(ClaimTypes.Name, username);
            var identity = new ClaimsIdentity(new[] { usernameClaim }, "ApiKey");
            var principal = new ClaimsPrincipal(identity);
            Thread.CurrentPrincipal = principal;
        }

        return base.SendAsync(request, cancellationToken);
    }
}

Easy, no? The only thing left to do is registering this handler in the pipeline during your application’s start:

GlobalConfiguration.Configuration.MessageHandlers.Add(new AuthorizationHeaderHandler());

From now on, any request coming in with the X-ApiKey header will be translated into an IPrincipal which you can easily use throughout your web API. Enjoy!
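
And because the handler assigns a ClaimsPrincipal, you are not limited to the username: any extra claims you add in the handler can be read back in your controllers as well. A small sketch of what that could look like:

using System.Security.Claims;
using System.Web.Http;

[Authorize]
public class SuperSecretController : ApiController
{
    public string Get()
    {
        // User is the principal assigned by AuthorizationHeaderHandler.
        var principal = User as ClaimsPrincipal;
        var nameClaim = principal != null ? principal.FindFirst(ClaimTypes.Name) : null;

        return string.Format("Hello, {0}", nameClaim != null ? nameClaim.Value : "unknown");
    }
}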

PS: if you’re looking into OAuth2, I’ve used a similar approach in “ASP.NET Web API OAuth2 delegation with Windows Azure Access Control Service” to handle OAuth2 tokens.

Domain based routing with ASP.NET Web API

Subdomain route ASP.NET Web API WCF

Imagine you are building an API which is “multi-tenant”: the domain name defines the tenant or customer name and should be passed as a route value to your API. An example would be http://customer1.mydomain.com/api/v1/users/1. Customer 2 can use the same API, using http://customer2.mydomain.com/api/v1/users/1. How would you solve routing based on a (sub)domain in your ASP.NET Web API projects?

Almost 2 years ago (wow, time flies), I wrote a blog post on ASP.NET MVC Domain Routing. Unfortunately, that solution does not work out-of-the-box with ASP.NET Web API. The good news is: it almost works out of the box. The only thing required is adding one simple class:

public class HttpDomainRoute
    : DomainRoute
{
    public HttpDomainRoute(string domain, string url, RouteValueDictionary defaults)
        : base(domain, url, defaults, HttpControllerRouteHandler.Instance)
    {
    }

    public HttpDomainRoute(string domain, string url, object defaults)
        : base(domain, url, new RouteValueDictionary(defaults), HttpControllerRouteHandler.Instance)
    {
    }
}

Using this class, you can now define subdomain routes for your ASP.NET Web API as follows:

RouteTable.Routes.Add(new HttpDomainRoute(
    "{controller}.mydomain.com", // without tenant
    "api/v1/{action}/{id}",
    new { id = RouteParameter.Optional }
));

RouteTable.Routes.Add(new HttpDomainRoute(
    "{tenant}.{controller}.mydomain.com", // with tenant
    "api/v1/{action}/{id}",
    new { id = RouteParameter.Optional }
));

And consuming them in your API controller is as easy as:

public class UsersController
    : ApiController
{
    public string Get()
    {
        var routeData = this.Request.GetRouteData().Values;
        if (routeData.ContainsKey("tenant"))
        {
            return "UsersController, called by tenant " + routeData["tenant"];
        }
        return "UsersController";
    }
}

Here’s a download for you if you want to make use of (sub)domain routes. Enjoy!

WebApiSubdomainRouting.zip (496.64 kb)

Setting up a webfarm using Windows Azure Virtual Machines

With the release of Microsoft’s Windows Azure Virtual Machines, a bunch of new scenarios became available on their cloud platform. If you plan to host multiple web applications, you can either go with Windows Azure Web Sites or go with a webfarm you create using the new IaaS capabilities. The first is okay for any type of application; the latter may be suitable when running a large-scale web application that cannot be deployed easily in the PaaS offering. In this blog post, I’ll show you how to build a webfarm with (free!) load balancing.

Note: I’ll be using the built-in Windows Azure load balancer. If required, you can also deploy your own load balancer VM or reverse proxy. But since the Windows Azure load balancer comes with no extra cost, I think it’s the better choice for a lot of scenarios.

Creating a first virtual machine

After logging in to the new Windows Azure management portal, create a new virtual machine. You can choose to create a Linux or a Windows machine from a template or upload your own VM. I’ll go with a Windows machine but everything explained in this post is valid for a Linux webfarm, too.

Creating a Windows Azure Virtual Machine

Navigate through the wizard, selecting the VM size and administrator username of choice. In step 3 where you have to specify the DNS name and some other settings, be sure to choose an affinity group (giving better networking performance due to the fact that machines are on the same network in the Windows Azure datacenter). The DNS name can be anything you want to name your webfarm.

Windows Azure Virtual Machine Windows Linux

Before finishing the wizard, there is an important thing to do: in step 4, make sure to create an availability set in which all machines of the webfarm will reside. An availability set ensures that whenever maintenance occurs in the datacenter, it only occurs on one or some of your webfarm machines and not on all of them at once.

Windows Azure VM options

Adding an HTTP endpoint to the first machine

After the first virtual machine has been created, navigate to its configuration dashboard in the Windows Azure management portal. In order to have port 80 connected to this machine, a new endpoint should be added to the machine. Add the endpoints of choice; I chose to have port 80 open.

Windows Azure configure VM endpoints

It is important to understand that the endpoints added here are only opened at the load balancer level. That’s right: even a single machine will be behind a load balancer. This is incredibly powerful, as you’ll see when we add a new machine to our IaaS webfarm. It also poses an extra configuration step for single machines though: you’ll have to open port 80 on the machine’s firewall, too. You can safely use remote desktop (Windows) or SSH (Linux) to do so:

Install IIS on WIndows Azure Virtual Machine

Cloning the first machine

To make things easy, I’ve first configured IIS on the first machine. I simply enabled the webserver and made sure Windows Firewall allows connections to IIS. From this point on, I simply want to clone this machine and add it to my webfarm.

The first thing to do when cloning (or “capturing”) a VM is “sysprepping” it. On Linux, there’s a similar option in the Windows Azure agent. Sysprep ensures the machine can be cloned into a new machine, getting its own settings like a hostname and IP address. A non-sysprepped machine can thus never be cloned.

Windows azure virtual machine requires sysprep

After sysprepping the machine, shut it down. If you’ve selected the option during sysprep, the machine will automatically shutdown. Otherwise you can do so through remote desktop or SSH, or simply through the Windows Azure portal.

Shutdown virtual machine on Windows Azure

Next, click the “Capture” button to create a disk image from this machine. Give it a name and check the “Yes, I’ve sysprepped the machine” checkbox in order to be able to continue.

Windows Azure Capture virtual machine

After clicking the “ok” button, Windows Azure will create an image of our first webserver.

After the image has been created, you’ll notice that your first webserver has disappeared! This is normal: the machine has been disemboweled in order to create a template from it. You can now simply re-create this machine using the same settings as before, except you can now base it on this newly created VM image instead of basing it off a VM template Microsoft provides.

In the endpoints configuration, make sure to add the HTTP endpoint again listening on port 80.

Creating a second virtual machine

To create the second machine in your webfarm, create a fresh virtual machine. As the base disk, choose the image we’ve created earlier:

Windows Azure create your own virtual machine image

In step 3 of the machine creation, make sure to connect this machine to our existing web server. In step 4, locate the VM in the same availability set.

Connect to an existing virtual machine in Windows Azure

You now have two machines running, yet they aren’t load balanced at this moment. You’ll notice that both machines are already behind the same hostname (http://webfarm.cloudapp.net) and that they share the same public virtual IP address. This is due to the fact that we “linked” the machines earlier. If you don’t, you will never be able to use the out-of-the-box load balancer that comes with Windows Azure. This also means that the public remote desktop endpoint for both machines will be different: there’s only one IP address exposed to the outside world so you’ll have to think about endpoints.

Don’t add the HTTP endpoint to this machine just yet.

Configuring the Windows Azure load balancer

The last part of setting up our webfarm will be load balancing. This is in fact really, really easy. Simply go to the second machine’s dashboard in the Windows Azure portal and navigate to the Endpoints tab. We’ve already added public HTTP endpoints on our first machine, which means for our second machine we can just subscribe to load balancing:

Windows Azure comes with free load balancing

Easy, huh? You now have free round-robin load balancing with checks every few seconds to ensure that all machines are up and running. And since we linked these machines through an availability set, they are on different fault domains in the datacenter reducing the chance of errors due to malfunctioning hardware or maintenance. You can safely shut down a machine too. In short: anything you’d expect from a load balancer (except sticky sessions).

Final words

There is of course more to it. In ASP.NET, you’ll have to configure machine keys and such in the same way you would do it on-premise. But at the infrastructure level, we’re covered. Enjoy! And be sure to brag about this adventure to any IT pro you know :-)