Maarten Balliauw {blog}

ASP.NET MVC, Microsoft Azure, PHP, web development ...


Automatically strong name signing NuGet packages

Some developers prefer to strong name sign their assemblies. Signing them also means that the dependencies being consumed must be signed. Not all third-party dependencies are signed, though, for example when consuming packages from NuGet. Some are signed, some are unsigned, and the only way to know is at compile time, when we see this:

Referenced assembly does not have a strong name

That’s right: a signed assembly cannot consume an unsigned one. Now what if we really need that dependency but don’t want to give up the ability to easily update it using NuGet? Turns out there is a NuGet package for that!

The Assembly Strong Naming Toolkit can be installed into our project, after which we can use the NuGet Package Manager Console to sign referenced assemblies. There is also the .NET Assembly Strong-Name Signer by Werner van Deventer, which provides us with a nice UI as well.

The problem is that the above tools only sign the assemblies once we already consumed the NuGet package. With package restore enabled, that’s pretty annoying as the assemblies will be restored when we don’t have them on our system, thus restoring unsigned assemblies…

NuGet Signature

Based on the .NET Assembly Strong-Name Signer, I decided to create a small utility that can sign all assemblies in a NuGet package and create a new package out of them. This “signed package” can then be used instead of the original, making sure we can simply consume the package in Visual Studio and be done with it. Here’s some sample code which signs the package “MyPackage” and creates “MyPackage.Signed”:

var signer = new PackageSigner();
signer.SignPackage("MyPackage.1.0.0.nupkg", "MyPackage.Signed.1.0.0.nupkg",
    "SampleKey.pfx", "password", "MyPackage.Signed");

This is pretty neat, if I may say so, but still requires manual intervention for the packages we consume. Unless the NuGet feed we’re consuming could sign the assemblies in the packages for us?

NuGet Signature meets MyGet Webhooks

Earlier this week, MyGet announced their webhooks feature. After enabling the feature on our feed, we can pipe events, such as “package added”, into software of our own and perform an action based on that event, such as signing a package.

MyGet automatically sign NuGet package

Sweet! From the GitHub repository here, download the Web API project and deploy it to Microsoft Azure Websites. I wrote the Web API project with some configuration options, which we can either specify before deploying or through the management dashboard. The application needs these:

  • Signature:KeyFile - path to the PFX file to use when signing (defaults to a sample key file)
  • Signature:KeyFilePassword - private key/password for using the PFX file
  • Signature:PackageIdSuffix - suffix for signed package IDs. Can be empty or something like ".Signed"
  • Signature:NuGetFeedUrl - NuGet feed to push signed packages to
  • Signature:NuGetFeedApiKey - API key for pushing packages to the above feed

The configuration in the Microsoft Azure Management Dashboard could look like this:

Azure Websites

Once that runs, we can configure the webhook on the MyGet side. Be sure to add an HTTP POST hook that posts to <url to your deployed app>/api/sign, and only for the package added event.

MyGet Webhook configuration

From now on, all packages that are added to our feed will be signed when the webhook is triggered. Here’s an example where I pushed several packages to the feed and the NuGet Signature application signed them for me.

NuGet list signed packages

The nice thing is that in Visual Studio we can now consume the “.Signed” packages and no longer have to worry about strong name signing.

Thanks to Werner for the .NET Assembly Strong-Name Signer, on which I was able to base this.

Enjoy!

NuGet Configuration File inheritance is awesome

One way to remove friction from using NuGet in multiple projects is by making use of NuGet Configuration File inheritance, probably the awesomest unknown feature in there.

By default, all NuGet clients (the command-line tool, the Visual Studio extension and the Package Manager Console) make use of the default NuGet configuration file, which lives under %AppData%\NuGet\NuGet.config. NuGet can make use of other configuration files as well! In fact, NuGet can walk an entire tree of configuration files and fetch settings from those.

Which configuration file will be used?

Good question and happy you asked! The standard answer I always give to any question is: it depends. In this case it depends on the client you are using. But ignoring that fact, here’s a generalized version of the tree that is walked to build the configuration the client will work with.

  • The current directory and all its parents
  • The user-specific config file located under %AppData%\NuGet\NuGet.config
  • IDE-specific configuration files, for example:
        %ProgramData%\NuGet\Config\{IDE}\{Version}\{SKU}\*.config (e.g. %ProgramData%\NuGet\Config\VisualStudio\12.0\Pro\NuGet.config)
        %ProgramData%\NuGet\Config\{IDE}\{Version}\*.config
        %ProgramData%\NuGet\Config\{IDE}\*.config
        %ProgramData%\NuGet\Config\*.config
  • The machine-wide config file located under %ProgramData%\NuGet\NuGetDefaults.config (which, as a sysadmin, is a good one to put default configuration options in using an Active Directory Group Policy, just saying)

Full details can be found in the NuGet docs, just keep in mind the first item of the list: all clients start with a NuGet.config in the current directory and then walk up to the drive root, and only then are the standard files checked. Wow. Just WOW! This means every parent folder of a project or solution can contain additional configuration details that will be applied (note: the file that is consulted first wins).

So in short, if I have a solution file C:\Projects\CustomerA\AwesomeSolution\AwesomeSolution.sln, all NuGet clients will load configuration values from:

  • C:\Projects\CustomerA\AwesomeSolution\NuGet.config
  • C:\Projects\CustomerA\NuGet.config
  • C:\Projects\NuGet.config
  • C:\NuGet.config
  • All the other locations mentioned above

This gives some pretty interesting scenarios! Let’s cover a few. But again, check the NuGet docs for more information on possible entries in a NuGet.config.

Example 1: a project-specific configuration

So you are using a private feed? That’s a good thing! (I do hope it’s on MyGet ;-)). It’s the default for your current project? Even better! But why do all your developers have to add this feed to their NuGet configuration if a NuGet.config can be shipped in source control? Simply putting the following file right next to your .sln file will do the job:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="Chuck Norris Feed" value="https://www.myget.org/F/chucknorris" />
  </packageSources>
</configuration>

Want to block access to NuGet.org and simply use the private feed all the time? Here’s some more:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="Chuck Norris Feed" value="https://www.myget.org/F/chucknorris" />
  </packageSources>
  <disabledPackageSources>
    <add key="nuget.org" value="true" />
  </disabledPackageSources>
  <activePackageSource>
    <add key="Chuck Norris Feed" value="https://www.myget.org/F/chucknorris" />
  </activePackageSource>
</configuration>

Example 2: help, my devs are pushing our internal framework to NuGet.org!

Good one, good one. We don’t want that to happen. Probably they forgot the -Source parameter to NuGet.exe, but still. Accidental pushes are not fun! Place this one next to the .sln file and you should be good:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <add key="DefaultPushSource" value="https://www.myget.org/F/chucknorris/api/v2/package" />
  </config>
</configuration>

Feel free to combine it with example 1, it may make sense!

Example 3: NuGet.exe always asks me for proxy credentials

That is not funny. Proxies are like printers: the idea is great but when you need them things don’t always go well. The good thing is we can configure default proxy credentials. While it’s possible to put this one in a project, it’s probably better to do this in the default %AppData%\NuGet\NuGet.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <add key="http_proxy" value="host" />
    <add key="http_proxy.user" value="username" />
    <add key="http_proxy.password" value="encrypted_password" />
  </config>
</configuration>

Example 4: feed inheritance and package restore

We have multiple customers, each with a specific feed they can use. Awesome! Every customer project can contain the following NuGet.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="Customer X" value="https://www.myget.org/F/customerx" />
  </packageSources>
</configuration>

In the C:\Projects folder, we can add another configuration file which adds another feed for every project located under C:\Projects. All customer projects typically use both of these feeds: customer-specific components as well as the framework built in-house, each on their own feed. But help! All of a sudden, package restore started complaining that no package named X can be found!

The reason for that is probably that the active package source is set to one specific feed and not the “aggregate” of all configured feeds. Here’s a solution to that which can go in C:\Projects\NuGet.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="Our Cool Framework" value="https://www.myget.org/F/ourcoolframework" />
  </packageSources>
  <activePackageSource>
    <add key="All" value="(Aggregate source)" />
  </activePackageSource>
</configuration>

All sorts of fancy combinations are possible, the only thing you have to do is find an approach that works for you.

Enjoy!

Source Control considered harmful

TL;DR: Using source control is a really bad idea. Or is it? Skip to Conclusion for the meat of this post.

One of the first things I do with a new project in Visual Studio is not add it to source control. There are many reasons, but it all boils down to this: Source Control introduces more problems than it solves.

Before I dive into this, I'll share the solution with you. Put your sources on a USB drive. Yes, it's that simple.

Implications

If you're like most other people, you don't like that solution, because it feels inefficient:

  • USB drives can get lost
  • USB drives can end up in the dishwasher
  • I have to buy a USB drive for every developer on the team
  • Sharing sources with distributed teams is more difficult: USB drives have to be shipped by snail mail

All of that is true, but then again...

  • You can always make a copy of a USB drive to safeguard against loss
  • Sharing USB drives is really easy: plug and play! Ease of use!
  • You can have lots of coffee waiting for a USB drive to arrive with that contribution to your OSS project

Still, many people go for source control: Source Control and a central repository solve all implications of using a USB drive, so why not use source control?

Fragility

Have you ever let a junior developer loose on a git repository? I can promise you, it's not pretty.

  • Merges will go wrong
  • They will find out about rebasing and mess up the entire system
  • Pull requests on GitHub? One click to merge, no need to test or review!
  • Developers will forget to check in specific files

Again: all of this is easy with a USB drive: one location to store the project. Yes, merging is slightly difficult too but then again replaying history in source control is much worse.

And I haven't even talked about having to have a network share or a GitHub account in which you can have private repositories. That's all extra cost and extra risk. What if the Internet connection goes down? What if a dev's laptop breaks? You might even say a USB drive is too advanced and a typewriter is an even better way to write code!

Cost

Did I mention the cost of USB drives? At most conferences and shops you will get them for free. Even if you buy them, they are probably around $0.10 per GB. USB drives are very inexpensive.

Compare that with source control: we need an Internet connection, a GitHub repository, and most importantly: devs will have to read documentation on using git or be coached by someone on the team. That's really inefficient and costs a lot of time!

Conclusion

You may have noted that this is a slightly strange post. You are correct, it is. I’m responding to some of the outrage regarding yesterday’s NuGet.org outage. Tweets and blogs mention to not use NuGet, or to use NuGet but definitely not use package restore. That’s perfectly fine, but I don’t think the reasons for not using it are well founded, hence the above sarcasm. If it wasn’t clear: you should be using source control.

Should you use NuGet package restore? I think it depends on your preference, mostly. It should not depend on NuGet.org outages, nor on the microwave destroying your WiFi signal and failing your builds utilizing package restore. Should you add packages to your repository or use package restore? It depends on what you want to achieve and how you want to work. I prefer not to do this because they are dependencies that are versioned (package version and packages.config) so why version them again? We don’t add the issues from our issue tracker to source control either, right?

We put issues in a specialized system for managing issues. In my opinion, the same should be true for software and component dependencies. But then again: if you want to add packages to source control, fine by me. As some tweets said, you don’t have to do it for the minimal disk space optimizations. All that matters is if it makes sense to your process. 

Just like with source control, issue trackers and other things (like package restore) in your build process, you should read up on them, play with them and know the risks. Do we know that our Internet connection can break during solar storms? Well yes. It’s a minor risk but if it’s important to your shop, do mitigate that risk. Do laptops break? Yes. If it’s important that you can keep working even if a laptop crashes, buy some more and keep them up-to-date with your main development machine. If you rely on GitHub and want to get work done if they have issues, make sure you have an up-to-date fork somewhere on a file share. Make that two file shares!

And if you rely on NuGet package restore… you get the point, right? For NuGet, there are private repositories available that can host your in-house packages and the ones you are using from upstream sources like NuGet.org. Use them, if they matter for your development process. Know about NuGet 2.8’s automatic fallback to the local cache you have on disk and if something goes wrong, use that cache until the package source is back up.

The development process and the tools are part of your system. Know your tools. Even if it requires you to read crazy books like how to work with git. Or Pro NuGet 2.

Developing Windows Azure Mobile Services server-side

Word of warning: this is a partial cross-post from the JetBrains WebStorm blog. The post you are currently reading adds some more information around Windows Azure Mobile Services, builds on a full example and is a bit more in-depth.

With Microsoft’s Windows Azure Mobile Services, we can build a back-end for iOS, Android, HTML, Windows Phone and Windows 8 apps that supports storing data, authentication, push notifications across all platforms and more. There are client libraries available for all these platforms which can be used when developing in an IDE of choice, e.g. AppCode, Google Android Studio or Visual Studio. In this post, let’s focus on what these different platforms have in common: the server-side code.

This post was sparked by my buddy Kristof Rennen’s session for our user group. During his session he mentioned a couple of times how he dislikes Node.js and the trial-and-error manner of building the server-side due to lack of good tooling. Working for a tooling vendor and intrigued by the quest of finding a better way, I decided to post the short article you are currently reading.

Do note that I will focus more on how to get your development environment set-up and less on the Windows Azure Mobile Services feature set. Yes, you will learn some of the very basics but there are way better resources available for getting in-depth knowledge on the topic.

Here’s what we will see in this post:

  • Setting up a Windows Azure Mobile Service
  • Creating a table and storing data
  • A simple HTML/JS client
  • Adding logic to our API
  • Working on server-side logic with WebStorm
  • Sending e-mail using a Node.js module
  • Putting our API to the test with the REST client
  • Unit testing our logic

The scenario

Doing some exploration is always more fun when we can do it based on a simple scenario. Whenever JetBrains goes to a conference and we have a booth, we like to do a raffle for licenses. The idea is simple: come to our booth for a chat, fill out a simple form and we will pick random names after the conference and send a free license.

For this post, I’ve created a very simple form in HTML and JavaScript, collecting visitor name and e-mail address.


Once someone participates in the raffle, the name and e-mail address are stored in a database and we send out an e-mail thanking that person for visiting the booth together with a link to download a product trial.

Setting up a Windows Azure Mobile Service

First things first: we will require a Windows Azure account to start developing. Next, we can create a new Mobile Service through the Windows Azure Management Portal.


Next, we can give our service a name and pick the datacenter location for it. We also have to provide the type of database we want to use: a free, 20 MB database, or a full-fledged SQL Database. While Windows Azure Mobile Services is always coupled to a database, we can build a custom API with it as well.


Once completed, we get several tabs to work with. There’s the initial welcome screen, displaying links to documentation and client libraries. The other tabs give access to monitoring, scaling, how we want to authenticate users, push notification settings and logs. Since we want to store data of booth visitors, let’s enter the Data tab.

Creating a table and storing data

From the Data tab, we can create a new table. Let’s call it Visitor. When creating a new table, we have to specify access rules for the API that will be available on top of it.


We can tell who can read (API GET request), insert (API POST request), update (API PATCH request) and delete (API DELETE request). Since our application will only insert new data and we don’t want to force booth visitors to log in with their social profiles, we can specify inserts can be done if an API key is provided. All other operations will be blocked for outside users: reading and deleting will only be available through the Windows Azure Management Portal with the above settings.

Do we have to create columns for storing booth visitor data? By default, Windows Azure Mobile Services has “dynamic schema” enabled, which means we can throw some JSON at our Mobile Service and it will store the data for us.

A simple HTML/JS client

As promised earlier in this post, let’s see how we can build a simple client for the service we have just created. We’ll go with an HTML and JavaScript based client as it’s fairly easy to demonstrate. Again, have a look at the other client SDKs for the platform you are developing for.

Our HTML page consists of nothing but two text boxes and a button, conveniently named name, email and send. There are two ways of sending data to our Mobile Service: calling the API directly or making use of the client library provided. Both are easy to do: the API lives at https://<servicename>.azure-mobile.net/tables/<tablename> and we can POST a JSON-serialized object to it, an approach we’ll take later in this blog post. There is also a JavaScript client library available from https://<servicename>.azure-mobile.net/client/MobileServices.Web-1.0.0.min.js which our client is using.

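Roughly, the client code comes down to the following sketch (the service URL, API key and element IDs are placeholders based on the description above):

<script src="https://myservice.azure-mobile.net/client/MobileServices.Web-1.0.0.min.js"></script>
<script type="text/javascript">
    // Placeholder service URL and API key - replace with your own values
    var client = new WindowsAzure.MobileServiceClient(
        "https://myservice.azure-mobile.net/", "YOUR-API-KEY");
    var visitorTable = client.getTable("Visitor");

    document.getElementById("send").addEventListener("click", function () {
        // Insert a JSON object; "dynamic schema" will create the columns for us
        visitorTable.insert({
            name: document.getElementById("name").value,
            email: document.getElementById("email").value
        }).done(function () {
            alert("Thank you for participating!");
        });
    });
</script>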

As we can see, a new MobileServiceClient is created on which we can get a table reference (getTable) and insert a JSON-formatted object. Do note that we have to pass in an API key in the client constructor, which can be obtained from the Windows Azure Management Portal under the Manage Keys toolbar button.

From the portal, we can now see the data we’re submitting from our simple application.

Adding logic to our API

Let’s make it a bit more exciting! What if we wanted to store a timestamp with every record? We may want to have some insight into when our booth was busiest. We can send a timestamp from the client but that would only add clutter to our client-side code. Also if we wanted to port the HTML/JS client to other platforms it would mean we have to make sure every client sends this data to our mobile service. In short: this calls for some server-side logic.

For every table created, we can make use of the Script tab to add custom logic to read, insert, update and delete operations which we can write in JavaScript. By default, this is what a script for insert may look like:

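In essence, it just passes the insert straight through to the data store:

function insert(item, user, request) {
    // Default behavior: store the incoming item as-is
    request.execute();
}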

The insert function will be called with 3 parameters: the item to be stored (our JSON-serialized object), the current user and the full request. By default, the request.execute() function is called which will make use of the other two parameters internally. Let’s enrich our item with a timestamp.

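A minimal version could look like the following sketch (the createdAt property name is just an example; thanks to dynamic schema, any property we add to the item gets stored):

function insert(item, user, request) {
    // Enrich the incoming item with a server-side timestamp before storing it
    item.createdAt = new Date();
    request.execute();
}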

Hitting Save will deploy this script to our mobile service which from now on will store an inserted timestamp in our database as well.

This is a very trivial example. There are a lot of things that can be done server-side: enforcing validation, record filtering, storing data in other tables as well, sending e-mail or text messages, … Here’s a post with some common scenarios. Full reference to the server-side objects is also available.

Working on server-side logic with WebStorm

Unfortunately, the in-browser editor for server-side scripts is a bit limited. It features no autocompletion and all code has to go in one file. How would we create shared logic which can be re-used across different scripts? How would we unit test our code? This is where WebStorm comes in. We can access the complete server-side code through a Git repository and work on it in a full IDE!

The Git access to our mobile service is disabled by default. Through the portal’s right-hand side menu, we can enable it by clicking the Set up source control link. Next, we can find repository details from the Configure tab.


We can now use WebStorm’s VCS | Checkout From Version Control | Git menu to bring down the server-side code for our Windows Azure Mobile Service.


In our project, we can see several folders and files. The service/api folder can hold custom APIs (check the readme.md file for more info). service/scheduler can hold scripts that execute at a given time or interval, much like CRON jobs. service/shared can hold shared scripts that can be used inside table logic, custom APIs and scheduler scripts. In the service/table folder we can find the script we have created through the portal: visitor.insert.js. Also note the visitor.json file which contains the access rules we configured through the portal earlier.


From now on, we can work inside WebStorm and push to the remote Git repository if we want to deploy our new code.

Sending e-mail using a Node.js module

Let’s go back to our initial requirements: whenever someone enters their name and e-mail address in our application, we want to send out an e-mail thanking them for participating. We can do this by making use of an NPM module, for example SendGrid.

Windows Azure Mobile Services comes with some NPM modules preinstalled, like SendGrid and Twilio. However, we want to make sure we are always using the same version of the NPM package, so let’s install it into our project. WebStorm has a built-in package manager to do this, however Windows Azure Mobile Services requires us to install the module in a non-standard location (the service folder), hence we will use the Terminal tool window to install it.


Once finished, we can start working on our e-mail logic. Since we may want to re-use the e-mail logic (and we want to unit test it later), it’s best to create our logic in the shared folder.


In our shared module, we can make use of the SendGrid module to create and send an e-mail. We can export our sendThankYouMessage function to consumers of our shared module. In the visitor.insert.js script we can require our shared module and make use of the functionality it exposes. And as an added bonus, WebStorm provides us with autocompletion, code analysis and so on.

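Put together, the shared module and the updated insert script could look something like this sketch (the emails.js file name, the relative require path, the sender address and the mail contents are assumptions; the sendgrid module was installed into the service folder with npm install sendgrid):

// service/shared/emails.js (hypothetical file name)
var SendGrid = require('sendgrid').SendGrid;

exports.sendThankYouMessage = function (name, email) {
    // Placeholder SendGrid credentials - replace with your own
    var sendgrid = new SendGrid('sendgrid-user', 'sendgrid-key');

    sendgrid.send({
        to: email,
        from: 'booth@example.com',
        subject: 'Thanks for visiting our booth!',
        text: 'Hi ' + name + ', thank you for participating in the raffle. ' +
              'Download your trial here: http://example.com/trial'
    }, function (success, message) {
        if (!success) {
            console.error(message);
        }
    });
};

// service/table/visitor.insert.js (making use of the shared module)
var emails = require('../shared/emails');

function insert(item, user, request) {
    item.createdAt = new Date();
    request.execute({
        success: function () {
            // Return the standard response first, then send the thank-you e-mail
            request.respond();
            emails.sendThankYouMessage(item.name, item.email);
        }
    });
}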

Once we’ve updated our code, we can transfer our server-side code to Windows Azure Mobile Services. Ctrl+K (or Cmd+K on Mac OS X) allows us to commit and push from within the IDE.


Putting our API to the test with the REST client

Once our changes have been deployed, we can test our API. This can be done using one of the client libraries or by making use of WebStorm’s built-in REST client. From the Tools | Test RESTful Web Service menu we can craft our API calls manually.

We can specify the HTTP method to use (POST since we want to insert) and the URL to our Windows Azure Mobile Services endpoint. In the headers section, we can add a Content-Type header and set it to application/json. We also have to specify an API key in the X-ZUMO-APPLICATION header. This API key can be found in the Windows Azure Management Portal. On the right-hand side we can provide the text to post, in this case a JSON-serialized object with some properties.


After running the request, we get back response headers and a response body.

No error message but an object is being returned? Great, that means our code works (and should also be sending out an e-mail). If something does go wrong, the Logs tab in the Windows Azure portal can be a tremendous help in finding out what went wrong.

Through the toolbar on the left, we can export/import requests, making it easy to create a number of predefined requests that can easily be run over and over for testing the REST API.

Unit testing our logic

With WebStorm we can easily test our JavaScript code and custom Node.js modules. Let’s first set up our IDE. Unit testing can be done using the Nodeunit testing framework, which we can install using the Node.js package manager.


Next, we can create a new Run Configuration from the toolbar, selecting Nodeunit as the configuration type and entering all required configuration details. In our case, let’s run all tests from the test directory.


Next, we can create a folder that will hold our tests and mark it as a Test Source Root (open the context menu and use Mark Directory As | Test Source Root). Tests for Nodeunit are always considered modules and should export their test functions. Here’s a very basic example which tells Nodeunit to wait for one assertion, assert that a boolean is true and marks the test case completed.

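As a sketch (the file name is hypothetical):

// test/basic.test.js
exports.testBoolean = function (test) {
    test.expect(1);              // wait for exactly one assertion
    test.ok(true, "this assertion should pass");
    test.done();                 // mark the test case as completed
};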

Of course we can also test our business logic. It’s best to create separate modules under the shared folder as they will be easier to unit test. However if you do have to test the actual table scripts (like insert functionality), there is a little trick that allows doing just that. The following snippet exports the insert function outside of the table-specific module:

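It boils down to adding something like this at the bottom of the table script (a sketch):

// at the bottom of service/table/visitor.insert.js
// Expose the insert function as a module export so a unit test can require() this script
exports.insert = insert;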

We can now test the complete visitor.insert.js module and even provide mocks to work with. The following example loads all our modules and sets up the test expectations. We’re also overriding specific functionality, such as the sendThankYouMessage function, just to make sure it’s called by our table API logic.

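A possible shape for such a test (file names, require paths and the mocked request object are assumptions based on the project layout described earlier):

// test/visitor.insert.test.js (hypothetical file name and paths)
var emails = require('../service/shared/emails');
var visitorTable = require('../service/table/visitor.insert.js');

exports.testInsertAddsTimestampAndSendsMail = function (test) {
    test.expect(2);

    // Override the shared e-mail logic so no real e-mail is sent during the test
    emails.sendThankYouMessage = function (name, email) {
        test.ok(true, "sendThankYouMessage was called by the insert script");
        test.done();
    };

    var item = { name: "Maarten", email: "maarten@example.org" };

    // A minimal mock of the Mobile Services request object
    var request = {
        execute: function (options) {
            test.ok(item.createdAt, "a timestamp was added to the item");
            if (options && options.success) {
                options.success();
            }
        },
        respond: function () { }
    };

    visitorTable.insert(item, null, request);
};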

The full source code for both the server-side and client-side application can be found on https://github.com/maartenba/JetBrainsBoothMobileService.

If you would like to learn more about Windows Azure Mobile Services and work with authentication, push notifications or custom APIs, check out the getting started documentation. And if you haven’t already, give WebStorm a try.

Enjoy!

Using the Windows Azure Content Delivery Network (CDN)

With the Windows Azure Content Delivery Network (CDN) released as a preview, I thought it was a good time to write up some details about how to work with it. The CDN can be used for offloading content to a globally distributed network of servers, ensuring faster throughput to your end users.

Note: this is a modified and updated version of my article at ACloudyPlace.com roughly two years ago. I have added information on how to work with ASP.NET MVC bundling and the Windows Azure CDN, updated screenshots and so on.

Reasons for using a CDN

There are a number of reasons to use a CDN. One of the obvious reasons lies in the nature of the CDN itself: a CDN is globally distributed and caches static content on edge nodes, closer to the end user. If a user accesses your web application and some of the files are cached on the CDN, the end user will download those files directly from the CDN, experiencing less latency in their request.

Windows Azure CDN graphically

Another reason for using the CDN is throughput. If you look at a typical webpage, about 20% of it is HTML which was dynamically rendered based on the user’s request. The other 80% goes to static files like images, CSS, JavaScript and so forth. Your server has to read those static files from disk and write them on the response stream, both actions which take away some of the resources available on your virtual machine. By moving static content to the CDN, your virtual machine will have more capacity available for generating dynamic content.

Enabling the Windows Azure CDN

The Windows Azure CDN is built for two services that are available in your subscription: storage and cloud services. The easiest way to get started with the CDN is by using the Windows Azure Management Portal. From the New menu at the bottom, select App Services | CDN | Quick Create.

Enabling Windows Azure CDN

From the dropdown that is shown, select either a storage account or a cloud service which will serve as the source of our CDN edge data. After clicking Create, the CDN will be initialized. This may take up to 60 minutes because the settings you’ve just applied may take that long to propagate to all CDN edge locations globally (over 24 was the last number I read). Your CDN will be assigned a URL in the form of http://<id>.vo.msecnd.net.

Once the CDN endpoint is created, there are some options that can be managed. Currently they are somewhat limited but I’m pretty sure this will expand. For now, you can for example assign a custom domain name to the CDN by clicking the “Manage Domains” button in the toolbar.

Manage the Windows Azure CDN - Add custom domain

Note that the CDN works using HTTP by default, but HTTPS is supported as well and can be enabled through the management portal. Unfortunately, SSL is using a certificate that Microsoft provides and there’s currently no option to use your own, making it hard to use a custom domain name and HTTPS.

Serving blob storage content through the CDN

Let’s start and offload our static content (CSS, images, JavaScript) to the Windows Azure CDN using a storage account as the source for CDN content. In an ASP.NET MVC project, edit the _Layout.cshtml view. Instead of using the bundles for CSS and scripts, let’s include them manually from a URL hosted on your newly created CDN:

<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    <link href="http://az172665.vo.msecnd.net/static/Content/Site.css" rel="stylesheet" type="text/css" />
    <script src="http://az172665.vo.msecnd.net/static/Scripts/jquery-1.8.2.min.js" type="text/javascript"></script>
</head>
<!-- more HTML -->
</html>

Note that the CDN URL includes a reference to a folder named “static”.

If you now run this application, you’ll find no CSS or JavaScript applied. The reason for this is obvious: we have specified the URL to our CDN but haven’t uploaded any files to our storage account backing the CDN.

Where are our styles?

Uploading files to the CDN is easy. All you need is a public blob container and some blobs hosted in there. You can use tools like Cerebrata’s Cloud Storage Studio or upload the files from code. For example, I’ve created an action method taking care of uploading static content for me:

[HttpPost, ActionName("Synchronize")]
public ActionResult Synchronize_Post()
{
    var account = CloudStorageAccount.Parse(
        ConfigurationManager.AppSettings["StorageConnectionString"]);
    var client = account.CreateCloudBlobClient();

    var container = client.GetContainerReference("static");
    container.CreateIfNotExist();
    container.SetPermissions(
        new BlobContainerPermissions {
            PublicAccess = BlobContainerPublicAccessType.Blob });

    var approot = HostingEnvironment.MapPath("~/");
    var files = new List<string>();
    files.AddRange(Directory.EnumerateFiles(
        HostingEnvironment.MapPath("~/Content"), "*", SearchOption.AllDirectories));
    files.AddRange(Directory.EnumerateFiles(
        HostingEnvironment.MapPath("~/Scripts"), "*", SearchOption.AllDirectories));

    foreach (var file in files)
    {
        var contentType = "application/octet-stream";
        switch (Path.GetExtension(file))
        {
            // Path.GetExtension returns the extension including the leading dot
            case ".png": contentType = "image/png"; break;
            case ".css": contentType = "text/css"; break;
            case ".js": contentType = "text/javascript"; break;
        }

        var blob = container.GetBlobReference(file.Replace(approot, ""));
        blob.Properties.ContentType = contentType;
        blob.Properties.CacheControl = "public, max-age=3600";
        blob.UploadFile(file);
        blob.SetProperties();
    }

    ViewBag.Message = "Contents have been synchronized with the CDN.";

    return View();
}

There are two very important lines of code in there. The first one, container.SetPermissions, ensures that the blob storage container we’re uploading to allows public access. The Windows Azure CDN can only cache blobs stored in public containers.

The second important line of code, blob.Properties.CacheControl, is more interesting. How does the Windows Azure CDN know how long a blob should be cached on each edge node? By default, each blob will be cached for roughly 72 hours. This has some important consequences. First, you cannot invalidate the cache and have to wait for content expiration to occur. Second, the CDN will possibly refresh your blob every 72 hours.

As a general best practice, make sure that you specify the Cache-Control HTTP header for every blob you want to have cached on the CDN. If you want to have the possibility to update content every hour, make sure you specify a low TTL of, say, 3600 seconds. If you want less traffic to occur between the CDN and your storage account, specify a longer TTL of a few days or even a few weeks.

Another best practice is to address CDN URLs using a version number. Since the CDN can create a separate cache of a blob based on the query string, appending a version number to the URL may make it easier to refresh contents in the CDN based on the version of your application. For example, main.css?v1 and main.css?v2 may return different versions of main.css cached on the CDN edge node. Do note that the query string support is opt-in and should be enabled through the management portal. Here’s a quick code snippet which appends the AssemblyVersion to the CDN URLs to version content based on the deployed application version:

@{
    var version = System.Reflection.Assembly.GetAssembly(
        typeof(WindowsAzureCdn.Web.Controllers.HomeController))
        .GetName().Version.ToString();
}
<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    <link href="http://az172729.vo.msecnd.net/static/Content/Site.css?@version" rel="stylesheet" type="text/css" />
    <script src="http://az172729.vo.msecnd.net/static/Scripts/jquery-1.8.2.min.js?@version" type="text/javascript"></script>
</head>
<!-- more HTML -->
</html>

Using cloud services with the CDN

So far we’ve seen how you can offload static content to the Windows Azure CDN. We can upload blobs to a storage account and have them cached on different edge nodes around the globe. Did you know you can also use your cloud service as a source for files cached on the CDN? The only thing to do is, again, go to the Windows Azure Management Portal and ensure the CDN is enabled for the cloud service you want to use.

Serving static content through the CDN

The main difference with using a storage account as the source for the CDN is that the CDN will look into the /cdn/* folder on your cloud service to retrieve its contents. There are two options for doing this: either moving static content to the /cdn folder, or using IIS URL rewriting to “fake” a /cdn folder.

When using ASP.NET MVC’s bundling features, we’ll have to modify the bundle configuration in BundleConfig.cs. First, we’ll have to set bundles.UseCdn to true. Next, we’ll have to provide the URL to the CDN version of our bundles. Here’s a snippet which does just that for the Content/css bundle. We’re still working with a version number to make sure we can update the CDN contents for every deployment of our application.

var version = System.Reflection.Assembly.GetAssembly(typeof(BundleConfig)).GetName().Version.ToString();
var cdnUrl = "http://az170459.vo.msecnd.net/{0}?" + version;

bundles.UseCdn = true;
bundles.Add(new StyleBundle("~/Content/css", string.Format(cdnUrl, "Content/css")).Include("~/Content/site.css"));

Note that this time, the CDN URL does not include any reference to a blob container.

Whether you are using bundling or not, the trick will be to request URLs straight from the CDN instead of from your server to be able to make use of the CDN.

Exposing static content to the CDN with IIS URL rewriting

The Windows Azure CDN only looks at the /cdn folder as a source of files to cache. This means that if you simply copy your static content into the /cdn folder, you’re finished. Your web application and the CDN will play happily together. But this means the static content really has to be static. In the previous example of using ASP.NET MVC bundling, our static “bundles” aren’t really static…

An alternative to copying static content to a /cdn folder explicitly is to use IIS URL rewriting. IIS URL rewriting is enabled on Windows Azure by default and can be configured to translate a /cdn URL to a / URL. For example, if the CDN requests the /cdn/Content/css bundle, IIS URL rewriting will simply serve the /Content/css bundle leaving you with no additional work.

To configure IIS URL rewriting, add a <rewrite> section under the <system.webServer> section in Web.config:

<system.webServer>
  <!-- More settings -->

  <rewrite>
    <rules>
      <rule name="RewriteIncomingCdnRequest" stopProcessing="true">
        <match url="^cdn/(.*)$" />
        <action type="Rewrite" url="{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>

As a side note, you can also configure an outbound rule in IIS URL rewriting to automatically modify your HTML into using the Windows Azure CDN. Do know that this option is only supported when not using dynamic content compression and adds additional workload to your web server due to having to parse and modify your outgoing HTML.

Serving dynamic content through the CDN

Some dynamic content is static in a sense. For example, generating an image on the server or generating a PDF report based on the same inputs. Why would you generate those files over and over again? This kind of content is a perfect candidate to cache on the CDN as well!

Imagine you have an ASP.NET MVC action method which generates an image based on a given string. For every different string the output would be different; however, if someone uses the same input string, the generated image would be exactly the same.

As an example, we’ll be using this action method in a view to display the page title as an image. Here’s the view’s Razor code:

@{
    ViewBag.Title = "Home Page";
}

<h2><img src="/Home/GenerateImage/@ViewBag.Message" alt="@ViewBag.Message" /></h2>
<p>
    To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
</p>

In the previous section, we’ve seen how an IIS rewrite rule can map all incoming requests from the CDN. The same rule can be applied here: if the CDN requests /cdn/Home/GenerateImage/Welcome, IIS will rewrite this to /Home/GenerateImage/Welcome and render the image once and cache it on the CDN from then on.

As mentioned earlier, a best practice is to specify the Cache-Control HTTP header. This can be done in our action method by using the [OutputCache] attribute, specifying the time-to-live in seconds:

[OutputCache(VaryByParam = "*", Duration = 3600, Location = OutputCacheLocation.Downstream)]
public ActionResult GenerateImage(string id)
{
    // ... generate image ...

    return File(image, "image/png");
}

We would now only have to generate this image once for every different string requested. The Windows Azure CDN will take care of all intermediate caching.

Conclusion

The Windows Azure CDN is one of the building blocks to create fault-tolerant, reliable and fast applications running on Windows Azure. By caching static content on the CDN, the web server has more resources available to process other requests. Next to that, users will experience faster loading of your applications because content is delivered from a server closer to their location.

Enjoy!

An autoscaling build farm using TeamCity and Windows Azure

Autoscaling... myself!

NOTE: While the content in this blog post will still work, JetBrains now has a plugin that is the recommended way of working with TeamCity and build agents on Azure. Please check this blog post to learn more about it.


Cloud computing is often referred to as a cost saver due to its billing models. If we can move workloads that are seasonal to the cloud, cost reduction is something that will come. No matter if it’s really “seasonal seasonal” (e.g. a temporary high workload around the holidays) or “daily seasonal” where workloads are different depending on the time of day, these workloads have written cloud all over them.

A workload that may be seasonal is the workload done by build servers. Take TeamCity for example. A TeamCity server instruments a pool of build agents that are either idle or compiling source code into binaries. Depending on how your team is structured and when people work, there is a big chance that pool of build agents is doing nothing for several hours every day, except incurring cost. What if we could move the build agents to a platform like Windows Azure and have them autoscale, depending on the actual load on the build farm?

Creating a build agent virtual machine

The first step in setting this brilliant scheme in motion is to set up a build agent virtual machine. We can select any virtual machine image we want for our build agent, even upload our own VHDs if needed. I'm selecting a Windows Server 2012 image here but if you need a different OS for your build agent you can select that instead.

During the creation of this build agent, there is nothing special we should do. We can select a small/medium/large/extra large instance, give ourselves an administrator password and so on. The only important step here is that we set up an endpoint for the TeamCity build agent, listening on TCP port 9090.

Open load balancer endpoint

Once the machine is started, we will have to install all required prerequisites for our build agent. We can connect using remote desktop (or SSH if it’s a Linux machine). On the machine I have here, I installed all .NET framework versions to ensure I'm able to build .NET projects. On a build machine for Java, we would install the correct runtimes and JDKs for our projects. Anything, really, if it is needed for the sources we’ll be building. On Windows, I typically use Web Platform Installer and Chocolatey to get this done as automated as possible.

Web Platform Installer in action

Installing TeamCity build agent

In order for our build agent to communicate with the TeamCity server, we have to install the build agent. We can do this by navigating to our TeamCity server from within the virtual machine and use the Install Build Agents link from the Agents page.

Installing build agent

On a Windows server, we can use the Windows Installer but we can also use Java Web Start or even simply extract a ZIP file. This last option can be useful on a Linux machine, for example.

Installing the build agent is pretty much a next, next, finish operation. The only important thing is that we run the agent as a Windows service (or have it automatically start at boot time on other operating systems). We also want to specify the URL to our TeamCity server as well as the port on which the build agent will listen for incoming data from TeamCity. Note that this port should be the one opened in the load balancer earlier, in the case of this machine port 9090.

Specify port and server

Before starting the build agent, make sure the local firewall allows incoming connections. Through the Windows firewall, allow incoming connections for port 9090 (and while we’re at it, for a range of ports so we can easily clone this machine and not care about the firewall anymore).

Windows Firewall configuration

If we now start the build agent service, it should connect to our TeamCity server. Under the agents tab, we should be seeing a new unauthorized agent popping up. If that works, we’re good to go with our build agent farm.

A new unauthorized build agent shows up

Don’t shut down the machine just yet, we still need to prepare it for creating a build agent image.

Creating a build agent image

While still connected through remote desktop, open a command prompt and run the sysprep /generalize command from the c:\windows\system32\sysprep folder. On Linux, there’s a similar option in the Windows Azure agent. Sysprep ensures the machine can be cloned into a new machine, getting its own settings like a hostname and IP address. A non-sysprepped machine can thus never be cloned.

Sysprep our machine

Once finished, our RDP connection should be gone and our machine can be shut down in the Windows Azure Management Portal. In fact, it must be shut down. Once that is done, we can use the Capture button and transform our virtual machine into a template we can create new virtual machines from.

Capturing a virtual machine image

The capturing process will take a couple of minutes and results in there no longer being a build server virtual machine in the Windows Azure Management Portal. Is that bad? No, we can now start cloning the machine and create multiple copies, all having the exact same configuration and components installed.

Setting up multiple build agent machines

The next thing we want to have is multiple build agent machines. From the Windows Azure Management portal, we can create them. Not based on a platform image but using the image created during the previous step.

Using a virtual machine image as the base

The virtual machine configuration can be whatever we want. Do we want extra small instances or extra large? It’s all up to us and our credit card. On the next page, we have to specify some more important details. First, we have to select or create a cloud service. This will be the DNS host name under which all of our build agents are going to live. We also have to specify the availability set, in essence a setting telling Windows Azure to never unplug power or networking for all machines in this set at the same time.

Selecting cloud service and availability set

We will be creating a couple of machines, so it’s important to get the next page right. Since all our machines will share the same hostname and IP address to the outside world, our build agents have to listen on different TCP ports. Make sure that the first agent maps port 9090 to port 9090, the second one 9091 to 9091 and so on. Not doing this will mess with your mind afterwards when troubleshooting.

Endpoint configuration

Finish the process, let Windows Azure start the machine and create a new one. Important: same cloud service, same availability set and correct endpoint mappings!

Configuring the build agents

Once we have several machines running, we have to connect to them using remote desktop again. This can be done through the portal. Once in, locate the build agent configuration file (c:\BuildAgent\conf\buildAgent.properties in a default installation) and set the port number on which it listens to the one that was mapped as an external endpoint. Again, agent one will listen on port 9090, agent number two on 9091 and so on. We can also set a better name for the build agent, in my case I’ve chosen to go with “agent2”. Very inspirational and all.
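
For the second agent, the relevant entries in buildAgent.properties would look something like this sketch (serverUrl and name are example values; ownPort is the property that holds the agent's listening port in a default agent install):

# c:\BuildAgent\conf\buildAgent.properties (example values)
serverUrl=http://our-teamcity-server:8111
name=agent2
ownPort=9091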

Setting build agent name and port

Save and restart the build agent (or the machine). The TeamCity server should now start listing all build agents.

Windows Azure build agents for TeamCity

Make sure to authorize them all, as we want to be sure they can connect to the TeamCity server later on. Once that has been done and all build agents are listed here, we can shut them all down except for one. We want to have something running, right?

Configuring autoscaling

A good question at this point might be: why did we have to move all these machines under the same cloud service? The reason is simple: we want to autoscale our farm and this can only be done within one cloud service. From the cloud service, click the Scale tab and start configuring.

Autoscaling configuration

For this post, I’ve chosen the following values:

  • Autoscale based on CPU
  • Have a minimum of one instance, and a maximum of, well… all of them.
  • The target CPU range is 0 to 10. For a production environment this will typically be between 60 and 80 or 40 and 80, depending on the chosen machine size for the build agents. Windows Azure will trigger an autoscaling operation if we go outside this range; a small, low range means it will trigger a scale operation much faster, while bigger numbers mean it responds more slowly.
  • Scale up by and scale down by as well as the number of minutes to wait after the previous operations are up to you. If you want a build agent to remain online for 30 minutes after it has been started, even if CPU usage drops, set it to 30 minutes. If 2 machines should be started at once, increase that number as well.

Scaling will happen based on the average CPU percentage of all running machines. If our builds run at 100% CPU all the time on our agent we can set the thresholds a bit higher. If builds are only taking 20% we might want to run multiple agents on one machine or decrease the scaling thresholds a bit. Want to measure CPU utilization for a given build? Better read up on the TeamCity Performance Monitor then.

Putting it to the test

Putting it to the test shouldn’t be that hard. Start some builds and make sure the agent gets loaded with builds. Once we hit the CPU threshold, Windows Azure will launch a virtual machine that was previously turned off.

Windows Azure autoscaling in action

Once it has booted, we will also see it surface on the TeamCity server.

Build agents on TeamCity

Once the load goes down again, Windows Azure will shut down machines that are below the thresholds and make sure they don’t incur costs any longer. Which is pretty impressive!

If a development team triggers a massive amount of builds during the day, Windows Azure will pretty soon scale out to a higher number of virtual build agents. And at night when there are only some builds being triggered, it will scale back to lesser instances. If, for example, we manage to run machines only for 12 hours instead of 24 hours a day, that means our build farm’s price goes down by half.

TeamCity’s architecture as well as the way Windows Azure works makes this cost reduction possible. It’s also fun to set up, and it gives us a wide range of options (how about a Windows Server 2012 farm, a Linux farm and so on).

Enjoy!

Just released: MvcSiteMapProvider 4.0

After a beta version about a month ago, we are proud to release MvcSiteMapProvider 4.0 stable! (get it from NuGet, it’s fresh!) It took 6 months to complete this major version but I think our GitHub contributors have done a great job. Thank you all and especially Shad for taking the lead on this release!

MvcSiteMapProvider is a tool targeted at ASP.NET MVC that provides menus, site maps, site map path functionality, and more. It provides the ability to configure a hierarchical navigation structure using a pluggable architecture that can be XML, database, or code driven. We have moved beyond a mere ASP.NET SiteMapProvider implementation to provide support for multi-tenant applications, flexible caching, dependency injection, and several interface-based extensibility points where virtually any part of the provider can be replaced with a custom implementation.

Based on areas, controller and action method names rather than hardcoded URL references, sitemap nodes are completely dynamic based on the routing engine used in an application. Search Engine Optimization support is also provided in the form of dynamic sitemaps XML, canonical URL tags, and meta robots tags to ensure you send the search engines consistent - rather than conflicting - information about your URLs.

What has changed?

What I originally intended to do in v2 (but decided against based on popular request) is something that now has been done. The biggest change in this release is that we have stepped away from being an ASP.NET SiteMapProvider implementation. This means a lot of code had to be rewritten making v4 a pretty clean release. We’re not there yet completely as we want to have unit tests for all (and some more changes will be required for that).

Next to stepping away from the ASP.NET provider model, we’ve improved support for dependency injection. If you don’t need it, no worries. If you do need it: every component of the MvcSiteMapProvider is now pluggable. A simple IoC container is used inside MvcSiteMapProvider but you can easily use your preferred one. We’ve created several NuGet packages for popular containers: Ninject, StructureMap, Unity, Autofac and Windsor. Note that we also have packages with the modules only so you can keep using your own container setup. Read more in the documentation.

The sitemap building pipeline has changed as well. A collection of sitemap builders is used to build the sitemap hierarchy from one or more sources. The default configuration of sitemap builders includes an XML parser builder, a reflection-based builder, and a builder that implements the visitor pattern which is used to resolve the URLs before they are cached. Both the builders and visitors can be replaced with one or more custom implementations, opening up the door to alternate data sources and alternate visitor actions. In other words, you can build the tree any way you see fit. The only limitation is that only one of the builders must decide which node is the root node of the tree (although subsequent builders may change that decision, if needed).

The Menu() helper has been rewritten to become a more performant and reliable helper (thanks for the contribution, midishero!)

A great bunch of performance enhancements and stability fixes are in as well.

How do I upgrade?

Since MvcSiteMapProvider has had some significant updates going from v3 to v4, it is best to read the upgrade guide. The first part of the upgrade from v3 to v4 will be updating the NuGet package. Before, MvcSiteMapProvider only had one NuGet package. Today, it has been split in multiple, of which the following ones are good to know at this time:

  • MvcSiteMapProvider.Web containing all views and web.config changes
  • MvcSiteMapProvider.MVC<version>.Core containing the library itself

Upgrading from v3 to v4 consists of installing the correct packages for your ASP.NET MVC version:

  • For MVC 2, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC2
  • For MVC 3, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC3
  • For MVC 4, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC4
  • Note that for MVC 4 we have made it possible to upgrade MvcSiteMapProvider instead, which will pull in all required dependencies. Do know that this is not the recommended scenario and it is preferred to install MvcSiteMapProvider.MVC4 instead.
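
For example, on an ASP.NET MVC 4 project the upgrade boils down to these Package Manager Console commands (pick the package matching your MVC version):

PM> Uninstall-Package MvcSiteMapProvider
PM> Install-Package MvcSiteMapProvider.MVC4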

The MvcSiteMapProvider.Web update will add views and all required runtime dependencies to your project. This package is a dependency of each of the above options and generally will not need to be installed explicitly.

In .NET versions prior to .NET 4.0, one line of code should be added to the Application_Start() event of Global.asax:

MvcSiteMapProvider.DI.Composer.Compose();

Note that on .NET 4.0 or higher this code is executed automatically through the use of WebActivator, so in most cases you will not need to call it manually.
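
For completeness, a minimal Global.asax.cs containing that call could look like the sketch below; only the Compose() line is MvcSiteMapProvider-specific, the rest is the standard MVC application class with the usual route and area registration omitted.

using System.Web;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        // Required on .NET versions prior to 4.0; on .NET 4.0+ WebActivator performs this call for you.
        MvcSiteMapProvider.DI.Composer.Compose();

        // ... your usual startup code (route registration, areas, ...) goes here.
    }
}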

More? Please read the upgrade guide.

What’s next?

NuGet all the things! Install the new MvcSiteMapProvider.MVCx package (replace x with your ASP.NET MVC version) and try it out! Leave your comments, ideas and pull requests on our GitHub page.

Enjoy!

Windows Azure Traffic Manager Explained

With yesterday's announcement of Windows Azure Traffic Manager surfacing in the management portal (as a preview), I thought it was a good moment to recap this more than two-year-old service. Windows Azure Traffic Manager allows you to control the distribution of network traffic to your Cloud Services and VMs hosted within Windows Azure.

What is Traffic Manager?

The Windows Azure Traffic Manager provides several methods of distributing internet traffic among two or more cloud services or VMs, all accessible through the same URL, in one or more Windows Azure datacenters. At its core, it is basically a distributed DNS service that knows which Windows Azure services are sitting behind the Traffic Manager URL and distributes requests based on three possible profiles:

  • Failover: all traffic is mapped to one Windows Azure service, unless it fails. It then directs all traffic to the failover Windows Azure service.
  • Performance: all traffic is mapped to the Windows Azure service “closest” (in routing terms) to the client requesting it. This will direct users from the US to one of the US datacenters, European users will probably end up in one of the European datacenters and Asian users, well, somewhere in the Asian datacenters.
  • Round-robin: requests are simply distributed across the various Windows Azure services defined in the Traffic Manager policy.

Now, I started this post in a slightly bitchy tone: "this service has been around for over two years". And that's true! It has been in the old management portal for ages and hasn't left the preview stage since. However, don't think nothing has happened with this service: next to cloud services, Traffic Manager can now also distribute traffic across VMs, and it can do so across datacenters for both. What about a SharePoint farm deployed in multiple datacenters, using Traffic Manager to distribute traffic geographically?

Why should I care?

We've seen it before: clouds going down. Amazon EC2, Google, Windows Azure, … they have all had their glitches. Whenever a cloud goes down, whether completely or partially, it seems a lot of websites "in the cloud" go down with it. Most comments you read on Twitter at those times are along the lines of "outrageous!" and "don't go cloud!". While I understand these comments, I think they are wrong. These "clouds" can fail. They are even designed to fail, and often provide components and services that allow you to cope with these failures. You just have to expect failure at some point in time and build it into your application.

Yes, I just told you to expect failure when going to the cloud. But don't consider a failing cloud a bad cloud, or a cloud that is down. For your application, a "failing" cloud or server or database should be nothing more than a scaling operation. The only thing is: it's scaling down to zero. If you design your application so that it can scale out, you should also plan for scaling "in", eventually down to zero. Use different availability zones on Amazon; if you're a Windows Azure user, fault domains protect you within the datacenter and Traffic Manager can save your behind across datacenters. Use it!

My thoughts on Traffic Manager

Let's come back to that "two-year-old service". Don't let that, or the fact that it is still a preview, hold you back from using Traffic Manager. Our MyGet web application has been making use of it since it was first introduced. While in the beginning we used it for performance reasons (routing US traffic to a US datacenter and European traffic to a European datacenter), we have since changed strategy and are now using it as a failover to the North Europe datacenter, in which nothing is deployed. The screenshot below highlights a degradation (because there is indeed no deployment in North Europe currently).

MyGet Windows Azure Traffic Manager

But why fail over to a datacenter in which nothing is deployed? Well, because if the West Europe datacenter were to fail, we can simply spin up a new deployment in North Europe. Yes, there will be some downtime, but the last thing we want in such a situation is additional downtime from DNS propagation taking too long. We simply map www.myget.org to our Traffic Manager domain, and whenever we need to switch, Traffic Manager takes care of the DNS part.
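
In DNS terms, that mapping is nothing more than a CNAME record pointing the www host at the Traffic Manager domain. A zone-file entry could look like the line below; the Traffic Manager domain shown is a made-up example, not MyGet's actual one.

www    IN    CNAME    myapp-example.trafficmanager.net.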

In general, Traffic Manager has probably been the most stable service in the Windows Azure platform. I haven’t experienced any issues so far with Traffic Manager over more than two years, preview mode or not.

Enjoy!

Update: Alexandre Brisebois, a fellow MVP, has some additional insights to share.

And there it is - MvcSiteMapProvider v4 (beta)

It has been a while since a major update was done to the MvcSiteMapProvider project, but today is the day! MvcSiteMapProvider is a tool that provides flexible menus, breadcrumb trails, and SEO features for the ASP.NET MVC framework, similar to the ASP.NET SiteMapProvider model.

To be honest, I have not done a lot of the work myself. Thanks to the power of open source (and Shad, who did a massive job refactoring the whole project, thanks!), MvcSiteMapProvider v4 is around the corner.

A lot of things have changed. And by a lot, I mean A LOT! The most important change is that we've stepped away from the ASP.NET SiteMapProvider dependency. This has been a massive pain in the behind and the source of a lot of issues. Whereas I initially planned on ditching this dependency with v3, it has now happened anyway.

Other improvements have been made around dependency injection: every component in MvcSiteMapProvider can now be replaced with a custom implementation. A simple IoC container is used inside MvcSiteMapProvider, but you can easily plug in your preferred one. We've created several NuGet packages for popular containers: Ninject, StructureMap, Unity, Autofac and Windsor. Note that we also have packages containing only the DI modules, so you can keep using your own container setup.

The sitemap building pipeline has changed as well. A collection of sitemap builders is used to build the sitemap hierarchy from one or more sources. The default configuration of sitemap builders includes an XML parser builder, a reflection-based builder, and a builder that implements the visitor pattern, which is used to resolve the URLs before they are cached. Both the builders and the visitors can be replaced with one or more custom implementations, opening the door to alternate data sources and alternate visitor actions. In other words, you can build the tree any way you see fit. The only limitation is that exactly one of the builders decides which node is the root node of the tree (although subsequent builders may change that decision, if needed).

Next to that, a series of new helpers have been added, bugs have been fixed, the security model has been made more performant, and lots more. Consider v4 almost a complete rewrite of the project!

We’ve tried to make the upgrade path as smooth as possible but there may be some breaking changes in the provider. If you currently have the ASP.NET MVC SiteMapProvider installed in your project, feel free to give the new version a try using the NuGet package of your choice (only one is needed for your ASP.NET MVC version).

Install-Package MvcSiteMapProvider.MVC2 -Pre
Install-Package MvcSiteMapProvider.MVC3 -Pre
Install-Package MvcSiteMapProvider.MVC4 -Pre

Speaking of NuGet packages: by popular demand, the core of MvcSiteMapProvider has been extracted into a separate package (MvcSiteMapProvider.MVC<version>.Core) so that you don't have to include views and so on in your library projects.
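
For an ASP.NET MVC 4 class library, for example, that comes down to installing just the core package; the package id below is inferred from the naming scheme above and, like the others, is still prerelease.

Install-Package MvcSiteMapProvider.MVC4.Core -Pre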

Please give the beta a try and let us know your thoughts on GitHub (or the comments below). Pull requests currently go in the v4 branch.

Create a list of favorite ReSharper plugins

With the latest version of the ReSharper 8 EAP, JetBrains shipped an extension manager for plugins, annotations and settings. Where it previously was a hassle and a suboptimal experience to install plugins into ReSharper, it’s really easy to do now. And what is really nice is that this extension manager is built on top of NuGet! Which means we can do all sorts of tricks…

The first thing that comes to mind is creating a personal NuGet feed containing just those plugins that are of interest to me. And where better to create such a feed than on MyGet? Create a new feed, navigate to the Package Sources pane and add a new package source. There's a preset available for using the ReSharper extension gallery!

Add package source on MyGet - R# plugins

After adding the ReSharper extension gallery as a package source, we can start adding our favorite plugins, annotations and extensions to our own feed.

Add ReSharper plugins to MyGet

Of course there are some other things we can do as well:

  • “Proxy” the plugins from the ReSharper extension gallery and post your project/team/organization specific plugins, annotations and settings to your private feed. Check this post for more information.
  • Push prerelease versions of your own plugins, annotations and settings to a MyGet feed. Once stable, push them "upstream" to the ReSharper extension gallery (a command-line example follows this list).
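
As a sketch of that second scenario: pushing a prerelease plugin package to a MyGet feed from the command line could look like the example below. The package name, feed name and API key are placeholders; the exact push URL for your feed is listed on its details page on MyGet.

nuget push MyResharperPlugin.1.0.0-beta.nupkg -Source https://www.myget.org/F/your-feed-name/api/v2/package -ApiKey your-api-key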

Enjoy!