Maarten Balliauw {blog}

ASP.NET MVC, Microsoft Azure, PHP, web development ...


Get your Windows 8 up to speed fast

With the release of Windows 8 on MSDN yesterday, I have a gut feeling that today, around the globe, people are installing this fresh operating system on their machines. I’ve done so too, and I want to share two tools with you: one that helped me get up to speed fast, and one that will help me get up to speed even faster the next time I want to reset my PC.

Chocolatey

One of the best things created for Windows, ever, is Chocolatey. If you are familiar with Ninite, you will find that both serve the same purpose; however, Chocolatey is more developer-focused.

Chocolatey provides a catalog of software packages like Notepad++, ReSharper, Paint.NET and a whole lot more. After installing Chocolatey, all you have to do to install such a package is invoke “cinst <package>” from the command line. The key phrase here is command line: what if you could just create a batch file containing all packages you need, like I did here?
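
For illustration, such a batch file is nothing more than a series of cinst calls, one per package; the package ids below are just examples, pick whatever you need:

cinst notepadplusplus
cinst 7zip
cinst git
cinst fiddler
cinst resharper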

Batch files are great, but it’s even easier to create a custom Chocolatey feed on www.myget.org (create a feed, go to package sources, add Chocolatey). Simply add whatever you need on a fresh system to this feed, and whenever you want to install every package from your custom feed, like I did yesterday evening, you invoke

cinst All -source "http://www.myget.org/F/chocolateymaarten"

and go to bed. In the morning, everything is on your PC.

Windows 8 - Reset Your PC

There’s a new feature in Windows 8 called “Refresh/reset Your PC”. What it does is revert to a certain baseline whenever you feel the need for a format C: coming up. This baseline, by default, is a fresh install. Now what if you could just set your own baseline and revert to that one the next time you need a reinstall? The good news: you can do this!

  • Configure your PC at will
  • From an elevated command prompt, issue:
    mkdir C:\SoFreshThatItSmellsGreat
    recimg -CreateImage C:\SoFreshThatItSmellsGreat

Done!

ASP.NET Web API OAuth2 delegation with Windows Azure Access Control Service

If you are familiar with OAuth2’s protocol flow, you know there are quite a few things you should implement if you want to protect your ASP.NET Web API using OAuth2. To refresh your mind, here’s what’s required (at least):

  • OAuth authorization server
  • Keep track of consuming applications
  • Keep track of user consent (yes, I allow application X to act on my behalf)
  • OAuth token expiration & refresh token handling
  • Oh, and your API

That’s a lot to build there. Wouldn’t it be great to outsource part of that list to a third party? A little-known feature of the Windows Azure Access Control Service is that you can use it to keep track of applications, user consent and token expiration & refresh token handling. That leaves you with implementing:

  • OAuth authorization server
  • Your API

Let’s do it!

On a side note: I’m aware of the road-to-hell post released last week on OAuth2. I still think that whoever offers OAuth2 should be responsible enough to implement the protocol in a secure fashion. The protocol gives you the options to do so, and, as with regular web page logins, you as the implementer should think about security.

Building a simple API

I’ve been doing some demos lately using www.brewbuddy.net, a sample application (sources here) which enables hobby beer brewers to keep track of their recipes and current brews. There are a lot of applications out there that may benefit from being able to consume my recipes. I love the smell of a fresh API in the morning!

Here’s an API which would enable access to my BrewBuddy recipes:

[Authorize]
public class RecipesController
    : ApiController
{
    protected IRecipeService RecipeService { get; private set; }

    public RecipesController(IRecipeService recipeService)
    {
        RecipeService = recipeService;
    }

    public IQueryable<RecipeViewModel> Get()
    {
        var recipes = RecipeService.GetRecipes(User.Identity.Name);
        var model = AutoMapper.Mapper.Map(recipes, new List<RecipeViewModel>());

        return model.AsQueryable();
    }
}

Nothing special, right? We’re just querying our RecipeService for the current user’s recipes. And the current user should be logged in as specified using the [Authorize] attribute.  Wait a minute! The current user?

I’ve built this API on the standard ASP.NET Web API features such as the [Authorize] attribute and the expectation that the User.Identity.Name property is populated. The reason for that is simple: my API requires a user and should not care how that user is populated. If someone wants to consume my API by authenticating over Forms authentication, fine by me. If someone configures IIS to use Windows authentication or even hacks in basic authentication, fine by me. My API shouldn’t care about that.

OAuth2 is a different state of mind

OAuth2 adds a layer of complexity. Mental complexity that is. Your API consumer is not your end user. Your API consumer is acting on behalf of your end user. That’s a huge difference! Here’s what really happens:

OAuth2 protocol flow

The end user loads a consuming application (a mobile app or a web app, it doesn’t really matter). That application requests a token from an authorization server trusted by your application. The user has to log in, and usually accept the fact that the app can perform actions on the user’s behalf (think of Twitter’s “Allow/Deny” screen). If successful, the authorization server returns a code to the app, which the app can then exchange for an access token containing the user’s username and potentially other claims.
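
To make that last code-for-token exchange concrete, here’s a minimal consumer-side sketch using HttpClient. It assumes an ACS namespace called “yournamespace” and placeholder client credentials; the exact token endpoint, client id, secret and redirect URI depend on your own configuration.

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public class OAuth2TokenClient
{
    // Exchanges the authorization code, obtained after user consent, for an access token.
    // The endpoint, client id, client secret and redirect URI below are placeholders.
    public static async Task<string> ExchangeCodeForTokenAsync(string authorizationCode)
    {
        using (var client = new HttpClient())
        {
            var response = await client.PostAsync(
                "https://yournamespace.accesscontrol.windows.net/v2/OAuth2-13",
                new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    { "grant_type", "authorization_code" },
                    { "code", authorizationCode },
                    { "client_id", "yourConsumingApp" },
                    { "client_secret", "yourClientSecret" },
                    { "redirect_uri", "http://localhost/oauth/callback" }
                }));

            response.EnsureSuccessStatusCode();

            // The response body is a JSON document containing the access token and,
            // depending on the server, a refresh token and an expiration time.
            return await response.Content.ReadAsStringAsync();
        }
    }
}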

Now remember what we started this post with? We want to get rid of part of the OAuth2 implementation. We don’t want to be bothered by too much of this. Let’s try to accomplish the following:

OAuth2 protocol flow with Windows Azure

Let’s introduce you to…

WindowsAzure.Acs.Oauth2

“That looks like an assembly name. Heck, even like a NuGet package identifier!” You’re right about that. I’ve done a lot of the integration work for you (sources / NuGet package).

WindowsAzure.Acs.Oauth2 is currently in alpha status, so you’ll have to register this package in your ASP.NET MVC Web API project using the package manager console, issuing the following command:

Install-Package WindowsAzure.Acs.Oauth2 -IncludePrerelease

This command will bring some dependencies to your project and install the following source files:

  • App_Start/AppStart_OAuth2API.cs - Makes sure that OAuth2-signed SWT tokens are transformed into a ClaimsIdentity for use in your API. Remember where I used User.Identity.Name in my API? Populating that is performed by this guy.

  • Controllers/AuthorizeController.cs - A standard authorization server implementation which is configured by the Web.config settings. You can override certain methods here, for example if you want to show additional application information on the consent page.

  • Views/Shared/_AuthorizationServer.cshtml - A default consent page. This can be customized at will.

Next to these files, the following entries are added to your Web.config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="WindowsAzure.OAuth.SwtSigningKey" value="[your 256-bit symmetric key configured in the ACS]" />
    <add key="WindowsAzure.OAuth.RelyingPartyName" value="[your relying party name configured in the ACS]" />
    <add key="WindowsAzure.OAuth.RelyingPartyRealm" value="[your relying party realm configured in the ACS]" />
    <add key="WindowsAzure.OAuth.ServiceNamespace" value="[your ACS service namespace]" />
    <add key="WindowsAzure.OAuth.ServiceNamespaceManagementUserName" value="ManagementClient" />
    <add key="WindowsAzure.OAuth.ServiceNamespaceManagementUserKey" value="[your ACS service management key]" />
  </appSettings>
</configuration>

These settings should be configured based on the Windows Azure Access Control settings. Details on this can be found on the Github page.

Consuming the API

After populating Windows Azure Access Control Service with a client_id and client_secret for my consuming app (which you can do using the excellent FluentACS package or manually, as shown in the following screenshot), you’re good to go.

ACS OAuth2 Service Identity

The WindowsAzure.Acs.Oauth2 package adds additional functionality to your application: it provides your ASP.NET Web API with the current user’s details (after a successful OAuth2 authorization flow took place) and it adds a controller and view to your app which provides a simple consent page (that can be customized):

OAuth2 consent page

After granting access, WindowsAzure.Acs.Oauth2 will store the choice of the user in Windows Azure ACS and redirect you back to the application. From there on, the application can ask Windows Azure ACS for an access token and refresh the access token once it expires. Without your application having to interfere with that process ever again. WindowsAzure.Acs.Oauth2 transforms the incoming OAuth2 token into a ClaimsIdentity which your API can use to determine which user is accessing your API. Focus on your API, not on OAuth.
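
As a quick illustration of that last point, here’s a minimal sketch of an API action reading the populated identity. It assumes .NET 4.5’s System.Security.Claims; the claim types you actually receive depend on your ACS configuration, so inspect the identity in a debugger to see what’s there.

using System.Linq;
using System.Security.Claims;
using System.Web.Http;

[Authorize]
public class WhoAmIController : ApiController
{
    public string Get()
    {
        // User.Identity.Name is populated from the incoming OAuth2 (SWT) token.
        var name = User.Identity.Name;

        // For anything beyond the name, cast to ClaimsIdentity and inspect the claims.
        var identity = User.Identity as ClaimsIdentity;
        var claims = identity == null
            ? "(no claims)"
            : string.Join(", ", identity.Claims.Select(c => c.Type + "=" + c.Value));

        return "Authenticated as " + name + " with claims: " + claims;
    }
}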

Enjoy!

Hands-on Windows Azure Services for Windows

A couple of weeks ago, Microsoft announced their Windows Azure Services for Windows Server. If you’ve ever heard about the Windows Azure Appliance (which is vaporware imho :-)), you’ll be interested to see that the Windows Azure Services for Windows Server are in fact bringing the Windows Azure services to your datacenter. It’s still a Technical Preview, but I took the plunge and installed this on a bunch of virtual machines I had lying around. In this post, I’ll share some impressions, ideas, pains and speculations with you.

Why would you run Windows Azure Services in your own datacenter? Why not! You will make your developers happy because they have access to all services they are getting to know and getting to love. You’ll be able to provide self-service access to SQL Server, MySQL, shared hosting and virtual machines. You decide on the quota. And if you’re a server hugger like a lot of companies in Belgium: you can keep hugging your servers. I’ll elaborate more on the “why?” further in this blog post.

Note: Currently only SQL Server, MySQL, Web Sites and Virtual Machines are supported in Windows Azure Services for Windows Server. Not storage, not ACS, not Service Bus, not...

You can sign up for my “I read your blog plan” at http://cloud.balliauw.net and create your SQL Server databases on the fly! (I’ll keep this running for a couple of days; if it’s offline you’re too late. Update: it’s down by now.)

My setup

Since I did not have the capacity to run enough virtual machines (you need at least four!) on my own machine, I decided to deploy the Windows Azure Services for Windows Server on a series of virtual machines in Windows Azure’s IaaS offering.

You will need servers for the following roles:

  • Controller node (the management portal your users will be using)
  • SQL Server (can be hosted on the controller node)
  • Storage server (can be on the controller node as well)

If you want to host Windows Azure Websites (shared hosting):

  • At least one load balancer node (will route HTTP(S) traffic to a frontend node)
  • At least one frontend node (will host web sites, more frontends = more websites / redundancy)
  • At least one publisher node (will serve FTP and Webdeploy)

If you want to host Virtual Machines:

  • A System Center 2012 SP1 CTP2 node (managing VM’s)
  • At least one Hyper-V server (running VM’s)

Being a true ITPro (forgot the <irony /> element there…), I decided I did not want to host those virtual machines on the public Internet. Instead, I created a Windows Azure Virtual Network. Knowing CIDR notation (<irony />), I quickly crafted the BalliauwCloud virtual network: 172.16.240.0/24.

So a private network… Then again: I wanted to be able to access some of the resources hosted in my cloud on the Internet, so I decided to open up some ports in Windows Azure’s load balancer and firewall so that my users could use the SQL Server both internally (172.16.240.9) and externally (sql1.cloud.balliauw.net). Same with high-density shared hosting in the form of Windows Azure Websites, by the way.

Being a Visio pro (no <irony /> there!), here’s the schematic overview of what I set up:

Windows Azure Services for Windows Server - Virtual Network

Nice, huh? Even nicer is my to-be diagram where I also link creating Hyper-V machines to this portal (not there yet…):

Virtual machines

My setup experience

I found the detailed step-by-step installation guide and completed the installation as described. Not a great success! The Windows Azure Websites feature requires a file share and I forgot to open up a firewall port for that. The result? A failed setup. I restarted setup and ended with 500 Internal Server Terror a couple of times. Help!

Being a Technical Preview product, there is no support for cleaning / restarting a failed setup. Luckily, someone hooked me up with the team at Microsoft who built this and thanks to Andrew (thanks, Andrew!), I was able to continue my setup.

If everything works out for your setup: enjoy! If not, here’s some troubleshooting tips:

Keep an eye on the C:\inetpub\MgmtSvc-ConfigSite\trace.txt log file. It holds valuable information, as does the event log (Applications and Services Logs > Microsoft > Windows > Antares).

If you’re also experiencing issues and want to retry installation, here are the steps to clean your installation:

  1. On the controller node: stop services:
    net stop w3svc
    net stop WebFarmService
    net stop ResourceMetering
    net stop QuotaEnforcement
  2. In IIS Manager (inetmgr), clean up the Hosting Administration REST API service. Under site MgmtSvc-WebSites:
    - Remove IIS application HostingAdministration (just the app, NOT the site itself)
    - Remove physical files: C:\inetpub\MgmtSvc-WebSites\HostingAdministration
  3. Drop databases and logins by running the SQL script: C:\inetpub\MgmtSvc-ConfigSite\Drop-MgmtSvcDatabases.sql
  4. (Optional, but helped in my case) Repair permissions
    PowerShell.exe -c "Add-PSSnapin WebHostingSnapin ; Set-ReadAccessToAsymmetricKeys IIS_IUSRS"
  5. Clean up registry keys by deleting the three folders under the following registry key (NOT the key itself, just the child folders):
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\IIS Extensions\Web Hosting Framework

    Delete these folders: HostingAdmin, Metering, Security
  6. Restart IIS
    net start w3svc
  7. Re-run the installation with https://localhost:30101/

Configuration

After installation comes configuration. Configuration depends on the services you want to offer. I’m greedy, so I wanted to provide them all. First, I registered my SQL Server and told the Windows Azure Services for Windows Server management portal that I have about 80 GB to spare for hosting my users’ databases. I did the same with MySQL (setup is similar):

Windows Azure Services for Windows Server SQL Server

You can add more SQL Servers and even define groups. For example, if you have a SQL Server which can be used for development purposes, add that one. If you have a high-end, failover setup for production, you can add that as a separate group so that only designated users can create databases on that SQL Server cluster of yours.

For Windows Azure Web Sites, I deployed one node of every role that was required:

Windows Azure Services for Windows Server Web Sites

What I liked in this setup is that if I want to add one of these roles, the only thing required is a fresh Windows Server 2008 R2 or 2012. No need to configure the machine: the Windows Azure Services for Windows Server management portal does that for me. All I have to do as an administrator in order to grow my pool of shared resources is spin up a machine and enter the IP address. The management portal takes care of the installation, linking, etc.

Windows Azure Services for Windows Server - Adding a role

The final step in offering services to my users is creating at least one plan they can subscribe to. Plans define the services provided as well as the quota on these services. Here’s an example quota configuration for SQL Server in my “Cloud Basics” plan:

Windows Azure Services for Windows Server Manage plans

Plans can be private (you assign them to a user) or public (users can self-subscribe, optionally only when they have a specific access code).

End-user experience

As an end user, I can have a plan. Either I enroll myself or an administrator enrolls me. You can sign up for my “I read your blog plan” at http://cloud.balliauw.net and create your SQL Server databases on the fly! (I’ll keep this running for a couple of days, if it’s offline you’re too late).

Sign up for Windows Azure Services for Windows Server

Side note: as an administrator, you can modify this page. It’s a bunch of ASP.NET MVC .cshtml files located under C:\inetpub\MgmtSvc-TenantSite\Views.

After signing in, you’ll be given access to a portal which resembles Windows Azure’s portal. You’ll have an at-a-glance look at all services you are using and can optionally just delete your account. Here’s the initial portal:

Windows Azure Services for Windows Server customer portal

You’ll be able to manage services yourself, for example create a new SQL Server database:

Windows Azure Services for Windows Server create database

After creating a database, you can see the connection information from within the portal:

Windows Azure Services for Windows Server connection string
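
From the developer’s perspective, using such a self-service database is then plain ADO.NET; here’s a minimal sketch with a placeholder connection string copied from the portal:

using System;
using System.Data.SqlClient;

class DatabaseSmokeTest
{
    static void Main()
    {
        // Placeholder connection string: copy the real one from the portal.
        var connectionString =
            "Server=sql1.cloud.balliauw.net;Database=[your database];User Id=[your user];Password=[your password];";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT @@VERSION", connection))
        {
            connection.Open();
            Console.WriteLine(command.ExecuteScalar());
        }
    }
}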

Just imagine you could create databases on-the-fly, whenever you need them, in your internal infrastructure. Without an administrator having to interfere. Without creating a support ticket or a formal request…

Speculations

I’m not sure if I’m supposed to disclose this information, but… The following paragraphs are based on what I can see in the installation of my “private cloud” using Windows Azure Services for Windows Server.

  • I have a suspicion that the public cloud services will find their way into Windows Azure Services for Windows Server. The SQL Server database for this management portal contains various additional tables, such as a table in which SQL Azure servers can be added to a pool linked to a plan. My guess is that you’ll be able to spread users and plans between public cloud (maybe your cheap test databases can go there) and private cloud (production applications run on a SQL Server cluster in your basement).
  • The management portals are clearly built with extensibility in mind. Yes, I’ve cracked open some assemblies using ILSpy; yes, I’ve opened some of the XML configuration files in there. I expect the recently announced Service Bus for Windows Server to pop up in this product as well. And who knows, maybe a nice SDK to create your own services embedded in this portal so that users can create mailboxes as they please. Or link to a VMware cloud; I know they have management API’s.

Conclusion

I’ve opened this post with a “Why?”, so let’s end it with that question. Why would you want to use this? The product was announced on Microsoft’s hosting subsite, but the product name (Windows Azure Services for Windows Server) and my experience with it so far make me think that this product is a fit for any enterprise!

You will make your developers happy because they have access to all services they are getting to know and getting to love. You’ll be able to provide self-service access to SQL Server, MySQL, shared hosting and virtual machines. You decide on the quota. You manage this. The only thing you don’t have to manage is the actual provisioning of services: users can use the self-service possibilities in Windows Azure Services for Windows Server.

Want your departments to be able to quickly set up a WordPress or Drupal site? No problem: using Web Sites, they are up and running. And depending on the front-end role you assign them, you can even put them on the internet, the intranet or both. (Note: this is possible through some PowerShell scripting; by default it’s just one pool of servers there.)

The fact that there is support for server groups (say, development servers and high-end SQL Server clusters or 8-core IIS machines running your web applications) makes it easy for administrators to grant access to specific resources while some other resources are reserved for production applications. And I suspect this will extend to the public cloud making it possible to go hybrid if you wish. Some services out there, some in your basement.

I’m keeping an eye on this one.

Note: You can sign up for my “I read your blog plan” at http://cloud.balliauw.net and create your SQL Server databases on the fly! (I’ll keep this running for a couple of days; if it’s offline you’re too late. Update: it’s down by now.)

Tweaking Windows Azure Web Sites

A while ago, I was at a customer who wanted to run his own WebDAV server (using www.sabredav.org) on Windows Azure Web Sites. After some testing, it seemed that this PHP-based WebDAV server was missing some configuration at the webserver level. Some HTTP verbs required for the WebDAV protocol were not mapped to the PHP runtime, making it virtually impossible to run a custom WebDAV implementation on PHP. Unless there’s some configuration possible…

I issued a simple phpinfo(); on Windows Azure Websites, outputting the PHP configuration and all available environment variables. This revealed the following interesting environment variable:

Windows Azure Web Sites web.config

Aha! That’s an interesting one! It’s basically the configuration of the IIS web server you are running. It contains which configuration sections can be overridden using your own Web.config file and which ones can not. I’ve read the file (it seems you have access to this path) and have placed the output of it here: applicationhost.config (70.04 kb). There’s also a file called rootweb.config: rootweb.config (36.66 kb)

Overridable configuration parameters

For mere humans not interested in reading through the entire applicationhost.config and rootweb.config, here’s what you can override in your own Web.config. Small disclaimer: these are implementation details and may be subject to change. I’m not Microsoft, so I cannot predict if this will all continue to work. Use your common sense.

Configuration parameter Can be overridden in Web.config?
system.webServer.caching Yes
system.webServer.defaultDocument Yes
system.webServer.directoryBrowse Yes
system.webServer.httpErrors Yes
system.webServer.httpProtocol Yes
system.webServer.httpRedirect Yes
system.webServer.security.authorization Yes
system.webServer.security.requestFiltering Yes
system.webServer.staticContent Yes
system.webServer.tracing.traceFailedRequests Yes
system.webServer.urlCompression Yes
system.webServer.validation Yes
system.webServer.rewrite.rules Yes
system.webServer.rewrite.outboundRules Yes
system.webServer.rewrite.providers Yes
system.webServer.rewrite.rewriteMaps Yes
system.webServer.externalCache.diskCache Yes
system.webServer.handlers Yes, but some are locked
system.webServer.modules Yes, but some are locked

All others are probably not possible.
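
As an example of what such an override looks like in practice, here’s a minimal Web.config sketch that only touches sections from the “Yes” list above (default document and URL compression); adapt it to whatever your application needs:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <!-- Both sections below are overridable according to applicationhost.config. -->
    <defaultDocument>
      <files>
        <clear />
        <add value="index.php" />
      </files>
    </defaultDocument>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>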

Project Kudu

There are some interesting things in the applicationhost.config (70.04 kb). Of course, you decide what’s interesting so read for yourself. Here’s what I found interesting: project Kudu is in there! Project Kudu? Yes, the open-source engine behind Windows Azure Web Sites (which implies that you can in fact host your own Windows Azure Web Sites-like service).

If you look at the architectural details, here’s an interesting statement:

The Kudu site runs in the same sandbox as the real site. This has some important implications.

First, the Kudu site cannot do anything that the site itself wouldn't be able to do itself. (…) But being in the same sandbox as the site, the only thing it can harm is the site itself.

Furthermore, the Kudu site shares the same quotas as the site. That is, the CPU/RAM/Disk used by the Kudu service is counted toward the site's quota. (…)

So to summarize, the Kudu services completely relies on the security model of the Azure Web Site runtime, which keeps it both simple and secure.

Proof can be found in applicationhost.config. If you look at the <sites /> definition, you’ll see two sites are defined: your site, and a companion site named ~1yoursitename. The former, of course, runs your site. The latter runs project Kudu, which allows you to git push and use WebDeploy.

In rootweb.config (36.66 kb), you’ll find the load-balanced nature of Windows Azure Web Sites. A machine key is defined there which will be the same for all your web site instances, allowing you to share session state, forms authentication cookies, etc.

My PHP HTTP verbs override

To fix the PHP HTTP verb mapping, here’s the Web.config I’ve used at the customer, simply removing and re-adding the PHP handler:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <remove name="PHP53_via_FastCGI" />
      <add name="PHP53_via_FastCGI" path="*.php"
           verb="GET, PUT, POST, HEAD, OPTIONS, TRACE, PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, UNLOCK"
           modules="FastCgiModule" scriptProcessor="D:\Program Files (x86)\PHP\v5.3\php-cgi.exe"
           resourceType="Either" requireAccess="Script" />
    </handlers>
  </system.webServer>
</configuration>

 

Community guidelines to stay out of the busy trap

For the past few days, an interesting blog post on the NY Times has been popping up in my Twitter timeline. In yours as well, probably, since almost everyone I know has retweeted it a couple of times. Which blog post? The one about the so-called “busy trap”.

The idea is simple: we’re all caught in the busy trap. Everyone feels busy, runs their life and activities at 200%. Here’s a great summary from the blog post:

The present hysteria is not a necessary or inevitable condition of life; it’s something we’ve chosen, if only by our acquiescence to it. Not long ago I Skyped with a friend who was driven out of the city by high rent and now has an artist’s residency in a small town in the south of France. She described herself as happy and relaxed for the first time in years. She still gets her work done, but it doesn’t consume her entire day and brain. She says it feels like college — she has a big circle of friends who all go out to the cafe together every night. She has a boyfriend again. (She once ruefully summarized dating in New York: “Everyone’s too busy and everyone thinks they can do better.”) What she had mistakenly assumed was her personality — driven, cranky, anxious and sad — turned out to be a deformative effect of her environment. It’s not as if any of us wants to live like this, any more than any one person wants to be part of a traffic jam or stadium trampling or the hierarchy of cruelty in high school — it’s something we collectively force one another to do. – From http://opinionator.blogs.nytimes.com/2012/06/30/the-busy-trap/

Everyone I know from the Belgian IT community is in this trap. I’m in there. My wife is in there. My boss probably is, too. We’re all too busy to realize this. We’re used to it, and it’s really easy to say “yes” to things because those things nag you and you just want to get them over with. And the easy way out is often not saying “no way!” but just doing it, reinforcing that same busy trap.

Lately, some people I know quit their 16-hours-per-day-consultancy-job and switched to a nine-to-five closer to home to gain time for themselves. Another one is maxed out and on the verge of cracking and relying on social security for a couple of weeks, if not months (if you are this person or you know him, have a break and get well soon buddy!). I find myself in this busy trap too, but I usually manage to balance it pretty well. There are of course periods in the year where the balance flips over to busy, but I have established a few ground rules that I agreed on with my wife and family.

  • During the week, I’m owned by the community (and work, that too). That does not mean I will be out every night to some event (our Belgian community has interesting sessions almost daily). It does mean that I don’t really have a problem being out one evening a week.
  • The weekend is sacred. Weekends mean: no computer will be switched on. Ever. Unless it’s to order pizza or to do taxes or something.
  • In the weekend, don’t use Twitter. Unless an occasional check (some of my friends don’t txt me, they send me tweets) or to tweet about drinking/brewing beer or having a great barbecue.
  • Vacation? Long weekend? The computer stays at home. Roaming and wifi on the smartphone get disabled. Phone call from anyone but close relatives and friends? Ignore it (by pushing the ignore button, voice mail will handle it).

These rules don’t get you out of the busy trap, but they will help. They certainly help me. Which rules help for you? Comments welcomed!

[edit]

Here's a list of tips I got from the community:

Fourth year as an MVP, second year for Windows Azure

Woohoo! I just received the great mail I expect yearly on the first of July:

Dear Maarten Balliauw,

Congratulations! We are pleased to present you with the 2012 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in Windows Azure technical communities during the past year.

The Microsoft MVP Award provides us the unique opportunity to celebrate and honor your significant contributions and say "Thank you for your technical leadership."

Toby Richards
General Manager
Community & Online Support

That makes year four: two years as an ASP.NET MVP and now my second year as a Windows Azure MVP. Thanks everyone for keeping me motivated in working with the community, sharing knowledge and providing me time to do all this. That last one means: thank you, boss, and thank you to my lovely wife!

Let’s start work on earning the award for next year…

Domain based routing with ASP.NET Web API

Imagine you are building an API which is “multi-tenant”: the domain name defines the tenant or customer name and should be passed as a route value to your API. An example would be http://customer1.mydomain.com/api/v1/users/1. Customer 2 can use the same API, using http://customer2.mydomain.com/api/v1/users/1. How would you solve routing based on a (sub)domain in your ASP.NET Web API projects?

Almost two years ago (wow, time flies), I wrote a blog post on ASP.NET MVC Domain Routing. Unfortunately, that solution does not work out of the box with ASP.NET Web API. The good news is: it almost works out of the box. The only thing required is adding one simple class:

public class HttpDomainRoute
    : DomainRoute
{
    public HttpDomainRoute(string domain, string url, RouteValueDictionary defaults)
        : base(domain, url, defaults, HttpControllerRouteHandler.Instance)
    {
    }

    public HttpDomainRoute(string domain, string url, object defaults)
        : base(domain, url, new RouteValueDictionary(defaults), HttpControllerRouteHandler.Instance)
    {
    }
}

Using this class, you can now define subdomain routes for your ASP.NET Web API as follows:

RouteTable.Routes.Add(new HttpDomainRoute(
    "{controller}.mydomain.com", // without tenant
    "api/v1/{action}/{id}",
    new { id = RouteParameter.Optional }
));

RouteTable.Routes.Add(new HttpDomainRoute(
    "{tenant}.{controller}.mydomain.com", // with tenant
    "api/v1/{action}/{id}",
    new { id = RouteParameter.Optional }
));

And consuming them in your API controller is as easy as:

public class UsersController
    : ApiController
{
    public string Get()
    {
        var routeData = this.Request.GetRouteData().Values;
        if (routeData.ContainsKey("tenant"))
        {
            return "UsersController, called by tenant " + routeData["tenant"];
        }
        return "UsersController";
    }
}

Here’s a download for you if you want to make use of (sub)domain routes. Enjoy!

WebApiSubdomainRouting.zip (496.64 kb)
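
In case you’re wondering what the DomainRoute base class those routes build on actually does: conceptually, it matches the incoming host name against the domain pattern and copies tokens such as {tenant} and {controller} into the route values. Here’s a rough, simplified sketch of that idea; member names are assumed and this is not the exact implementation from the download above.

using System;
using System.Web;
using System.Web.Routing;

public class DomainRouteSketch : Route
{
    private readonly string _domain;

    public DomainRouteSketch(string domain, string url, RouteValueDictionary defaults, IRouteHandler routeHandler)
        : base(url, defaults, routeHandler)
    {
        _domain = domain;
    }

    public override RouteData GetRouteData(HttpContextBase httpContext)
    {
        // Let the regular URL matching do its work first.
        var routeData = base.GetRouteData(httpContext);
        if (routeData == null)
        {
            return null;
        }

        // Then match the host name segment by segment against the domain pattern.
        var patternSegments = _domain.Split('.');
        var hostSegments = httpContext.Request.Url.Host.Split('.');
        if (patternSegments.Length != hostSegments.Length)
        {
            return null;
        }

        for (var i = 0; i < patternSegments.Length; i++)
        {
            var segment = patternSegments[i];
            if (segment.StartsWith("{") && segment.EndsWith("}"))
            {
                // A token such as {tenant}: capture the host segment as a route value.
                routeData.Values[segment.Trim('{', '}')] = hostSegments[i];
            }
            else if (!string.Equals(segment, hostSegments[i], StringComparison.OrdinalIgnoreCase))
            {
                // A literal segment that doesn't match: this route does not apply.
                return null;
            }
        }

        return routeData;
    }
}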

Setting up a webfarm using Windows Azure Virtual Machines

With the release of Microsoft’s Windows Azure Virtual Machines, a bunch of new scenarios became available on their cloud platform. If you plan to host multiple web applications, you can either go with Windows Azure Web Sites or with a webfarm you create using the new IaaS capabilities. The former is okay for any type of application; the latter may be suitable when running a large-scale web application that cannot be deployed easily in the PaaS offering. In this blog post, I’ll show you how to build a webfarm with (free!) load balancing.

Note: I’ll be using the built-in Windows Azure load balancer. If required, you can also deploy your own load balancer VM or reverse proxy. But since the Windows Azure load balancer comes with no extra cost, I think it’s the better choice for a lot of scenarios.

Creating a first virtual machine

After logging in to the new Windows Azure management portal, create a new virtual machine. You can choose to create a Linux or a Windows machine from a template or upload your own VM. I’ll go with a Windows machine but everything explained in this post is valid for a Linux webfarm, too.

Creating a Windows Azure Virtual Machine

Navigate through the wizard, selecting the VM size and administrator username of choice. In step 3 where you have to specify the DNS name and some other settings, be sure to choose an affinity group (giving better networking performance due to the fact that machines are on the same network in the Windows Azure datacenter). The DNS name can be anything you want to name your webfarm.

Windows Azure Virtual Machine Windows Linux

Before finishing the wizard, there is an important thing to do: in step 4, make sure to create an availability set in which all machines of the webfarm will reside. An availability set ensures that whenever maintenance occurs in the datacenter, it only occurs on one or some of your webfarm machines and not on all of them at once.

Windows Azure VM options

Adding an HTTP endpoint to the first machine

After the first virtual machine has been created, navigate to its configuration dashboard in the Windows Azure management portal. In order to have port 80 connected to this machine, a new endpoint should be added to the machine. Add the endpoints of choice; I chose to open port 80.

Windows Azure configure VM endpoints

It is important to understand that the endpoints added here are only opened at the load balancer level. That’s right: even a single machine will be behind a load balancer. This is incredibly powerful, as you’ll see when we add a new machine to our IaaS webfarm. It also poses an extra configuration step for single machines though: you’ll have to open port 80 on the machine’s firewall, too. You can safely use remote desktop (Windows) or SSH (Linux) to do so:

Install IIS on Windows Azure Virtual Machine

Cloning the first machine

To make things easy, I’ve first configured IIS on the first machine. I simply enabled the webserver and made sure Windows Firewall allows connections to IIS. From this point on, I simply want to clone this machine and add it to my webfarm.

The first thing to do when cloning (or “capturing”) a VM is “sysprepping” it. On Linux, there’s a similar option in the Windows Azure agent. Sysprep ensures the machine can be cloned into a new machine, getting its own settings like a hostname and IP address. A non-sysprepped machine can thus never be cloned.

Windows azure virtual machine requires sysprep

After sysprepping the machine, shut it down. If you’ve selected the option during sysprep, the machine will automatically shutdown. Otherwise you can do so through remote desktop or SSH, or simply through the Windows Azure portal.

Shutdown virtual machine on Windows Azure

Next, click the “Capture” button to create a disk image from this machine. Give it a name and  check the “Yes, I’ve sysprepped the machine” checkbox in order to be able to continue.

Windows Azure Capture virtual machine

After clicking the “ok” button, Windows Azure will create an image of our first webserver.

After the image has been created, you’ll notice that your first webserver has disappeared! This is normal: the machine has been disemboweled in order to create a template from it. You can now simply re-create this machine using the same settings as before, except you can now base it on this newly created VM image instead of basing it off a VM template Microsoft provides.

In the endpoints configuration, make sure to add the HTTP endpoint again listening on port 80.

Creating a second virtual machine

To create the second machine in your webfarm, create a fresh virtual machine. As the base disk, choose the image we’ve created earlier:

Windows Azure create your own virtual machine image

In step 3 of the machine creation, make sure to connect this machine to our existing web server. In step 4, locate the VM in the same availability set.

Connect to an existing virtual machine in Windows Azure

You now have two machines running, yet they aren’t load balanced at this moment. You’ll notice that both machines are already behind the same hostname (http://webfarm.cloudapp.net) and that they share the same public virtual IP address. This is due to the fact that we “linked” the machines earlier. If you don’t link them, you will never be able to use the out-of-the-box load balancer that comes with Windows Azure. This also means that the public remote desktop endpoint for both machines will be different: there’s only one IP address exposed to the outside world, so you’ll have to think about endpoints.

Don’t add the HTTP endpoint to this machine just yet.

Configuring the Windows Azure load balancer

The last part of setting up our webfarm will be load balancing. This is in fact really, really easy. Simply go to the second machine’s dashboard in the Windows Azure portal and navigate to the Endpoints tab. We’ve already added public HTTP endpoints on our first machine, which means for our second machine we can just subscribe to load balancing:

Windows Azure comes with free load balancing

Easy, huh? You now have free round-robin load balancing with checks every few seconds to ensure that all machines are up and running. And since we linked these machines through an availability set, they are on different fault domains in the datacenter reducing the chance of errors due to malfunctioning hardware or maintenance. You can safely shut down a machine too. In short: anything you’d expect from a load balancer (except sticky sessions).

Final words

There is of course more to it. In ASP.NET, you’ll have to configure machine keys and such in the same way you would do it on-premises (see the sketch below). But at the infrastructure level, we’re covered. Enjoy! And be sure to brag about this adventure to any IT pro you know :-)
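
For completeness, here’s a minimal sketch of what that shared machine key configuration looks like in Web.config; generate your own keys, the values below are placeholders:

<system.web>
  <!-- Every machine in the webfarm needs the same machineKey so that forms
       authentication cookies and view state validate on any node. -->
  <machineKey validationKey="[your validation key]"
              decryptionKey="[your decryption key]"
              validation="SHA1"
              decryption="AES" />
</system.web>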

Use NuGet Package Restore to avoid pushing assemblies to Windows Azure Websites

Windows Azure Websites allows you to publish a web site in ASP.NET, PHP, Node, … to Windows Azure by simply pushing your source code to a TFS or Git repository. But how does Windows Azure Websites manage dependencies? Do you have to check-in your assemblies and NuGet packages into source control? How about no…

NuGet 1.6 shipped with a great feature called NuGet Package Restore. This feature lets you use NuGet packages without adding them to your source code repository. When your solution is built by Visual Studio (or MSBuild, which is used in Windows Azure Websites), a build target calls nuget.exe to make sure any missing packages are automatically fetched and installed before the code is compiled. This helps you keep your source repo small by keeping large packages out of version control.

Enabling NuGet Package Restore

Enabling NuGet package restore can be done from within Visual Studio. Simply right-click your solution and click the “Enable NuGet Package Restore” menu item.

NuGet package restore Windows Azure Websites Antares

Visual Studio will now do the following with the projects in your solution:

  • Create a .nuget folder at the root of your solution, containing a NuGet.exe and a NuGet build target
  • Import this NuGet target into all your projects so that MSBuild can find, download and install NuGet packages on-the-fly when creating a build
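
For reference, the import that ends up in each project file looks roughly like this (just the relevant fragment; the rest of the .csproj is omitted and the exact markup may differ per NuGet version):

<PropertyGroup>
  <RestorePackages>true</RestorePackages>
</PropertyGroup>
<Import Project="$(SolutionDir)\.nuget\NuGet.targets" />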

Be sure to push the files in the .nuget folder to your source control system. The packages folder is not needed, except for the repositories.config file that sits in there.

But what about my non-public assembly references? What if I don't trust auto-updating from NuGet.org?

Good question. What about them? A simple answer would be to create NuGet packages for them. And if you already have NuGet packages for them, things get even easier. Make sure that you are hosting these packages in an online feed which is not the public NuGet repository at www.nuget.org, unless you want your custom assemblies out there in public. A good choice would be to check out www.myget.org and host your packages there.

But then a new question surfaces: how do I link a custom feed to my projects? The answer is pretty simple: in the .nuget folder, edit the NuGet.targets file. In the PackageSources element, you can supply a semicolon (;) separated list of feeds to check for packages:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- ... -->

    <!-- Package sources used to restore packages. By default will used the registered sources under %APPDATA%\NuGet\NuGet.Config -->
    <PackageSources>"http://www.myget.org/F/chucknorris;http://www.nuget.org/api/v2"</PackageSources>

    <!-- ... -->
  </PropertyGroup>

  <!-- ... -->
</Project>

By doing this and pushing the targets file to your Windows Azure Websites Git or TFS repo, the build system backing Windows Azure Websites will go ahead and download your packages from an external location, not cluttering your sources. Which makes for one happy cloud.

Windows Azure Git Deploy

GitHub for Windows Azure Websites

With the new release of Windows Azure and Windows Azure Websites, a lot of new scenarios just became possible. One I like a lot, especially since AppHarbor and Heroku have similar offers too, is the possibility to push source code (ASP.NET or PHP) to Windows Azure instead of binaries, using Windows Azure Websites.

Not everyone out there is a command-line hero though: if you want to use Git as a mechanism for pushing sources to Windows Azure Websites, chances are you may go crazy if you are unfamiliar with command-line git commands. Luckily, a couple of weeks ago, GitHub released GitHub for Windows. It features an easy-to-use GUI on top of GitHub repositories. And with a small trick, also on top of Windows Azure Websites.

Setting up a Windows Azure Website

Since you’re probably still unfamiliar with Windows Azure Websites, let me guide you through the setup. It’s a simple process. First of all, navigate to the new Windows Azure portal. It looks different than the one you’re used to but it’s way easier to use. In the toolbar at the bottom, click New, select Web site, Quick Create and enter a hostname of choice. I chose “websiteswithgit”:

Creating a Windows Azure Website

After a couple of seconds, you’ll be presented with the dashboard of your newly created Windows Azure Website. This dashboard features a lot of interesting metrics about your website such as data traffic, CPU usage, errors, … It also displays the available means for publishing a site to Windows Azure Websites: TFS deploy, Git deploy, Webdeploy and FTP publishing. That’s it: your website has been set up and if you navigate to the newly created URL, you’ll be greeted with the default Windows Azure Websites landing page.

Setting up Git publishing

Since we’ll be using Git, click the Set up Git Publishing option.

Windows Azure Websites Dashboard

If you haven’t noticed already: Windows Azure Websites makes Windows Azure a lot easier. After a couple of seconds, Git publishing is configured, and all it takes to deploy your website is committing your source code, whether ASP.NET, ASP.NET Web Pages or PHP, to the newly created Git repository. Windows Azure Websites will take care of the build process (cool!) and will deploy this to Windows Azure in just a couple of seconds. Whoever told you deploying to Windows Azure takes ages lied to you!

Connecting GitHub for Windows to Windows Azure Websites

After setting up Git publishing, you probably have noticed that there’s a Git repository URL being displayed. Copy this one to your clipboard as we’ll be needing it in a minute. Open GitHub for Windows, right-click the UI and choose to “open a shell here”. Make sure you’re in the folder of choice. Next, issue a “git clone <url>” command, where <url> of course is the Git repository URL you’ve just copied.

Windows Azure Git Repository Build

The (currently empty) Windows Azure Website Git repository will be cloned onto your system. Now close this command-line (I promised we would use GitHub for Windows instead).

Git folder

Open the folder in which you cloned the Git repo and drag it onto GitHub for Windows. It will look kind of empty, still:

A Windows Azure Websites repository in GitHub for Windows

Next, add any file you want. A PHP file, a plain HTML file or a complete ASP.NET or ASP.NET MVC Web Application. GitHub for Windows will detect these changes and you can commit them to your local repository:

GitHub commit Windows Azure

All that’s left to do after a commit is clicking the Publish button. GitHub for Windows will now copy all changesets to the Windows Azure Websites Git repository, which will in turn trigger a build process for your web site. The result? A happy Windows Azure Websites dashboard and a site up and running. Rinse, repeat, commit. Happy deployments to Windows Azure Websites using GitHub for Windows!

Antares Windows Azure Websites Deployment History Build