Maarten Balliauw {blog}

ASP.NET, ASP.NET MVC, Windows Azure, PHP, ...


Scale-out to the cloud, scale back to your rack

That is a bad blog post title, really! If Steve and Ryan have this post in the Cloud Cover show news I bet they will make fun of the title. Anyway…

Imagine you have an application running in your own datacenter. Everything works smoothly, except for some capacity spikes now and then. Someone has asked you to do something about it, on a low budget: there's not enough for new hardware, and frankly, new hardware would be ridiculous just to ensure capacity for a few hours each month.

A possible solution: migrate the application to the cloud during capacity spikes. Not all the time, though: the hardware is in house, and you may be a server-hugger who wants to see blinking LAN and HDD lights most of the time. I have to admit: blinking lights are cool! But I digress.

Wouldn't it be cool to have a PowerShell script that you can execute whenever a spike occurs? This script would move everything to Windows Azure. Another script should exist as well, migrating everything back once the spike cools down. Yes, you see where this is going: that's what this blog post is about.

For those who cannot wait, here's the download: ScaleOutToTheCloud.zip (2.81 kb)

Schematic overview

Since every cool idea goes with fancy pictures, here's a schematic overview of what could happen when you read this post to the end. First of all, you have a bunch of users making use of your application. As a good administrator, you have deployed IIS Application Request Routing as a load balancer / reverse proxy in front of your application server. Everyone is happy!

IIS Application Request Routing

Unfortunately, sometimes there are just too many users. They keep using the application and the application server catches fire.

Server catches fire!

It is time to do something. Really. Users are getting timeouts and all sorts of nasty error messages. Why not run a PowerShell script that packages the entire local application for Windows Azure and deploys it?

Powershell to the rescue

After deployment and once the application is running in Windows Azure, there’s one thing left for that same script to do: modify ARR and re-route all traffic to Windows Azure instead of that dying server.

Request routing Azure

There you go! All users are happy again, since the application is now running in the cloud on 2, 3, or however many virtual machines.

Let's try and do this using PowerShell…

The PowerShell script

The PowerShell script will roughly perform 5 tasks:

  • Load settings
  • Load dependencies
  • Build a list of files to deploy
  • Package these files and deploy them
  • Update IIS Application Request Routing servers

Want the download? There you go: ScaleOutToTheCloud.zip (2.81 kb)

Load settings

There are quite a few parameters in play for this script. I've put them in a settings.ps1 file, which looks like this:

# Settings (prod)
$global:wwwroot = "C:\inetpub\web.local\"
$global:deployProduction = 1
$global:deployDevFabric = 0
$global:webFarmIndex = 0
$global:localUrl = "web.local"
$global:localPort = 80
$global:azureUrl = "scaleout-prod.cloudapp.net"
$global:azurePort = 80
$global:azureDeployedSite = "http://" + $azureUrl + ":" + $azurePort
$global:numberOfInstances = 1
$global:subscriptionId = ""
$global:certificate = "C:\Users\Maarten\Desktop\cert.cer"
$global:serviceName = "scaleout-prod"
$global:storageServiceName = ""
$global:slot = "Production"
$global:label = Get-Date

Let’s explain these…

$global:wwwroot - The file path to the on-premise application.
$global:deployProduction - Deploy to Windows Azure?
$global:deployDevFabric - Deploy to the development fabric?
$global:webFarmIndex - The 0-based index of your web farm. Look at IIS Manager and note the position of your web farm in the list of web farms.
$global:localUrl - The on-premise URL that is registered in ARR as the application server.
$global:localPort - The on-premise port that is registered in ARR as the application server.
$global:azureUrl - The Windows Azure URL that will be registered in ARR as the application server.
$global:azurePort - The Windows Azure port that will be registered in ARR as the application server.
$global:azureDeployedSite - The final URL of the deployed Windows Azure application.
$global:numberOfInstances - Number of instances to run on Windows Azure.
$global:subscriptionId - Your Windows Azure subscription ID.
$global:certificate - Your certificate for managing Windows Azure.
$global:serviceName - Your Windows Azure service name.
$global:storageServiceName - The Windows Azure storage account that will be used for uploading the packaged application.
$global:slot - The Windows Azure deployment slot (production/staging).
$global:label - The label for the deployment. I chose the current date and time.

Load dependencies

Next, our script will load dependencies. There is one additional set of CmdLets that you have to install: the Windows Azure management CmdLets, available at http://code.msdn.microsoft.com/azurecmdlets.

Here’s the set we load:

# Load required CmdLets and assemblies
$env:Path = $env:Path + ";c:\Program Files\Windows Azure SDK\v1.2\bin\"
Add-PSSnapin AzureManagementToolsSnapIn
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Administration")

Build a list of files to deploy

In order to package the application, we need a text file containing all the files that should be packaged and deployed to Windows Azure. This is done by recursively traversing the directory where the on-premise application is hosted.

 

$filesToDeploy = Get-ChildItem $wwwroot -recurse | where {$_.extension -match "\..*"}
foreach ($fileToDeploy in $filesToDeploy) {
    $inputPath = $fileToDeploy.FullName
    $outputPath = $fileToDeploy.FullName.Replace($wwwroot, "")
    $inputPath + ";" + $outputPath | Out-File FilesToDeploy.txt -Append
}

Package these files and deploy them

I have been polite and included this for both the development fabric and the Windows Azure fabric. Here's the packaging and deployment code for the development fabric:

# Package & run the website for Windows Azure (dev fabric)
if ($deployDevFabric -eq 1)
{
    trap [Exception]
    {
        del -Recurse ScaleOutService
        continue
    }

    cspack ServiceDefinition.csdef /roleFiles:"WebRole;FilesToDeploy.txt" /copyOnly /out:ScaleOutService /generateConfigurationFile:ServiceConfiguration.cscfg

    # Set instance count
    (Get-Content ServiceConfiguration.cscfg) | Foreach-Object {$_.Replace("count=""1""", "count=""" + $numberOfInstances + """")} | Set-Content ServiceConfiguration.cscfg

    # Run!
    csrun ScaleOutService ServiceConfiguration.cscfg /launchBrowser
}

And here’s the same for Windows Azure fabric:

# Package the website for Windows Azure (production)
if ($deployProduction -eq 1)
{
    cspack ServiceDefinition.csdef /roleFiles:"WebRole;FilesToDeploy.txt" /out:"ScaleOutService.cspkg" /generateConfigurationFile:ServiceConfiguration.cscfg

    # Set instance count
    (Get-Content ServiceConfiguration.cscfg) | Foreach-Object {$_.Replace("count=""1""", "count=""" + $numberOfInstances + """")} | Set-Content ServiceConfiguration.cscfg

    # Run! (may take up to 15 minutes!)
    New-Deployment -SubscriptionId $subscriptionId -Certificate $certificate -ServiceName $serviceName -Slot $slot -StorageServiceName $storageServiceName -Package "ScaleOutService.cspkg" -Configuration "ServiceConfiguration.cscfg" -Label $label

    $deployment = Get-Deployment -SubscriptionId $subscriptionId -Certificate $certificate -ServiceName $serviceName -Slot $slot
    do {
        Start-Sleep -s 10
        $deployment = Get-Deployment -SubscriptionId $subscriptionId -Certificate $certificate -ServiceName $serviceName -Slot $slot
    } while ($deployment.Status -ne "Suspended")

    Set-DeploymentStatus -Status "Running" -SubscriptionId $subscriptionId -Certificate $certificate -ServiceName $serviceName -Slot $slot

    $wc = New-Object System.Net.WebClient
    $html = ""
    do {
        Start-Sleep -s 60
        trap [Exception] { continue }
        $html = $wc.DownloadString($azureDeployedSite)
    } while (!$html.ToLower().Contains("<html"))
}

Update IIS Application Request Routing servers

This one can be done by abusing the .NET class Microsoft.Web.Administration.ServerManager.

# Modify IIS ARR
$mgr = New-Object Microsoft.Web.Administration.ServerManager
$conf = $mgr.GetApplicationHostConfiguration()
$section = $conf.GetSection("webFarms")
$webFarms = $section.GetCollection()
$webFarm = $webFarms[$webFarmIndex]
$servers = $webFarm.GetCollection()
$server = $servers[0]
$server.SetAttributeValue("address", $azureUrl)
$server.ChildElements["applicationRequestRouting"].SetAttributeValue("httpPort", $azurePort)
$mgr.CommitChanges()

Running the script

Of course I’ve tested this to see if it works. And guess what: it does!

The script output itself is not very interesting. I did not add logging or meaningful messages to see what it is doing. Instead you’ll just see it working.

Powershell script running

Once it has been fired up, the Windows Azure portal will soon be showing that the application is actually deploying. No hands!

Powershell deployment to Azure

After the usual 15-20 minutes that a deployment and first application start take, IIS ARR is re-configured by PowerShell.


And my local users can just keep browsing to http://farm.local which now simply routes requests to Windows Azure. Don’t be fooled: I actually just packaged the default IIS website and deployed it to Windows Azure. Very performant!


Conclusion

It works! And it's fancy and cool stuff. I think this may be a good deployment and scale-out model in some situations; however, there may still be a bottleneck in the on-premise ARR server: if that one gets more traffic than it can cope with, a new burning server is in play. Note that this solution will work for any website hosted on IIS: custom-made ASP.NET apps, ASP.NET MVC, PHP, …

Here’s the download: ScaleOutToTheCloud.zip (2.81 kb)

Using MvcSiteMapProvider through NuPack

NuPack

You have probably seen the buzz around NuPack, a package manager for .NET with tight integration in Visual Studio 2010. NuPack is a free, open source, developer-focused package management system for the .NET platform, intent on simplifying the process of incorporating third-party libraries into a .NET application during development. If you download and install NuPack into Visual Studio, you can reference MvcSiteMapProvider with a few simple clicks!

From within your ASP.NET MVC 2 project, right click the project file and use the new “Add Package Reference…” option.

Add package reference

Next, a nice dialog shows up where you can just pick a package and click “Install” to download it and add the necessary references to your project. The packages are retrieved from a central XML feed, but feel free to add a reference to a directory where your corporate packages are stored and install them through NuPack. Anyway: MvcSiteMapProvider. Just look for it in the list and click “Install”.

MvcSiteMapProvider in NuPack

Next, MvcSiteMapProvider is automatically downloaded and added as an assembly reference, a default Mvc.sitemap file is added to your project, and all Web.config configuration takes place without you having to do anything! I'm sold :-)

Disclaimer for some: I'm not saying NuPack is the best package manager out there, nor that it is the most original idea ever invented. I do believe that the tight integration in VS2010 will make NuPack a good friend during development: the process of downloading and including third-party components in your application becomes frictionless. That's the aim of NuPack, and also the reason why I believe this tool matters and will matter a lot!

Cost Architecting for Windows Azure

Cost architecting for Windows Azure

Just wanted to do a quick plug for an article I've written for TechNet Magazine: Windows Azure: Cost Architecting for Windows Azure.

Designing applications and solutions for cloud computing and Windows Azure requires a completely different way of considering the operating costs.

Cloud computing and platforms like Windows Azure are billed as “the next big thing” in IT. This certainly seems true when you consider the myriad advantages to cloud computing.

Computing and storage become an on-demand story that you can use at any time, paying only for what you effectively use. However, this also poses a problem: if a cloud application is designed like a regular application, chances are that the application's costs will not be as expected.

Want to read more? Check the full article. I will also be doing a session on this later this month for the Belgian Azure User Group.

Remix 2010 slides and sample code

As promised during my session at Remix 10 yesterday in Belgium, here's the slide deck and sample code.

Building for the cloud: integrating an application on Windows Azure

Abstract: “It’s time to take advantage of the cloud! In this session Maarten builds further on the application created during Gill Cleeren’s Silverlight session. The campaign website that was developed in Silverlight 4 still needs a home. Because the campaign will only run for a short period of time, the company chose for cloud computing on the Windows Azure platform. Learn how to leverage flexible hosting with automated scaling on Windows Azure, combined with the power of a cloud hosted SQL Azure database to create a cost-effective and responsive web application.”

Thanks for joining and bearing with me during this tough session with very sparse bandwidth!

Source code used in the session: TDD.ChristmasCreator.zip (686.86 kb)

Introducing Windows Azure Companion – Cloud for the masses?

Windows Azure Companion

At OSIDays in India, the Interoperability team at Microsoft made an interesting series of announcements related to PHP and Windows Azure. To summarize: Windows Azure Tools for Eclipse for PHP has been updated and is on par with the Visual Studio tooling (which means you can deploy a PHP app to Windows Azure without leaving Eclipse!). The Windows Azure Command-line Tools for PHP have been updated, and there's a new release of the Windows Azure SDK for PHP and a Windows Azure Storage plugin for WordPress built on top of it.

What's most interesting in this series of announcements is the Windows Azure Companion – September 2010 Community Technology Preview (CTP). In short: compare it to the Web Platform Installer, but targeted at Windows Azure. It allows you to install a set of popular PHP applications on a Windows Azure instance, like WordPress or phpBB.

This list of applications seems a bit limited, but it's not: it's just a standard Atom feed that the Companion gets its information from. Feel free to create your own feed, or use a sample feed I created, which contains the following applications that I know work well on Windows Azure:

  • PHP Runtime
  • PHP Wincache Extension
  • Microsoft Drivers for PHP for SQL Server
  • Windows Azure SDK for PHP
  • PEAR Archive Tar
  • phpBB
  • WordPress
  • eXtplorer File Manager


Obtaining & installing Windows Azure Companion

There are 3 steps involved. The first one: go get yourself a Windows Azure subscription. I recall there is a free, limited version where you can use a virtual machine for 25 hours. Not much, but enough to try out the Windows Azure Companion. Make sure to completely undeploy the application afterwards if you mind being billed.

Next, get the Windows Azure Companion – September 2010 Community Technology Preview (CTP). There is a source code download that you can compile yourself using Visual Studio; there is also a "cspkg" version that you can just deploy to your Windows Azure account and get running. I recommend the latter if you want to be up and running fast.

The third step, of course, is deploying. Before doing this, edit the "ServiceConfiguration.cscfg" file: it needs your Windows Azure storage credentials and an administrative username/password so only you can log onto the Companion.

This configuration file also contains a reference to the application feed, so if you want to create one yourself this is the place where you can reference it.

Installing applications

Getting a "Running" state and a green bullet on the Windows Azure portal? Perfect! Then browse to http://yourchosenname.cloudapp.net:8080 (mind the port number!): this is where the administrative interface resides. Log in with the credentials you specified in "ServiceConfiguration.cscfg" before and behold the Windows Azure Companion administrative user interface.

Windows Azure Companion Administration

As a side note: this screenshot was taken with a custom feed I created, which included some other applications with SQL Server support, like the Drupal 7 alpha releases. Because these are alphas, I decided not to include them in the sample feed that you can use. I am confident that more supported applications will come in the future, though.

Go to the platform tab, select the PHP runtime and other components, and click "Next". Pick your favorite version numbers and proceed with installing. After this has finished, you can install an application from the applications tab. How about WordPress?

WordPress on Windows Azure

In this last step, you can choose where the application will be installed: under the root of the website or under a virtual folder, anything you like. Afterwards, the application will be running at http://yourchosenname.cloudapp.net.

More control with eXtplorer

The sample feed I provide includes eXtplorer, a web-based file management solution. When installing this, you get full control over the applications folder on your Windows Azure instance, enabling you to edit files (configuration files, for example), but also to upload *any* application you want to host on Windows Azure Companion. Here's me creating a highly modern homepage, and the rendered version of it:

eXtplorer on Windows Azure

Welcome!

Administrative options

As with any web server, you want some administrative options. Windows Azure Companion provides logging for both Windows Azure and PHP. You can edit php.ini, restart the server, see memory and CPU usage statistics, and create a backup of your current instance in case you want to start messing things up and want a "last known good" copy of your installation.

Windows Azure Companion Administration

Note: if you are a control freak, you can just stop your application on Windows Azure, download the virtual hard drive (.vhd) file from blob storage, make some modifications, upload it again and restart the Windows Azure Companion. I don't recommend this, as you will have to download and upload a large .vhd file, but in theory it is possible to fiddle around.

Internet Explorer 9 jumplist support

A cool feature included is IE9 jumplist support. The IE9 beta is out, and it seems all teams at Microsoft are adding support for it. If you drag the Windows Azure Companion administration tab to your Windows 7 taskbar, you get the following nifty shortcuts when right-clicking:

IE9 jumplist

Scalability

The current preview release of Windows Azure Companion cannot scale out. It can scale up to more CPU, memory and storage, but not to multiple role instances. This is due to the fact that Windows Azure drives cannot be shared in read/write mode across multiple machines. On the other hand: if you deploy 2 instances, install the same apps on them, use the same SQL Azure database backend and use round-robin DNS, you can achieve scale-out today. Not the way you'd want it, but it should work. Then again: I don't think Windows Azure Companion has been created with very large sites in mind, as that type of site will benefit more from a version completely optimized for "regular" Windows Azure.

Conclusion

I'm impressed with this series of releases, especially the Windows Azure Companion. It clearly shows Microsoft is not just focusing on its own platform but also treating PHP as an equal citizen on Windows Azure. The Companion, in my opinion, also lowers the step to cloud computing: it's really easy to install and use, and may attract more people to the Windows Azure platform (especially if they would add a basic, entry-level subscription with low capacity and a low price, pun intended :-))

Update: also check Jim O’Neil's blog post: Windows Azure Companion: PHP and WordPress in Azure and Brian Swan's blog post: Announcing the Windows Azure Companion and More...


Announcing the Windows Azure Online Conference

Steve Plank from Microsoft UK has just announced the UK Windows Azure Online Conference on his blog. This will be a whole-day, online Windows Azure conference consisting of three different tracks: Cirrus – the high-level stuff, Altocumulus – the mid-level stuff (case studies) and Stratocumulus – the low-level stuff (deep tech). I'll be doing a session in the Stratocumulus track.

Since this is an online conference, feel free to register for the event! All details can be found in the UK Windows Azure Online Conference announcement. I feel this is going to be very interesting, covering a broad range of Windows Azure topics! Here's the list of sessions I'll try to attend:

  • Case studies in the Altocumulus track, these should be very interesting real-life examples
  • Of course my own session, “Taking care of a cloud environment”, since otherwise there would be no speaker in that slot…
  • Windows Azure Guidance Project
  • Azure Table Service – getting creative with Microsoft’s NoSQL datastore
  • The Q&A panel sessions

The only thing I’m wondering about: how are they going to provide lunch through Live Meeting…

Hybrid Azure applications using OData

OData in the cloud on Azure

In the whole Windows Azure story, Microsoft has always said you can build hybrid applications: an on-premise application with a service on Azure, or a database on SQL Azure. But how to do it in the opposite direction? Easy answer there: use the (careful, long product name coming!) Windows Azure platform AppFabric Service Bus to expose an on-premise WCF service securely to an application hosted on Windows Azure. Now how would you go about exposing your database to Windows Azure? Open a hole in the firewall? Use something like PortBridge to redirect TCP traffic over the service bus? Why not just create an OData service for our database and expose that over the AppFabric Service Bus? In this post, I'll show you how.

For those who cannot wait, download the sample code: ServiceBusHost.zip (7.87 kb)


What we are trying to achieve

The objective is quite easy: we want to expose our database to an application on Windows Azure (or another cloud or in another datacenter) without having to open up our firewall. To do so, we’ll be using an OData service which we’ll expose through Windows Azure platform AppFabric Service Bus. But first things first…

  • OData??? The Open Data Protocol (OData) is a web protocol for querying and updating data over HTTP using REST. And data can be a broad subject: a database, a filesystem, customer information from SAP, … The idea is to have one protocol for different data sources, accessible through web standards. More info? Here you go. (A small query sketch follows this list.)
  • Service Bus??? There's an easy explanation for this one, although the product itself offers many more use cases. We'll be using the Service Bus to interconnect two applications sitting on different networks and protected by firewalls. This can be done by using the Service Bus as the "man in the middle", passing data between both applications.
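To make the OData part tangible: since it's all plain HTTP, you can query a feed with nothing but a web client. Here's a minimal sketch, reusing the <service_namespace> placeholder used throughout this post:

// Minimal sketch: querying an OData feed over plain HTTP.
// The URL reuses the hypothetical <service_namespace> placeholder from this post.
using System;
using System.Net;

class ODataQuerySample
{
    static void Main()
    {
        var client = new WebClient();

        // $top is a standard OData query option; the response is an Atom feed.
        string atom = client.DownloadString(
            "https://<service_namespace>.servicebus.windows.net/ContosoModel/Store?$top=5");

        Console.WriteLine(atom);
    }
}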

Windows Azure platform AppFabric Service Bus

Our OData feed will be created using WCF Data Services, formerly known as ADO.NET Data Services, formerly known as project Astoria.

Creating an OData service

We'll go a little off the standard path to achieve this, although the concepts are the same. Usually, you would add an OData service to an existing web application in Visual Studio. The difference here: we'll be creating a console application. So start off with a console application and add the following references to it:

  • Microsoft.ServiceBus (from the SDK that can be found on the product site)
  • System.Data.Entity
  • System.Data.Services
  • System.Data.Services.Client
  • System.EnterpriseServices
  • System.Runtime.Serialization
  • System.ServiceModel
  • System.ServiceModel.Web
  • System.Web.ApplicationServices
  • System.Web.DynamicData
  • System.Web.Entity
  • System.Web.Services
  • System.Data.DataSetExtensions

Next, add an Entity Data Model for the database you want to expose. I have a light version of the Contoso sample database and will be using that one. Also, I only added one table to the model for the sake of simplicity:

Entity Data Model for OData

Pretty straightforward, right? Next thing: expose this beauty through an OData service created with WCF Data Services. Add a new class to the project, and add the following source code:

public class ContosoService : DataService<ContosoSalesEntities>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
        // Examples:
        // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
        // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
        config.SetEntitySetAccessRule("Store", EntitySetRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

Let's explain this thing: the ContosoService class inherits DataService<ContosoSalesEntities>, a ready-to-use service implementation to which you pass the type of your Entity Data Model. In the InitializeService method, there's only one thing left to do: specify the access rules for entities. I chose to expose the entity set "Store" with all rights (read/write).

In a normal world, this would be it: we would now have a service ready to expose our database through OData. Quick, simple, flexible. But in our console application there's a small problem: we are not hosting inside a web application, so we'll have to write the WCF hosting code ourselves.

Hosting the OData service using a custom WCF host

Since we’re not hosting inside a web application but in a console application, there’s some plumbing we need to do: set up our own WCF host and configure it accordingly.

Let’s first work on our App.config file:

<?xml version="1.0"?>
<configuration>
  <connectionStrings>
    <add name="ContosoSalesEntities" connectionString="metadata=res://*/ContosoModel.csdl|res://*/ContosoModel.ssdl|res://*/ContosoModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.\SQLEXPRESS;Initial Catalog=ContosoSales;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient"/>
  </connectionStrings>

  <system.serviceModel>
    <services>
      <service behaviorConfiguration="contosoServiceBehavior"
               name="ServiceBusHost.ContosoService">
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8080/ContosoModel" />
          </baseAddresses>
        </host>
        <endpoint address=""
                  binding="webHttpBinding"
                  contract="System.Data.Services.IRequestHandler" />

        <endpoint address="https://<service_namespace>.servicebus.windows.net/ContosoModel/"
                  binding="webHttpRelayBinding"
                  bindingConfiguration="contosoServiceConfiguration"
                  contract="System.Data.Services.IRequestHandler"
                  behaviorConfiguration="serviceBusCredentialBehavior" />
      </service>
    </services>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />

    <behaviors>
      <serviceBehaviors>
        <behavior name="contosoServiceBehavior">
          <serviceMetadata httpGetEnabled="true" />
          <serviceDebug includeExceptionDetailInFaults="True" />
        </behavior>
      </serviceBehaviors>

      <endpointBehaviors>
        <behavior name="serviceBusCredentialBehavior">
          <transportClientEndpointBehavior credentialType="SharedSecret">
            <clientCredentials>
              <sharedSecret issuerName="owner" issuerSecret="<secret_from_portal>" />
            </clientCredentials>
          </transportClientEndpointBehavior>
        </behavior>
      </endpointBehaviors>
    </behaviors>

    <bindings>
      <webHttpRelayBinding>
        <binding name="contosoServiceConfiguration">
          <security relayClientAuthenticationType="None" />
        </binding>
      </webHttpRelayBinding>
    </bindings>
  </system.serviceModel>
</configuration>

There's a lot of stuff going on in there!

  • The connection string to my on-premise database is specified
  • The WCF service is configured

To be honest: that second bullet is a bunch of work…

  • We specify 2 endpoints: one local (so we can access the OData service from our local network) and one on the service bus, hence the https://<service_namespace>.servicebus.windows.net/ContosoModel/ URL.
  • The service bus endpoint has 2 behaviors specified: the service behavior is configured to allow metadata retrieval. The endpoint behavior is configured to use the service bus credentials (that can be found on the AppFabric portal site once logged in) when connecting to the service bus.
  • The webHttpRelayBinding, a new binding type for Windows Azure AppFabric Service Bus, is configured to use no authentication when someone connects to it. That way, we will have an OData service that is accessible from the Internet for anyone.

With that configuration in place, we can start building our WCF service host in code. Here’s the full blown snippet:

class Program
{
    static void Main(string[] args)
    {
        ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.AutoDetect;

        using (ServiceHost serviceHost = new WebServiceHost(typeof(ContosoService)))
        {
            try
            {
                // Open the ServiceHost to start listening for messages.
                serviceHost.Open(TimeSpan.FromSeconds(30));

                // The service can now be accessed.
                Console.WriteLine("The service is ready.");
                foreach (var endpoint in serviceHost.Description.Endpoints)
                {
                    Console.WriteLine(" - " + endpoint.Address.Uri);
                }
                Console.WriteLine("Press <ENTER> to terminate service.");
                Console.ReadLine();

                // Close the ServiceHost.
                serviceHost.Close();
            }
            catch (TimeoutException timeProblem)
            {
                Console.WriteLine(timeProblem.Message);
                Console.ReadLine();
            }
            catch (CommunicationException commProblem)
            {
                Console.WriteLine(commProblem.Message);
                Console.ReadLine();
            }
        }
    }
}

We’ve just created our hosting environment, completely configured using the configuration file for WCF. The important thing to note here is that we’re spinning up a WebServiceHost, and that we’re using it to host multiple endpoints. Compile, run, F5, and here’s what happens:

Command line WCF hosting for AppFabric service bus

Consuming the feed

Now just leave that host running and browse to the public service bus endpoint for your OData service, i.e. https://<service_namespace>.servicebus.windows.net/ContosoModel/:

Consuming OData over service bus

There are two possible reactions now: "So, this is a service?" and "WOW! I actually connected to my local SQL Server database using a public URL and I did not have to call IT to open up the firewall!". I'd go for the latter…

Of course, you can also consume the feed from code. Open up a new project in Visual Studio, and add a service reference for the public service bus address:

Add reference OData

The only thing left now is consuming it, for example using this code snippet:

class Program
{
    static void Main(string[] args)
    {
        var odataService =
            new ContosoSalesEntities(
                new Uri("https://<service_namespace>.servicebus.windows.net/ContosoModel/"));
        var store = odataService.Store.Take(1).ToList().First();

        Console.WriteLine("Store: " + store.StoreName);
        Console.ReadLine();
    }
}

(Do note that updates do not work out-of-the-box; you'll have to use a small portion of magic on the server side to fix that… I'll try to follow up on that one.)

Conclusion

That was quite easy! Of course, if you need full access to your database, you are currently stuck with PortBridge or similar solutions. Note that I am not completely exposing my database to the outside world: there's an extra level of control in the EDMX model, where I can choose which datasets to expose and which not. On top of that, the WCF Data Services class I created allows for specifying user access rights per dataset.

Download sample code: ServiceBusHost.zip (7.87 kb)


Simplified access control using Windows Azure AppFabric Labs

Windows Azure AppFabric Access Control

Earlier this week, Zane Adam announced the availability of the new AppFabric Access Control service in LABS. The highlights for this release (and I quote):

  • Expanded Identity provider support - allowing developers to build applications and services that accept both enterprise identities (through integration with Active Directory Federation Services 2.0), and a broad range of web identities (through support of Windows Live ID, Open ID, Google, Yahoo, Facebook identities) using a single code base.
  • WS-Trust and WS-Federation protocol support – Interoperable WS-* support is important to many of our enterprise customers.
  • Full integration with Windows Identity Foundation (WIF) - developers can apply the familiar WIF identity programming model and tooling for cloud applications and services.
  • A new management web portal - gives simple, complete control over all Access Control settings.

Wow! This just *has* to be good! Let’s see how easy it is to work with claims based authentication and the AppFabric Labs Access Control Service, which I’ll abbreviate to “ACS” throughout this post.


What are you doing?

In essence, I’ll be “outsourcing” the access control part of my application to the ACS. When a user comes to the application, he will be asked to present certain “claims”, for example a claim that tells what the user’s role is. Of course, the application will only trust claims that have been signed by a trusted party, which in this case will be the ACS.

Fun thing is: my application only has to know about the ACS. As an administrator, I can then tell the ACS to trust claims provided by Windows Live ID or Google Accounts, which will be reflected in my application automatically: users will be able to authenticate through any service I configure in the ACS, without my application having to know. Very flexible, as I can tell the ACS to trust, for example, my company's Active Directory and perhaps also the Active Directory of a customer who uses the application.

Prerequisites

Before you start, make sure you have the latest version of Windows Identity Foundation installed. This will make things easy, I promise! Other prerequisites, of course, are Visual Studio and an account on https://portal.appfabriclabs.com. Note that, since it’s still a “preview” version, this is free to use.

In the labs account, create a project and in that project create a service namespace. This is what you should be seeing (or at least: something similar):

AppFabric labs project

Getting started: setting up the application side

Before starting, we will require a certificate for signing tokens and the like. Let's just start with creating one, so we don't have to worry about it further down the road. Issue the following command in a Visual Studio command prompt:

MakeCert.exe -r -pe -n "CN=<your service namespace>.accesscontrol.appfabriclabs.com" -sky exchange -ss my

This will create a certificate that is valid for your ACS project. It will be installed in the local certificate store on your computer. Make sure to export both the public and the private key (.cer and .pfx).

That being said and done, let's add claims-based authentication to a new ASP.NET website. Simply fire up Visual Studio and create a new ASP.NET application. I called it "MyExternalApp", but the name is entirely up to you. Next, edit the Default.aspx page and paste in the following code:

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
    CodeBehind="Default.aspx.cs" Inherits="MyExternalApp._Default" %>

<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
</asp:Content>
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
    <p>Your claims:</p>
    <asp:GridView ID="gridView" runat="server" AutoGenerateColumns="False">
        <Columns>
            <asp:BoundField DataField="ClaimType" HeaderText="ClaimType" ReadOnly="True" />
            <asp:BoundField DataField="Value" HeaderText="Value" ReadOnly="True" />
        </Columns>
    </asp:GridView>
</asp:Content>

Next, edit Default.aspx.cs and add the following Page_Load event handler:

protected void Page_Load(object sender, EventArgs e)
{
    IClaimsIdentity claimsIdentity =
        ((IClaimsPrincipal)(Thread.CurrentPrincipal)).Identities.FirstOrDefault();

    if (claimsIdentity != null)
    {
        gridView.DataSource = claimsIdentity.Claims;
        gridView.DataBind();
    }
}

So far, so good. If we had everything configured, Default.aspx would simply show us the claims we received from ACS once we have everything running. Now, in order to configure the application to use the ACS, there are two steps left to do:

  • Add a reference to Microsoft.IdentityModel (located somewhere at C:\Program Files\Reference Assemblies\Microsoft\Windows Identity Foundation\v3.5\Microsoft.IdentityModel.dll)
  • Add an STS reference…

That first step should be easy: add a reference to Microsoft.IdentityModel in your ASP.NET application. The second step is almost equally easy: right-click the project and select “Add STS reference…”, like so:

Add STS reference

A wizard will pop up. Here's a secret: this wizard will do a lot for us! On the first screen, enter the full URL to your application. I have mine hosted on IIS with SSL enabled, hence the following screenshot:

Specify application URI

In the next step, enter the URL to the STS federation metadata. To the what where? Well, to the metadata provided by ACS. This metadata contains the types of claims offered, the certificates used for signing, … The URL to enter will be something like https://<your service namespace>.accesscontrol.appfabriclabs.com:443/FederationMetadata/2007-06/FederationMetadata.xml:

Security Token Service

In the next step, select "Disable security chain validation". Because we are using self-signed certificates, selecting the second option would lead us to doom, as the whole infrastructure would then require certificates issued by a valid certificate authority.

From now on, it's just "Next", "Next", "Finish". If you now have a look at your Web.config file, you'll see that the wizard has configured the application to use ACS as the federated authentication provider. Furthermore, a new folder called "FederationMetadata" has been created, containing an XML file that specifies which claims this application requires. Oh, and some other details on the application, but nothing to worry about at this point.

Our application has now been configured: off to the ACS side!

Getting started: setting up the ACS side

First of all, we need to register our application with the Windows Azure AppFabric ACS. This can be done by clicking "Manage" on the management portal over at https://portal.appfabriclabs.com. Next, click "Relying Party Applications" and "Add Relying Party Application". The following screen will be presented:

Add Relying Party Application

Fill out the form as follows:

  • Name: a descriptive name for your application.
  • Realm: the URI for which the issued token will be valid. This can be a complete domain (i.e. www.example.com) or the full path to your application. For now, enter the full URL to your application, which will be something like https://localhost/MyApp.
  • Return URL: where to return after successful sign-in
  • Token format: we’ll be using the defaults in WIF, so go for SAML 2.0.
  • For the token encryption certificate, select X.509 certificate and upload the certificate file (.cer) we’ve been using before
  • Rule groups: pick one, best is to create a new one specific to the application we are registering

Afterwards click “Save”. Your application is now registered with ACS.

The next step is to select the Identity Providers we want to use. I selected Windows Live ID and Google Accounts as shown in the next screenshot:

Identity Providers

One thing left: since we are using Windows Identity Foundation, we have to upload a token signing certificate to the portal. Export the private key of the previously created certificate and upload that to the “Certificates and Keys” part of the management portal. Make sure to specify that the certificate is to be used for token signing.

Signing certificate Windows Identity Foundation WIF

Alright, we're almost done. Well, in fact, we are done! An optional next step would be to edit the rule group we created before. This rule group describes the claims that will be presented to the application asking for the user's claims. This is very powerful, because it also supports so-called claim transformations: if an identity provider provides ACS with a claim that says "the user is part of a group named Administrators", the rules can transform that into a new claim stating "the user has administrative rights".

Testing our setup

With all this information and configuration in place, press F5 inside Visual Studio and behold… your application now redirects to the STS, in the form of the ACS login page.

Sign in using AppFabric

So far so good. Now sign in using one of the identity providers listed. After a successful sign-in, you will be redirected back to ACS, which will in turn redirect you back to your application. And then: misery :-)

Request validation

ASP.NET request validation kicked in since it detected unusual headers. Let’s fix that. Two possible approaches:

  • Disable request validation, but I’d prefer not to do that
  • Create a custom RequestValidator

Let's go with the latter option… Here's a class that you can copy-paste into your application:

public class WifRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value, RequestValidationSource requestValidationSource, string collectionKey, out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        if (requestValidationSource == RequestValidationSource.Form && collectionKey.Equals(WSFederationConstants.Parameters.Result, StringComparison.Ordinal))
        {
            SignInResponseMessage message = WSFederationMessage.CreateFromFormPost(context.Request) as SignInResponseMessage;

            if (message != null)
            {
                return true;
            }
        }

        return base.IsValidRequestString(context, value, requestValidationSource, collectionKey, out validationFailureIndex);
    }
}

Basically, it's just validating the request and returning true to ASP.NET request validation if a SignInResponseMessage is found in the request. One thing left to do: register this validator with ASP.NET. Add the following line of code to the <system.web> section of Web.config:

<httpRuntime requestValidationType="MyExternalApp.Modules.WifRequestValidator" />

If you now try loading the application again, chances are you will actually see claims provided by ACS:

Claims output from Windows Azure AppFabric Access Control Service

There, that's it. We have now successfully delegated access control to ACS. Obviously the next step would be to specify which claims are required for specific actions in your application, provide the necessary claim transformations in ACS, … All of that can easily be found on Google Bing, and a small sketch of such a claim check follows below.
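To give an idea of where that goes, here's a hedged sketch of such a check; the claim type URI is purely hypothetical and depends on the rules in your ACS rule group:

// Sketch: requiring a specific claim before executing an action.
// The claim type URI is hypothetical; use whatever your ACS rule group issues.
// (Requires Microsoft.IdentityModel.Claims, System.Linq and System.Security.)
IClaimsIdentity identity =
    ((IClaimsPrincipal)Thread.CurrentPrincipal).Identities.FirstOrDefault();

bool hasAdministrativeRights = identity != null && identity.Claims.Any(
    claim => claim.ClaimType == "http://schemas.example.org/claims/administrativerights"
             && claim.Value == "true");

if (!hasAdministrativeRights)
{
    throw new SecurityException("This action requires administrative rights.");
}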

Conclusion

To be honest, I've always found claims-based authentication and Windows Azure AppFabric Access Control a good match in theory, but an ugly and cumbersome beast to work with. With this labs release, things get interesting and almost self-explanatory, allowing for easier implementation in your own application. As an extra bonus to this blog post, I also decided to link my ADFS server to ACS: it took me literally 5 minutes and works like a charm!

Final conclusion: AppFabric team, please ship this soon :-) I really like the way this labs release works, and I think many users who find using ACS too big a step today may well take that step if they can use ACS in the easy manner this labs release provides.

By the way: more information can be found on http://acs.codeplex.com.


MvcSiteMapProvider 2.1.0 released!

MvcSiteMapProvider

The release of MvcSiteMapProvider 2.1.0 has just been posted on CodePlex. MvcSiteMapProvider is, as the name implies, a SiteMapProvider implementation for the ASP.NET MVC framework. Targeted at ASP.NET MVC 2, it provides sitemap XML functionality and interoperability with the classic ASP.NET sitemap controls, like the SiteMapPath control for rendering breadcrumbs and the Menu control.

Next to a brand new logo, the component has been patched up with several bugfixes, the visibility attribute is back (in a slightly cooler reincarnation), and a number of new extension points have been introduced. Let me give you a quick overview…


Extension points

MvcSiteMapProvider is built with extensibility in mind. All extension point contracts are defined in the MvcSiteMapProvider.Extensibility namespace. The sample application on the downloads page contains several custom implementations of these extension points.

Global extension points (valid for the entire provider and all nodes)

These extension points can be defined when Registering the provider.

  Node key generator

Keys for sitemap nodes are usually automatically generated by the MvcSiteMapProvider core. If, for reasons of accessing sitemap nodes from code, the generated keys should follow other naming rules, a custom MvcSiteMapProvider.Extensibility.INodeKeyGenerator implementation can be written.

  Controller type resolver

In order to resolve a controller type and action method related to a specific sitemap node, a MvcSiteMapProvider.Extensibility.IControllerTypeResolver is used. This should normally not be extended, however if you want to make use of other systems for resolving controller types and action methods, this is the logical extension point.

  Action method parameter resolver

Action method parameters are resolved by using ASP.NET MVC's ActionDescriptor class. If you want to use a custom system for this, a MvcSiteMapProvider.Extensibility.IActionMethodParameterResolver implementation can be specified.

  ACL module

To determine whether a sitemap node is accessible to a specific user, a MvcSiteMapProvider.Extensibility.IAclModule implementation is used. MvcSiteMapProvider uses two of these modules by default: access is granted or denied by checking for [Authorize] attributes on action methods, followed by the roles attribute that can be specified in the sitemap XML.

  URL resolver

URLs are generated by leveraging a MvcSiteMapProvider.Extensibility.ISiteMapNodeUrlResolver implementation. If, for example, you want all URLs generated by MvcSiteMapProvider to be in lowercase text, a custom implementation can be created.

  Visibility provider

In some situations, nodes should be visible in the breadcrumb trail but not in a complete sitemap. This can be solved using the MvcSiteMapProvider.Extensibility.ISiteMapNodeVisibilityProvider extension point that can be specified globally for every node in the sitemap or granularly on a specific sitemap node. A sample is available on the Advanced node visibility page.
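To give an idea of the shape of such an implementation, here's a minimal sketch. Note that the IsVisible signature and the "HtmlHelper" source metadata key are assumptions on my part; check the sample application for the authoritative contract:

// Minimal visibility provider sketch. The IsVisible signature and the
// "HtmlHelper" source metadata key/value are assumptions; see the sample
// application on the downloads page for the authoritative version.
public class MenuOnlyVisibilityProvider
    : MvcSiteMapProvider.Extensibility.ISiteMapNodeVisibilityProvider
{
    public bool IsVisible(SiteMapNode node, HttpContext context,
        IDictionary<string, object> sourceMetadata)
    {
        // Hide this node when a full sitemap is being rendered,
        // but keep it visible in the breadcrumb trail and menu.
        object htmlHelper;
        if (sourceMetadata != null
            && sourceMetadata.TryGetValue("HtmlHelper", out htmlHelper)
            && "MvcSiteMapProvider.Web.Html.SiteMapHelper".Equals(htmlHelper))
        {
            return false;
        }

        return true;
    }
}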

Node-specific extension points (valid for a single node)

These extension points can be defined when Creating a first sitemap.

  Dynamic node provider

In many web applications, sitemap nodes are directly related to content in a persistent store like a database. For example, in an e-commerce application, the list of product detail pages in the sitemap maps directly to the list of products in the database. Using dynamic sitemaps, a small class implementing MvcSiteMapProvider.Extensibility.IDynamicNodeProvider or extending MvcSiteMapProvider.Extensibility.DynamicNodeProviderBase can be provided to the MvcSiteMapProvider, offering a list of dynamic nodes that should be included in the sitemap. This ensures the product pages do not have to be specified by hand in the sitemap XML.

A sample can be found on the Dynamic sitemaps page.
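As a rough illustration of the idea (a sketch, not the official sample: ProductRepository is hypothetical, and I'm assuming DynamicNode exposes Title and RouteValues):

// Sketch: one dynamic sitemap node per product in a (hypothetical) repository.
// Check the Dynamic sitemaps page for the authoritative sample.
public class ProductNodeProvider
    : MvcSiteMapProvider.Extensibility.DynamicNodeProviderBase
{
    public override IEnumerable<DynamicNode> GetDynamicNodeCollection()
    {
        foreach (var product in new ProductRepository().GetAll())
        {
            var node = new DynamicNode();
            node.Title = product.Name;
            node.RouteValues.Add("id", product.Id);
            yield return node;
        }
    }
}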

  URL resolver

URLs are generated by leveraging a MvcSiteMapProvider.Extensibility.ISiteMapNodeUrlResolver implementation. If, for example, you want the URL for a sitemap node generated by MvcSiteMapProvider to be in lowercase text, a custom implementation can be created.

  Visibility provider

In some situations, nodes should be visible in the breadcrumb trail but not in a complete sitemap. This can be solved using the MvcSiteMapProvider.Extensibility.ISiteMapNodeVisibilityProvider extension point that can be specified globally for every node in the sitemap or granularly on a specific sitemap node. A sample is available on the Advanced node visibility page.

Conclusion

Only one conclusion: grab the latest bits and start playing with them! And feel free to bug me with feature requests and any issues you find.

Also, follow me on Twitter for updates on this project.

ASP.NET MVC 3 and MEF sitting in a tree...

As I stated in a previous blog post, ASP.NET MVC 3 preview 1 has been released! I talked about some of the new features and promised to do a follow-up post on the dependency injection part. In this post, I'll show you how to use that together with MEF.

Download my sample code: Mvc3WithMEF.zip (256.21 kb)


Dependency injection in ASP.NET MVC 3

First of all, there are 4 new hooks for injecting dependencies:

  • When creating controller factories
  • When creating controllers
  • When creating views (might be interesting!)
  • When using action filters

In ASP.NET MVC 2, only one of these hooks was available for dependency injection: a controller factory was implemented, using a dependency injection framework under the covers. I did this once, creating a controller factory that wired up MEF and made sure everything in the application was composed through a MEF container. That is, everything that is a controller or part thereof. There were no easy options for DI-ing things like action filters or views…

ASP.NET MVC 3 shuffled the cards a bit. ASP.NET MVC 3 now contains and uses the Common Service Locator’s IServiceLocator interface, which is used for resolving services required by the ASP.NET MVC framework. The IServiceLocator implementation should be registered in Global.asax using just one line of code:

[code:c#]

MvcServiceLocator.SetCurrent(new SomeServiceLocator());

[/code]

This is, since ASP.NET MVC 3 preview 1, the only thing required to make DI work. In controllers, in action filters and in views. Cool, eh?

Leveraging MEF with ASP.NET MVC 3

First of all: a disclaimer. I have already done posts on MEF and ASP.NET MVC before, and in all of those posts I required you to explicitly export your controller types for composition. In this example I will again require that, just to keep the code a bit easier to understand; a minimal exported controller is sketched right below. Do note that there are some variants of a convention-based registration model available.
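For reference, here's what such an explicitly exported controller can look like, including a constructor-injected dependency. IMessageService and MessageService are hypothetical stand-ins for your own application services:

[code:c#]

// Hypothetical application service, exported so MEF can discover it.
public interface IMessageService
{
    string GetWelcomeMessage();
}

[Export(typeof(IMessageService))]
public class MessageService : IMessageService
{
    public string GetWelcomeMessage()
    {
        return "Hello from MEF!";
    }
}

// The controller exports itself (NonShared: one instance per request)
// and receives its dependencies through an importing constructor.
[Export]
[PartCreationPolicy(CreationPolicy.NonShared)]
public class HomeController : Controller
{
    private readonly IMessageService messageService;

    [ImportingConstructor]
    public HomeController(IMessageService messageService)
    {
        this.messageService = messageService;
    }

    public ActionResult Index()
    {
        ViewData["Message"] = messageService.GetWelcomeMessage();
        return View();
    }
}

[/code]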

As stated before, the only thing to build here is a MefServiceLocator that is suited for the web (which means: an application-wide catalog and a per-request container). I'll still have to create my own controller factory as well, because otherwise I would not be able to dynamically compose my controllers. Here goes…

Implementing ServiceLocatorControllerFactory

Starting in reverse, but this thing is the simple part :-)

[code:c#]

[Export(typeof(IControllerFactory))]
[PartCreationPolicy(CreationPolicy.Shared)]
public class ServiceLocatorControllerFactory
    : DefaultControllerFactory
{
    private IMvcServiceLocator serviceLocator;

    [ImportingConstructor]
    public ServiceLocatorControllerFactory(IMvcServiceLocator serviceLocator)
    {
        this.serviceLocator = serviceLocator;
    }

    public override IController CreateController(RequestContext requestContext, string controllerName)
    {
        var controllerType = GetControllerType(requestContext, controllerName);
        if (controllerType != null)
        {
            return this.serviceLocator.GetInstance(controllerType) as IController;
        }

        return base.CreateController(requestContext, controllerName);
    }

    public override void ReleaseController(IController controller)
    {
        this.serviceLocator.Release(controller);
    }
}

[/code]

Did you see that? A simple, MEF-enabled controller factory that uses an IMvcServiceLocator. This thing can be used with other service locators as well.

Implementing MefServiceLocator

Like I said, this is the most important part, allowing us to use MEF for resolving almost any component in the ASP.NET MVC pipeline. Here’s my take on that:

[code:c#]

[Export(typeof(IMvcServiceLocator))]
[PartCreationPolicy(CreationPolicy.Shared)]
public class MefServiceLocator
    : IMvcServiceLocator
{
    const string HttpContextKey = "__MefServiceLocator_Container";

    private ComposablePartCatalog catalog;
    private IMvcServiceLocator defaultLocator;

    [ImportingConstructor]
    public MefServiceLocator()
    {
        // Get the catalog from the MvcServiceLocator.
        // This is a bit dirty, but currently
        // the only way to ensure one application-wide catalog
        // and a per-request container.
        MefServiceLocator mefServiceLocator = MvcServiceLocator.Current as MefServiceLocator;
        if (mefServiceLocator != null)
        {
            this.catalog = mefServiceLocator.catalog;
        }

        // And the fallback locator...
        this.defaultLocator = MvcServiceLocator.Default;
    }

    public MefServiceLocator(ComposablePartCatalog catalog)
        : this(catalog, MvcServiceLocator.Default)
    {
    }

    public MefServiceLocator(ComposablePartCatalog catalog, IMvcServiceLocator defaultLocator)
    {
        this.catalog = catalog;
        this.defaultLocator = defaultLocator;
    }

    protected CompositionContainer Container
    {
        get
        {
            if (!HttpContext.Current.Items.Contains(HttpContextKey))
            {
                HttpContext.Current.Items.Add(HttpContextKey, new CompositionContainer(catalog));
            }

            return (CompositionContainer)HttpContext.Current.Items[HttpContextKey];
        }
    }

    private object Resolve(Type serviceType, string key = null)
    {
        var exports = this.Container.GetExports(serviceType, null, null);
        if (exports.Any())
        {
            return exports.First().Value;
        }

        var instance = defaultLocator.GetInstance(serviceType, key);
        if (instance != null)
        {
            return instance;
        }

        throw new ActivationException(string.Format("Could not resolve service type {0}.", serviceType.FullName));
    }

    private IEnumerable<object> ResolveAll(Type serviceType)
    {
        var exports = this.Container.GetExports(serviceType, null, null);
        if (exports.Any())
        {
            return exports.Select(e => e.Value).AsEnumerable();
        }

        var instances = defaultLocator.GetAllInstances(serviceType);
        if (instances != null)
        {
            return instances;
        }

        throw new ActivationException(string.Format("Could not resolve service type {0}.", serviceType.FullName));
    }

    #region IMvcServiceLocator Members

    public void Release(object instance)
    {
        var export = instance as Lazy<object>;
        if (export != null)
        {
            this.Container.ReleaseExport(export);
        }

        defaultLocator.Release(export);
    }

    #endregion

    #region IServiceLocator Members

    public IEnumerable<object> GetAllInstances(Type serviceType)
    {
        return ResolveAll(serviceType);
    }

    public IEnumerable<TService> GetAllInstances<TService>()
    {
        var instances = ResolveAll(typeof(TService));
        foreach (TService instance in instances)
        {
            yield return (TService)instance;
        }
    }

    public TService GetInstance<TService>(string key)
    {
        return (TService)Resolve(typeof(TService), key);
    }

    public object GetInstance(Type serviceType)
    {
        return Resolve(serviceType);
    }

    public object GetInstance(Type serviceType, string key)
    {
        return Resolve(serviceType, key);
    }

    public TService GetInstance<TService>()
    {
        return (TService)Resolve(typeof(TService));
    }

    #endregion

    #region IServiceProvider Members

    public object GetService(Type serviceType)
    {
        return Resolve(serviceType);
    }

    #endregion
}

[/code]

HOLY SCHMOLEY! That is a lot of code. Let’s break it down…

First of all, I have 3 constructors: 2 for convenience, one for MEF. Since the MefServiceLocator will be instantiated in Global.asax and I only want one instance of it to live in the application, I have to do a dirty trick: whenever MEF wants to create a new MefServiceLocator for some reason (in theory this should only happen once per request, but I want this thing to live application-wide), I'm indeed giving it a new instance, which at least shares the part catalog with the one I originally created. Don't shoot me for doing this…

Next, you will also notice that I'm using a "fallback" locator, which in practice will be the instance stored in MvcServiceLocator.Default, ASP.NET MVC 3's default MvcServiceLocator. I'm doing this for a reason though… read my disclaimer again: I stated that everything should be decorated with the [Export] attribute when I'm relying on MEF. Since the services exposed by ASP.NET MVC 3, like the IFilterProvider, are not decorated with this attribute, MEF will not be able to find them. When I find myself in that situation, the MefServiceLocator simply asks the default service locator for the instance. Not a beauty, but it works and makes my life easy.

Wiring things

To wire this up, all it takes is adding 3 lines of code to my Global.asax. For clarity, I'm giving you my entire Global.asax Application_Start method:

[code:c#]

protected void Application_Start()
{
    // Register areas

    AreaRegistration.RegisterAllAreas();

    // Register filters and routes

    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);

    // Register MEF catalogs

    var catalog = new DirectoryCatalog(
        Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin"));
    MvcServiceLocator.SetCurrent(new MefServiceLocator(catalog, MvcServiceLocator.Default));
}

[/code]

Can you spot the 3 lines of code? This is really all it takes to make the complete application use MEF where appropriate. (OK, that is a bit of a lie, since you would still have to implement a very small IFilterProvider if you want MEF in your action filters; a sketch of what that could look like follows below.)
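For the curious, here's a hedged sketch of such a filter provider. I'm assuming the attribute-based FilterAttributeFilterProvider from the MVC 3 bits as a base class, and that the CompositionContainer is made available to it (for example by registering the container with itself via container.ComposeExportedValue(container)); treat this as a sketch, not as the final implementation:

[code:c#]

// Sketch: a filter provider that lets MEF satisfy [Import]s declared on
// action filter attributes. The base class and the container wiring are
// assumptions, not code shipped with this post.
public class MefFilterProvider : FilterAttributeFilterProvider
{
    private readonly CompositionContainer container;

    public MefFilterProvider(CompositionContainer container)
    {
        this.container = container;
    }

    public override IEnumerable<Filter> GetFilters(
        ControllerContext controllerContext, ActionDescriptor actionDescriptor)
    {
        var filters = base.GetFilters(controllerContext, actionDescriptor).ToList();
        foreach (var filter in filters)
        {
            // Fill any [Import] properties declared on the filter instance.
            container.SatisfyImportsOnce(filter.Instance);
        }

        return filters;
    }
}

[/code]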

Hooks

The cool thing is: a lot of things are now requested through the service locator we just created. When browsing to my site index, here are all the things that are requested:

  • Resolve called for serviceType: System.Web.Mvc.IControllerFactory
  • Resolve called for serviceType: Mvc3WithMEF.Controllers.HomeController
  • Resolve called for serviceType: System.Web.Mvc.IFilterProvider
  • Resolve called for serviceType: System.Web.Mvc.IFilterProvider
  • Resolve called for serviceType: System.Web.Mvc.IFilterProvider
  • Resolve called for serviceType: System.Web.Mvc.IFilterProvider
  • Resolve called for serviceType: System.Web.Mvc.IViewEngine
  • Resolve called for serviceType: System.Web.Mvc.IViewEngine
  • Resolve called for serviceType: ASP.Index_cshtml
  • Resolve called for serviceType: System.Web.Mvc.IViewEngine
  • Resolve called for serviceType: System.Web.Mvc.IViewEngine
  • Resolve called for serviceType: ASP._LogOnPartial_cshtml

Which means that you can now even inject stuff into views or compose their parts dynamically.

Conclusion

I sense great power in here… ASP.NET MVC 3 will support DI natively if you want to use it, and I'll be one of the users happily making use of it. There are use cases for injecting/composing something in all of the above components, and ASP.NET MVC 3 has just made this simpler and more straightforward.

Here’s my sample code with some more examples in it: Mvc3WithMEF.zip (256.21 kb)