Maarten Balliauw {blog}

Web development, NuGet, Microsoft Azure, PHP, ...


Just released: MvcSiteMapProvider 3.1.0 RC

It looks like I’m really cr… ehm… releasing way too much over the past few days, but yes, here’s another one: I just posted MvcSiteMapProvider 3.1.0 RC on both CodePlex and NuGet.

The easiest way to get the current bits is this one:

Install-Package MvcSiteMapProvider

As usual, here are the release notes:

  • Created one NuGet package containing both .NET 3.5 and .NET 4.0 assemblies
  • Significantly improved memory usage and performance
  • Medium Trust optimizations
  • DefaultControllerTypeResolver speed improvement
  • Resolve authorize attributes through FilterProviders.Current (in MVC3)
  • Allow to specify target on SiteMapTitleAttribute
  • Fix the NuGet package DisplayTemplates folder location
  • Fixed: NuGet web.config section duplication
  • Fixed: HelperMenu.Menu() always uses default provider
  • Fixed: 2.x Uses Default Parameters
  • Fixed: Bad Null Checking in MvcSiteMapProvider.DefaultSiteMapProvider
  • Fixed: Exception: An item with the same key has already been added.
  • Fixed: Add id="menu" to default MenuHelperModel DisplayTemplate (not in NuGet yet)
  • Fixed: Wrong Breadcrumb Displayed Under Heavy Load
  • Fixed: Backport Route support to 2.3.1

Windows Azure SDK for PHP v3.0 released

Microsoft and RealDolmen are very proud to announce the availability of the Windows Azure SDK for PHP v3.0 on CodePlex (here's the official Microsoft post)! This open source SDK gives PHP developers a speed dial library to fully take advantage of Windows Azure’s cool features. Version 3.0 of this SDK marks an important milestone: we’re not only starting to witness real-world deployments, we’re also seeing more people joining the project and contributing.

New features include a pluggable logging infrastructure (based on Table Storage) as well as a full implementation of the Windows Azure management API. This means that you can now build your own Windows Azure Management Portal using PHP. How cool is that? What’s even cooler about this… Well… how about combining some of these features and building an autoscaling web application in PHP? Check out my blog post “Windows Azure and scaling: how? (PHP)” for a sample of that, and make sure to read through, as there are some links on how you can autoscale YOUR apps as well!
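To give you an idea, here’s a minimal sketch of scaling a hosted service from PHP with the new management client. The Microsoft_WindowsAzure_Management_Client class name comes straight from the changelog below, but treat the constructor arguments and the setInstanceCountBySlot() call shown here as assumptions and double-check them against the SDK documentation.

<?php
// Minimal sketch only: scale a role of a hosted service from PHP.
// Assumptions: the constructor signature and setInstanceCountBySlot() shown
// here are illustrative; verify the exact API in the SDK documentation.
require_once 'Microsoft/AutoLoader.php'; // v3.0 autoloader (path may differ)

$client = new Microsoft_WindowsAzure_Management_Client(
    '<subscription-id>',            // your Windows Azure subscription ID
    'path/to/management-cert.pem',  // management certificate
    '<certificate-passphrase>'
);

// Bump the (hypothetical) "WebRole" of "mycloudapp" in production to 4 instances.
$client->setInstanceCountBySlot('mycloudapp', 'production', 'WebRole', 4);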

A comment we received a lot on previous versions was that, for table storage, datetime values were returned as strings and parsing them was something you as a developer had to do yourself. In this release, we’ve broken that: table storage entities now return native PHP DateTime objects instead of strings for Edm.DateTime properties.
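In practice that looks like this (a small sketch, assuming a dynamic entity with a hypothetical OrderDate property and the usual account name/key credentials):

<?php
// Sketch: Edm.DateTime properties now come back as native DateTime objects.
// The 'orders' table and the OrderDate property are hypothetical examples.
require_once 'Microsoft/AutoLoader.php';

$table = new Microsoft_WindowsAzure_Storage_Table(
    'table.core.windows.net', 'myaccount', '<account-key>');

foreach ($table->retrieveEntities('orders') as $entity) {
    // Previously this was a string you had to parse yourself;
    // as of 3.0 it is a DateTime instance.
    echo $entity->OrderDate->format('Y-m-d H:i:s') . PHP_EOL;
}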

Here’s the official changelog:

  • Breaking change: Table storage entities now return DateTime objects instead of strings for Edm.DateTime properties
  • New feature: Service Management API in the form of Microsoft_WindowsAzure_Management_Client
  • New feature: logging infrastructure on top of table storage
  • Session provider now works on table storage for small sessions; larger sessions can be persisted to blob storage
  • Queue storage client: new hasMessages() method
  • Introduction of an autoloader class, increasing speed for class resolving
  • Several minor bugfixes and performance tweaks

Find the current download on CodePlex. Do you prefer PEAR? Well... a pear channel-discover followed by pear install pearplex/PHPAzure should do the trick.

Microsoft .NET Framework 4 Platform Update 1 KB2478063 Service Pack 5 Feature Set 3.1 R2 November Edition RTW

As you can see, a new .NET Framework version just came out. Read about it in the announcement post. Now why does my title not match the title of the blog post I referenced? Well… how is this going to help people?

For those who don’t see the problem, let me explain… If we get new people on board who are not yet proficient in .NET, they all struggle with some concepts. Concepts like: service packs for a development framework. Or better: client profile stuff! Stuff that breaks their code because stuff is missing in there! I feel like this is going the Java road where every version has a billion updates associated with it. That’s not where we want to go, right? The Java side?


As I’m saying: why not make things clear and call these “updates” something like .NET 4.1 or so? Simple major/minor versions. We’re developers, not marketeers. We’re developers, not IT pros who enjoy these strange names so they can bill yet another upgrade to their customers.

How am I going to persuade my manager to move to the next version? By telling him that we should now use “Microsoft .NET Framework 4 Platform Update 1 KB2478063”, instead of saying “hey, there’s a new .NET 4! It’s .NET 4.1 and it’s shiny and new!”?

It seems I’m not alone with this thought. Hadi Hariri also blogged about it. And I expect more to follow... If you feel the same: now is the time to stop this madness! I suspect there’s an R2 November Edition coming otherwise…

[Edit @ 14:00] Here's how to use it in NuGet. Seems this thing is actually ".NET 4.0.1" under the hood.
[Edit @ 14:01] And here's another one. And another one.
[Edit] And Scott Hanselman chimes in as well.

A Glimpse at Windows Identity Foundation claims

For a current project, I’m using Glimpse to inspect what’s going on behind the ASP.NET covers. I really hope you have heard about the greatest ASP.NET module of 2011: Glimpse. If not, shame on you! Install-Package Glimpse immediately! And if you don’t know what I mean by that, go get NuGet now (the greatest .NET addition since sliced bread).

This project is also using Windows Identity Foundation. It’s really a PITA to get a look at the claims being passed around: usually, I do this by putting a breakpoint somewhere and inspecting the current IPrincipal’s internals. But with Glimpse, using a small plugin to just show me the claims and their values is a no-brainer. Check the bottom right of this (partial) screenshot:

Glimpse Windows Identity Foundation

Want to have this too? Simply copy the following class into your project and you’re done:

[GlimpsePlugin()]
public class GlimpseClaimsInspectorPlugin : IGlimpsePlugin
{
    public object GetData(HttpApplication application)
    {
        // Return the data you want to display on your tab
        var data = new List<object[]> { new[] { "Identity", "Claim", "Value", "OriginalIssuer", "Issuer" } };

        // Add all claims found
        var claimsPrincipal = application.User as ClaimsPrincipal;
        if (claimsPrincipal != null)
        {
            foreach (var identity in claimsPrincipal.Identities)
            {
                foreach (var claim in identity.Claims)
                {
                    data.Add(new object[] { identity.Name, claim.ClaimType, claim.Value, claim.OriginalIssuer, claim.Issuer });
                }
            }
        }

        return data;
    }

    public void SetupInit(HttpApplication application)
    {
    }

    public string Name
    {
        get { return "WIF Claims"; }
    }
}

Enjoy! And if you feel like NuGet-packaging this (or including it with Glimpse), feel free.

Using dynamic WCF service routes

For a demo I am working on, I’m creating an OData feed. This OData feed is in essence a WCF service which is activated using System.ServiceModel.Activation.ServiceRoute. The idea behind that technique is simple: map an incoming URL route to a WCF service. But there’s a catch in ServiceRoute: unlike ASP.NET routing, it does not support route data. This means that if I want to create a service which can exist multiple times but in different contexts, like, for example, a “private” instance of that service per customer, ServiceRoute will not be enough: there is no support for having two different customer URLs map to the same “MyService”. Unless you create multiple ServiceRoutes, which requires recompilation. Or… unless you sprinkle some route magic on top!

Implementing an MVC-style route for WCF

Let’s call this thing DynamicServiceRoute. Its goal is to provide a working ServiceRoute which supports route data and allows you to create service routes of the format “MyService/{customername}”, like you would in ASP.NET MVC.

First of all, let’s inherit from RouteBase and implement IRouteHandler. No, not from ServiceRoute! The latter is so closed that it’s basically a no-go if you want to extend it. Instead, we’ll wrap it! Here’s the base code for our DynamicServiceRoute:

public class DynamicServiceRoute
    : RouteBase, IRouteHandler
{
    private string virtualPath = null;
    private ServiceRoute innerServiceRoute = null;
    private Route innerRoute = null;

    public static RouteData GetCurrentRouteData()
    {
    }

    public DynamicServiceRoute(string pathPrefix, object defaults, ServiceHostFactoryBase serviceHostFactory, Type serviceType)
    {
    }

    public override RouteData GetRouteData(HttpContextBase httpContext)
    {
    }

    public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
    {
    }

    public System.Web.IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
    }
}

As you can see, we’re creating a new RouteBase implementation and wrapping two routes: an inner ServiceRoute and an inner Route. The first one will hold all our WCF details and will, in one of the next code snippets, be used to dispatch and activate the WCF service (or an OData feed or …). The latter will be used for URL matching: no way I’m going to rewrite the URL matching logic if it’s already there for you in Route.

Let’s create a constructor:

public DynamicServiceRoute(string pathPrefix, object defaults, ServiceHostFactoryBase serviceHostFactory, Type serviceType)
{
    if (pathPrefix.IndexOf("{*") >= 0)
    {
        throw new ArgumentException("Path prefix can not include catch-all route parameters.", "pathPrefix");
    }
    if (!pathPrefix.EndsWith("/"))
    {
        pathPrefix += "/";
    }
    pathPrefix += "{*servicePath}";

    virtualPath = serviceType.FullName + "-" + Guid.NewGuid().ToString() + "/";
    innerServiceRoute = new ServiceRoute(virtualPath, serviceHostFactory, serviceType);
    innerRoute = new Route(pathPrefix, new RouteValueDictionary(defaults), this);
}

As you can see, it accepts a path prefix (e.g. “MyService/{customername}”), a defaults object (so you can say new { customername = “Default” }), a ServiceHostFactoryBase (which may sound familiar if you’ve been using ServiceRoute) and a service type, which is the type of the class that will be your WCF service.

Within the constructor, we check for catch-all parameters. Since I’ll be abusing those later on, it’s important the user of this class can not make use of them. Next, a catch-all parameter {*servicePath} is appended to the pathPrefix parameter. I’m doing this because I want all calls to a path below “MyService/somecustomer/…” to match for this route. Yes, I can try to do this myself, but again this logic is already available in Route so I’ll just reuse it.

One other thing that happens is a virtual path is generated. This will be a fake path that I’ll use as the URL to match in the inner ServiceRoute. This means if I navigate to “MyService/SomeCustomer” or if I navigate to “MyServiceNamespace.MyServiceType-guid”, the same route will trigger. The first one is the pretty one that we’re trying to create, the latter is the internal “make-things-work” URL. Using this virtual path and the path prefix, simply create a ServiceRoute and Route.

Actually, a lot of work has been done in 3 lines of code in the constructor. What’s left is just an implementation of RouteBase which calls the corresponding inner logic. Here’s the meat:

public override RouteData GetRouteData(HttpContextBase httpContext)
{
    return innerRoute.GetRouteData(httpContext);
}

public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
{
    return null;
}

public System.Web.IHttpHandler GetHttpHandler(RequestContext requestContext)
{
    requestContext.HttpContext.RewritePath("~/" + virtualPath + requestContext.RouteData.Values["servicePath"], true);
    return innerServiceRoute.RouteHandler.GetHttpHandler(requestContext);
}

I told you it was easy, right? GetRouteData is used by the routing engine to check if a route matches. We just pass that call to the inner route, which is able to handle this. GetVirtualPath will not be important here, so simply return null there. If you really, really feel this is needed, it would require some logic that creates a URL from a set of route data. But since you’ll probably never have to do that, null is good here. The most important thing here is GetHttpHandler. It is called by the routing engine to get an HTTP handler for a specific request context if the route matches. In this method, I simply rewrite the requested URL to the internal, ugly “MyServiceNamespace.MyServiceType-guid” URL and ask the inner ServiceRoute to have fun with it and serve the request. There, the magic just happened.

Want to use it? Simply register a new route:

var dataServiceHostFactory = new DataServiceHostFactory();
RouteTable.Routes.Add(new DynamicServiceRoute("MyService/{customername}", null, dataServiceHostFactory, typeof(MyService)));


Why would you need this? Well, imagine you are building a customer-specific service where you want to track service calls for a specific customer. For example, if you’re creating private NuGet repositories. And yes, that was a hint at a future blog post :-)

Feel this is useful to you as well? Grab the code here: DynamicServiceRoute.cs (1.94 kb)

Wordpress auto sign-on with IIS7 and a plugin

For our RealDolmen blog platform, where we use Wordpress as the engine running multiple external and internal blogs (yes, that’s an internal SaaS we have there!), we wanted an easy way for our employees to sign on to the platform. We had a look at the Wordpress plugin repository and found the excellent Simple LDAP Login plugin, which provides sign-on using Active Directory credentials. However, when browsing the blogs from the corporate network, the login page is one extra step in the way of users: they are already logged on to the network, so why sign on again using the same credentials?

Luckily for us, we are hosting Wordpress on Windows, IIS 7 and SQL Server. Shocked? No Linux, MySQL, .htaccess and mod_rewrite there! And it works perfectly. In fact, we get some extras for free: single sign-on is made possible by IIS!

Configuring Windows Authentication in IIS7

In order to provide a single sign-on scenario for Wordpress on IIS, simply enable Windows Authentication in the IIS7 management console, like so:

Windows Authentication in IIS - Wordpress, PHP
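Prefer configuration over clicking around in the console? The equivalent in web.config looks roughly like this. A sketch only: the authentication sections are locked at server level by default, so you may need to unlock them or make the change in applicationHost.config instead.

<configuration>
  <system.webServer>
    <security>
      <authentication>
        <!-- Keep anonymous authentication enabled so regular visitors still get the public site... -->
        <anonymousAuthentication enabled="true" />
        <!-- ...and enable Windows Authentication so IIS can use it once we send a 401. -->
        <windowsAuthentication enabled="true" />
      </authentication>
    </security>
  </system.webServer>
</configuration>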

If you now browse to the Wordpress site… Nothing happens! Except the normal stuff: a non-logged-in version of the site is displayed… The reason for this is obvious: anonymous authentication is also enabled and is higher up the chain, hence IIS7 refuses to authenticate the user using his Active Directory credentials… One solution may be to reverse the order, but that would mean *every* single user is required to sign on. Not the ideal situation… And that’s where our custom plugin for Wordpress comes in handy; heck, we’re even sharing it with you so you can use it too!

Fooling IIS7 when required…

Anonymous authentication has to stay higher up the chain in IIS7, because we don’t want everyone to have to log in. The solution, then, is fooling IIS7 into believing that Windows Authentication is higher up the chain in some situations… And why not do that from PHP and wrap that “hack” into a Wordpress plugin?

The basis for our plugin is the following: whenever a user browses the website and uses Internet Explorer (sorry, no support for this in other browsers…), Windows Authentication is a possibility. The only step left is triggering it, which is pretty easy: if you detect that a user is coming from the local LAN and is using Internet Explorer (on Windows), send the user an HTTP/1.1 401 Unauthorized response. This will make IE send the Windows Authentication token to the server and will also trick IIS7 into thinking that anonymous authentication failed, which will immediately trigger Windows Authentication server-side as well.

Now how to do this in a Wordpress plugin? Well, simple: hook into two events Wordpress offers, namely init and login_form. Init? Well, yes! You want users to automatically sign on when coming from the LAN, and there’s no better hook to do that than init. The other one is obvious: if a user somehow lands at the login page and is coming from the local LAN, you want that page to be skipped and Windows Authentication to be used there. Here’s some simplified code for registering the hooks:

<?php
add_action('init', 'iisauth_auto_login');
add_action('login_form', 'iisauth_wp_login_form');

Next, implementation! Let’s start with what happens on init:

function iisauth_auto_login() {
    if (!is_user_logged_in() && iisauth_is_lan_user() && iisauth_using_ie()) {
        iisauth_wp_login_form();
    }
}

As you can see: whenever we suspect a user is coming from the internal LAN and is using IE, we call the iisauth_wp_login_form() method (which “by accident” also gets triggered when a user is on the login page). Here’s that code:

function iisauth_wp_login_form() {
    // Checks if IIS provided a user, and if not, rejects the request with 401
    // so that it can be authenticated
    if (iisauth_is_lan_user() && iisauth_using_ie() && empty($_SERVER["REMOTE_USER"])) {
        nocache_headers();
        header("HTTP/1.1 401 Unauthorized");
        ob_clean();
        exit();
    } elseif (iisauth_is_lan_user() && iisauth_using_ie() && !empty($_SERVER["REMOTE_USER"])) {
        if (function_exists('get_userdatabylogin')) {
            $username = strtolower(substr($_SERVER['REMOTE_USER'], strrpos($_SERVER['REMOTE_USER'], '\\') + 1));

            $user = get_userdatabylogin($username);
            if (!is_a($user, 'WP_User')) {
                // Create the user
                $newUserId = iisauth_create_wp_user($username);
                if (!is_a($newUserId, 'WP_Error')) {
                    $user = get_userdatabylogin($username);
                }
            }

            if ($user && $username == $user->user_login) {
                // Clean buffers
                ob_clean();

                // Feed WordPress a double-MD5 hash (MD5 of value generated in check_passwords)
                $password = md5($user->user_pass);

                // User is now authorized; force WordPress to use the generated password
                $using_cookie = true;
                wp_setcookie($user->user_login, $password, $using_cookie);

                // Redirect and stop execution
                $redirectUrl = home_url();
                if (isset($_GET['redirect_to'])) {
                    $redirectUrl = $_GET['redirect_to'];
                }
                wp_redirect($redirectUrl);
                exit;
            }
        }
    }
}

What happens here is that the authentication header is sent when needed, and once a user is provided by IIS we just log the user in to Wordpress and redirect him. The real “magic” is in this part:

// Checks if IIS provided a user, and if not, rejects the request with 401
// so that it can be authenticated
if (iisauth_is_lan_user() && iisauth_using_ie() && empty($_SERVER["REMOTE_USER"])) {
    nocache_headers();
    header("HTTP/1.1 401 Unauthorized");
    ob_clean();
    exit();
}

Which does exactly what I described before in this post…
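By the way, the iisauth_is_lan_user() and iisauth_using_ie() helpers aren't shown above. Here's a minimal sketch of what they could look like; the LAN address range is obviously an assumption you'll want to adapt to your own network:

<?php
// Hypothetical sketch of the helper functions used above; the actual plugin
// may implement these checks differently.
function iisauth_is_lan_user() {
    // Assumption: the corporate LAN uses a private 10.x.x.x address range.
    return strpos($_SERVER['REMOTE_ADDR'], '10.') === 0;
}

function iisauth_using_ie() {
    // Windows Authentication is only attempted for Internet Explorer on Windows.
    $userAgent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    return strpos($userAgent, 'MSIE') !== false && strpos($userAgent, 'Windows') !== false;
}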


Well of course, feel free to use this plugin! Here's the source code (1.44 kb). [Update] Code for Wordpress 3.1+: IISAUTH.PHP (3.4 KB)

(And big thanks to our marketing manager for allowing me to distribute this little plugin! Again proof for the no-nonsense spirit at RealDolmen!)

Geographically distributing Windows Azure applications using Traffic Manager

With the downtime of Amazon EC2 this week, it seems a lot of websites “in the cloud” are down at the moment. Most comments I read on Twitter (and that I also made, jokingly :-)) are along the lines of “outrageous!” and “don’t go cloud!”. While I understand these comments, I think they are wrong. These “clouds” can fail. They are even designed to fail, and often provide components and services that allow you to cope with those failures. You just have to expect failure at some point in time and build it into your application.

Let me rephrase my introduction. I just told you to expect failure, but I actually believe that clouds don’t “fail”. Yes, you may think I’m a lunatic there, providing you with two opposing views in two paragraphs. Allow me to explain: a “failing” cloud is actually a “scaling” cloud; the only thing is, it’s scaling down to zero. If you design your application so that it can scale out, you should also plan for scaling “in”, eventually to zero. Use different availability zones on Amazon, and if you’re a Windows Azure user: try the new Traffic Manager CTP!

The Windows Azure Traffic Manager provides several methods of distributing internet traffic among two or more hosted services, all accessible with the same URL, in one or more Windows Azure datacenters. It’s basically a distributed DNS service that knows which Windows Azure Services are sitting behind the traffic manager URL and distributes requests based on three possible profiles:

  • Failover: all traffic is mapped to one Windows Azure service, unless it fails. It then directs all traffic to the failover Windows Azure service.
  • Performance: all traffic is mapped to the Windows Azure service “closest” (in routing terms) to the client requesting it. This will direct users from the US to one of the US datacenters, European users will probably end up in one of the European datacenters and Asian users, well, somewhere in the Asian datacenters.
  • Round-robin: Just distribute requests between various Windows Azure services defined in the Traffic Manager policy

As a sample, I have uploaded my Windows Azure package to two datacenters: EU North and US North Central. Both have their own URL.

I have created a “performance” policy at the Traffic Manager URL, which redirects users to the nearest datacenter (and fails over if one goes down):

Windows Azure Traffic Manager geo replicate

If one of the datacenters goes down, the other service will take over. And as a bonus, I get reduced latency because users use their nearest datacenter.

So what does this have to do with my initial thoughts? Well: design to scale, using a technique appropriate to your specific situation. Use all the tools the platform has to offer, and prepare for scaling out and for scaling “in”, even to zero instances. And as with backups: test your disaster strategy every now and then.

PS: Artwork based on Josh Twist’s sketches

Windows Azure SDK for PHP v3.0.0 BETA released

Microsoft and RealDolmen are very proud to announce the availability of the Windows Azure SDK for PHP v3.0.0 BETA on CodePlex. This release is something we’ve been working on over the past few weeks, implementing a lot of new features that enable you to fully leverage the Windows Azure platform from PHP.

This release is BETA software, which means it is feature complete. However, since we have one breaking change, we’re releasing a BETA first to ensure every edge case is covered. If you are using the current version of the Windows Azure SDK for PHP, feel free to upgrade and let us know your comments.

A comment we received a lot on previous versions was that, for table storage, datetime values were returned as strings and parsing them was something you as a developer had to do yourself. In this release, we’ve broken that: table storage entities now return native PHP DateTime objects instead of strings for Edm.DateTime properties.

The feature we’re most proud of is the support for the management API: you can now instruct Windows Azure from PHP, where you would normally do this through the web portal. This means that you can fully automate your Windows Azure deployment, scaling, … from a PHP script. I even have a sample of this: check my blog post “Windows Azure and scaling: how? (PHP)”.

Another nice feature is the new logging infrastructure: if you are used to working with loggers and appenders (like, for example, in Zend Framework), this should feel familiar. It is used to provide logging capabilities in a major production site (yes, that is PHP on Windows Azure you’re seeing there!). Thanks, Lucian, for contributing this!

Last but not least: the session handler has been updated. It relied on table storage for storing session data; however, large session objects were not supported since table storage has a maximum amount of data per record. If you are creating large session objects (which I do not recommend as a best practice), feel free to pass a blob storage client to the session handler instead to have sessions stored in blob storage.
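Here's a small sketch of that. Treat the constructor arguments as assumptions and check the SDK documentation for the exact signature; the 'phpsessions' container name is just an example.

<?php
// Sketch: store (large) sessions in blob storage instead of table storage.
// Assumption: the session handler accepts a blob storage client; verify the
// exact constructor signature against the SDK documentation.
require_once 'Microsoft/AutoLoader.php';

$blobClient = new Microsoft_WindowsAzure_Storage_Blob(
    'blob.core.windows.net', 'myaccount', '<account-key>');

$sessionHandler = new Microsoft_WindowsAzure_SessionHandler($blobClient, 'phpsessions');
$sessionHandler->register();

session_start();
// Sessions larger than a table storage record now simply end up in blob storage.
$_SESSION['cart'] = str_repeat('x', 2 * 1024 * 1024);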

To close this post, here’s the official changelog:

  • Breaking change: Table storage entities now return DateTime objects instead of strings for Edm.DateTime properties
  • New feature: Service Management API in the form of Microsoft_WindowsAzure_Management_Client
  • New feature: logging infrastructure on top of table storage
  • Session provider now works on table storage for small sessions; larger sessions can be persisted to blob storage
  • Queue storage client: new hasMessages() method
  • Introduction of an autoloader class, increasing speed for class resolving
  • Several minor bugfixes and performance tweaks

Get it while it’s hot on CodePlex.

Do you prefer PEAR? Well... a pear channel-discover followed by pear install pearplex/PHPAzure should do the trick. Make sure you allow BETA stability packages in order to get the fresh bits.

PS: We’re running a PHP on Windows Azure contest in Belgium and surrounding countries. The contest is closed for registration, but there’s good value in the blog posts coming out of it. Check the contest website for more details.

Slides for my talk at MIX11: Fun with ASP.NET MVC 3, MEF and NuGet

As promised, here are the slides and demo code for my talk "Fun with ASP.NET MVC 3, MEF and NuGet" I presented at MIX in Las Vegas.

Abstract: "So you have a team of developers… And a nice architecture to build on… How about making that architecture easy for everyone and getting developers up to speed quickly? Learn all about integrating the managed extensibility framework (MEF) and ASP.NET MVC with some NuGet sauce for creating loosely coupled, easy to use architectures that anyone can grasp."

The recorded session is available on Channel 9.


The slide deck:

The demo code: 2011-04-14 Fun with ASP.NET MVC 3 (6.76 mb)

Enjoy! And thanks for joining!


Official Belgium TechDays 2011 Windows Phone 7 app released

I’m proud to announce that we (RealDolmen) have released the official Belgium TechDays 2011 Windows Phone 7 app! The app gives you the ability to browse current and upcoming sessions, as well as provide LIVE feedback to the event organizers. Is the current session awesome? Let us know! Is the food too spicy? Let us know!

Why am I blogging this? Well: one of the first sessions at the event will be “Silverlight, Windows Phone 7, Windows Azure, jQuery, OData and RIA Services. Shaken, not stirred”, delivered by Kevin Dockx and myself. It will feature this Windows Phone 7 application as well as the backoffice for it (Silverlight), the mobile web front-end (jQuery Mobile), the web front-end (MVC), the integration points with the event organizers and the deployment on Windows Azure. Not to mention the twitterwall that integrates with all this, and the top sessions ranking that will be displayed based on input from all the channels I mentioned before. In short: I’m blogging this to plug our session :-)

Interested in what we’ve built? Or just a consumer of WP7 apps? Download the app, or get it directly by clicking the picture below:

Download the official Techdays 2011 application for WIndows Phone 7

See you at TechDays!