Maarten Balliauw {blog}

ASP.NET MVC, Microsoft Azure, PHP, web development ...


Rewriting WCF OData Services base URL with load balancing & reverse proxy

When scaling out an application to multiple servers, often a form of load balancing or reverse proxying is used to provide external users access to a web server. For example, one can be in the situation where two servers are hosting a WCF OData Service and are exposed to the Internet through either a load balancer or a reverse proxy. Below is a figure of such setup using a reverse proxy.

WCF OData Services hosted in reverse proxy

As you can see, the external server listens on the URL www.example.com, while both internal servers are listening on their respective host names. Guess what: whenever someone accesses a WCF OData Service through the reverse proxy, the XML generated by one of the two backend servers is slightly off:

OData feed with an incorrect base URL

While this is valid XML, the hostname provided to all our clients is wrong: the host name of the backend machine is in there, not the hostname of the reverse proxy URL…

How can this be solved? There are a couple of answers to that. One that popped into our minds was to rewrite the XML on the reverse proxy and simply “string.Replace” the invalid URLs. This would probably work, but it feels… dirty. We chose to create a WCF message inspector instead, which simply changes this at the WCF level on each backend node.

Our inspector looks like this (note that I hardcoded the base hostname in here, which obviously should not be done in your code):

public class RewriteBaseUrlMessageInspector
    : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        if (WebOperationContext.Current != null && WebOperationContext.Current.IncomingRequest.UriTemplateMatch != null)
        {
            UriBuilder baseUriBuilder = new UriBuilder(WebOperationContext.Current.IncomingRequest.UriTemplateMatch.BaseUri);
            UriBuilder requestUriBuilder = new UriBuilder(WebOperationContext.Current.IncomingRequest.UriTemplateMatch.RequestUri);

            baseUriBuilder.Host = "www.example.com";
            requestUriBuilder.Host = baseUriBuilder.Host;

            OperationContext.Current.IncomingMessageProperties["MicrosoftDataServicesRootUri"] = baseUriBuilder.Uri;
            OperationContext.Current.IncomingMessageProperties["MicrosoftDataServicesRequestUri"] = requestUriBuilder.Uri;
        }

        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Noop
    }
}

There’s not much rocket science in there, although some noteworthy actions are being performed:

  • The current WebOperationContext is queried for the full incoming request URI as well as the base URI. These values are based on the local server, in our example “srvweb01” and “srvweb02”.
  • The Host part of that URI is replaced with the external hostname, www.example.com.
  • These two values are stored in the current OperationContext’s IncomingMessageProperties. Apparently the keys MicrosoftDataServicesRootUri and MicrosoftDataServicesRequestUri affect the URL being generated in the XML feed.

To apply this inspector to our WCF OData Service, we’ve created a behavior and applied the inspector to our service channel. Here’s the code for that:

[AttributeUsage(AttributeTargets.Class)]
public class RewriteBaseUrlBehavior
    : Attribute, IServiceBehavior
{
    public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        // Noop
    }

    public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
    {
        // Noop
    }

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
        {
            foreach (EndpointDispatcher endpointDispatcher in channelDispatcher.Endpoints)
            {
                endpointDispatcher.DispatchRuntime.MessageInspectors.Add(
                    new RewriteBaseUrlMessageInspector());
            }
        }
    }
}

This behavior simply loops all channel dispatchers and their endpoints and applies our inspector to them.

Finally, all that’s left to do to fix our reverse proxy issue is to annotate our WCF OData Service with this behavior attribute:

[RewriteBaseUrlBehavior]
public class PackageFeedHandler
    : DataService<PackageEntities>
{
    // ...
}

Working with URL routing

A while ago, I posted about Using dynamic WCF service routes. The technique described above is also appropriate for services created using that approach, although the source code for the inspector has to be slightly different.

public class RewriteBaseUrlMessageInspector
    : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        if (WebOperationContext.Current != null && WebOperationContext.Current.IncomingRequest.UriTemplateMatch != null)
        {
            UriBuilder baseUriBuilder = new UriBuilder(WebOperationContext.Current.IncomingRequest.UriTemplateMatch.BaseUri);
            UriBuilder requestUriBuilder = new UriBuilder(WebOperationContext.Current.IncomingRequest.UriTemplateMatch.RequestUri);

            var routeData = MyGet.Server.Routing.DynamicServiceRoute.GetCurrentRouteData();
            var route = routeData.Route as Route;
            if (route != null)
            {
                string servicePath = route.Url;
                servicePath = Regex.Replace(servicePath, @"({\*.*})", ""); // strip out catch-all
                foreach (var routeValue in routeData.Values)
                {
                    if (routeValue.Value != null)
                    {
                        servicePath = servicePath.Replace("{" + routeValue.Key + "}", routeValue.Value.ToString());
                    }
                }

                if (!servicePath.StartsWith("/"))
                {
                    servicePath = "/" + servicePath;
                }

                if (!servicePath.EndsWith("/"))
                {
                    servicePath = servicePath + "/";
                }

                requestUriBuilder.Path = requestUriBuilder.Path.Replace(baseUriBuilder.Path, servicePath);
                requestUriBuilder.Host = baseUriBuilder.Host;
                baseUriBuilder.Path = servicePath;
            }

            OperationContext.Current.IncomingMessageProperties["MicrosoftDataServicesRootUri"] = baseUriBuilder.Uri;
            OperationContext.Current.IncomingMessageProperties["MicrosoftDataServicesRequestUri"] = requestUriBuilder.Uri;
        }

        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Noop
    }
}

The idea is identical, except that we’re updating the incoming URL path for reasons described in the aforementioned blog post.

Enjoy!

Running Memcached on Windows Azure for PHP

After three conferences in two weeks with a lot of “airport time”, which typically converts into “let’s code!” time, I think I may have tackled a commonly requested Windows Azure feature for PHP developers. Some sort of distributed caching is always a great thing to have when building scalable services and applications. While Windows Azure offers a distributed caching layer in the form of Windows Azure Caching, that component currently lacks support for non-.NET technologies. I’ve heard there’s work being done there, but that’s not very interesting if you are building your app today. This blog post will show you how to modify a Windows Azure deployment to run and use Memcached in the easiest possible manner.

Note: this post focuses on PHP but can also be used to set up Memcached on Windows Azure for NodeJS, Java, Ruby, Python, …

Related downloads:
The scaffolder source code: MemcachedScaffolderSource.zip (1.12 mb)
The scaffolder, packaged and ready for use: MemcachedScaffolder.phar (2.87 mb)

The short version: use my scaffolder

As you may know, when working with PHP on Windows Azure and making use of the Windows Azure SDK, you can use and create scaffolders. The Windows Azure SDK for PHP includes a powerful scaffolding feature that allows users to quickly set up a pre-packaged and configured website ready for Windows Azure.

If you want to use Memcached in your project, do the following:

  • Download my custom MemcachedScaffolder (MemcachedScaffolder.phar (2.87 mb)) and make sure it is located either under the scaffolders folder of the Windows Azure SDK for PHP, or that you remember the path to this scaffolder
  • Run the scaffolder from the command line: (note: best use the latest SVN version of the command line tools)
scaffolder run -out="c:\temp\myapp" -s="MemcachedScaffolder"

  • Find the newly created Windows Azure project structure in the folder you’ve used.
  • In your PHP code, simply add require_once 'memcache.inc.php'; to your code, and enjoy the $memcache variable which will hold a preconfigured Memcached client for you to use. This $memcache instance will also be automatically updated when adding more server instances or deleting server instances.

That’s it!

The long version: what this scaffolder does behind the scenes

Of course, behind this “developers can simply use 1 line of code” trick a lot of things happen in the background. Let’s go through the places where I’ve made changes to the default scaffolder.

The ServiceDefinition.csdef file

Let’s start with the beginning: when running Memcached in a Windows Azure instance, you’ll have to specify a port number for it to use. As such, the ServiceDefinition.csdef file, which defines the datacenter configuration for your app, looks like the following:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="PhpOnAzure" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="PhpOnAzure.Web" enableNativeCodeExecution="true">
    <Sites>
      <Site name="Web" physicalDirectory="./PhpOnAzure.Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="HttpEndpoint" />
        </Bindings>
      </Site>
    </Sites>
    <Startup>
      <Task commandLine="add-environment-variables.cmd" executionContext="elevated" taskType="simple" />
      <Task commandLine="install-php.cmd" executionContext="elevated" taskType="simple">
        <Environment>
          <Variable name="EMULATED">
            <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
          </Variable>
        </Environment>
      </Task>
      <Task commandLine="memcached.cmd" executionContext="elevated" taskType="background" />
      <Task commandLine="monitor-environment.cmd" executionContext="elevated" taskType="background" />
    </Startup>
    <Endpoints>
      <InputEndpoint name="HttpEndpoint" protocol="http" port="80" />
      <InternalEndpoint name="MemcachedEndpoint" protocol="tcp" />
    </Endpoints>
    <Imports>
      <Import moduleName="Diagnostics"/>
    </Imports>
    <ConfigurationSettings>
    </ConfigurationSettings>
  </WebRole>
</ServiceDefinition>

Note the <InternalEndpoint name="MemcachedEndpoint" protocol="tcp" /> line. It defines that the web role instance should open some TCP port in the firewall with the name MemcachedEndpoint and expose that to the other virtual machines in your deployment. We’ll use this named endpoint later when starting Memcached.
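
As an aside: the actual port behind that named endpoint is only assigned at runtime, but it can be resolved through the service runtime API. A minimal C# sketch of the same lookup the PowerShell scripts below perform (the helper class is mine, not part of the scaffolder):

using Microsoft.WindowsAzure.ServiceRuntime;

public static class MemcachedEndpointInfo
{
    // Resolve the port assigned to the MemcachedEndpoint internal endpoint at runtime.
    public static int GetPort()
    {
        return RoleEnvironment.CurrentRoleInstance
            .InstanceEndpoints["MemcachedEndpoint"]
            .IPEndpoint.Port;
    }
}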

Something else in this file is noteworthy: the startup tasks under the <Startup> element. With the default scaffolder, the first two tasks (namely add-environment-variables.cmd and install-php.cmd) are also present. The first does nothing more than exposing some information about your deployment through environment variables; the second one does what its name implies: install PHP on your virtual machine. The two scripts I added, memcached.cmd and monitor-environment.cmd, are used to bootstrap Memcached. Note that these two run as background tasks: I wanted to have them always running, to ensure that when Memcached crashes the task can simply restart it.

The php folder

If you’ve played with the default scaffolder in the Windows Azure SDK for PHP, you probably know that the PHP installation in Windows Azure is a “default” one. This means: no memcached extension is in there. To overcome this, simply copy the correct php_memcache.dll extension into the /php/ext folder and Windows Azure (well, the install-php.cmd script) will know what to do with it.

Memcached.cmd and Memcached.ps1

Under the application’s bin folder, I’ve added some additional startup tasks. The one responsible for starting (and maintaining a running instance of) Memcached is, of course, Memcached.cmd. This one simply delegates the call to Memcached.ps1, of which the following is the source code:

[Reflection.Assembly]::LoadWithPartialName("Microsoft.WindowsAzure.ServiceRuntime")

# Start memcached. To infinity and beyond!
while (1) {
    $p = [diagnostics.process]::Start("memcached.exe", "-m 64 -p " + [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::CurrentRoleInstance.InstanceEndpoints["MemcachedEndpoint"].IPEndpoint.Port)
    $p.WaitForExit()
}

To be honest, this file is pretty simple. It loads the Windows Azure ServiceRuntime assembly, which contains all kinds of information about the current deployment. Next, I start an infinite loop that continuously starts a new memcached.exe process consuming 64 MB of RAM and listening on the port specified by the MemcachedEndpoint defined earlier. Because the loop starts a new process whenever the previous one exits, a crashed Memcached instance is simply replaced.

Monitor-environment.cmd and Monitor-environment.ps1

The monitor-environment.cmd script takes the same approach as the memcached.cmd script: just pass the command along to a PowerShell script, in the form of monitor-environment.ps1. I do want to show you the monitor-environment.cmd script however, as there’s one difference in there: I’m changing the file system permissions for my application (the icacls line).

@echo off
cd "%~dp0"

icacls %RoleRoot%\approot /grant "Everyone":F /T

powershell.exe Set-ExecutionPolicy Unrestricted
powershell.exe .\monitor-environment.ps1

The reason for changing permissions is simple: I want to make sure I can write a PHP script to disk every minute. Yes, you heard me! I’m using PowerShell (in the monitor-environment.ps1 script) to generate PHP code. Here’s the PowerShell:

[Reflection.Assembly]::LoadWithPartialName("Microsoft.WindowsAzure.ServiceRuntime")

# To infinity and beyond!
while(1) {
    ##########################################################
    # Create memcached include file for PHP
    ##########################################################

    # Dump all memcached endpoints to ../memcached-servers.php
    $memcached = "<?php`r`n"
    $memcached += "`$memcachedServers = array("

    $currentRolename = [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::CurrentRoleInstance.Role.Name
    $roles = [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::Roles
    foreach ($role in $roles.Keys | sort-object) {
        if ($role -eq $currentRolename) {
            $instances = $roles[$role].Instances
            for ($i = 0; $i -lt $instances.Count; $i++) {
                $endpoints = $instances[$i].InstanceEndpoints
                foreach ($endpoint in $endpoints.Keys | sort-object) {
                    if ($endpoint -eq "MemcachedEndpoint") {
                        $memcached += "array(`""
                        $memcached += $endpoints[$endpoint].IPEndpoint.Address
                        $memcached += "`" ,"
                        $memcached += $endpoints[$endpoint].IPEndpoint.Port
                        $memcached += "), "
                    }
                }
            }
        }
    }

    $memcached += ");"

    Write-Output $memcached | Out-File -Encoding Ascii ../memcached-servers.php

    # Restart the loop in 1 minute
    Start-Sleep -Seconds 60
}

The output is written to the memcached-servers.php file every minute. Why every minute? Well, if servers are added or removed I want my application to use the correct set of servers. This leaves a possible gap of up to one minute during which some server may not be available, but you can easily catch any error related to this in your PHP code (or add a comment to this blog post telling me what a better interval would be). Anyway, here’s the sample output:

<?php
$memcachedServers = array(array('10.0.0.1', 11211), array('10.0.0.2', 11211), );

All that’s left to do is consume this array. I’ve added a default memcache.inc.php file in the root of the web role to make things easy:

<?php
require_once $_SERVER["RoleRoot"] . '\\approot\\memcached-servers.php';
$memcache = new Memcache();
foreach ($memcachedServers as $memcachedServer) {
    // Map loopback addresses (local development fabric) to localhost
    if (strpos($memcachedServer[0], '127.') === 0) {
        $memcachedServer[0] = 'localhost';
    }
    $memcache->addServer($memcachedServer[0], $memcachedServer[1]);
}

Include this file in your code and you have a full-blown distributed cache available in your Windows Azure deployment! Here’s a sample of some operations that can be done on Memcached:

<?php
error_reporting(E_ALL);
require_once 'memcache.inc.php';

var_dump($memcachedServers);
var_dump($memcache->getVersion());

$memcache->set('key1', 'value1', false, 30);
echo $memcache->get('key1');

$memcache->set('var_key', 'some really big variable', MEMCACHE_COMPRESSED, 50);
echo $memcache->get('var_key');

That’s it!

Conclusion and feedback

This is just a fun project I’ve been working on when lonely and bored at airports. However, if you think this is valuable and should, in your opinion, be made available as a standard thing in the Windows Azure SDK for PHP, let me know. I’ll be happy to push this into the main branch and make sure it’s available in a future release.

Comments or praise? There’s a comment form right below this post!

Why MyGet uses Windows Azure

Recently one of the Tweeps following me started fooling around and hit one of my sweet spots: Windows Azure. Basically, he mocked me for using Windows Azure for MyGet, a website with enough users but not enough to justify the “scalability” aspect he thought Windows Azure was offering. Since Windows Azure is much, much more than scalability alone, I decided to do a quick writeup about the various reasons why we use Windows Azure for MyGet. And those are not scalability.

First of all, here’s a high-level overview of our deployment, which may illustrate some of the aspects below:

Deployment overview diagram


Costs

Windows Azure is cheap. Cheap as in cost-effective, not as in, well, sleazy. Many will disagree with me, but cost-wise Windows Azure can be really cheap in some cases and very expensive in others. For example, if someone asks me whether they should move to Windows Azure and they currently have one server running 300 small sites, I’d probably tell them not to move, as it would be a tough price comparison.

With MyGet we run 2 Windows Azure instances in 2 datacenters across the globe (one in the US and one in the EU). For $180.00 per month this means 2 great machines in two very distant regions of the globe. You can probably find those with other hosters as well, but will they manage your machines? Patch and update them? Probably not, for that amount. In our scenario, Windows Azure is cheap.

Feel free to look at the cost calculator tool to estimate usage costs.

Traffic Manager

Traffic Manager, a great (beta) product in the Windows Azure offering, allows us to build geographically distributed applications. For example, US users of MyGet will end up in the US datacenter, European users will end up in the EU datacenter. This is great, and we can easily add extra locations to this policy and have, for example, a third location in Asia.

Next to geographically distributing MyGet, Traffic Manager also ensures that if one datacenter goes down, the DNS pool will consist of only “live” datacenters and thus provide datacenter fail-over. Not ideal, as the web application will then no longer be served from the server closest to the end user, but at least the application will not go down.

One problem we have with this is storage. We use Windows Azure storage (blobs, tables and queues) as those only cost $0.12 per GB. Distributing the application does mean that our US datacenter server has to access storage in the EU datacenter, which of course adds some latency. We try to reduce this using extensive caching on all sides, but it’d be nicer if Traffic Manager allowed us to set up georeplication for storage as well. This only affects storing package metadata and packages. Reading packages is not affected, because we’re using the Windows Azure CDN for that.

CDN

The Windows Azure Content Delivery Network allows us to serve users fast. The main use case for MyGet is accessing and downloading packages. Ok, updating has some latency due to the restrictions mentioned above, but if you download a package from MyGet it will always come from a CDN node near the end user, ensuring low latency and fast access. Given that the CDN is just a checkbox on the management pages, integrating with it is a breeze. The only thing we’ve struggled with is finding an acceptable caching policy to ensure stale data is limited.

Windows Azure AppFabric Access Control

MyGet is not one application. MyGet is three applications: our development environment, staging and production. In fact, we even plan for tenants, so every tenant is in fact its own application. To streamline, manage and maintain a clear overview of which user can authenticate to which application via which identity provider, we use ACS to facilitate MyGet authentication.

To give you an example: our dev environment allows logging in via OpenID on a development machine. Production allows for OpenID on a live environment. In staging, we only use Windows Live ID and Facebook, whereas our production website uses different identity providers. Tenants will, in the future, be given the option to authenticate against their own ADFS server; we’re pretty sure ACS will allow us to simply configure that and ensure only tenant X can use that ADFS server.

ACS has been a great time saver and is definitely something we want to use in future projects. It really eases common authentication pains and acts as a service bus between users, identity providers and our applications.

Windows Azure AppFabric Caching

Currently we don’t use Windows Azure AppFabric Caching in our application. We use the ASP.NET in-memory cache on all machines, but do feel the need for a distributed caching solution. While AppFabric Caching is appealing, we’re thinking about deploying Memcached in our application instead because of the cost structure involved. But we might as well end up with Windows Azure AppFabric Caching anyway, as it integrates nicely with our current codebase.

Conclusion

In short, Windows Azure is much more than hosting and scalability. It’s the building blocks available, such as Traffic Manager, CDN and the Access Control Service, that make our lives easier. The pricing structure is not always that transparent, but if you dig a little into it you’ll find affordable solutions that are really easy to use because you don’t have to roll your own.

Book review: Microsoft Windows Azure Development Cookbook

Over the past few months, I’ve been doing technical reviewing for a great Windows Azure book: the Windows Azure Development Cookbook, published by Packt. During this review I had no idea who the author of the book was, but after publishing it turns out the author is none other than my fellow Windows Azure MVP Neil Mackenzie! If you read his blog, you know you should immediately buy this book.

Why? Well, Neil usually goes both broad and deep: all required context for understanding a recipe is given, and the recipe itself goes deep enough to know most of the ins and outs of a specific feature of Windows Azure. Well written, to the point and clear to every reader, both novice and expert.

The book is one of a series of cookbooks published by Packt. They are intended to provide “recipes” showing how to implement specific techniques in a particular technology. They don’t cover getting-started scenarios, but do cover some basic techniques, some more advanced techniques and usually one or two expert techniques. From the cookbooks I’ve read, this approach works and should get you up to speed really quick. And that’s no different with this one.

Here’s a chapter overview:

  1. Controlling Access in the Windows Azure Platform
  2. Handling Blobs in Windows Azure
  3. Going NoSQL with Windows Azure Tables
  4. Disconnecting with Windows Azure Queues
  5. Developing Hosted Services for Windows Azure
  6. Digging into Windows Azure Diagnostics
  7. Managing Hosted Services with the Service Management API
  8. Using SQL Azure
  9. Looking at the Windows Azure AppFabric

An interesting sample chapter, on the Service Management API, can be found here.

Oh, and before I forget: Neil, congratulations on your book! It was a pleasure doing the reviewing!

Windows Azure Accelerator for Web Roles

One of the questions I often get around Windows Azure is: “Is Windows Azure interesting for me?”. It’s a tough one, because most of the time when someone asks that question they already have a server somewhere that hosts 100 websites. In the full-fledged Windows Azure model, that would mean 100 x 2 (we want the SLA) = 200 Windows Azure instances. And a stroke at the end of the month when the bill arrives. Microsoft’s DPE team has released something very interesting for those situations though: the Windows Azure Accelerator for Web Roles.

In short, the WAAWR (no way I’m going to write Windows Azure Accelerator for Web Roles out every time!) is a means of creating virtual web sites on the IIS server running on a Windows Azure instance. Add “multi-instance” to that and you have a free tool that creates a server farm for you!

The features on paper:

  • Deploy sites to Windows Azure in less than 30 seconds
  • Enables deployments to multiple Web Role instances using Web Deploy
  • Saves Web Deploy packages & IIS configuration in Windows Azure storage to provide durability
  • A web administrator portal for managing web sites deployed to the role
  • The ability to upload and manage SSL certificates
  • Simple logging and diagnostics tools

Interesting… Let’s go for a ride!

Obtaining & installing the Windows Azure Accelerator for Web Roles

Installing the WAAWR is as easy as download, extract, buildme.cmd and you’re done. After that, Visual Studio 2010 (or Visual Studio Web Developer Express!) features a new project template:

Create new Windows Azure Accelerator for Web Roles project

Click OK, enter the required information (such as a storage account that will be used for synchronizing the different server instances, and an administrator account). After that, enable remote desktop and publish. That’s it. I’ve never ever set up a web farm more quickly than that.

Creating a web site

After deploying the solution you created in Visual Studio, browse to the live deployment and log in with the administrator credentials you created when creating the project. This will give you a nice looking web interface which allows you to create virtual web sites and gain some insight into what’s happening in your server farm.

I’ll create a new virtual website on my server farm:

Create a site in Windows Azure Accelerator for Web Roles

After clicking create, we can try to publish an ASP.NET MVC application.

Publishing a web site

For testing purposes I created a simple ASP.NET MVC application. Since the default project template already has a high enough “Hello World factor”, let’s immediately right-click the project name and hit Publish. Deploying an application to the newly created Windows Azure web farm is as easy as specifying the following parameters:

Windows Azure Web Deploy

One Publish click later, you are done. And now that the application is deployed on a web farm instance, I can see the website itself but also… some statistics :-)

Statistics for the deployed web site

Conclusion

The newly released Windows Azure Accelerator for Web Roles is, IMHO, the easiest, fastest manner to deploy a multi-site web farm on Windows Azure. Other options, like the ones proposed by Steve Marx on his blog, do work, but are only suitable if you are billing your customer by the hour.

The fact that it uses Web Deploy to publish applications to the system, and the fact that this just works behind a corporate firewall and annoying proxy, is just fabulous!

This also has a downside: if I want to push my PHP applications to Windows Azure in a jiffy, chances are this will be a problem. Not on Windows (though not ideal there either), but certainly when publishing apps from other platforms. Is that a self-imposed challenge? Nope. Web Deploy does not seem to have an open protocol (that I know of), and while reverse engineering it is possible, I will not risk the legal consequences :-) However, some reverse engineering of the WAAWR itself taught me that websites are stored as a ZIP package on blob storage, and there’s a PHP SDK for that. As such, a nifty workaround is possible, if you get your head around the ZIP file folder structure.

My conclusion in short: if you ever receive the question “Is Windows Azure interesting for me?” from someone who wants to host a bunch of websites on it? It is. And it’s easy.

A first look at Windows Azure AppFabric Applications

After the Windows Azure AppFabric team announced the availability of Windows Azure AppFabric Applications (preview), I signed up for early access immediately and got in. After installing the tools and creating a namespace through the portal, I decided to give it a try to see what it’s all about. Note that Neil Mackenzie also has an extensive post on “WAAFapps” which I recommend you read as well.

So what is this Windows Azure AppFabric Applications thing?

Before answering that question, let’s have a brief look at what Windows Azure is today. According to Microsoft, Windows Azure is a “PaaS” (Platform-as-a-Service) offering. What that means is that Windows Azure offers a series of platform components like compute, storage, caching, authentication, a service bus, a database, a CDN, … to your applications.

Consuming those components is pretty low level though: in order to use, let’s say, caching, one has to add the required references, make some web.config changes and open up a connection to these things. Ok, an API is provided, but it’s not as if you can seamlessly integrate caching into an application in seconds (in the way you can integrate file system access, which you literally can do in seconds).
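
To make that concrete, here is roughly what the “low level” route looks like with the AppFabric Caching client — a minimal sketch in which the cache namespace host and the authentication token are placeholders you would normally keep in configuration:

// Hand-rolled Windows Azure AppFabric Caching setup (sketch; host and token are placeholders).
using System.Collections.Generic;
using System.Security;
using Microsoft.ApplicationServer.Caching;

public static class RawCachingSetup
{
    public static DataCache CreateCache()
    {
        var configuration = new DataCacheFactoryConfiguration();

        // The cache endpoint of your namespace, e.g. <namespace>.cache.windows.net on port 22233.
        configuration.Servers = new List<DataCacheServerEndpoint>
        {
            new DataCacheServerEndpoint("mynamespace.cache.windows.net", 22233)
        };

        // The authentication token from the management portal, as a read-only SecureString.
        var token = new SecureString();
        foreach (char c in "<authentication-token>")
        {
            token.AppendChar(c);
        }
        token.MakeReadOnly();
        configuration.SecurityProperties = new DataCacheSecurity(token);

        var factory = new DataCacheFactory(configuration);
        return factory.GetDefaultCache();
    }
}

Compare that plumbing with the generated ServiceReferences factory shown later in this post, where all of it disappears into the canvas.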

Meet Windows Azure AppFabric Applications. Windows Azure AppFabric Applications (why such long names, Microsoft!) redefines the concept of Platform-as-a-Service: where Windows Azure out of the box is more like a “Platform API-as-a-Service”, Windows Azure AppFabric Applications offers tools and platform support for easily integrating the various Windows Azure components.

This “redefinition” of Windows Azure introduces some new concepts: in Windows Azure you have roles and role instances. In AppFabric Applications you don’t have that concept: AFA (yes, I managed to abbreviate it!) uses so-called Containers. A Container is a logical unit in which one or more services of an application are hosted. For example, if you have 2 web applications, caching and SQL Azure, you will (by default) have one Container containing 2 web applications + 2 service references: one for caching, one for SQL Azure.

Containers are not limited to one role or role instance: a container is a set of predeployed role instances on which your applications will run. For example, if you add a WCF service, chances are that this will be part of the same container. Or a different one, if you specify otherwise.

It’s pretty interesting that you can scale containers separately. For example, one can have 2 scale units for the container containing web applications, 3 for the WCF container, … A scale unit is not necessarily just one extra instance: it depends on how many services are in a container. In fact, you shouldn’t care anymore about role instances and virtual machines: with AFA (my abbreviation for Windows Azure AppFabric Applications, remember) one can now truly care about only one thing: the application you are building.

Hello, Windows Azure AppFabric Applications

Visual Studio tooling support

To demonstrate a few concepts, I decided to create a simple web application that uses caching to store the number of visits to the website. After installing the Visual Studio tooling, I started with one of the templates contained in the SDK:

Creating a Windows Azure AppFabric Application

This template creates a few things. To start with, 2 projects are created in Visual Studio: one MVC application in which I’ll create my web application, and one Windows Azure AppFabric Application containing a file App.cs which seems to be a DSL for building Windows Azure AppFabric Applications. Opening this DSL gives the following canvas in Visual Studio:

App.cs Windows Azure AppFabric Applications

As you can see, this is the overview of my application’s components as well as how they interact with each other. For example, the “MVCWebApp” has 1 endpoint (to serve HTTP requests) + 2 service references (to Windows Azure AppFabric caching and SQL Azure). This is an important notion, as it will generate integration code for you. For example, in my MVC web application I can find the ServiceReferences.g.cs file containing the following code:

class ServiceReferences
{
    public static Microsoft.ApplicationServer.Caching.DataCache CreateImport1()
    {
        return Service.ExecutingService.ResolveImport<Microsoft.ApplicationServer.Caching.DataCache>("Import1");
    }

    public static System.Data.SqlClient.SqlConnection CreateImport2()
    {
        return Service.ExecutingService.ResolveImport<System.Data.SqlClient.SqlConnection>("Import2");
    }
}

Wait a minute… This looks like a cool thing! It’s basically a factory for components that may be hosted elsewhere! Calling ServiceReferences.CreateImport1() will give me a caching client that I can immediately work with! ServiceReferences.CreateImport2() (you can change these names, by the way) gives me a connection to SQL Azure. No need to add connection strings to the application itself, no need to configure caching in the application itself. Instead, I can configure these things in the Windows Azure AppFabric Application canvas and just consume them blindly in my code. Awesome!

Here’s the code for my HomeController, where I consume the cache. Even my grandmother can write this!

[HandleError]
public class HomeController : Controller
{
    public ActionResult Index()
    {
        var count = 1;
        var cache = ServiceReferences.CreateImport1();
        var countItem = cache.GetCacheItem("visits");
        if (countItem != null)
        {
            count = ((int)countItem.Value) + 1;
        }
        cache.Put("visits", count);

        ViewData["Message"] = string.Format("You are visitor number {0}.", count);

        return View();
    }

    public ActionResult About()
    {
        return View();
    }
}

Now let’s go back to the Windows Azure AppFabric Application canvas, where I can switch to “Deployment View”:

Windows Azure AppFabric Application Deployment View

Deployment View basically lets you decide in which container one or more applications will be running and how many scale units a container should span (see the properties window in Visual Studio for this).

Right-clicking and selecting “Deploy…” deploys my Windows Azure AppFabric Application to the production environment.

The management portal

After logging in to http://portal.appfabriclabs.com, I can manage the application I just published:

Windows Azure AppFabric Application Management Portal

I’m not going to go into much detail, but will highlight some features. The portal enables you to manage your application: deploy/undeploy, scale, monitor, change configuration, … Basically everything you would expect to be able to do. And more! If you look at the monitoring part, for example, you will see some KPIs for your application. Here’s what my sample application shows after being deployed for a few minutes:

Windows Azure AppFabric Applications monitoring and latency

Pretty slick. It even monitors average latencies etc.!

Conclusion

As you can read in this blog post, I’ve been exploring this product and trying out the basics of it. I’m not sure yet if this model will fit every application, but I’m sure a solution like this is where the future of PaaS should be: no longer caring about servers, VMs or instances, just deploy and let the platform figure everything out. My business value is my application, not the fact that it spans 2 VMs.

Now when I say “future of PaaS”, I’m also a bit skeptical… Most customers I work with use COM, require startup scripts to configure the environment, care about the server their application runs on. In fact, some applications will never be able to be deployed on this solution because of that. Where Windows Azure already represents a major shift in terms of development paradigm (a too large step for many!), I think the step to Windows Azure AppFabric Applications is a bridge too far for most people. At present.

But then there are corporations… As corporations are always 10 steps behind, I foresee that this will only become mainstream within the next 5-8 years (for the enterprise). Too bad! I wish most corporate environments moved faster…

If Microsoft wants this thing to succeed, I think they need to work even more on shifting minds to the cloud paradigm and, more specifically, to the PaaS paradigm. Perhaps Windows 8 can be a vehicle for this: if Windows 8 shifts from “programming for a Windows environment” to “programming for a PaaS environment”, people will start following that direction. What the heck, maybe this is even a great model for Joe Average to create “apps” for Windows 8! Just like one submits an app to the AppStore or Marketplace today, he/she could submit an app to a “Windows Marketplace” which in the background just drops everything on a technology like Windows Azure AppFabric Applications.

Geographically distributing Windows Azure applications using Traffic Manager

With the downtime of Amazon EC2 this week, it seems a lot of websites “in the cloud” are down at the moment. Most comments I read on Twitter (and that I also made, jokingly :-)) are along the lines of “outrageous!” and “don’t go cloud!”. While I understand these comments, I think they are wrong. These “clouds” can fail. They are even designed to fail, and often provide components and services that allow you to cope with these failures. You just have to expect failure at some point in time and build it into your application.

Let me rephrase my introduction. I just told you to expect failure, but I actually believe that clouds don’t “fail”. Yes, you may think I’m a lunatic there, providing you with two different and opposing views in 2 paragraphs. Allow me to explain: a “failing” cloud is actually a “scaling” cloud, only thing is: it’s scaling down to zero. If you design your application so that it can scale out, you should also plan for scaling “in”, eventually to zero. Use different availability zones on Amazon, and if you’re a Windows Azure user: try the new Traffic Manager CTP!

The Windows Azure Traffic Manager provides several methods of distributing internet traffic among two or more hosted services, all accessible with the same URL, in one or more Windows Azure datacenters. It’s basically a distributed DNS service that knows which Windows Azure services are sitting behind the Traffic Manager URL and distributes requests based on three possible profiles:

  • Failover: all traffic is mapped to one Windows Azure service, unless it fails. It then directs all traffic to the failover Windows Azure service.
  • Performance: all traffic is mapped to the Windows Azure service “closest” (in routing terms) to the client requesting it. This will direct users from the US to one of the US datacenters, European users will probably end up in one of the European datacenters and Asian users, well, somewhere in the Asian datacenters.
  • Round-robin: just distribute requests between the various Windows Azure services defined in the Traffic Manager policy.

As a sample, I have uploaded my Windows Azure package to two datacenters: EU North and US North Central. Both have their own URL.

I have created a “performance” policy at the URL http://certgen.ctp.trafficmgr.com/, which redirects users to the nearest datacenter (and fails over if one goes down):

Windows Azure Traffic Manager geo replicate

If one of the datacenters goes down, the other service will take over. And as a bonus, I get reduced latency because users use their nearest datacenter.

So what does this have to do with my initial thoughts? Well: design to scale, using an appropriate technique for your specific situation. Use all the tools the platform has to offer, and prepare for scaling out and for scaling “in”, even to zero instances. And as with backups: test your disaster strategy now and then.

PS: Artwork based on Josh Twist’s sketches

Official Belgium TechDays 2011 Windows Phone 7 app released

I’m proud to announce that we (RealDolmen) have released the official Belgium TechDays 2011 Windows Phone 7 app! The official Belgium TechDays 2011 app gives you the ability to browse current & upcoming sessions, as well as provide LIVE feedback to the event organizers. Is the current session awesome? Let us know! Is the food too spicy? Let us know!

Why am I blogging this? Well: one of the first sessions at the event will be “Silverlight, Windows Phone 7, Windows Azure, jQuery, OData and RIA Services. Shaken, not stirred”, delivered by Kevin Dockx and myself. It will feature this Windows Phone 7 application as well as the backoffice for it (Silverlight), the mobile web front-end (jQuery Mobile), the web front-end (MVC), the integration points with the event organizers and the deployment on Windows Azure. Not to mention the twitterwall that integrates with all this. And the top sessions ranking that will be displayed based on input from all the channels I mentioned before. In short: I’m blogging this to plug our session :-)

Interested in what we’ve built? Or just a consumer of WP7 apps? Download the app at http://techdays.realdolmen.com or directly by clicking the picture below:

Download the official TechDays 2011 application for Windows Phone 7

See you at TechDays!

Windows Azure and scaling: how? (PHP)

One of the key ideas behind cloud computing is the concept of scaling. Talking to customers and cloud enthusiasts, many people seem to be unaware of the fact that there is great opportunity in scaling, even for small applications. In this blog post series, I will talk about the following:

Creating and uploading a management certificate

In order to keep things DRY (Don’t Repeat Yourself), I’ll just link you to the previous post (Windows Azure and scaling: how? (.NET)) for this one.

For PHP however, you’ll be needing a .pem certificate. Again, for the lazy, here’s mine (management.pfx (4.05 kb), management.cer (1.18 kb) and management.pem (5.11 kb)). If you want to create one yourself, check this site where you can convert and generate certificates.
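
If you have OpenSSL at hand, converting the .pfx from the .NET post into a .pem should also work along these lines (a sketch, assuming the file names from this series; you’ll be prompted for the .pfx password):

openssl pkcs12 -in management.pfx -out management.pem -nodes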

Building a small command-line scaling tool (in PHP)

In order to be able to scale automatically, let’s build a small command-line tool in PHP. The idea is that you will be able to run the following command on a console to scale to 4 instances:

php autoscale.php "management.pem" "subscription-id" "service-name" "role-name" "production" 4

Or down to 2 instances:

php autoscale.php "management.pem" "subscription-id" "service-name" "role-name" "production" 2

Will this work on Linux? Yup! Will this work on Windows? Yup! Now let’s get started.

The Windows Azure SDK for PHP will be quite handy to do this kind of thing. Download the latest source code (as the Microsoft_WindowsAzure_Management_Client class we’ll be using is not released officially yet).

Our script starts like this:

<?php
// Set include path
$path = array('./library/', get_include_path());
set_include_path(implode(PATH_SEPARATOR, $path));

// Microsoft_WindowsAzure_Management_Client
require_once 'Microsoft/WindowsAzure/Management/Client.php';

This is just making sure all necessary libraries have been loaded. Next, call out to the Microsoft_WindowsAzure_Management_Client class’ setInstanceCountBySlot() method to set the instance count to the requested number. Easy! And in fact even easier than Microsoft’s .NET version of this.

// Do the magic
$managementClient = new Microsoft_WindowsAzure_Management_Client($subscriptionId, $certificateFile, '');

echo "Uploading new configuration...\r\n";

$managementClient->setInstanceCountBySlot($serviceName, $slot, $roleName, $instanceCount);

echo "Finished.\r\n";

Here’s the full script:

<?php
// Set include path
$path = array('./library/', get_include_path());
set_include_path(implode(PATH_SEPARATOR, $path));

// Microsoft_WindowsAzure_Management_Client
require_once 'Microsoft/WindowsAzure/Management/Client.php';

// Some commercial info :-)
echo "AutoScale - (c) 2011 Maarten Balliauw\r\n";
echo "\r\n";

// Quick-and-dirty argument check
if (count($argv) != 7)
{
    echo "Usage:\r\n";
    echo " AutoScale <certificatefile> <subscriptionid> <servicename> <rolename> <slot> <instancecount>\r\n";
    echo "\r\n";
    echo "Example:\r\n";
    echo " AutoScale mycert.pem 39f53bb4-752f-4b2c-a873-5ed94df029e2 bing Bing.Web production 20\r\n";
    exit;
}

// Save arguments to variables
$certificateFile = $argv[1];
$subscriptionId = $argv[2];
$serviceName = $argv[3];
$roleName = $argv[4];
$slot = $argv[5];
$instanceCount = $argv[6];

// Do the magic
$managementClient = new Microsoft_WindowsAzure_Management_Client($subscriptionId, $certificateFile, '');

echo "Uploading new configuration...\r\n";

$managementClient->setInstanceCountBySlot($serviceName, $slot, $roleName, $instanceCount);

echo "Finished.\r\n";

Now schedule or cron this (when needed) and enjoy the benefits of scaling your Windows Azure service.
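
For instance, a crontab entry along these lines (a sketch; the schedule, path and arguments are placeholders based on the sample above) would scale up to 4 instances every weekday morning at 8:00:

0 8 * * 1-5 php /path/to/autoscale.php "management.pem" "subscription-id" "service-name" "role-name" "production" 4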

So you’re lazy? Here’s my sample project (AutoScale-PHP.zip (181.67 kb)) and the certificates used (management.pfx (4.05 kb), management.cer (1.18 kb) and management.pem (5.11 kb)).

Windows Azure and scaling: how? (.NET)

One of the key ideas behind cloud computing is the concept of scaling. Talking to customers and cloud enthusiasts, many people seem to be unaware of the fact that there is great opportunity in scaling, even for small applications. In this blog post series, I will talk about the following:

Creating and uploading a management certificate

In order to be able to programmatically (and thus possibly automatically) scale your Windows Azure service, one prerequisite exists: a management certificate should be created and uploaded to Windows Azure through the management portal at http://windows.azure.com. Creating a certificate is easy: follow the instructions listed on MSDN. It’s as easy as opening a Visual Studio command prompt and issuing the following command:

makecert -sky exchange -r -n "CN=<CertificateName>" -pe -a sha1 -len 2048 -ss My "<CertificateName>.cer"

Too anxious to try this out? Download my certificate files (management.pfx (4.05 kb) and management.cer (1.18 kb)) and feel free to use them (password: phpazure). Beware that they’re not safe to use in production, as I just shared them with the world (and you may be sharing your Windows Azure subscription with the world :-)).

Uploading the certificate through the management portal can be done under Hosted Services > Management Certificates.

Management Certificate Windows Azure

Building a small command-line scaling tool

In order to be able to scale automatically, let’s build a small command-line tool. The idea is that you will be able to run the following command on a console to scale to 4 instances:

AutoScale.exe "management.cer" "subscription-id" "service-name" "role-name" "production" 4

Or down to 2 instances:

AutoScale.exe "management.cer" "subscription-id" "service-name" "role-name" "production" 2

Now let’s get started. First of all, we’ll be needing the Windows Azure service management client API SDK. Since there is no official SDK, you can download a sample at http://archive.msdn.microsoft.com/azurecmdlets. Open the solution, compile it and head for the /bin folder: we’re interested in Microsoft.Samples.WindowsAzure.ServiceManagement.dll.

Next, create a new Console Application in Visual Studio and add a reference to the above assembly. The code for Program.cs will start with the following:

class Program
{
    private const string ServiceEndpoint = "https://management.core.windows.net";

    private static Binding WebHttpBinding()
    {
        var binding = new WebHttpBinding(WebHttpSecurityMode.Transport);
        binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate;
        binding.ReaderQuotas.MaxStringContentLength = 67108864;

        return binding;
    }

    static void Main(string[] args)
    {
    }
}

This constant and the WebHttpBinding() method will be used by the Service Management client to connect to your Windows Azure subscription’s management API endpoint. The WebHttpBinding() method creates a new WCF binding that is configured to use a certificate as the client credential. Just the way Windows Azure likes it.

I’ll skip the command-line parameter parsing. The next interesting thing is where a new management client is created:

var managementClient = Microsoft.Samples.WindowsAzure.ServiceManagement.ServiceManagementHelper.CreateServiceManagementChannel(
    WebHttpBinding(), new Uri(ServiceEndpoint), new X509Certificate2(certificateFile));

Afterwards, the deployment details are retrieved. The deployment’s configuration is in there (base64-encoded), so the only thing to do is read it into an XDocument, update the number of instances and store it back:

var deployment = managementClient.GetDeploymentBySlot(subscriptionId, serviceName, slot);
string configurationXml = ServiceManagementHelper.DecodeFromBase64String(deployment.Configuration);

var serviceConfiguration = XDocument.Parse(configurationXml);

serviceConfiguration
    .Descendants()
    .Single(d => d.Name.LocalName == "Role" && d.Attributes().Single(a => a.Name.LocalName == "name").Value == roleName)
    .Elements()
    .Single(e => e.Name.LocalName == "Instances")
    .Attributes()
    .Single(a => a.Name.LocalName == "count").Value = instanceCount;

var changeConfigurationInput = new ChangeConfigurationInput();
changeConfigurationInput.Configuration = ServiceManagementHelper.EncodeToBase64String(serviceConfiguration.ToString(SaveOptions.DisableFormatting));

managementClient.ChangeConfigurationBySlot(subscriptionId, serviceName, slot, changeConfigurationInput);

Here’s the complete Program.cs code:

using System;
using System.Linq;
using System.Security.Cryptography.X509Certificates;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.Xml.Linq;
using Microsoft.Samples.WindowsAzure.ServiceManagement;

namespace AutoScale
{
    class Program
    {
        private const string ServiceEndpoint = "https://management.core.windows.net";

        private static Binding WebHttpBinding()
        {
            var binding = new WebHttpBinding(WebHttpSecurityMode.Transport);
            binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate;
            binding.ReaderQuotas.MaxStringContentLength = 67108864;

            return binding;
        }

        static void Main(string[] args)
        {
            // Some commercial info :-)
            Console.WriteLine("AutoScale - (c) 2011 Maarten Balliauw");
            Console.WriteLine("");

            // Quick-and-dirty argument check
            if (args.Length != 6)
            {
                Console.WriteLine("Usage:");
                Console.WriteLine(" AutoScale.exe <certificatefile> <subscriptionid> <servicename> <rolename> <slot> <instancecount>");
                Console.WriteLine("");
                Console.WriteLine("Example:");
                Console.WriteLine(" AutoScale.exe mycert.cer 39f53bb4-752f-4b2c-a873-5ed94df029e2 bing Bing.Web production 20");
                return;
            }

            // Save arguments to variables
            var certificateFile = args[0];
            var subscriptionId = args[1];
            var serviceName = args[2];
            var roleName = args[3];
            var slot = args[4];
            var instanceCount = args[5];

            // Do the magic
            var managementClient = Microsoft.Samples.WindowsAzure.ServiceManagement.ServiceManagementHelper.CreateServiceManagementChannel(
                WebHttpBinding(), new Uri(ServiceEndpoint), new X509Certificate2(certificateFile));

            Console.WriteLine("Retrieving current configuration...");

            var deployment = managementClient.GetDeploymentBySlot(subscriptionId, serviceName, slot);
            string configurationXml = ServiceManagementHelper.DecodeFromBase64String(deployment.Configuration);

            Console.WriteLine("Updating configuration value...");

            var serviceConfiguration = XDocument.Parse(configurationXml);

            serviceConfiguration
                .Descendants()
                .Single(d => d.Name.LocalName == "Role" && d.Attributes().Single(a => a.Name.LocalName == "name").Value == roleName)
                .Elements()
                .Single(e => e.Name.LocalName == "Instances")
                .Attributes()
                .Single(a => a.Name.LocalName == "count").Value = instanceCount;

            var changeConfigurationInput = new ChangeConfigurationInput();
            changeConfigurationInput.Configuration = ServiceManagementHelper.EncodeToBase64String(serviceConfiguration.ToString(SaveOptions.DisableFormatting));

            Console.WriteLine("Uploading new configuration...");

            managementClient.ChangeConfigurationBySlot(subscriptionId, serviceName, slot, changeConfigurationInput);

            Console.WriteLine("Finished.");
        }
    }
}

Now schedule this (when needed) and enjoy the benefits of scaling your Windows Azure service.

So you’re lazy? Here’s my sample project (AutoScale.zip (26.31 kb)) and the certificates used (management.pfx (4.05 kb) and management.cer (1.18 kb)).

Note: I use the .cer file here because I generated the certificate on my machine. If you are using a certificate created on another machine, a .pfx file and its key should be used.
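
For completeness: loading a .pfx with its private key instead of a plain .cer is just a matter of using another X509Certificate2 constructor overload. A minimal sketch, using the sample certificate and password shared above:

// Load a .pfx (certificate + private key) with its password instead of a plain .cer:
var certificate = new X509Certificate2("management.pfx", "phpazure");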