Maarten Balliauw {blog}

ASP.NET MVC, Microsoft Azure, PHP, web development ...


The quickest way to a VPN: Windows Azure Connect

Windows Azure Connect endpoint installer

First of all: Merry Christmas in advance! But to be honest, I already have my Christmas present… I’ll give you a little story first, as it’s winter, dark outside, and stories are better when it’s winter and you are reading this post in front of your fireplace. Last week, I received the beta invite for Windows Azure Connect, a simple and easy-to-manage mechanism to set up IP-based network connectivity between on-premises and Windows Azure resources. Being targeted at interconnecting Windows Azure instances to your local network, it also contains a feature that allows interconnecting endpoints. Interesting!

Now why would that be interesting… Well, I recently moved into my own house, having lived with my parents since birth, now 27 years ago. During that time, I bought a Windows Home Server that was living happily in our home network, making backups of my PC, my work laptop, my father’s PC and laptop and my brother’s laptop. Oh right, and my virtual server hosting this blog and some other sites. Now what on earth does this have to do with Windows Azure Connect? Well, more than you may think…

I’ve always been struggling with how to keep my Windows Home Server functional across the home network at my place and the home network at my parents’ place. Having tried PPTP tunnels, IPsec, OpenVPN and TeamViewer’s VPN capabilities, I found these solutions working, but they required a lot of tweaking and came with installation woes. Not to mention that my ISP (and almost all ISPs in Belgium) blocks inbound TCP connections to certain ports.

Until I heard of Windows Azure Connect and that tiny checkbox on the management portal that states “Allow connections between endpoints in group”. Aha! So I started installing the Windows Azure Connect connector on all machines. A few times “Next, I accept, Next, Finish” later, all PCs, my virtual server and my home server were talking cloud with each other! Literally, it takes only a few minutes to set up a virtual network on multiple endpoints. And it also routes through proxies, which means that my home server should even be able to make backups of my work laptop when I’m in the office on a very restrictive network. Restrictive for non-cloudians, that is :-)

Installing Windows Azure Connect connector

This one’s easy. Navigate to http://windows.azure.com, and in the management portal for Windows Azure Connect, click “Install local endpoint”.

Windows Azure Connect management

You will be presented a screen containing a link to the endpoint installer.

Installing Windows Azure Connect

Copy this link and make sure you write it down as you will need it for all machines that you want to join in your virtual network. I tried just copying the download to all machines and installing from there, but that does not seem to work. You really need a fresh download every time.

Interconnecting machines

This one’s really, really easy. I remember configuring Cisco routers back at school, but this approach is a lot easier, I can tell you. Navigate to http://windows.azure.com and open the Windows Azure Connect management interface. Click the “Create group” button in the top toolbar.

Windows Azure Connect interconnecting

Next, enter a name and an optional description for your virtual network. Then add the endpoints that you’ve installed. Note that it takes a while for them to appear; this can be forced by going to every machine you installed the connector on and clicking the “Refresh” option in the Windows Azure Connect system tray icon. Anyway, here are my settings:

Windows Azure Connect create group

Make sure that you check “Allow connections between endpoints in group”. And eh… that’s it! You now have interconnected your machines on different locations in about five minutes. Cloud power? For sure!

As a side note: it would be great if one endpoint could be joined to multiple “groups” or virtual networks. That would allow me to create a group for other situations, and make my PC part of all these groups.

Some findings

Just for the techies out there, here are some findings… Let’s start with doing an ipconfig /all on one of the interconnected machines:

Windows Azure Connect ipv6

Windows Azure Connect really is a virtual PPP adapter added to your machine. It operates over IPv6. Let’s see if we can ping other machines. Ebh-vm05 is the virtual machine hosting my blog, running in a datacenter near Brussels. I’m issuing this ping command from my work laptop in my parents home network near Antwerp. Here goes:

Windows Azure Connect ping

Bam! Windows Azure Connect even seems to advertise hostnames on the virtual network! I like this very, very much! This would mean I can also do remote desktop between machines, even behind my company’s restrictive proxy. I’m going to try that one on Monday. Eat that, corporate IT :-)

One last thing I’m interested in: the IPv6 addresses of all connected machines seem to be in different subnets. Let’s issue a traceroute as well.

Windows Azure Connect trace route

Sweet! It seems that there’s routing going on inside Windows Azure Connect to communicate between all endpoints.

As a side note: yes, those are high ping times. But do note that I was at my parents’ home when taking this screenshot, and the microwave was defrosting Christmas meals between my laptop and the wireless access point.

Conclusion

I’m probably abusing Windows Azure Connect by doing this. However, it’s a great use case in my opinion and I really hope Microsoft keeps supporting this. What would be even better is if they offered Windows Azure Connect in the setup I described above for home users as well. It would be a great addition to Windows Intune, too!

MvcSiteMapProvider 2.2.0 released

I’m proud to announce that MvcSiteMapProvider 2.2.0 has just been uploaded to CodePlex. It should also be available through NuPack in the coming hours. This release has taken a while, but that’s because I’ve been making some important changes...

MvcSiteMapProvider is, as the name implies, a SiteMapProvider implementation for the ASP.NET MVC framework. Targeted at ASP.NET MVC 2, it provides sitemap XML functionality and interoperability with the classic ASP.NET sitemap controls, like the SiteMapPath control for rendering breadcrumbs and the Menu control.

In this post, I’ll give you a short update on what has changed as well as some examples on how to use newly introduced functionality.

Changes in MvcSiteMapProvider 2.2.0

  • Increased stability
  • HtmlHelpers upgraded to return MvcHtmlString
  • Templated HtmlHelpers have been introduced
  • New extensibility point: OnBeforeAddNode and OnAfterAddNode
  • Optimized sitemap XML for search engine indexing

Templated HtmlHelpers

The MvcSiteMapProvider provides different HtmlHelper extension methods which you can use to generate SiteMap-specific HTML code on your ASP.NET MVC views like a menu, a breadcrumb (sitemap path), a sitemap or just the current node’s title.

All helpers in MvcSiteMapProvider make use of templates: whenever a node in a menu is to be rendered, a specific partial view is used to render this node. This is based on the idea of templated helpers. The default templates that are used can be found on the downloads page. Place them under the Views/DisplayTemplates folder of your project to be able to customize them.

When creating your own templates for MvcSiteMapProvider's helpers, the following model objects are used and can be templated:

  • Html.MvcSiteMap().Menu() uses:
    MvcSiteMapProvider.Web.Html.Models.MenuHelperModel and MvcSiteMapProvider.Web.Html.Models.SiteMapNodeModel
  • Html.MvcSiteMap().SiteMap() uses:
    MvcSiteMapProvider.Web.Html.Models.SiteMapHelperModel and MvcSiteMapProvider.Web.Html.Models.SiteMapNodeModel
  • Html.MvcSiteMap().SiteMapPath() uses:
    MvcSiteMapProvider.Web.Html.Models.SiteMapPathHelperModel and MvcSiteMapProvider.Web.Html.Models.SiteMapNodeModel
  • Html.MvcSiteMap().SiteMapTitle() uses:
    MvcSiteMapProvider.Web.Html.Models.SiteMapTitleHelperModel

The following template is an example for rendering a sitemap node represented by the MvcSiteMapProvider.Web.Html.Models.SiteMapNodeModel model.

<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<MvcSiteMapProvider.Web.Html.Models.SiteMapNodeModel>" %>
<%@ Import Namespace="System.Web.Mvc.Html" %>

<% if (Model.IsCurrentNode && Model.SourceMetadata["HtmlHelper"].ToString() != "MvcSiteMapProvider.Web.Html.MenuHelper") { %>
    <%=Model.Title %>
<% } else if (Model.IsClickable) { %>
    <a href="<%=Model.Url %>"><%=Model.Title %></a>
<% } else { %>
    <%=Model.Title %>
<% } %>
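Using these helpers from a view or master page is then a one-liner per element. A minimal sketch, assuming the parameterless helper overloads which simply fall back to the default templates:

<%-- Render the menu, breadcrumb, full sitemap and current node title using the default templates --%>
<%= Html.MvcSiteMap().Menu() %>
<%= Html.MvcSiteMap().SiteMapPath() %>
<%= Html.MvcSiteMap().SiteMap() %>
<%= Html.MvcSiteMap().SiteMapTitle() %>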

New extensibility point

In the previous release, a number of extensibility points were introduced; I blogged about them before. A newly introduced extensibility point is ISiteMapProviderEventHandler. A class implementing MvcSiteMapProvider.Extensibility.ISiteMapProviderEventHandler can be registered to handle specific events, such as when adding a SiteMapNode.

Here’s an example to log all the nodes that are being added to an MVC sitemap:

public class MySiteMapProviderEventHandler : ISiteMapProviderEventHandler
{
    public bool OnAddingSiteMapNode(SiteMapProviderEventContext context)
    {
        // Should the node be added? Well yes!
        return true;
    }

    public void OnAddedSiteMapNode(SiteMapProviderEventContext context)
    {
        Trace.Write("Node added: " + context.CurrentNode.Title);
    }
}

Optimized sitemap XML for SEO

Generating a search-engine friendly list of all nodes in a sitemap was already possible. This functionality has been vastly improved with two new features:

  • Whenever a client sends an HTTP request with the Accept-Encoding header set to gzip or deflate, the XmlSiteMapResult class (which is also used internally in the XmlSiteMapController) will automatically compress the sitemap using GZip compression (a small client-side sketch follows below).
  • Whenever a sitemap exceeds 50,000 nodes, the XmlSiteMapController will automatically split your sitemap into a sitemap index file (sitemap.xml) which references sub-sitemaps (sitemap-1.xml, sitemap-2.xml etc.) as described on http://www.sitemaps.org/protocol.php.

For example, if a website contains more than 50,000 nodes, the sitemap XML that is generated will look similar to the following:

<?xml version="1.0" encoding="utf-8" ?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://localhost:1397/sitemap-1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://localhost:1397/sitemap-2.xml</loc>
  </sitemap>
</sitemapindex>

This sitemap index links to sub-sitemap files where all nodes are included.
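To see the compression behavior from the client side, any .NET HTTP client will do. The following console sketch (the URL is just the local example from above) asks for a compressed response and lets the HTTP stack decompress it transparently:

using System;
using System.IO;
using System.Net;

class FetchSitemap
{
    static void Main()
    {
        // Example URL; replace with your own site's sitemap endpoint.
        var request = (HttpWebRequest)WebRequest.Create("http://localhost:1397/sitemap.xml");

        // Sends "Accept-Encoding: gzip, deflate" and transparently decompresses the response.
        request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}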

Scale-out to the cloud, scale back to your rack

That is a bad blog post title, really! If Steve and Ryan have this post in the Cloud Cover show news I bet they will make fun of the title. Anyway…

Imagine you have an application running in your own datacenter. Everything works smoothly, except for some capacity spikes now and then. Someone has asked you to do something about it on a low budget: not enough for new hardware, and frankly, new hardware would be ridiculous just to ensure capacity for a few hours each month.

A possible solution would be: migrating the application to the cloud during capacity spikes. Not all the time though: the hardware is in house and you may be a server-hugger that wants to see blinking LAN and HDD lights most of the time. I have to admit: blinking lights are cool! But I digress.

Wouldn’t it be cool to have a Powershell script that you can execute whenever a spike occurs? This script would move everything to Windows Azure. Another script should exist as well, migrating everything back once the spike cools down. Yes, you hear me coming: that’s what this blog post is about.

For those who can not wait, here’s the download: ScaleOutToTheCloud.zip (2.81 kb)

Schematic overview

Since every cool idea goes with fancy pictures, here’s a schematic overview of what could happen when you read this post to the end. First of all: you have a bunch of users making use of your application. As a good administrator, you have deployed IIS Application Request Routing as a load balancer / reverse proxy in front of your application server. Everyone is happy!

IIS Application Request Routing

Unfortunately, sometimes there are just too many users. They keep using the application and the application server catches fire.

Server catches fire!

It is time to do something. Really. Users are getting timeouts and all sorts of nasty error messages. Why not run a Powershell script that packages the entire local application for Windows Azure and deploys it?

Powershell to the rescue

After deployment and once the application is running in Windows Azure, there’s one thing left for that same script to do: modify ARR and re-route all traffic to Windows Azure instead of that dying server.

Request routing Azure

There you go! All users are happy again, since the application is now running in the cloud on 2, 3, or whatever number of virtual machines.

Let’s try and do this using Powershell…

The Powershell script

The Powershell script will roughly perform 5 tasks:

  • Load settings
  • Load dependencies
  • Build a list of files to deploy
  • Package these files and deploy them
  • Update IIS Application Request Routing servers

Want the download? There you go: ScaleOutToTheCloud.zip (2.81 kb)

Load settings

There are quite some parameters in play for this script. I’ve located them in a settings.ps1 file which looks like this:

# Settings (prod)
$global:wwwroot = "C:\inetpub\web.local\"
$global:deployProduction = 1
$global:deployDevFabric = 0
$global:webFarmIndex = 0
$global:localUrl = "web.local"
$global:localPort = 80
$global:azureUrl = "scaleout-prod.cloudapp.net"
$global:azurePort = 80
$global:azureDeployedSite = "http://" + $azureUrl + ":" + $azurePort
$global:numberOfInstances = 1
$global:subscriptionId = ""
$global:certificate = "C:\Users\Maarten\Desktop\cert.cer"
$global:serviceName = "scaleout-prod"
$global:storageServiceName = ""
$global:slot = "Production"
$global:label = Date

Let’s explain these…

  • $global:wwwroot: The file path to the on-premise application.
  • $global:deployProduction: Deploy to Windows Azure?
  • $global:deployDevFabric: Deploy to the development fabric?
  • $global:webFarmIndex: The 0-based index of your web farm. Look at IIS Manager and note the order of your web farm in the list of web farms.
  • $global:localUrl: The on-premise URL that is registered in ARR as the application server.
  • $global:localPort: The on-premise port that is registered in ARR as the application server.
  • $global:azureUrl: The Windows Azure URL that will be registered in ARR as the application server.
  • $global:azurePort: The Windows Azure port that will be registered in ARR as the application server.
  • $global:azureDeployedSite: The final URL of the deployed Windows Azure application.
  • $global:numberOfInstances: Number of instances to run on Windows Azure.
  • $global:subscriptionId: Your Windows Azure subscription ID.
  • $global:certificate: Your certificate for managing Windows Azure.
  • $global:serviceName: Your Windows Azure service name.
  • $global:storageServiceName: The Windows Azure storage account that will be used for uploading the packaged application.
  • $global:slot: The Windows Azure deployment slot (production/staging).
  • $global:label: The label for the deployment. I chose the current date and time.

Load dependencies

Next, our script will load dependencies. There is one additional set of CmdLets that you have to install: the Windows Azure management CmdLets available at http://code.msdn.microsoft.com/azurecmdlets.

Here’s the set we load:

# Load required CmdLets and assemblies
$env:Path = $env:Path + "; c:\Program Files\Windows Azure SDK\v1.2\bin\"
Add-PSSnapin AzureManagementToolsSnapIn
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Administration")

Build a list of files to deploy

In order to package the application, we need a text file containing all the files that should be packaged and deployed to Windows Azure. This is done by recursively traversing the directory where the on-premise application is hosted.


$filesToDeploy = Get-ChildItem $wwwroot -recurse | where {$_.extension -match "\..*"}
foreach ($fileToDeploy in $filesToDeploy) {
    $inputPath = $fileToDeploy.FullName
    $outputPath = $fileToDeploy.FullName.Replace($wwwroot,"")
    $inputPath + ";" + $outputPath | Out-File FilesToDeploy.txt -Append
}

Package these files and deploy them

I have been polite and included this for both the development fabric and the Windows Azure fabric. Here’s the packaging and deployment code for the development fabric:

# Package & run the website for Windows Azure (dev fabric)
if ($deployDevFabric -eq 1) {
    trap [Exception] {
        del -Recurse ScaleOutService
        continue
    }

    cspack ServiceDefinition.csdef /roleFiles:"WebRole;FilesToDeploy.txt" /copyOnly /out:ScaleOutService /generateConfigurationFile:ServiceConfiguration.cscfg

    # Set instance count
    (Get-Content ServiceConfiguration.cscfg) | Foreach-Object {$_.Replace("count=""1""","count=""" + $numberOfInstances + """")} | Set-Content ServiceConfiguration.cscfg

    # Run!
    csrun ScaleOutService ServiceConfiguration.cscfg /launchBrowser
}

And here’s the same for Windows Azure fabric:

# Package the website for Windows Azure (production)
if ($deployProduction -eq 1) {
    cspack ServiceDefinition.csdef /roleFiles:"WebRole;FilesToDeploy.txt" /out:"ScaleOutService.cspkg" /generateConfigurationFile:ServiceConfiguration.cscfg

    # Set instance count
    (Get-Content ServiceConfiguration.cscfg) | Foreach-Object {$_.Replace("count=""1""","count=""" + $numberOfInstances + """")} | Set-Content ServiceConfiguration.cscfg

    # Run! (may take up to 15 minutes!)
    New-Deployment -SubscriptionId $subscriptionId -certificate $certificate -ServiceName $serviceName -Slot $slot -StorageServiceName $storageServiceName -Package "ScaleOutService.cspkg" -Configuration "ServiceConfiguration.cscfg" -label $label

    $deployment = Get-Deployment -SubscriptionId $subscriptionId -certificate $certificate -ServiceName $serviceName -Slot $slot
    do {
        Start-Sleep -s 10
        $deployment = Get-Deployment -SubscriptionId $subscriptionId -certificate $certificate -ServiceName $serviceName -Slot $slot
    } while ($deployment.Status -ne "Suspended")

    Set-DeploymentStatus -Status "Running" -SubscriptionId $subscriptionId -certificate $certificate -ServiceName $serviceName -Slot $slot

    $wc = new-object system.net.webclient
    $html = ""
    do {
        Start-Sleep -s 60
        trap [Exception] { continue }
        $html = $wc.DownloadString($azureDeployedSite)
    } while (!$html.ToLower().Contains("<html"))
}

Update IIS Application Request Routing servers

This one can be done by abusing the .NET class Microsoft.Web.Administration.ServerManager.

# Modify IIS ARR
$mgr = new-object Microsoft.Web.Administration.ServerManager
$conf = $mgr.GetApplicationHostConfiguration()
$section = $conf.GetSection("webFarms")
$webFarms = $section.GetCollection()
$webFarm = $webFarms[$webFarmIndex]
$servers = $webFarm.GetCollection()
$server = $servers[0]
$server.SetAttributeValue("address", $azureUrl)
$server.ChildElements["applicationRequestRouting"].SetAttributeValue("httpPort", $azurePort)
$mgr.CommitChanges()
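For completeness, the same ARR update can be expressed directly in C# against Microsoft.Web.Administration. This is only a sketch mirroring the PowerShell above; the farm index, address and port values are the same assumptions as in settings.ps1:

// Reference Microsoft.Web.Administration.dll (ships with IIS, under %windir%\System32\inetsrv).
using System;
using Microsoft.Web.Administration;

class UpdateArrFarm
{
    static void Main()
    {
        int webFarmIndex = 0;                             // 0-based index of the web farm in IIS (assumption)
        string azureUrl = "scaleout-prod.cloudapp.net";   // new application server address (assumption)
        int azurePort = 80;                               // new application server port (assumption)

        using (var manager = new ServerManager())
        {
            var config = manager.GetApplicationHostConfiguration();
            var webFarmsSection = config.GetSection("webFarms");
            var webFarms = webFarmsSection.GetCollection();

            var webFarm = webFarms[webFarmIndex];
            var servers = webFarm.GetCollection();
            var server = servers[0];

            // Point the first server in the farm to the Windows Azure deployment
            server.SetAttributeValue("address", azureUrl);
            server.ChildElements["applicationRequestRouting"].SetAttributeValue("httpPort", azurePort);

            manager.CommitChanges();
        }
    }
}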

Running the script

Of course I’ve tested this to see if it works. And guess what: it does!

The script output itself is not very interesting. I did not add logging or meaningful messages to see what it is doing. Instead you’ll just see it working.

Powershell script running

Once it has been fired up, the Windows Azure portal will soon be showing that the application is actually deploying. No hands!

Powershell deployment to Azure

After the usual 15-20 minutes that a deployment + application first start takes, IIS ARR is re-configured by Powershell.


And my local users can just keep browsing to http://farm.local which now simply routes requests to Windows Azure. Don’t be fooled: I actually just packaged the default IIS website and deployed it to Windows Azure. Very performant!


Conclusion

It works! And it’s fancy and cool stuff. I think this may be a good deployment and scale-out model in some situations, however there may still be a bottleneck in the on-premise ARR server: if this one has too much traffic to cope with, a new burning server is in play. Note that this solution will work for any website hosted on IIS: custom made ASP.NET apps, ASP.NET MVC, PHP, …

Here’s the download: ScaleOutToTheCloud.zip (2.81 kb)

Using MvcSiteMapProvider through NuPack

NuPack

Probably you have seen the buzz around NuPack, a package manager for .NET with tight integration in Visual Studio 2010. NuPack is a free, open source, developer-focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. If you download and install NuPack into Visual Studio, you can now reference MvcSiteMapProvider with a few simple clicks!

From within your ASP.NET MVC 2 project, right click the project file and use the new “Add Package Reference…” option.

Add package reference

Next, a nice dialog shows up where you can just pick a package and click “Install” to download it and add the necessary references to your project. The packages are retrieved from a central XML feed, but feel free to add a reference to a directory where your corporate packages are stored and install them through NuPack. Anyway: MvcSiteMapProvider. Just look for it in the list and click “Install”.

MvcSiteMapProvider in NuPack

Next, MvcSiteMapProvider will automatically be downloaded and added as an assembly reference, a default Mvc.sitemap file is added to your project, and all configuration in Web.config takes place without you having to do anything! I’m sold :-)

Disclaimer for some: I’m not saying NuPack is the best package manager out there nor that it is the best original idea ever invented. I do believe that the tight integration in VS2010 will make NuPack a good friend during development: the process of downloading and including third party components in your application becomes frictionless. That’s the aim for NuPack, and also the reason why I believe this tool matters and will matter a lot!

Cost Architecting for Windows Azure

Cost architecting for Windows Azure

Just wanted to do a quick plug for an article I’ve written for TechNet Magazine: Windows Azure: Cost Architecting for Windows Azure.

Designing applications and solutions for cloud computing and Windows Azure requires a completely different way of considering the operating costs.

Cloud computing and platforms like Windows Azure are billed as “the next big thing” in IT. This certainly seems true when you consider the myriad advantages to cloud computing.

Computing and storage become an on-demand story that you can use at any time, paying only for what you effectively use. However, this also poses a problem. If a cloud application is designed like a regular application, chances are that the application’s cost perspective will not be as expected.

Want to read more? Check the full article. I will also be doing a session on this later this month for the Belgian Azure User Group.

Remix 2010 slides and sample code

As promised during my session on Remix 10 yesterday in Belgium, here's the slide deck and sample code.

Building for the cloud: integrating an application on Windows Azure

Abstract: “It’s time to take advantage of the cloud! In this session Maarten builds further on the application created during Gill Cleeren’s Silverlight session. The campaign website that was developed in Silverlight 4 still needs a home. Because the campaign will only run for a short period of time, the company chose for cloud computing on the Windows Azure platform. Learn how to leverage flexible hosting with automated scaling on Windows Azure, combined with the power of a cloud hosted SQL Azure database to create a cost-effective and responsive web application.”

Thanks for joining and bearing with me during this tough session with very sparse bandwidth!

Source code used in the session: TDD.ChristmasCreator.zip (686.86 kb)

Introducing Windows Azure Companion – Cloud for the masses?

Windows Azure Companion

At OSIDays in India, the Interoperability team at Microsoft made an interesting series of announcements related to PHP and Windows Azure. To summarize: Windows Azure Tools for Eclipse for PHP has been updated and is on par with the Visual Studio tooling (which means you can deploy a PHP app to Windows Azure without leaving Eclipse!). The Windows Azure Command-line Tools for PHP have been updated, and there’s a new release of the Windows Azure SDK for PHP and a Windows Azure Storage plugin for WordPress built on top of it.

What’s most interesting in the series of announcements is the Windows Azure Companion – September 2010 Community Technology Preview (CTP). In short, compare it with the Web Platform Installer but targeted at Windows Azure. It allows you to install a set of popular PHP applications on a Windows Azure instance, like WordPress or phpBB.

This list of applications seems a bit limited, but it’s not. It’s just a standard Atom feed where the Companion gets its information from. Feel free to create your own feed, or use a sample feed I created, which contains the following applications that I know work well on Windows Azure:

  • PHP Runtime
  • PHP Wincache Extension
  • Microsoft Drivers for PHP for SQL Server
  • Windows Azure SDK for PHP
  • PEAR Archive Tar
  • phpBB
  • Wordpress
  • eXtplorer File Manager


Obtaining & installing Windows Azure Companion

There are 3 steps involved in this. The first one is: go get yourself a Windows Azure subscription. I recall there is a free, limited version where you can use a virtual machine for 25 hours. Not much, but enough to try out Windows Azure Companion. Make sure to completely undeploy the application afterwards if you mind being billed.

Next, get the Windows Azure Companion – September 2010 Community Technology Preview (CTP). There is a source code download which you can compile yourself using Visual Studio; there is also a “cspkg” version that you can just deploy onto your Windows Azure account and get running. I recommend the latter if you want to be up and running fast.

The third step, of course, is deploying. Before doing this, edit the “ServiceConfiguration.cscfg” file. It needs your Windows Azure storage credentials and an administrative username/password so only you can log on to the Companion.

This configuration file also contains a reference to the application feed, so if you want to create one yourself this is the place where you can reference it.

Installing applications

Getting a “Running” state and a green bullet on the Windows Azure portal? Perfect! Then browse to http://yourchosenname.cloudapp.net:8080 (mind the port number!), this is where the administrative interface resides. Log in with the credentials you specified in “ServiceConfiguration.cscfg” before and behold the Windows Azure Companion administrative user interface.

Windows Azure Companion Administration

As a side note: this screenshot was taken with a custom feed I created which included some other applications with SQL Server support, like the Drupal 7 alpha releases. Because these are alphas, I decided not to include them in the sample feed that you can use. I am confident that more supported applications will come in the future though.

Go to the platform tab, select the PHP runtime and other components, then click “Next”. Pick your favorite version numbers and proceed with installing. After this has finished, you can install an application from the applications tab. How about WordPress?

WordPress on Windows Azure

In this last step you can choose where an application will be installed. Under the root of the website or under a virtual folder, anything you like. Afterwards, the application will be running at http://yourchosenname.cloudapp.net.

More control with eXtplorer

The sample feed I provide includes eXtplorer, a web-based file management solution. When installing this, you get full control over the applications folder on your Windows Azure instance, enabling you to edit files (configuration files, for example) but also to upload *any* application you want to host on Windows Azure Companion. Here is me creating a highly modern homepage, and the rendered version of it:

eXtplorer on Windows Azure

Welcome!

Administrative options

As with any web server, you want some administrative options. Windows Azure Companion provides you with logging of both Windows Azure and PHP. You can edit php.ini, restart the server, see memory and CPU usage statistics and create a backup of your current instance in case you want to start messing things up and want a “last known good” instance of your installation.

Windows Azure Companion Administration

Note: If you are a control freak, just stop your application on Windows Azure, download the virtual hard drive (.vhd) file from blob storage and make some modifications, upload it again and restart the Windows Azure Companion. I don’t recommend this as you will have to download and upload a large .vhd file but in theory it is possible to fiddle around.

Internet Explorer 9 jumplist support

A cool feature included is the IE9 jumplist support. IE9 beta is out and it seems all teams at Microsoft are adding support for it. If you drag the Windows Azure Companion administration tab to your Windows 7 taskbar, you get the following nifty shortcuts when right-clicking:

IE9 jumplist

Scalability

The current preview release of Windows Azure Companion cannot provide scale-out. It can scale up to more CPU, memory and storage, but not to multiple role instances. This is due to the fact that Windows Azure drives cannot be shared in read/write mode across multiple machines. On the other hand: if you deploy 2 instances and install the same apps on them, use the same SQL Azure database backend and use round-robin DNS, you can achieve scale-out at this time. Not the way you’d want it, but it should work. Then again: I don’t think that Windows Azure Companion has been created with very large sites in mind, as this type of site will benefit more from a completely optimized version for “regular” Windows Azure.

Conclusion

I’m impressed with this series of releases, especially the Windows Azure Companion. It clearly shows Microsoft is not just focusing on its own platform but also treating PHP as an equal citizen for Windows Azure. The Companion in my opinion also lowers the step to cloud computing: it’s really easy to install and use and may attract more people to the Windows Azure platform (especially if they would add a basic, entry-level subscription with low capacity and a low price, pun intended :-))

Update: also check Jim O’Neil's blog post: Windows Azure Companion: PHP and WordPress in Azure and Brian Swan's blog post: Announcing the Windows Azure Companion and More...


Announcing the Windows Azure Online Conference

Steve Plank from Microsoft UK has just announced the UK Windows Azure Online Conference on his blog. This will be a whole day, online Windows Azure conference consisting of three different tracks: Cirrus – the high level stuff, Altocumulus – the mid level stuff (case studies) and Stratocumulus – the low level stuff (deep tech). I’ll be doing a session in the Stratocumulus track.

Since this is an online conference, feel free to subscribe for the event! All details can be found on the UK Windows Azure Online Conference announcement. I feel this is going to be very interesting, covering a broad range of Windows Azure topics! Here’s the list of sessions I’ll try to attend:

  • Case studies in the Altocumulus track, these should be very interesting real-life examples
  • Of course my own session, “Taking care of a cloud environment”, since otherwise there would be no speaker in that slot…
  • Windows Azure Guidance Project
  • Azure Table Service – getting creative with Microsoft’s NoSQL datastore
  • The Q&A panel sessions

The only thing I’m wondering about: how are they going to provide lunch through Live Meeting…

Hybrid Azure applications using OData

OData in the cloud on Azure

In the whole Windows Azure story, Microsoft has always been telling you that you could build hybrid applications: an on-premise application with a service on Azure or a database on SQL Azure. But how to do it in the opposite direction? Easy answer there: use the (careful, long product name coming!) Windows Azure platform AppFabric Service Bus to expose an on-premise WCF service securely to an application hosted on Windows Azure. Now how would you go about exposing your database to Windows Azure? Open a hole in the firewall? Use something like PortBridge to redirect TCP traffic over the service bus? Why not just create an OData service for our database and expose that over the AppFabric Service Bus? In this post, I’ll show you how.

For those who can not wait: download the sample code: ServiceBusHost.zip (7.87 kb)


What we are trying to achieve

The objective is quite easy: we want to expose our database to an application on Windows Azure (or another cloud or in another datacenter) without having to open up our firewall. To do so, we’ll be using an OData service which we’ll expose through Windows Azure platform AppFabric Service Bus. But first things first…

  • OData??? The Open Data Protocol (OData) is a Web protocol for querying and updating data over HTTP using REST. And data can be a broad subject: a database, a filesystem, customer information from SAP, … The idea is to have one protocol for different datasources, accessible through web standards. More info? Here you go.
  • Service Bus??? There’s an easy explanation to this one, although the product itself offers much more use cases. We’ll be using the Service Bus to interconnect two applications, sitting on different networks and protected by firewalls. This can be done by using the Service Bus as the “man in the middle”, passing around data from both applications.

Windows Azure platform AppFabric Service Bus

Our OData feed will be created using WCF Data Services, formerly known as ADO.NET Data Services formerly known as project Astoria.

Creating an OData service

We’ll go a little bit off the standard path to achieve this, although the concepts are the same. Usually, you’ll be adding an OData service to a web application in Visual Studio. Difference: we’ll be creating a console application. So start off with a console application and add the following additional references to the console application:

  • Microsoft.ServiceBus (from the SDK that can be found on the product site)
  • System.Data.Entity
  • System.Data.Services
  • System.Data.Services.Client
  • System.EnterpriseServices
  • System.Runtime.Serialization
  • System.ServiceModel
  • System.ServiceModel.Web
  • System.Web.ApplicationServices
  • System.Web.DynamicData
  • System.Web.Entity
  • System.Web.Services
  • System.Data.DataSetExtensions

Next, add an Entity Data Model for a database you want to expose. I have a light version of the Contoso sample database and will be using that one. Also, I only added one table to the model for the sake of simplicity:

Entity Data Model for OData

Pretty straightforward, right? Next thing: expose this beauty through an OData service created with WCF Data Services. Add a new class to the project, and add the following source code:

public class ContosoService : DataService<ContosoSalesEntities>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
        // Examples:
        // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
        // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
        config.SetEntitySetAccessRule("Store", EntitySetRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

Let’s explain this thing: the ContosoService class inherits DataService<ContosoSalesEntities>, a ready-to-use service implementation to which you pass the type of your Entity Data Model. In the InitializeService method, there’s only one thing left to do: specify the access rules for entities. I chose to expose the entity set “Store” with all rights (read/write).

In a normal world, this would be it: we would now have a service ready to expose our database through OData. Quick, simple, flexible. But in our console application, there’s a small problem: we are not hosting inside a web application, so we’ll have to write the WCF hosting code ourselves.

Hosting the OData service using a custom WCF host

Since we’re not hosting inside a web application but in a console application, there’s some plumbing we need to do: set up our own WCF host and configure it accordingly.

Let’s first work on our App.config file:

<?xml version="1.0"?>
<configuration>
  <connectionStrings>
    <add name="ContosoSalesEntities" connectionString="metadata=res://*/ContosoModel.csdl|res://*/ContosoModel.ssdl|res://*/ContosoModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.\SQLEXPRESS;Initial Catalog=ContosoSales;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient"/>
  </connectionStrings>

  <system.serviceModel>
    <services>
      <service behaviorConfiguration="contosoServiceBehavior"
               name="ServiceBusHost.ContosoService">
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8080/ContosoModel" />
          </baseAddresses>
        </host>
        <endpoint address=""
                  binding="webHttpBinding"
                  contract="System.Data.Services.IRequestHandler" />

        <endpoint address="https://<service_namespace>.servicebus.windows.net/ContosoModel/"
                  binding="webHttpRelayBinding"
                  bindingConfiguration="contosoServiceConfiguration"
                  contract="System.Data.Services.IRequestHandler"
                  behaviorConfiguration="serviceBusCredentialBehavior" />
      </service>
    </services>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />

    <behaviors>
      <serviceBehaviors>
        <behavior name="contosoServiceBehavior">
          <serviceMetadata httpGetEnabled="true" />
          <serviceDebug includeExceptionDetailInFaults="True" />
        </behavior>
      </serviceBehaviors>

      <endpointBehaviors>
        <behavior name="serviceBusCredentialBehavior">
          <transportClientEndpointBehavior credentialType="SharedSecret">
            <clientCredentials>
              <sharedSecret issuerName="owner" issuerSecret="<secret_from_portal>" />
            </clientCredentials>
          </transportClientEndpointBehavior>
        </behavior>
      </endpointBehaviors>
    </behaviors>

    <bindings>
      <webHttpRelayBinding>
        <binding name="contosoServiceConfiguration">
          <security relayClientAuthenticationType="None" />
        </binding>
      </webHttpRelayBinding>
    </bindings>
  </system.serviceModel>
</configuration>

There's a lot of stuff going on in there!

  • The connection string to my on-premise database is specified
  • The WCF service is configured

To be honest: that second bullet is a bunch of work…

  • We specify 2 endpoints: one local (so we can access the OData service from our local network) and one on the service bus, hence the https://<service_namespace>.servicebus.windows.net/ContosoModel/ URL.
  • The service bus endpoint has 2 behaviors specified: the service behavior is configured to allow metadata retrieval. The endpoint behavior is configured to use the service bus credentials (that can be found on the AppFabric portal site once logged in) when connecting to the service bus.
  • The webHttpRelayBinding, a new binding type for Windows Azure AppFabric Service Bus, is configured to use no authentication when someone connects to it. That way, we will have an OData service that is accessible from the Internet for anyone.

With that configuration in place, we can start building our WCF service host in code. Here’s the full blown snippet:

class Program
{
    static void Main(string[] args)
    {
        ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.AutoDetect;

        using (ServiceHost serviceHost = new WebServiceHost(typeof(ContosoService)))
        {
            try
            {
                // Open the ServiceHost to start listening for messages.
                serviceHost.Open(TimeSpan.FromSeconds(30));

                // The service can now be accessed.
                Console.WriteLine("The service is ready.");
                foreach (var endpoint in serviceHost.Description.Endpoints)
                {
                    Console.WriteLine(" - " + endpoint.Address.Uri);
                }
                Console.WriteLine("Press <ENTER> to terminate service.");
                Console.ReadLine();

                // Close the ServiceHost.
                serviceHost.Close();
            }
            catch (TimeoutException timeProblem)
            {
                Console.WriteLine(timeProblem.Message);
                Console.ReadLine();
            }
            catch (CommunicationException commProblem)
            {
                Console.WriteLine(commProblem.Message);
                Console.ReadLine();
            }
        }
    }
}

We’ve just created our hosting environment, completely configured using the configuration file for WCF. The important thing to note here is that we’re spinning up a WebServiceHost, and that we’re using it to host multiple endpoints. Compile, run, F5, and here’s what happens:

Command line WCF hosting for AppFabric service bus

Consuming the feed

Now just leave that host running and browse to the public service bus endpoint for your OData service, i.e. https://<service_namespace>.servicebus.windows.net/ContosoModel/:

Consuming OData over service bus

There are two possible reactions now: “So, this is a service?” or “WOW! I actually connected to my local SQL Server database using a public URL and I did not have to call IT to open up the firewall!”. I’d go for the latter…
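And since this is plain OData over HTTP(S), the standard query options such as $top, $filter and $orderby also work against the relayed address. A small sketch, reusing the same placeholder URL convention as above:

using System;
using System.Net;

class QueryODataFeed
{
    static void Main()
    {
        var client = new WebClient();

        // Standard OData query options work against the relayed endpoint as well:
        // take the first five Store entities, returned as an Atom feed.
        string atom = client.DownloadString(
            "https://<service_namespace>.servicebus.windows.net/ContosoModel/Store?$top=5");

        Console.WriteLine(atom);
    }
}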

Of course, you can also consume the feed from code. Open up a new project in Visual Studio, and add a service reference for the public service bus address:

Add reference OData

The only thing left now is consuming it, for example using this code snippet:

class Program
{
    static void Main(string[] args)
    {
        var odataService =
            new ContosoSalesEntities(
                new Uri("https://<service_namespace>.servicebus.windows.net/ContosoModel/"));
        var store = odataService.Store.Take(1).ToList().First();

        Console.WriteLine("Store: " + store.StoreName);
        Console.ReadLine();
    }
}

(Do note that updates do not work out of the box; you’ll have to use a small portion of magic on the server side to fix that… I’ll try to follow up on that one.)

Conclusion

That was quite easy! Of course, if you need full access to your database, you are currently stuck with PortBridge or similar solutions. I am not completely exposing my database to the outside world: there’s an extra level of control in the EDMX model where I can choose which datasets to expose and which not. The WCF Data Services class I created allows for specifying user access rights per dataset.

Download sample code: ServiceBusHost.zip (7.87 kb)

kick it on DotNetKicks.com

Simplified access control using Windows Azure AppFabric Labs

Windows Azure AppFabric Access Control

Earlier this week, Zane Adam announced the availability of the new AppFabric Access Control service in LABS. The highlights for this release (and I quote):

  • Expanded Identity provider support - allowing developers to build applications and services that accept both enterprise identities (through integration with Active Directory Federation Services 2.0), and a broad range of web identities (through support of Windows Live ID, Open ID, Google, Yahoo, Facebook identities) using a single code base.
  • WS-Trust and WS-Federation protocol support – Interoperable WS-* support is important to many of our enterprise customers.
  • Full integration with Windows Identity Foundation (WIF) - developers can apply the familiar WIF identity programming model and tooling for cloud applications and services.
  • A new management web portal -  gives simple, complete control over all Access Control settings.

Wow! This just *has* to be good! Let’s see how easy it is to work with claims based authentication and the AppFabric Labs Access Control Service, which I’ll abbreviate to “ACS” throughout this post.

kick it on DotNetKicks.com

What are you doing?

In essence, I’ll be “outsourcing” the access control part of my application to the ACS. When a user comes to the application, he will be asked to present certain “claims”, for example a claim that tells what the user’s role is. Of course, the application will only trust claims that have been signed by a trusted party, which in this case will be the ACS.

Fun thing is: my application only has to know about the ACS. As an administrator, I can then tell the ACS to trust claims provided by Windows Live ID or Google Accounts, which will be reflected in my application automatically: users will be able to authenticate through any service I configure in the ACS, without my application having to know. Very flexible, as I can tell the ACS to trust, for example, my company’s Active Directory and perhaps also the Active Directory of a customer who uses the application.

Prerequisites

Before you start, make sure you have the latest version of Windows Identity Foundation installed. This will make things easy, I promise! Other prerequisites, of course, are Visual Studio and an account on https://portal.appfabriclabs.com. Note that, since it’s still a “preview” version, this is free to use.

In the labs account, create a project and in that project create a service namespace. This is what you should be seeing (or at least: something similar):

AppFabric labs project

Getting started: setting up the application side

Before starting, we will require a certificate for signing tokens and things like that. Let’s just start with creating one so we don’t have to worry about that further down the road. Issue the following command in a Visual Studio command prompt:

MakeCert.exe -r -pe -n "CN=<your service namespace>.accesscontrol.appfabriclabs.com" -sky exchange -ss my

This will create a certificate that is valid for your ACS project. It will be installed in the local certificate store on your computer. Make sure to export both the public and the private key (.cer and .pfx).

That being said and done: let’s add claims-based authentication to a new ASP.NET Website. Simply fire up Visual Studio, create a new ASP.NET application. I called it “MyExternalApp” but in fact the name is all up to you. Next, edit the Default.aspx page and paste in the following code:

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
    CodeBehind="Default.aspx.cs" Inherits="MyExternalApp._Default" %>

<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
</asp:Content>
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
    <p>Your claims:</p>
    <asp:GridView ID="gridView" runat="server" AutoGenerateColumns="False">
        <Columns>
            <asp:BoundField DataField="ClaimType" HeaderText="ClaimType" ReadOnly="True" />
            <asp:BoundField DataField="Value" HeaderText="Value" ReadOnly="True" />
        </Columns>
    </asp:GridView>
</asp:Content>

Next, edit Default.aspx.cs and add the following Page_Load event handler:

protected void Page_Load(object sender, EventArgs e)
{
    IClaimsIdentity claimsIdentity =
        ((IClaimsPrincipal)(Thread.CurrentPrincipal)).Identities.FirstOrDefault();

    if (claimsIdentity != null)
    {
        gridView.DataSource = claimsIdentity.Claims;
        gridView.DataBind();
    }
}

So far, so good. If we had everything configured, Default.aspx would simply show us the claims we received from ACS once everything is running. Now, in order to configure the application to use the ACS, there are two steps left to do:

  • Add a reference to Microsoft.IdentityModel (located somewhere at C:\Program Files\Reference Assemblies\Microsoft\Windows Identity Foundation\v3.5\Microsoft.IdentityModel.dll)
  • Add an STS reference…

That first step should be easy: add a reference to Microsoft.IdentityModel in your ASP.NET application. The second step is almost equally easy: right-click the project and select “Add STS reference…”, like so:

Add STS reference

A wizard will pop up. Here’s a secret: this wizard will do a lot for us! On the first screen, enter the full URL to your application. I have mine hosted on IIS and enabled SSL, hence the following screenshot:

Specify application URI

In the next step, enter the URL to the STS federation metadata. To the what where? Well, to the metadata provided by ACS. This metadata contains the types of claims offered, the certificates used for signing, … The URL to enter will be something like https://<your service namespace>.accesscontrol.appfabriclabs.com:443/FederationMetadata/2007-06/FederationMetadata.xml:

Security Token Service

In the next step, select “Disable security chain validation”. Because we are using self-signed certificates, selecting the second option would lead us to doom because all infrastructure would require a certificate provided by a valid certificate authority.

From now on, it’s just “Next”, “Next”, “Finish”. If you now have a look at your Web.config file, you’ll see that the wizard has configured the application to use ACS as the federation authentication provider. Furthermore, a new folder called “FederationMetadata” has been created, which contains an XML file that specifies which claims this application requires. Oh, and some other details on the application, but nothing to worry about at this point.

Our application has now been configured: off to the ACS side!

Getting started: setting up the ACS side

First of all, we need to register our application with the Windows Azure AppFabric ACS. This can be done by clicking “Manage” on the management portal over at https://portal.appfabriclabs.com. Next, click “Relying Party Applications” and “Add Relying Party Application”. The following screen will be presented:

Add Relying Party Application

Fill out the form as follows:

  • Name: a descriptive name for your application.
  • Realm: the URI that the issued token will be valid for. This can be a complete domain (i.e. www.example.com) or the full path to your application. For now, enter the full URL to your application, which will be something like https://localhost/MyApp.
  • Return URL: where to return after successful sign-in
  • Token format: we’ll be using the defaults in WIF, so go for SAML 2.0.
  • For the token encryption certificate, select X.509 certificate and upload the certificate file (.cer) we’ve been using before
  • Rule groups: pick one, best is to create a new one specific to the application we are registering

Afterwards click “Save”. Your application is now registered with ACS.

The next step is to select the Identity Providers we want to use. I selected Windows Live ID and Google Accounts as shown in the next screenshot:

Identity Providers

One thing left: since we are using Windows Identity Foundation, we have to upload a token signing certificate to the portal. Export the private key of the previously created certificate and upload that to the “Certificates and Keys” part of the management portal. Make sure to specify that the certificate is to be used for token signing.

Signing certificate Windows Identity Foundation WIF

Alright, we’re almost done. Well, in fact: we are done! An optional next step would be to edit the rule group we’ve created before. This rule group describes the claims that will be presented to the application asking for the user’s claims. Which is very powerful, because it also supports so-called claim transformations: if an identity provider provides ACS with a claim that says “the user is part of a group named Administrators”, the rules can then transform the claim into a new claim stating “the user has administrative rights”.
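Inside the application, such a transformed claim can then be checked like any other claim. Here’s a minimal sketch; the role claim type and value are assumptions and should match whatever your rule group actually emits:

using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class ClaimChecks
{
    // Returns true if ACS issued a role claim with the given value for the current user.
    // The role claim type is an assumption; adjust it to the output claim type of your rule group.
    public static bool UserHasRole(string role)
    {
        var claimsIdentity =
            ((IClaimsPrincipal)Thread.CurrentPrincipal).Identities.FirstOrDefault();

        return claimsIdentity != null
            && claimsIdentity.Claims.Any(c =>
                c.ClaimType == ClaimTypes.Role && c.Value == role);
    }
}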

Testing our setup

With all this information and configuration in place, press F5 inside Visual Studio and behold… Your application now redirects to the STS in the form of ACS’ login page.

Sign in using AppFabric

So far so good. Now sign in using one of the identity providers listed. After a successful sign-in, you will be redirected back to ACS, which will in turn redirect you back to your application. And then: misery :-)

Request validation

ASP.NET request validation kicked in since it detected potentially dangerous content in the form data that ACS posts back. Let’s fix that. Two possible approaches:

  • Disable request validation, but I’d prefer not to do that
  • Create a custom RequestValidator

Let’s go with the latter option… Here’s a class that you can copy-paste in your application:

public class WifRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value, RequestValidationSource requestValidationSource, string collectionKey, out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        if (requestValidationSource == RequestValidationSource.Form && collectionKey.Equals(WSFederationConstants.Parameters.Result, StringComparison.Ordinal))
        {
            SignInResponseMessage message = WSFederationMessage.CreateFromFormPost(context.Request) as SignInResponseMessage;

            if (message != null)
            {
                return true;
            }
        }

        return base.IsValidRequestString(context, value, requestValidationSource, collectionKey, out validationFailureIndex);
    }
}

Basically, it’s just validating the request and returning true to ASP.NET request validation if a SignInResponseMessage is in the request. One thing left to do: register this provider with ASP.NET. Add the following line of code in the <system.web> section of Web.config:

<httpRuntime requestValidationType="MyExternalApp.Modules.WifRequestValidator" />

If you now try loading the application again, chances are you will actually see claims provided by ACS:

Claims output from Windows Azure AppFabric Access Control Service

There, that’s it. We now have successfully delegated access control to ACS. Obviously the next step would be to specify which claims are required for specific actions in your application, provide the necessary claims transformations in ACS, … All of that can easily be found on Google Bing.

Conclusion

To be honest: I’ve always found claims-based authentication and Windows Azure AppFabric Access Control a good match in theory, but an ugly and cumbersome beast to work with. With this labs release, things get interesting and almost self-explaining, allowing for easier implementation of it in your own application. As an extra bonus to this blog post, I also decided to link my ADFS server to ACS: it took me literally 5 minutes to do so and works like a charm!

Final conclusion: AppFabric team, please ship this soon :-) I really like the way this labs release works, and I think many users who find the step up to ACS too big today may well take that step if they can use ACS in the easy manner this labs release provides.

By the way: more information can be found on http://acs.codeplex.com.
