
Rewriting WCF OData Services base URL with load balancing & reverse proxy

When scaling out an application to multiple servers, a form of load balancing or reverse proxying is often used to provide external users access to a web server. For example, you can be in the situation where two servers are hosting a WCF OData Service and are exposed to the Internet through either a load balancer or a reverse proxy. Below is a figure of such a setup using a reverse proxy.

[Figure: WCF OData Services hosted behind a reverse proxy]

As you can see, the external server listens on the URL www.example.com, while both internal servers are listening on their respective host names. Guess what: whenever someone accesses a WCF OData Service through the reverse proxy, the XML generated by one of the two backend servers is slightly off:

[Figure: OData feed showing the incorrect base URL]

While the XML is valid, the hostname provided to all our clients is wrong: the hostname of the backend machine is in there, not the hostname of the reverse proxy URL…

How can this be solved? There are a couple of answers to that. One that popped into our minds was to rewrite the XML on the reverse proxy and simply “string.Replace” the invalid URLs. This would probably work, but it feels… dirty. Instead, we chose to create a WCF message inspector, which simply changes this at the WCF level on each backend node.

Our inspector looks like this (note that I hardcoded the base hostname in here, which obviously should not be done in your code):

public class RewriteBaseUrlMessageInspector
    : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        if (WebOperationContext.Current != null && WebOperationContext.Current.IncomingRequest.UriTemplateMatch != null)
        {
            UriBuilder baseUriBuilder = new UriBuilder(WebOperationContext.Current.IncomingRequest.UriTemplateMatch.BaseUri);
            UriBuilder requestUriBuilder = new UriBuilder(WebOperationContext.Current.IncomingRequest.UriTemplateMatch.RequestUri);

            baseUriBuilder.Host = "www.example.com";
            requestUriBuilder.Host = baseUriBuilder.Host;

            OperationContext.Current.IncomingMessageProperties["MicrosoftDataServicesRootUri"] = baseUriBuilder.Uri;
            OperationContext.Current.IncomingMessageProperties["MicrosoftDataServicesRequestUri"] = requestUriBuilder.Uri;
        }

        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Noop
    }
}

There’s not much rocket science in there, although some noteworthy actions are being performed:

  • The current WebOperationContext is queried for the full incoming request URI as well as the base URI. These values are based on the local server, in our example “srvweb01” and “srvweb02”.
  • The Host part of that URI is replaced with the external hostname, www.example.com.
  • These two values are stored in the current OperationContext’s IncomingMessageProperties. Apparently, the keys MicrosoftDataServicesRootUri and MicrosoftDataServicesRequestUri affect the URL being generated in the XML feed.

To apply this inspector to our WCF OData Service, we’ve created a behavior and applied the inspector to our service channel. Here’s the code for that:

[AttributeUsage(AttributeTargets.Class)]
public class RewriteBaseUrlBehavior
    : Attribute, IServiceBehavior
{
    public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        // Noop
    }

    public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
    {
        // Noop
    }

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
        {
            foreach (EndpointDispatcher endpointDispatcher in channelDispatcher.Endpoints)
            {
                endpointDispatcher.DispatchRuntime.MessageInspectors.Add(
                    new RewriteBaseUrlMessageInspector());
            }
        }
    }
}

This behavior simply loops over all channel dispatchers and their endpoints and applies our inspector to each of them.

Finally, all that’s left to do to fix our reverse proxy issue is to annotate our WCF OData Service with this behavior attribute:

[RewriteBaseUrlBehavior]
public class PackageFeedHandler
    : DataService<PackageEntities>
{
    // ...
}

Working with URL routing

A while ago, I posted about Using dynamic WCF service routes. The approach described above is also appropriate for services created using that technique, although the source code for the inspector is slightly different.

public class RewriteBaseUrlMessageInspector
    : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        if (WebOperationContext.Current != null && WebOperationContext.Current.IncomingRequest.UriTemplateMatch != null)
        {
            UriBuilder baseUriBuilder = new UriBuilder(WebOperationContext.Current.IncomingRequest.UriTemplateMatch.BaseUri);
            UriBuilder requestUriBuilder = new UriBuilder(WebOperationContext.Current.IncomingRequest.UriTemplateMatch.RequestUri);

            var routeData = MyGet.Server.Routing.DynamicServiceRoute.GetCurrentRouteData();
            var route = routeData.Route as Route;
            if (route != null)
            {
                string servicePath = route.Url;
                servicePath = Regex.Replace(servicePath, @"({\*.*})", ""); // strip out catch-all
                foreach (var routeValue in routeData.Values)
                {
                    if (routeValue.Value != null)
                    {
                        servicePath = servicePath.Replace("{" + routeValue.Key + "}", routeValue.Value.ToString());
                    }
                }

                if (!servicePath.StartsWith("/"))
                {
                    servicePath = "/" + servicePath;
                }

                if (!servicePath.EndsWith("/"))
                {
                    servicePath = servicePath + "/";
                }

                requestUriBuilder.Path = requestUriBuilder.Path.Replace(baseUriBuilder.Path, servicePath);
                requestUriBuilder.Host = baseUriBuilder.Host;
                baseUriBuilder.Path = servicePath;
            }

            OperationContext.Current.IncomingMessageProperties["MicrosoftDataServicesRootUri"] = baseUriBuilder.Uri;
            OperationContext.Current.IncomingMessageProperties["MicrosoftDataServicesRequestUri"] = requestUriBuilder.Uri;
        }

        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Noop
    }
}

The idea is identical, except that we’re updating the incoming URL path for reasons described in the aforementioned blog post.

Enjoy!

Running Memcached on Windows Azure for PHP

After three conferences in two weeks with a lot of “airport time”, which typically converts into “let’s code!” time, I think I may have tackled a commonly requested Windows Azure feature for PHP developers. Some sort of distributed caching is always a great thing to have when building scalable services and applications. While Windows Azure offers a distributed caching layer in the form of Windows Azure Caching, that component currently lacks support for non-.NET technologies. I’ve heard work is being done there, but that’s not very helpful if you are building your app today. This blog post will show you how to modify a Windows Azure deployment to run and use Memcached in the easiest possible manner.

Note: this post focuses on PHP, but the same approach can be used to set up Memcached on Windows Azure for NodeJS, Java, Ruby, Python, …

Related downloads:
The scaffolder source code: MemcachedScaffolderSource.zip (1.12 mb)
The scaffolder, packaged and ready for use: MemcachedScaffolder.phar (2.87 mb)

The short version: use my scaffolder

As you may know, when working with PHP on Windows Azure and making use of the Windows Azure SDK, you can use and create scaffolders. The Windows Azure SDK for PHP includes a powerful scaffolding feature that allows you to quickly set up a pre-packaged and configured website ready for Windows Azure.

If you want to use Memcached in your project, do the following:

  • Download my custom MemcachedScaffolder (MemcachedScaffolder.phar (2.87 mb)) and make sure it is located either under the scaffolders folder of the Windows Azure SDK for PHP, or that you remember the path to this scaffolder.
  • Run the scaffolder from the command line (note: best use the latest SVN version of the command-line tools):
scaffolder run -out="c:\temp\myapp" -s="MemcachedScaffolder"

  • Find the newly created Windows Azure project structure in the folder you’ve used.
  • In your PHP code, simply add require_once 'memcache.inc.php'; and enjoy the $memcache variable, which will hold a preconfigured Memcached client for you to use. This $memcache instance will also be updated automatically when server instances are added or removed:
require_once 'memcache.inc.php';

    That’s it!

    The long version: what this scaffolder does behind the scenes

Of course, behind this “developers can simply use 1 line of code” trick a lot of things happen in the background. Let’s go through the places where I’ve made changes to the default scaffolder.

    The ServiceDefinition.csdef file

Let’s start with the beginning: when running Memcached in a Windows Azure instance, you’ll have to specify a port number for it to use. As such, the ServiceDefinition.csdef file, which defines the datacenter configuration for your app, looks like the following:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="PhpOnAzure" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="PhpOnAzure.Web" enableNativeCodeExecution="true">
    <Sites>
      <Site name="Web" physicalDirectory="./PhpOnAzure.Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="HttpEndpoint" />
        </Bindings>
      </Site>
    </Sites>
    <Startup>
      <Task commandLine="add-environment-variables.cmd" executionContext="elevated" taskType="simple" />
      <Task commandLine="install-php.cmd" executionContext="elevated" taskType="simple">
        <Environment>
          <Variable name="EMULATED">
            <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
          </Variable>
        </Environment>
      </Task>
      <Task commandLine="memcached.cmd" executionContext="elevated" taskType="background" />
      <Task commandLine="monitor-environment.cmd" executionContext="elevated" taskType="background" />
    </Startup>
    <Endpoints>
      <InputEndpoint name="HttpEndpoint" protocol="http" port="80" />
      <InternalEndpoint name="MemcachedEndpoint" protocol="tcp" />
    </Endpoints>
    <Imports>
      <Import moduleName="Diagnostics"/>
    </Imports>
    <ConfigurationSettings>
    </ConfigurationSettings>
  </WebRole>
</ServiceDefinition>

Note the <InternalEndpoint name="MemcachedEndpoint" protocol="tcp" /> line. It defines that the web role instance should open some TCP port in the firewall with the name MemcachedEndpoint and expose it to the other virtual machines in your deployment. We’ll use this named endpoint later when starting Memcached.

Something else in this file is noteworthy: the startup tasks under the <Startup> element. With the default scaffolder, the first two tasks (namely add-environment-variables.cmd and install-php.cmd) are also present. The first does nothing more than making some information about your deployment available in environment variables. The second does what its name implies: it installs PHP on your virtual machine. The two scripts I added, memcached.cmd and monitor-environment.cmd, are used to bootstrap Memcached. Note that these run as background tasks: I wanted them always running, so that whenever Memcached crashes the task can simply restart it.

    The php folder

    If you’ve played with the default scaffolder in the Windows Azure SDK for PHP, you probably know that the PHP installation in Windows Azure is a “default” one. This means: no memcached extension is in there. To overcome this, simply copy the correct php_memcache.dll extension into the /php/ext folder and Windows Azure (well, the install-php.cmd script) will know what to do with it.
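Presumably, install-php.cmd then registers the extension with the usual php.ini directive; this single line is an assumption about what the generated configuration ends up containing:

; what the install script presumably adds to php.ini for the copied extension
extension=php_memcache.dll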

    Memcached.cmd and Memcached.ps1

    Under the application’s bin folder, I’ve added some additional startup tasks. The one responsible for starting (and maintaining a running instance of) Memcached is, of course, Memcached.cmd. This one simply delegates the call to Memcached.ps1, of which the following is the source code:

[Reflection.Assembly]::LoadWithPartialName("Microsoft.WindowsAzure.ServiceRuntime")

# Start memcached. To infinity and beyond!
while (1) {
    $p = [diagnostics.process]::Start("memcached.exe", "-m 64 -p " + [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::CurrentRoleInstance.InstanceEndpoints["MemcachedEndpoint"].IPEndpoint.Port)
    $p.WaitForExit()
}

To be honest, this file is pretty simple. It loads the Windows Azure ServiceRuntime assembly, which contains all kinds of information about the current deployment. Next, I start an infinite loop that continuously starts a new memcached.exe process consuming 64 MB of RAM and listening on the port specified by the MemcachedEndpoint defined earlier.
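The delegating Memcached.cmd itself is not shown above; a minimal sketch of what it presumably looks like, modeled on the monitor-environment.cmd shown below (so treat it as an assumption rather than the exact file contents):

@echo off
cd "%~dp0"

rem Allow the PowerShell script to run, then hand off to it
powershell.exe Set-ExecutionPolicy Unrestricted
powershell.exe .\memcached.ps1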

    Monitor-environment.cmd and Monitor-environment.ps1

    The monitor-environment.cmd script takes the same approach as the memcached.cmd script: just pass the command along to a PowerShell script in the form of monitor-environment.ps1. I do want to show you the monitor-environment.cmd script however, as there’s one difference in there: I’m changing the file system permissions for my application (the icacls line).

@echo off
cd "%~dp0"

icacls %RoleRoot%\approot /grant "Everyone":F /T

powershell.exe Set-ExecutionPolicy Unrestricted
powershell.exe .\monitor-environment.ps1

    The reason for changing permissions is simple: I want to make sure I can write a PHP script to disk every minute. Yes, you heard me! I’m using PowerShell (in the monitor-environment.ps1 script) to generate PHP code. Here’s the PowerShell:

[Reflection.Assembly]::LoadWithPartialName("Microsoft.WindowsAzure.ServiceRuntime")

# To infinity and beyond!

while(1) {
    ##########################################################
    # Create memcached include file for PHP
    ##########################################################

    # Dump all memcached endpoints to ../memcached-servers.php
    $memcached = "<?php`r`n"
    $memcached += "`$memcachedServers = array("

    $currentRolename = [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::CurrentRoleInstance.Role.Name
    $roles = [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::Roles
    foreach ($role in $roles.Keys | sort-object) {
        if ($role -eq $currentRolename) {
            $instances = $roles[$role].Instances
            for ($i = 0; $i -lt $instances.Count; $i++) {
                $endpoints = $instances[$i].InstanceEndpoints
                foreach ($endpoint in $endpoints.Keys | sort-object) {
                    if ($endpoint -eq "MemcachedEndpoint") {
                        $memcached += "array(`""
                        $memcached += $endpoints[$endpoint].IPEndpoint.Address
                        $memcached += "`" ,"
                        $memcached += $endpoints[$endpoint].IPEndpoint.Port
                        $memcached += "), "
                    }
                }
            }
        }
    }

    $memcached += ");"

    Write-Output $memcached | Out-File -Encoding Ascii ../memcached-servers.php

    # Restart the loop in 1 minute
    Start-Sleep -Seconds 60
}

The output is written every minute to the memcached-servers.php file. Why every minute? Well, if servers are added or removed I want my application to use the correct set of servers. This leaves a possible gap of up to one minute where some server may not be available, but you can easily catch any error related to this in your PHP code (or add a comment to this blog post telling me what a better interval would be). Anyway, here’s the sample output:

<?php
$memcachedServers = array(array('10.0.0.1', 11211), array('10.0.0.2', 11211), );

    All there’s left to do is consume this array. I’ve added a default memcache.inc.php file in the root of the web role to make things easy:

<?php
require_once $_SERVER["RoleRoot"] . '\\approot\\memcached-servers.php';
$memcache = new Memcache();
foreach ($memcachedServers as $memcachedServer) {
    if (strpos($memcachedServer[0], '127.') !== false) {
        $memcachedServer[0] = 'localhost';
    }
    $memcache->addServer($memcachedServer[0], $memcachedServer[1]);
}

    Include this file in your code and you have a full-blown distributed cache available in your Windows Azure deployment! Here’s a sample of some operations that can be done on Memcached:

<?php
error_reporting(E_ALL);
require_once 'memcache.inc.php';

var_dump($memcachedServers);
var_dump($memcache->getVersion());

$memcache->set('key1', 'value1', false, 30);
echo $memcache->get('key1');

$memcache->set('var_key', 'some really big variable', MEMCACHE_COMPRESSED, 50);
echo $memcache->get('var_key');
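Since the server list can lag up to a minute behind topology changes, it’s wise to treat cache calls as best-effort. Here’s a minimal sketch of such defensive code; loadFromDatabase() is a hypothetical stand-in for your own data access logic:

<?php
require_once 'memcache.inc.php';

// The @ operator suppresses connection warnings for servers that were
// removed less than a minute ago; get() returns false on failure.
$value = @$memcache->get('key1');
if ($value === false) {
    $value = loadFromDatabase('key1'); // hypothetical fallback
    @$memcache->set('key1', $value, false, 30);
}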

    That’s it!

    Conclusion and feedback

This is just a fun project I’ve been working on when lonely and bored at airports. However, if you think this is valuable and should, in your opinion, be made available as a standard thing in the Windows Azure SDK for PHP, let me know. I’ll be happy to push this into the main branch and make sure it’s available in a future release.

    Comments or praise? There’s a comment form right below this post!

    Setting up a NuGet repository in seconds: MyGet public feeds

    A few months ago, my colleague Xavier Decoster and I introduced MyGet as a tool where you can create your own, private NuGet feeds. A couple of weeks later we introduced some options to delegate feed privileges to other MyGet users allowing you to make another MyGet user “co-admin” or “contributor” to a feed. Since then we’ve expanded our view on the NuGet ecosystem and moved MyGet from a solution to create your private feeds to a service that allows you to set up a NuGet feed, whether private or public.

    Supporting public feeds allows you to set up a structure similar to www.nuget.org: you can give any user privileges to publish a package to your feed while the user can never manage other packages on your feed. This is great in several scenarios:

    • You run an open source project and want people to contribute modules or plugins to your feed
    • You are a business and you want people to contribute internal packages to your feed whilst prohibiting them from updating or deleting other packages

    Setting up a public feed

    Setting up a public feed on MyGet is similar to setting up a private feed. In fact, both are identical except for the default privileges assigned to users. Navigate to www.myget.org and sign in using an identity provider of choice. Next, create a feed, for example:

[Figure: Creating a MyGet NuGet feed to host your own NuGet packages]

    This new feed may be named “public”, however it is private by obscurity: if someone knows the URL to the feed, he/she can consume packages from it. Let’s change that. Go to the “Feed Security” tab and have a look at the assigned privileges for Everyone. By default, these are set to “Can consume this feed”, meaning that everyone can add the feed URL to Visual Studio and consume packages. Other options are “No access” (requires authentication prior to being able to consume the feed) and “Can contribute own packages to this feed”. This last one is what we want:

[Figure: Setting up NuGet feed security]

    Assigning the “Can contribute own packages to this feed” privilege to a specific user or to everyone means that the user (or everyone) will be able to contribute packages to the feed, as long as the package id used is not already on the feed and as long as the package id was originally submitted by this user. Exactly the same model as www.nuget.org, that is.

    For reference, all available privileges are:

    • Has no access to this feed (speaks for itself)
    • Can consume this feed (allows the user to use the feed in Visual Studio / NuGet)
• Can contribute own packages to this feed (allows the user to contribute packages, but he can only update and remove his own packages, not those of others)
    • Can manage all packages for this feed (allows the user to add packages to the feed via the website and via the NuGet push API)
    • Can manage users and all packages for this feed (extends the above with feed privilege management capabilities)

    Contributing to a public feed

    Of course, if you have a public feed you may want to have people contributing to it. This is very easy: provide them with a link to your feed editing page (for example, http://www.myget.org/Feed/Edit/public). Users can publish their packages via the MyGet user interface in no time.

    If you want to have users push packages using nuget.exe or NuGet Package Explorer, provide them a link to the feed endpoint (for example, http://www.myget.org/F/public/). Using their API key (which can be found in the MyGet profile for the user) they can push packages to the public feed from any API consumer.
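For example, pushing a package from the command line could look like the following (the package name and API key are placeholders, and option spelling may differ slightly between nuget.exe versions):

NuGet.exe push MyPackage.1.0.0.nupkg 00000000-0000-0000-0000-000000000000 -Source http://www.myget.org/F/public/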

    Enjoy!

     

    PS: We’re working on lots more, but will probably provide that in a MyGet Premium version. Make sure to subscribe to our newsletter on www.myget.org if this is of interest.

    NuGet push... to Windows Azure

When looking at how people like to deploy their applications to a cloud environment, a large fraction seems to prefer being able to use their source control system as the source for their production deployment. While interesting, I see a lot of problems there: your source code may not run immediately and probably has to be compiled. You don’t want to maintain compiled assemblies in source control, right? Also, maybe some QA process is in place where a deployment can only occur after approval. Why not use source control for what it’s there for: source control? And how about using a NuGet repository as the source for our deployment? Meet the Windows Azure NuGetRole.

    Disclaimer/Warning: this is demo material and should probably not be used for real-life deployments without making it bullet proof!

    Download the sample code: NuGetRole.zip (262.22 kb)

    How to use it

If you compile the source code (download), you have three steps left in getting your NuGetRole running on Windows Azure:

• Specify the package source to use
    • Add some packages to the package source feed (which you can easily host on MyGet)
    • Deploy to Windows Azure

    When all these steps have been taken care of, the NuGetRole will download all latest package versions from the package source specified in ServiceConfiguration.cscfg:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="NuGetRole.Azure"
                      xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
                      osFamily="1"
                      osVersion="*">
  <Role name="NuGetRole.Web">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="PackageSource" value="http://www.myget.org/F/nugetrole/" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

    Packages you publish should only contain a content and/or lib folder. Other package contents will currently be ignored by the NuGetRole. If you want to add some web content like a default page to your role, simply publish the following package:

[Figure: The package opened in NuGet Package Explorer]
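As an illustration, a package carrying nothing but a default page could be described by a .nuspec along these lines (the id and file names are made up for this example):

<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyWebContent</id>
    <version>1.0.0</version>
    <authors>maartenba</authors>
    <description>Default page for the NuGetRole demo.</description>
  </metadata>
  <files>
    <!-- files under the content folder end up under the web root -->
    <file src="Default.aspx" target="content" />
  </files>
</package>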

    Just push, and watch your Windows Azure web role farm update their contents. Or have your build server push a NuGet package containing your application and have your server farm update itself. Whatever pleases you.

    How it works

    What I did was create a fairly empty Windows Azure project (download).  In this project, one Web role exists. This web role consists of nothing but a Web.config file and a WebRole.cs class which looks like the following:

public class WebRole : RoleEntryPoint
{
    private bool _isSynchronizing;
    private PackageSynchronizer _packageSynchronizer = null;

    public override bool OnStart()
    {
        var localPath = Path.Combine(Environment.GetEnvironmentVariable("RdRoleRoot") + "\\approot");

        _packageSynchronizer = new PackageSynchronizer(
            new Uri(RoleEnvironment.GetConfigurationSettingValue("PackageSource")), localPath);

        _packageSynchronizer.SynchronizationStarted += sender => _isSynchronizing = true;
        _packageSynchronizer.SynchronizationCompleted += sender => _isSynchronizing = false;

        RoleEnvironment.StatusCheck += (sender, args) =>
        {
            if (_isSynchronizing)
            {
                args.SetBusy();
            }
        };

        return base.OnStart();
    }

    public override void Run()
    {
        _packageSynchronizer.SynchronizeForever(TimeSpan.FromSeconds(30));

        base.Run();
    }
}

The above code essentially wires some configuration values, like the local web root and the NuGet package source to use, up to a second class in this project: the PackageSynchronizer. This class simply checks the specified NuGet package source every few minutes, checks for the latest package versions and, if required, updates content and bin files. Each synchronization run does the following:

public void SynchronizeOnce()
{
    var packages = _packageRepository.GetPackages()
        .Where(p => p.IsLatestVersion == true).ToList();

    var touchedFiles = new List<string>();

    // Deploy new content
    foreach (var package in packages)
    {
        var packageHash = package.GetHash();
        var packageFiles = package.GetFiles();
        foreach (var packageFile in packageFiles)
        {
            // Keep filename
            var packageFileName = packageFile.Path.Replace("content\\", "").Replace("lib\\", "bin\\");

            // Mark file as touched
            touchedFiles.Add(packageFileName);

            // Do not overwrite content that has not been updated
            if (!_packageFileHash.ContainsKey(packageFileName) || _packageFileHash[packageFileName] != packageHash)
            {
                _packageFileHash[packageFileName] = packageHash;

                Deploy(packageFile.GetStream(), packageFileName);
            }
        }
    }

    // Remove obsolete content: anything tracked that no package touched during this run
    var obsoleteFiles = _packageFileHash.Keys.Except(touchedFiles).ToList();
    foreach (var obsoletePath in obsoleteFiles)
    {
        _packageFileHash.Remove(obsoletePath);
        Undeploy(obsoletePath);
    }
}

    Or in human language:

    • The specified NuGet package source is checked for packages
• Every package marked “IsLatest” is downloaded and deployed onto the machine
    • Files that have not been used in the current synchronization step are deleted

This is probably not a bullet-proof solution, but I wanted to show you how easy it is to use NuGet not only as a package manager inside Visual Studio, but also from your own code: NuGet is not just a package manager but, in essence, a package management protocol. Which you can easily extend.
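To illustrate that last point, here’s a minimal sketch of talking to a feed from plain C# using the NuGet.Core assembly (the feed URL is a placeholder):

using System;
using System.Linq;
using NuGet;

class ListLatestPackages
{
    static void Main()
    {
        // Point the factory at any NuGet feed, official or private
        var repository = PackageRepositoryFactory.Default
            .CreateRepository("http://www.myget.org/F/somefeed/");

        // Print the latest version of every package on the feed
        foreach (var package in repository.GetPackages()
            .Where(p => p.IsLatestVersion).ToList())
        {
            Console.WriteLine(package.Id + " " + package.Version);
        }
    }
}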

One thing to note: I also made the Windows Azure load balancer ignore the role instance that’s updating itself. This means a role instance that is synchronizing its contents will never be available in the load balancing pool, so no traffic is sent to it during an update.

    ASP.NET MVC dynamic view sections

    Earlier today, a colleague of mine asked for advice on how he could create a “dynamic” view. To elaborate, he wanted to create a change settings page on which various sections would be rendered based on which plugins are loaded in the application.

Intrigued by the question and having no clue on how to do this, I quickly hacked together a SettingsViewModel to which he could add all section view models, no matter what type they are:

public class SettingsViewModel
{
    public List<dynamic> SettingsSections = new List<dynamic>();
}

To my surprise, when looping over this collection in the view, it just works as expected: every section is rendered using its own DisplayTemplate. Simple and slick.

@model MvcApplication.ViewModels.SettingsViewModel

@{
    ViewBag.Title = "Index";
}

<h2>Settings</h2>

@foreach (var item in Model.SettingsSections)
{
    @Html.DisplayFor(model => item);
}
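For completeness, here’s what a section could look like: a plain view model class per plugin (the names below are illustrative, not from the original application):

public class EmailSettingsViewModel
{
    public string SmtpServer { get; set; }
    public int Port { get; set; }
}

Html.DisplayFor resolves the template by type name, so dropping a Views/Shared/DisplayTemplates/EmailSettingsViewModel.cshtml file next to the other views is all it takes:

@model MvcApplication.ViewModels.EmailSettingsViewModel
<fieldset>
    <legend>E-mail settings</legend>
    <p>@Model.SmtpServer:@Model.Port</p>
</fieldset>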

    Why MyGet uses Windows Azure

[Figure: MyGet - NuGet hosting for private feeds]

Recently, one of the Tweeps following me started fooling around and hit one of my sweet spots: Windows Azure. Basically, he mocked me for using Windows Azure for MyGet, a website with enough users but not enough to justify the “scalability” aspect he thought Windows Azure was offering. Since Windows Azure is much, much more than scalability alone, I decided to do a quick writeup about the various reasons why we use Windows Azure for MyGet. And those reasons are not scalability.

    First of all, here’s a high-level overview of our deployment, which may illustrate some of the aspects below:

[Figure: High-level overview of the MyGet deployment]

    Costs

Windows Azure is cheap. Cheap as in cost-effective, not as in, well, sleazy. Many will disagree with me, but the cost perspective of Windows Azure can be really cheap in some cases as well as very expensive in others. For example, if someone asks me if they should move to Windows Azure and they currently have one server running 300 small sites, I’d probably tell them not to move, as it would be a tough price comparison.

With MyGet we run 2 Windows Azure instances in 2 datacenters across the globe (one in the US and one in the EU). For $180.00 per month this means 2 great machines in two very distant regions of the globe. You can probably find those with other hosters as well, but will they manage your machines? Patch and update them? Probably not, for that amount. In our scenario, Windows Azure is cheap.

    Feel free to look at the cost calculator tool to estimate usage costs.

    Traffic Manager

Traffic Manager, a great (beta) product in the Windows Azure offering, allows us to do geographically distributed applications. For example, US users of MyGet will end up in the US datacenter, while European users will end up in the EU datacenter. This is great, and we can easily add extra locations to this policy and have, for example, a third location in Asia.

Next to geographically distributing MyGet, Traffic Manager also ensures that if one datacenter goes down, the DNS pool will consist of only “live” datacenters and thus provides datacenter fail-over. Not ideal, since the web application is served faster from a server that’s closer to the end user, but at least the application will not go down.

One problem we have with this is storage. We use Windows Azure storage (blobs, tables and queues), as it only costs $0.12 per GB. Distributing the application does mean that our US datacenter server has to access storage in the EU datacenter, which of course adds some latency. We try to reduce this using extensive caching on all sides, but it’d be nicer if Traffic Manager allowed us to set up geo-replication for storage as well. This only affects storing package metadata and packages; reading packages is not affected because we’re using the Windows Azure CDN for that.

    CDN

The Windows Azure Content Delivery Network allows us to serve users fast. The main use case for MyGet is accessing and downloading packages. OK, updating has some latency due to the restrictions mentioned above, but if you download a package from MyGet it will always come from a CDN node near the end user, ensuring low latency and fast access. Given that the CDN is just a checkbox on the management pages, integrating with it is a breeze. The only thing we’ve struggled with is finding an acceptable caching policy that limits stale data.

    Windows Azure AppFabric Access Control

MyGet is not one application. MyGet is three applications: our development environment, staging and production. In fact, we even plan for tenants, so every tenant is in fact its own application. To streamline, manage and maintain a clear overview of which user can authenticate to which application via which identity provider, we use ACS to facilitate MyGet authentication.

To give you an example: our dev environment allows logging in via OpenID on a development machine. Production allows for OpenID on a live environment. In staging, we only use Windows Live ID and Facebook, whereas our production website uses different identity providers. Tenants will, in the future, be given the option to authenticate to their own ADFS server; we’re pretty sure ACS will allow us to simply configure that and ensure only tenant X can use that ADFS server.

ACS has been a great time saver and is definitely something we want to use in future projects. It really eases common authentication pains and acts as a service bus between users, identity providers and our applications.

    Windows Azure AppFabric Caching

Currently we don’t use Windows Azure AppFabric Caching in our application. We use the ASP.NET in-memory cache on all machines, but we do feel the need for a distributed caching solution. While AppFabric Caching is appealing, we are thinking about deploying Memcached in our application instead because of the cost structure involved. But we might as well end up with Windows Azure AppFabric Caching anyway, as it integrates nicely with our current codebase.

    Conclusion

    In short, Windows Azure is much more than hosting and scalability. It’s the building blocks available such as Traffic Manager, CDN and Access Control Service that make our lives easier. The pricing structure is not always that transparent but if you dig a little into it you’ll find affordable solutions that are really easy to use because you don’t have to roll your own.

    Book review: Microsoft Windows Azure Development Cookbook

[Figure: Microsoft Windows Azure Development Cookbook]

Over the past few months, I’ve been doing technical reviewing for a great Windows Azure book: the Windows Azure Development Cookbook, published by Packt. During this review I had no idea who the author of the book was, but after publishing it seems the author is none other than my fellow Windows Azure MVP Neil Mackenzie! If you read his blog, you’ll know why you should immediately buy this book.

Why? Well, Neil usually goes both broad and deep: all required context for understanding a recipe is given, and the recipe itself goes deep enough to cover most of the ins and outs of a specific feature of Windows Azure. Well written, to the point and clear to every reader, novice or expert.

    The book is one of a series of cookbooks published by Packt. They are intended to provide “recipes” showing how to implement specific techniques in a particular technology. They don’t cover getting started scenarios, but do cover some basic techniques, some more advanced techniques and usually one or two expert techniques. From the cookbooks I’ve read, this approach works and should get you up to speed real quick. And that’s no different with this one.

    Here’s a chapter overview:

    1. Controlling Access in the Windows Azure Platform
    2. Handling Blobs in Windows Azure
    3. Going NoSQL with Windows Azure Tables
    4. Disconnecting with Windows Azure Queues
    5. Developing Hosted Services for Windows Azure
    6. Digging into Windows Azure Diagnostics
    7. Managing Hosted Services with the Service Management API
    8. Using SQL Azure
    9. Looking at the Windows Azure AppFabric

    An interesting sample chapter on the Service Management API can be found here.

    Oh and before I forget: Neil, congratulations on your book!  It was a pleasure doing the reviewing!

    A client side Glimpse to your PHP application

[Figure: Glimpse for PHP]

A few months ago, the .NET world was surprised with a magnificent tool called “Glimpse”. Today I’m pleased to release a first draft of a PHP version for Glimpse! Now what is this Glimpse thing… Well: "what Firebug is for the client, Glimpse does for the server... in other words, a client side glimpse into what's going on in your server."

    For a quick demonstration of what this means, check the video at http://getglimpse.com/. Yes, it’s a .NET based video but the idea behind Glimpse for PHP is the same. And if you do need a PHP-based one, check http://screenr.com/27ds (warning: unedited :-))

    Fundamentally Glimpse is made up of 3 different parts, all of which are extensible and customizable for any platform:

    • Glimpse Server Module
    • Glimpse Client Side Viewer
    • Glimpse Protocol

This means any server technology that provides support for the Glimpse protocol can provide the Glimpse Client Side Viewer with information. And that’s what I’ve done.

    What can I do with Glimpse?

    A lot of things. The most basic usage of Glimpse would be enabling it and inspecting your requests by hand. Here’s a small view on the information provided:

[Figure: Glimpse showing phpinfo() and other request data]

    By default, Glimpse offers you a glimpse into the current Ajax requests being made, your PHP Configuration, environment info, request variables, server variables, session variables and a trace viewer. And then there’s the remote tab, Glimpse’s killer feature.

    When configuring Glimpse through www.yoursite.com/?glimpseFile=Config, you can specify a Glimpse session name. If you do that on a separate device, for example a customer’s browser or a mobile device you are working with, you can distinguish remote sessions in the remote tab. This allows debugging requests that are being made live on other devices! A full description is over at http://getglimpse.com/Help/Plugin/Remote.

[Figure: Debugging a mobile browser session with Glimpse remote]

    Adding Glimpse to your PHP project

Installing Glimpse in a PHP application is very straightforward. Glimpse supports PHP 5.2 and higher.

    • For PHP 5.2, copy the source folder of the repository to your server and add <?php include '/path/to/glimpse/index.php'; ?> as early as possible in your PHP script.
    • For PHP 5.3, copy the glimpse.phar file from the build folder of the repository to your server and add <?php include 'phar://path/to/glimpse.phar'; ?> as early as possible in your PHP script.

Here’s an example Hello World page:

<?php
require_once 'phar://../build/Glimpse.phar';
?>
<html>
<head>
    <title>Hello world!</title>
</head>

<?php Glimpse_Trace::info('Rendering body...'); ?>
<body>
    <h1>Hello world!</h1>
    <p>This is just a test.</p>
</body>
<?php Glimpse_Trace::info('Rendered body.'); ?>
</html>

    Enabling Glimpse

    From the moment Glimpse is installed into your web application, navigate to your web application and append the ?glimpseFile=Config query string to enable/disable Glimpse. Optionally, a client name can also be specified to distinguish remote requests.

[Figure: Configuring Glimpse for PHP]

    After enabling Glimpse, a small “eye” icon will appear in the bottom-right corner of your browser. Click it and behold the magic!

    Now of course: anyone can potentially enable Glimpse. If you don’t want that, ensure you have some conditional mechanism around the <?php require_once 'phar://../build/Glimpse.phar'; ?> statement.
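A minimal sketch of such a guard, assuming you only want Glimpse enabled for requests from your own machine (adapt the condition to whatever makes sense for your setup):

<?php
// Only load Glimpse for a whitelisted client address (assumption: adjust as needed)
if (isset($_SERVER['REMOTE_ADDR']) && $_SERVER['REMOTE_ADDR'] === '127.0.0.1') {
    require_once 'phar://../build/Glimpse.phar';
}
?>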

    Creating a first Glimpse plugin

Not enough information on your screen? Working with Zend Framework and want to have a look at route values? Working with WordPress and want to view some hidden details about a post through Glimpse? The sky is the limit. All there is to it is creating a Glimpse plugin and registering it. Implementing Glimpse_Plugin_Interface is enough:

<?php
class MyGlimpsePlugin
    implements Glimpse_Plugin_Interface
{
    public function getData(Glimpse $glimpse) {
        $data = array(
            array('Included file path')
        );

        foreach (get_included_files() as $includedFile) {
            $data[] = array($includedFile);
        }

        return array(
            "MyGlimpsePlugin" => count($data) > 0 ? $data : null
        );
    }

    public function getHelpUrl() {
        return null; // or the URL to a help page
    }
}
?>

    To register the plugin, add a call to $glimpse->registerPlugin():

<?php
$glimpse->registerPlugin(new MyGlimpsePlugin());
?>

    And Bob’s your uncle:

[Figure: A custom Glimpse plugin in PHP]

    Now what?

    Well, it’s up to you. First of all: all feedback would be welcomed. Second of all: this is on Github (https://github.com/Glimpse/Glimpse.PHP). Feel free to fork and extend! Feel free to contribute plugins, core features, whatever you like! Have a lot of CakePHP projects? Why not contribute a plugin that provides a Glimpse at CakePHP diagnostics?

    ‘Till next time!

    Version 4 of the Windows Azure SDK for PHP released

Only a few months after the Windows Azure SDK for PHP 3.0.0, Microsoft and RealDolmen are proud to present you the next version of the most complete SDK for Windows Azure out there (yes, that is a rant against the .NET SDK!): the Windows Azure SDK for PHP. We’ve been working very hard with an expanding, globally distributed team on getting this version out.

The Windows Azure SDK 4 contains some significant feature enhancements. For example, it now incorporates a PHP library for accessing Windows Azure storage, a logging component, a session sharing component and clients for both the Windows Azure and SQL Azure Management APIs. On top of that, all of these APIs are now also available from the command line, both under Windows and Linux. This means you can batch-script a complete datacenter setup including servers, storage, SQL Azure, firewalls, … If that’s not cool, move to the North Pole.

    Here’s the official change log:

    • New feature: Service Management API support for SQL Azure
    • New feature: Service Management API's exposed as command-line tools
• New feature: Microsoft_WindowsAzure_RoleEnvironment for retrieving environment details
    • New feature: Package scaffolders
    • Integration of the Windows Azure command-line packaging tool
    • Expansion of the autoloader class increasing performance
    • Several minor bugfixes and performance tweaks


    Also keep an eye on www.sdn.nl where I’ll be posting an article on scripting a complete application deployment to Windows Azure, including SQL Azure, storage and firewalls.

    And finally: keep an eye on http://azurephp.interoperabilitybridges.com and http://blogs.technet.com/b/port25/. I have a feeling some cool stuff may be coming following this release...

    Copy packages from one NuGet feed to another

[Figure: Copy packages from one NuGet feed to another - MyGet NuGet Server]

Yesterday, a funny discussion was going on at the NuGet Discussion Forum on CodePlex. Funny, you say? Well, yes. Funny because it was about a feature we envisioned as a must-have feature for the NuGet ecosystem: copying packages from one NuGet feed to another. And funny because we already have that feature present in MyGet. You may wonder why anyone would want to do that. Allow me to explain.

    Scenarios where copying packages makes sense

    The first scenario is feed stability. Imagine you are building a project and expect to always reference a NuGet package from the official feed. That’s OK as long as you have that package present in the NuGet feed, but what happens if someone removes it or updates it without respecting proper versioning? This should not happen, but it can be an unpleasant surprise if it happens. Copying the package to another feed provides stability: the specific package version is available on that other feed and will never change unless you update or remove it. It puts you in control, not the package owner.

A second scenario: enhanced speed! It’s simply much faster to pull packages from a local feed or a feed that’s geographically distributed, like the one MyGet offers (US and Europe at the moment). This is not to bash any carriers or network providers; it’s just physics: electrons don’t travel that fast, and it’s better to have them coming from a closer location.

    But… how to do it? Client side

There are some solutions to this problem/feature. The first one is a hard one: write a script that just pulls packages from the official feed. You’ll find a suggestion on how to do that here. This approach, however, does not pull along dependencies and forces you to do ugly, user-unfriendly things. Let’s go for beauty :-)

Rob Reynolds (aka @ferventcoder) added some extension sauce to NuGet.exe:

NuGet.exe Install /ExcludeVersion /OutputDir %LocalAppData%\NuGet\Commands AddConsoleExtension
NuGet.exe addextension nuget.copy.extension
NuGet.exe copy castle.windsor -destination http://myget.org/F/somefeed

Sweet! And Rob also shared how he created this extension (warning: interesting read!).

    But… how to do it? Server side

    The easiest solution is to just use MyGet! We have a nifty feature in there named “Mirror packages”. It copies the selected package to your private feed, distributes it across our CDN nodes for a fast download and it pulls along all dependencies.

[Figure: Mirroring a NuGet package - copying a NuGet package to your feed]

    Enjoy making NuGet a component of your enterprise workflow! And MyGet of course as well!