Maarten Balliauw {blog}

ASP.NET MVC, Microsoft Azure, PHP, web development ...


Remote profiling Windows Azure Cloud Services with dotTrace

Here’s another cross-post from our JetBrains .NET blog. It’s focused around dotTrace but there are a lot of tips and tricks around Windows Azure Cloud Services in it as well, especially around working with the load balancer. Enjoy the read!

With dotTrace Performance, we can profile applications running on our local computer as well as on remote machines. The latter can be very useful when some performance problems only occur on the staging server (or even worse: only in production). And what if that remote server is a Windows Azure Cloud Service?

Note: in this post we’ll be exploring how to set up a Windows Azure Cloud Service, the “platform-as-a-service” side of Windows Azure, for remote profiling using dotTrace. If you are working with regular virtual machines (“infrastructure-as-a-service”), the only thing you have to do is open up any port in the load balancer, redirect it to the machine’s port 9000 (dotTrace’s default) and follow the regular remote profiling workflow.

Preparing your Windows Azure Cloud Service for remote profiling

Since we don’t have system administrators at hand when working with cloud services, we have to do some of their work ourselves. The most important piece of work is making sure the load balancer in Windows Azure lets dotTrace’s traffic through to the server instance we want to profile.

We can do this by adding an InstanceInput endpoint type in the web- or worker role’s configuration:

Windows Azure InstanceInput endpoint

By default, the Windows Azure load balancer uses a round-robin approach to route traffic to role instances: in essence, every request may end up on a different instance. When profiling later on, we want to target a specific machine. That’s what the InstanceInput endpoint allows us to do: it opens up a range of ports on the load balancer and forwards traffic to a local port. In the example above, we’re opening ports 9000-9019 in the load balancer and forwarding them to port 9000 on the server. If we want to connect to a specific instance, we can use a port number from this range: port 9000 will connect to port 9000 on role instance 0, port 9001 will connect to port 9000 on role instance 1, and so on.
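For reference, the endpoint from the screenshot above can also be declared by hand in the role’s ServiceDefinition.csdef. Here’s a sketch of the relevant fragment; the endpoint name is just a placeholder:

<Endpoints>
  <!-- Opens public ports 9000-9019 on the load balancer; each public port maps to local port 9000 on one instance -->
  <InstanceInputEndpoint name="DotTraceRemoteAgent" protocol="tcp" localPort="9000">
    <AllocatePublicPortFrom>
      <FixedPortRange min="9000" max="9019" />
    </AllocatePublicPortFrom>
  </InstanceInputEndpoint>
</Endpoints>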

When deploying, make sure to enable remote desktop for the role as well. This will allow us to connect to a specific machine and start dotTrace’s remote agent there.

Windows Azure Remote Desktop RDP

That’s it. Whenever we want to start remote profiling on a specific role instance, we can now connect to the machine directly.

Starting a remote profiling session with a specific instance

And then that moment is there: we need to profile production!

First of all, we want to open a remote desktop connection to one of our role instances. In the Windows Azure management portal, we can connect to a specific instance by selecting it and clicking the Connect button. Save the file that’s being downloaded somewhere on your system: we need to change it before connecting.

Windows Azure connect to specific role instance

The reason for saving and not immediately opening the .rdp file is that we have to copy the dotTrace Remote Agent to the machine. In order to do that we want to enable access to our local drives. Right-click the downloaded .rdp file and select Edit from the context menu. Under the Local Resources tab, check the Drives option to allow access to our local filesystem.

Windows Azure access local filesystem

Save the changes and connect to the remote machine. We can now copy the dotTrace Remote Agent to the role instance by copying all files from our local dotTrace installation. The Remote Agent can be found in C:\Program Files (x86)\JetBrains\dotTrace\v5.3\Bin\Remote, but since the machine in Windows Azure has no clue about that path we have to specify \\tsclient\C\Program Files (x86)\JetBrains\dotTrace\v5.3\Bin\Remote instead.

From the copied folder, launch the RemoteAgent.exe. A console window similar to the one below will appear:

dotTrace Remote Agent console window

Not there yet: we did open the load balancer in Windows Azure to allow traffic to flow to our machine, but the machine’s own firewall will be blocking our incoming connection. To solve this, configure Windows Firewall to allow access on port 9000. A one-liner which can be run in a command prompt would be the following:

netsh advfirewall firewall add rule name="Profiler" dir=in action=allow protocol=TCP localport=9000

 

Since we’ve opened ports 9000 through 9019 in the Windows Azure load balancer and every role instance gets its own port number from that range, we can now connect to the machine using dotTrace. We’ve connected to instance 1, which means we have to connect to port 9001 in dotTrace’s Attach to Process window. The Remote Agent URL will look like http://<yourservice>.cloudapp.net:PORT/RemoteAgent/AgentService.asmx.

Attach to process

Next, we can select the process we want to do performance tracing on. I’ve deployed a web application so I’ll be connecting to IIS’s w3wp.exe.

Profile application dotTrace

We can now use our application and try to reproduce the performance issues. Once we feel we have enough data, the Get Snapshot button will download all required data from the server for local inspection.

dotTrace get performance snapshot

We can now perform our performance analysis tasks and hunt for performance issues. We can analyze the snapshot data just as if we had recorded the snapshot locally. After determining the root cause and deploying a fix, we can repeat the process to collect another snapshot and verify that we have resolved the performance problem. Note that all steps in this post should be executed again in the next profiling session: Windows Azure’s Cloud Service machines are stateless and will probably discard everything we’ve done with them so far.

Analyze snapshot data

Bonus tip: get the instance being profiled out of the load balancer

Since we are profiling a production application, we may get in the way of our users by collecting profiling data. Another issue is that our own test data and our live users’ data will show up in the performance snapshot. And if we’re running a lot of instances, not every action we perform in the application will hit the role instance we’ve connected to, because of Windows Azure’s round-robin load balancing.

Ideally we want to temporarily remove the role instance we’re profiling from the load balancer to overcome these issues. The good news is: we can do this! The only thing we have to do is add a small piece of code to our WebRole.cs or WorkerRole.cs class.

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

        RoleEnvironment.StatusCheck += (sender, args) =>
        {
            if (File.Exists("C:\\Config\\profiling.txt"))
            {
                args.SetBusy();
            }
        };

        return base.OnStart();
    }
}

Essentially what we’re doing here is handling the load balancer’s probes that check whether our node is still healthy. We can choose to respond to the load balancer that our current instance is busy and should not receive any new requests. In the example code above we’re checking if the file C:\Config\profiling.txt exists. If it does, we respond to the load balancer with a busy status.

When we start profiling, we can now create the C:\Config\profiling.txt file to take the instance we’re profiling out of the server pool. After about a minute, the management portal will report the instance is “Busy”.

Role instance marked Busy

The best thing is we can still attach to the instance-specific endpoint and attach dotTrace to this instance. Just keep in mind that using the application should now happen in the remote desktop session we opened earlier, since we no longer have the current machine available from the Internet.

dotTrace attached to the busy role instance

Once finished, we can simply remove the C:\Config\profiling.txt file and Windows Azure will add the machine back to the server pool. Don't forget this as otherwise you'll be paying for the machine without being able to serve the application from it. Reimaging the machine will also add it to the pool again.

Enjoy!

Custom media types for ASP.NET Web API versioning

There is a raging discussion on the interwebs on whether to version APIs by using their URL or by using a custom media type. Some argue that doing it in the URL breaks REST (since a different URL is a different resource, while a new version doesn’t necessarily mean a new resource is available). While I still feel good about both approaches, I guess it depends on the domain you are working with.

But that is not the topic of this post. I recently found a sample on CodePlex providing support for routing versioned URLs to different namespaces. In short, it maps /api/v1/values to MyApp.V1.Controllers and /api/v2/values to MyApp.V2.Controllers. Great! But that only supports the URL-versioning side of the discussion. Let’s implement this sample and build ASP.NET Web API support for versioning an API using custom media types…

Custom Media Types

If you have no clue about what I am talking about, no worries. I’ll give you a quick primer using the GitHub API as an example. Since version 3 of their API, the endpoints (or “resource addresses”) no longer change with every version of the API. Instead, GitHub parses the Accept HTTP header to determine the incoming message version and the expected response version.

Getting a list of repositories from the API? The URL will always be /users/repos. However different incoming and outgoing responses are possible, varying based on their media types. Want to use the V3 message format in JSON? Use application/vnd.github.v3+json. Prefer the V3 message format in XML? Use application/vnd.github.v3+xml. Whenever they update their messages, they can add a new media type such as application/vnd.github.v4 without changing any URL. Nifty trick, aye? Let’s do this for our own API.

IHttpControllerSelector

The IHttpControllerSelector interface allows you to interfere in selecting the right controller for the current request. This is an ideal location for grabbing all contextual information and providing ASP.NET Web API with a controller based on that context.

public class AcceptHeaderControllerSelector : IHttpControllerSelector
{
    private const string ControllerKey = "controller";

    private readonly HttpConfiguration _configuration;
    private readonly Func<MediaTypeHeaderValue, string> _namespaceResolver;
    private readonly Lazy<Dictionary<string, HttpControllerDescriptor>> _controllers;
    private readonly HashSet<string> _duplicates;

    public AcceptHeaderControllerSelector(HttpConfiguration config, Func<MediaTypeHeaderValue, string> namespaceResolver)
    {
        _configuration = config;
        _namespaceResolver = namespaceResolver;
        _duplicates = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        _controllers = new Lazy<Dictionary<string, HttpControllerDescriptor>>(InitializeControllerDictionary);
    }

    private Dictionary<string, HttpControllerDescriptor> InitializeControllerDictionary()
    {
        var dictionary = new Dictionary<string, HttpControllerDescriptor>(StringComparer.OrdinalIgnoreCase);

        // Create a lookup table where key is "namespace.controller". The value of "namespace" is the last
        // segment of the full namespace. For example:
        // MyApplication.Controllers.V1.ProductsController => "V1.Products"
        IAssembliesResolver assembliesResolver = _configuration.Services.GetAssembliesResolver();
        IHttpControllerTypeResolver controllersResolver = _configuration.Services.GetHttpControllerTypeResolver();

        ICollection<Type> controllerTypes = controllersResolver.GetControllerTypes(assembliesResolver);

        foreach (Type t in controllerTypes)
        {
            var segments = t.Namespace.Split(Type.Delimiter);

            // For the dictionary key, strip "Controller" from the end of the type name.
            // This matches the behavior of DefaultHttpControllerSelector.
            var controllerName = t.Name.Remove(t.Name.Length - DefaultHttpControllerSelector.ControllerSuffix.Length);

            var key = String.Format(CultureInfo.InvariantCulture, "{0}.{1}", segments[segments.Length - 1], controllerName);

            // Check for duplicate keys.
            if (dictionary.Keys.Contains(key))
            {
                _duplicates.Add(key);
            }
            else
            {
                dictionary[key] = new HttpControllerDescriptor(_configuration, t.Name, t);
            }
        }

        // Remove any duplicates from the dictionary, because these create ambiguous matches.
        // For example, "Foo.V1.ProductsController" and "Bar.V1.ProductsController" both map to "v1.products".
        foreach (string s in _duplicates)
        {
            dictionary.Remove(s);
        }
        return dictionary;
    }

    // Get a value from the route data, if present.
    private static T GetRouteVariable<T>(IHttpRouteData routeData, string name)
    {
        object result = null;
        if (routeData.Values.TryGetValue(name, out result))
        {
            return (T)result;
        }
        return default(T);
    }

    public HttpControllerDescriptor SelectController(HttpRequestMessage request)
    {
        IHttpRouteData routeData = request.GetRouteData();
        if (routeData == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }

        // Get the namespace from the Accept header and the controller variable from the route data.
        string namespaceName = null;
        foreach (var accepts in request.Headers.Accept)
        {
            namespaceName = _namespaceResolver(accepts);
            if (namespaceName != null)
            {
                break;
            }
        }
        if (namespaceName == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }

        string controllerName = GetRouteVariable<string>(routeData, ControllerKey);
        if (controllerName == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }

        // Find a matching controller.
        string key = String.Format(CultureInfo.InvariantCulture, "{0}.{1}", namespaceName, controllerName);

        HttpControllerDescriptor controllerDescriptor;
        if (_controllers.Value.TryGetValue(key, out controllerDescriptor))
        {
            return controllerDescriptor;
        }
        else if (_duplicates.Contains(key))
        {
            throw new HttpResponseException(
                request.CreateErrorResponse(HttpStatusCode.InternalServerError,
                    "Multiple controllers were found that match this request."));
        }
        else
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
    }

    public IDictionary<string, HttpControllerDescriptor> GetControllerMapping()
    {
        return _controllers.Value;
    }
}

To be honest, I did not write much code in this. I grabbed the IHttpControllerSelector implementation from the sample on CodePlex and added just these lines to check the Accept header instead.

// Get the namespace from the Accept header.
string namespaceName = null;
foreach (var accepts in request.Headers.Accept)
{
    namespaceName = _namespaceResolver(accepts);
    if (namespaceName != null)
    {
        break;
    }
}
if (namespaceName == null)
{
    throw new HttpResponseException(HttpStatusCode.NotFound);
}

The real logic in finding out the version that is called is delegated to the user of this IHttpControllerSelector. Let’s wire it up!

Wiring it up

ASP.NET Web API has a lot of “plugs”, among which there’s one where we can plug in our custom IHttpControllerSelector. Let’s override the default one and add our own:

config.Services.Replace(typeof(IHttpControllerSelector),
    new AcceptHeaderControllerSelector(config, accept =>
    {
        foreach (var parameter in accept.Parameters)
        {
            if (parameter.Name.Equals("version", StringComparison.InvariantCultureIgnoreCase))
            {
                switch (parameter.Value)
                {
                    case "1.0": return "v1";
                    case "2.0": return "v2";
                }
            }
        }

        return "v2"; // default namespace, return null to throw 404 when namespace not given
    }));

As you can see, we pass in a lambda which gets called with the contents of the Accept header and must return the namespace derived from it. The above example works when using a version parameter on the header, e.g. application/json;version=1.0 and application/json;version=2.0. The last statement returns “v2” as the default version when no specific media type is given. Return null if you want this to result in a 404 Not Found instead.
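For completeness, here’s a rough sketch of what a client request with such a versioned Accept header could look like using HttpClient; the URL is just a placeholder:

using System;
using System.Net.Http;
using System.Net.Http.Headers;

class ApiClientSample
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Ask for version 1.0 of the API through a "version" parameter on the Accept header
            var accept = new MediaTypeWithQualityHeaderValue("application/json");
            accept.Parameters.Add(new NameValueHeaderValue("version", "1.0"));
            client.DefaultRequestHeaders.Accept.Add(accept);

            // Placeholder URL; the custom controller selector routes this to the V1 namespace
            var response = client.GetAsync("http://localhost/api/values").Result;
            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}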

Using this header scheme is recommended but of course other options are possible. It’s your lambda!

Another approach would be going "GitHub style" and using media types like application/vnd.api.v1+json:

config.Services.Replace(typeof(IHttpControllerSelector),
    new AcceptHeaderControllerSelector(config, accept =>
    {
        var matches = Regex.Match(accept.MediaType, @"application\/vnd.api.(.*)\+.*");
        if (matches.Groups.Count >= 2)
        {
            return matches.Groups[1].Value;
        }
        return "v2"; // default namespace, return null to throw 404 when namespace not given
    }));

Note that when using the GitHub-style media type, it’s best to also configure the default media type formatters to recognize these new types. That way you can even use different media type formats for each API version.

// Add custom media types as supported to their default formatters
config.Formatters.JsonFormatter.SupportedMediaTypes.Add(new MediaTypeWithQualityHeaderValue("application/vnd.api.v1+json"));
config.Formatters.JsonFormatter.SupportedMediaTypes.Add(new MediaTypeWithQualityHeaderValue("application/vnd.api.v2+json"));

config.Formatters.XmlFormatter.SupportedMediaTypes.Add(new MediaTypeWithQualityHeaderValue("application/vnd.api.v1+xml"));
config.Formatters.XmlFormatter.SupportedMediaTypes.Add(new MediaTypeWithQualityHeaderValue("application/vnd.api.v2+xml"));

That’s basically it. We can now implement our controllers in different namespaces, like so:

namespace TestSelector.Controllers.V1
{
    public class ValuesController : ApiController
    {
        public string Get()
        {
            return "This is a V1 response.";
        }
    }
}

namespace TestSelector.Controllers.V2
{
    public class ValuesController : ApiController
    {
        public string Get()
        {
            return "This is a V2 response.";
        }
    }
}

When providing different Accept headers, we now get routed to the correct namespace depending on our custom media type. REST maturity level up!

I’ve issued a pull request on the official samples page, in the meanwhile here’s the download: AcceptHeaderControllerSelector.zip (238.43 kb)

Enjoy!

[edit] There’s a project on GitHub containing other implementations as well; check http://github.com/Sebazzz/SDammann.WebApi.Versioning

Global Windows Azure Bootcamp - April 27th

On April 27th, 2013, you’ll be able to join a Windows Azure Bootcamp at a location close to you. We started this with the idea of maybe having 10 or 15 locations worldwide. We were wrong. Here’s what happened:

Many locations for our bootcamp!

In short: we now have over 50 locations available where a bootcamp will be organized! This one day deep dive class will get you up to speed on developing for Windows Azure. The class includes a trainer with deep real world experience with Windows Azure, as well as a series of labs so you can practice what you just learned. It’s free, so find your location and join the fun!

Running unit tests when deploying to Windows Azure Web Sites

When deploying an application to Windows Azure Web Sites, a number of deployment steps are executed. For .NET projects, msbuild is triggered. For node.js applications, a list of dependencies is restored. For PHP applications, files are copied from source control to the actual web root which is served publicly. Wouldn’t it be cool if Windows Azure Web Sites refused to deploy fresh source code whenever unit tests fail? In this post, I’ll show you how.

Disclaimer:  I’m using PHP and PHPUnit here but the same approach can be used for node.js. .NET is a bit harder since most test runners out there are not supported by the Windows Azure Web Sites sandbox. I’m confident however that in the near future this issue will be resolved and the same technique can be used for .NET applications.

Our sample application

First of all, let’s create a simple application. Here’s a very simple one using the Silex framework which is similar to frameworks like Sinatra and Nancy.

<?php
require_once(__DIR__ . '/../vendor/autoload.php');

$app = new \Silex\Application();

$app->get('/', function (\Silex\Application $app) {
    return 'Hello, world!';
});

$app->run();

Next, we can create some unit tests for this application. Since our app itself isn’t that massive to test, let’s create some dummy tests instead:

<?php
namespace Jb\Tests;

class SampleTest extends \PHPUnit_Framework_TestCase {

    public function testFoo() {
        $this->assertTrue(true);
    }

    public function testBar() {
        $this->assertTrue(true);
    }

    public function testBar2() {
        $this->assertTrue(true);
    }
}

As we can see from our IDE, the three unit tests run perfectly fine.

Running PHPUnit in PhpStorm

Now let’s see if we can hook them up to Windows Azure Web Sites…

Creating a Windows Azure Web Sites deployment script

Windows Azure Web Sites allows us to customize deployment. Using the azure-cli tools we can issue the following command:

azure site deploymentscript

As you can see from the following screenshot, this command allows us to specify some additional options, such as specifying the project type (ASP.NET, PHP, node.js, …) or the script type (batch or bash).

azure site deploymentscript options

Running this command does two things: it creates a .deployment file which tells Windows Azure Web Sites which command should be run during the deployment process and a deploy.cmd (or deploy.sh if you’ve opted for a bash script) which contains the entire deployment process. Let’s first look at the .deployment file:

[config]
command = bash deploy.sh

This is a very simple file which tells Windows Azure Web Sites to invoke the deploy.sh script using bash as the shell. The default deploy.sh will look like this:

#!/bin/bash

# ----------------------
# KUDU Deployment Script
# ----------------------

# Helpers
# -------

exitWithMessageOnError () {
  if [ ! $? -eq 0 ]; then
    echo "An error has occured during web site deployment."
    echo $1
    exit 1
  fi
}

# Prerequisites
# -------------

# Verify node.js installed
where node &> /dev/null
exitWithMessageOnError "Missing node.js executable, please install node.js, if already installed make sure it can be reached from current environment."

# Setup
# -----

SCRIPT_DIR="$( cd -P "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
ARTIFACTS=$SCRIPT_DIR/artifacts

if [[ ! -n "$DEPLOYMENT_SOURCE" ]]; then
  DEPLOYMENT_SOURCE=$SCRIPT_DIR
fi

if [[ ! -n "$NEXT_MANIFEST_PATH" ]]; then
  NEXT_MANIFEST_PATH=$ARTIFACTS/manifest

  if [[ ! -n "$PREVIOUS_MANIFEST_PATH" ]]; then
    PREVIOUS_MANIFEST_PATH=$NEXT_MANIFEST_PATH
  fi
fi

if [[ ! -n "$KUDU_SYNC_COMMAND" ]]; then
  # Install kudu sync
  echo Installing Kudu Sync
  npm install kudusync -g --silent
  exitWithMessageOnError "npm failed"

  KUDU_SYNC_COMMAND="kuduSync"
fi

if [[ ! -n "$DEPLOYMENT_TARGET" ]]; then
  DEPLOYMENT_TARGET=$ARTIFACTS/wwwroot
else
  # In case we are running on kudu service this is the correct location of kuduSync
  KUDU_SYNC_COMMAND="$APPDATA\\npm\\node_modules\\kuduSync\\bin\\kuduSync"
fi

##################################################################################################################################
# Deployment
# ----------

echo Handling Basic Web Site deployment.

# 1. KuduSync
echo Kudu Sync from "$DEPLOYMENT_SOURCE" to "$DEPLOYMENT_TARGET"
$KUDU_SYNC_COMMAND -q -f "$DEPLOYMENT_SOURCE" -t "$DEPLOYMENT_TARGET" -n "$NEXT_MANIFEST_PATH" -p "$PREVIOUS_MANIFEST_PATH" -i ".git;.deployment;deploy.sh"
exitWithMessageOnError "Kudu Sync failed"

##################################################################################################################################

echo "Finished successfully."

This script does two things. First, it sets up a bunch of environment variables so our script has all the paths to the source code repository, the target web site root and some well-known commands. Next, it runs the KuduSync executable, a helper which copies files from the source code repository to the web site root using an optimized algorithm that only copies files that have been modified. For .NET, there would be a third step: running msbuild to compile sources into binaries.

Right before the part that reads # Deployment, we can add some additional steps for running unit tests. We can invoke the php.exe executable (located on the D:\ drive in Windows Azure Web Sites) and run phpunit.php passing in the path to the test configuration file:

##################################################################################################################################
# Testing
# -------

echo Running PHPUnit tests.

# 1. PHPUnit
"D:\Program Files (x86)\PHP\v5.4\php.exe" -d auto_prepend_file="$DEPLOYMENT_SOURCE\\vendor\\autoload.php" "$DEPLOYMENT_SOURCE\\vendor\\phpunit\\phpunit\\phpunit.php" --configuration "$DEPLOYMENT_SOURCE\\app\\phpunit.xml"
exitWithMessageOnError "PHPUnit tests failed"
echo
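The step above points at app\phpunit.xml, which isn’t shown in this post. A minimal configuration along these lines should do; the suite name and paths are assumptions, adjust them to your project layout:

<?xml version="1.0" encoding="UTF-8"?>
<phpunit bootstrap="../vendor/autoload.php">
  <testsuites>
    <testsuite name="Application">
      <!-- Assumes the test classes live in a tests folder next to phpunit.xml -->
      <directory>./tests</directory>
    </testsuite>
  </testsuites>
</phpunit>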

On a side note, we can also run other commands like issuing a composer update, similar to NuGet package restore in the .NET world:

echo Download composer.
curl -O https://getcomposer.org/composer.phar > /dev/null

echo Run composer update.
cd "$DEPLOYMENT_SOURCE"
"D:\Program Files (x86)\PHP\v5.4\php.exe" composer.phar update --optimize-autoloader

Putting our deployment script to the test

All that’s left to do now is commit and push our changes to Windows Azure Web Sites. If everything goes right, the output for the git push command should contain details of running our unit tests:

git push output showing the PHPUnit test results

Here’s what happens when a test fails:

git push output when a unit test fails

And even better, the Windows Azure Web Sites portal shows us that the latest sources were committed to the git repository but not deployed because tests failed:

Windows Azure Web Sites portal showing the commit as not deployed

As you can see, using deployment scripts we can customize deployment on Windows Azure Web Sites to fit our needs. We can run unit tests, fetch source code from a different location and so on. Enjoy!

Tales from the trenches: resizing a Windows Azure virtual disk the smooth way

We’ve all been there. Running a virtual machine on Windows Azure and all of a sudden you notice that a virtual disk is running full. Having no access to the hypervisor nor to its storage (directly), there’s no easy way out…

Big disclaimer: use the provided code at your own risk! I’m not responsible if something breaks! The provided code is as-is without warranty! I have tested this on a couple of data disks without any problems. I’ve tested this on OS disks and it sometimes works, sometimes fails. Be warned.

Download/contribute: on GitHub

When searching for a solution to this issue, the typical solution you’ll find is the following:

  • Delete the VM
  • Download the .vhd
  • Resize the downloaded .vhd
  • Delete the original .vhd from blob storage
  • Upload the resized .vhd
  • Recreate the VM
  • Use diskpart to resize the partition

That’s a lot of work. Deleting and re-creating the VM isn’t that bad, it can be done pretty quickly. But doing a download of a 30GB disk, resizing the disk and re-uploading it is a serious PITA! Even if you do this on a temporary VM that sits in the same datacenter as your storage account.

Last Saturday, I was in this situation… A decision had to be made: spend an estimated 3 hours doing the entire download/resize/upload process, or read up on the VHD file format and find an easier way, with the possibility of having to fall back to the entire process anyway…

Now what!

Being a bit geeked out, I decided to read up on the VHD file format and download the specs.

Before we dive in: why would I even read up on the VHD file format? Well, since Windows Azure storage is used as the underlying store for Windows Azure Virtual Machine VHD’s and Windows Azure storage supports byte operations without having to download an entire file, it occurred to me that combining both would result in a less-than-one-second VHD resize. Or would it?

Note that if you’re just interested in the bits to “get it done”, check the last section of this post.

Researching the VHD file format specs

The specs for the VHD file format are publicly available, which means it shouldn’t be too hard to learn how VHD files, the underlying format for virtual disks on Windows Azure Virtual Machines, are structured. Having a fear of extremely complex file structures, I started reading and found that a VHD isn’t actually that complicated.

Apparently, VHD files created with Virtual PC 2004 are a bit different from newer VHD files. But hey, Microsoft will probably not use that old beast in their datacenters, right? Using that assumption and the assumption that VHD files for Windows Azure Virtual Machines are always fixed in size, I learnt the following over-generalized lesson:

A fixed-size VHD for Windows Azure Virtual Machines is a bunch of bytes representing the actual disk contents, followed by a 512-byte file footer that holds some metadata.
Maarten Balliauw – last Saturday

A-ha! So in short, if the size of the VHD file is known, the offset to the footer can be calculated and the entire footer can be read. And this footer is just a simple byte array. From the specs:

VHD footer specification

Let’s see what’s needed to do some dynamic VHD resizing…

Resizing a VHD file - take 1

My first approach to “fixing” this issue was simple:

  • Read the footer bytes
  • Write null values over it and resize the disk to (desired size + 512 bytes)
  • Write the footer in those last 512 bytes

Guess what? I tried mounting an updated VHD file in Windows, without any successful result. Time for some more reading… resulting in the big Eureka! scream: the “current size” field in the footer must be updated!

So I did that… and got failure again. But Eureka! again: the checksum must be updated so that the VHD driver can verify the footer is valid!

So I did that… and found more failure.

*sigh* – the fallback scenario of download/resize/update came to mind again…

Resizing a VHD file - take 2

Being a persistent developer, I decided to do some more searching. For most problems, at least a partial solution is available out there! And there was: CodePlex holds a library called .NET DiscUtils which supports reading from and writing to a giant load of file container formats such as ISO, VHD, various file systems, Udf, Vdi and much more!

Going through the sources and doing some research, I found the one missing piece from my first attempt: “geometry”. An old class on basic computer principles came to mind where the professor taught us that disks have geometry: cylinder-head-sector or CHS information for the disk driver which can use this info for determining physical data blocks on the disk.

Being lazy, I decided to copy-and-adapt the Footer class from this library. Why reinvent the wheel? Why risk going sub-zero on the Wife Acceptance Factor since this was Saturday?

So I decided to generate a fresh VHD file in Windows and try to resize that one using this Footer class. Let’s start simple: specify the file to open, the desired new size and open a read/write stream to it.

string file = @"c:\temp\path\to\some.vhd";
long newSize = 20971520; // resize to 20 MB

using (Stream stream = new FileStream(file, FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
    // code goes here
}

Since we know the size of the file we’ve just opened, the footer is at length – 512, the Footer class takes these bytes and creates a .NET object for it:

stream.Seek(-512, SeekOrigin.End);
var currentFooterPosition = stream.Position;

// Read current footer
var footer = new byte[512];
stream.Read(footer, 0, 512);

var footerInstance = Footer.FromBytes(footer, 0);

Of course, we want to make sure we’re working on a fixed-size disk and that it’s smaller than the requested new size.

if (footerInstance.DiskType != FileType.Fixed
    || footerInstance.CurrentSize >= newSize)
{
    throw new Exception("You are one serious nutcase!");
}

If all is well, we can start resizing the disk. Simply writing a series of zeroes in the least optimal way will do:

// Write 0 values
stream.Seek(currentFooterPosition, SeekOrigin.Begin);
while (stream.Length < newSize)
{
    stream.WriteByte(0);
}

Now that we have a VHD file that holds the desired new size capacity, there’s one thing left: updating the VHD file footer. Again, the Footer class can help us here by updating the current size, original size, geometry and checksum fields:

// Change footer size values
footerInstance.CurrentSize = newSize;
footerInstance.OriginalSize = newSize;
footerInstance.Geometry = Geometry.FromCapacity(newSize);

footerInstance.UpdateChecksum();
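As an aside: UpdateChecksum isn’t magic. Per the VHD specification, the checksum is the one’s complement of the byte sum of the footer with the checksum field itself treated as zero. The Footer class takes care of this, but a rough sketch of the calculation looks like this (the 4-byte checksum field sits at offset 64 in the footer):

// One's complement of the sum of all footer bytes, skipping the checksum field itself.
static uint CalculateFooterChecksum(byte[] footer)
{
    uint sum = 0;
    for (var i = 0; i < footer.Length; i++)
    {
        if (i >= 64 && i < 68) continue; // the checksum field counts as zero
        sum += footer[i];
    }
    return ~sum;
}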

One thing left: writing the footer to our VHD file:

footer = new byte[512];
footerInstance.ToBytes(footer, 0);

// Write new footer
stream.Write(footer, 0, footer.Length);

That’s it. And my big surprise after running this? Great success! A VHD that doubled in size.

Resize VHD Windows Azure disk

So we can now resize VHD files in under a second. That’s much faster than any VHD resizer tool you’ll find out there! But still: what about the download/upload?

Resizing a VHD file stored in blob storage

Now that we have the code for resizing a local VHD, porting this to using blob storage and more specifically, the features provided for manipulating page blobs, is pretty straightforward. The Windows Azure Storage SDK gives us access to every single page of 512 bytes of a page blob, meaning we can work with files that span gigabytes of data while only downloading and uploading a couple of bytes…

Let’s give it a try. First of all, our file is now a URL to a blob:

var blob = new CloudPageBlob(
    new Uri("http://account.blob.core.windows.net/vhds/some.vhd"),
    new StorageCredentials("accountname", "accountkey"));

Next, we can fetch the last page of this blob to read our VHD’s footer:

blob.FetchAttributes();
var originalLength = blob.Properties.Length;

var footer = new byte[512];
using (Stream stream = new MemoryStream())
{
    blob.DownloadRangeToStream(stream, originalLength - 512, 512);
    stream.Position = 0;
    stream.Read(footer, 0, 512);
    stream.Close();
}

var footerInstance = Footer.FromBytes(footer, 0);

After doing the check on disk type again (fixed and smaller than the desired new size), we can resize the VHD. This time not by writing zeroes to it, but by calling one simple method on the storage SDK.

blob.Resize(newSize + 512);

In theory, it’s not required to overwrite the current footer with zeroes, but let’s play it clean:

blob.ClearPages(originalLength - 512, 512);

Next, we can change our footer values again:

footerInstance.CurrentSize = newSize;
footerInstance.OriginalSize = newSize;
footerInstance.Geometry = Geometry.FromCapacity(newSize);

footerInstance.UpdateChecksum();

footer = new byte[512];
footerInstance.ToBytes(footer, 0);

And write them to the last page of our page blob:

using (Stream stream = new MemoryStream(footer))
{
    blob.WritePages(stream, newSize);
}

And that’s all, folks! Using this code you’ll be able to resize a VHD file stored on blob storage in less than a second without having to download and upload several gigabytes of data.

Meet WindowsAzureDiskResizer

Since resizing Windows Azure VHD files is a well-known missing feature, I decided to wrap all my code in a console application and share it on GitHub. Feel free to fork, contribute and so on. WindowsAzureDiskResizer takes at least two parameters: the desired new size (in bytes) and a blob URL to the VHD. This can be a URL containing a Shared Access Signature.

Resize windows azure VM disk

Now let’s resize a disk. Here are the steps to take:

  • Shutdown the VM
  • Delete the VM -or- detach the disk if it’s not the OS disk
  • In the Windows Azure portal, delete the disk (retain the data!) so that the lease Windows Azure has on it is removed
  • Run WindowsAzureDiskResizer
  • In the Windows Azure portal, recreate the disk based on the existing blob
  • Recreate the VM  -or- reattach the disk if it’s not the OS disk
  • Start the VM
  • Use diskpart / disk management to resize the partition

Here’s how fast the resizing happens:

VhdResizer

Woah! Enjoy!

We’re good for now, at least until Microsoft decides to switch to the newer VHDX file format…

Download/contribute: on GitHub or binaries: WindowsAzureDiskResizer-1.0.0.0.zip (831.69 kb)

Storing user uploads in Windows Azure blob storage

On one of the mailing lists I follow, an interesting question came up: “We want to write a VSTO plugin for Outlook which copies attachments to blob storage. What’s the best way to do this? What about security?”. Shortly thereafter, an answer came around: “That can be done directly from the client. And storage credentials can be encrypted for use in your VSTO plugin.”

While that’s certainly a solution to the problem, it’s not the best. Let’s try and answer…

What’s the best way to upload data to blob storage directly from the client?

The first solution that comes to mind is implementing the following flow: the client authenticates and uploads data to your service which then stores the upload on blob storage.

Upload data to blob storage

While that is in fact a valid solution, think about the following: you are creating an expensive layer in your application that just sits there copying data from one network connection to another. If you have to scale this solution, you will have to scale out the service layer in between. If you want redundancy, you need at least two machines for doing this simple copy operation… A better approach would be one where the client authenticates with your service and then uploads the data directly to blob storage.

Upload data to blob storage using shared access signature

This approach allows you to have a “cheap” service layer: it can even run on the free version of Windows Azure Web Sites if you have a low traffic volume. You don’t have to scale out the service layer once your number of clients grows (at least, not for the uploading scenario). But how would you handle uploading to blob storage from a security point of view?

What about security? Shared access signatures!

The first suggested answer on the mailing list was this: “(…) storage credentials can be encrypted for use in your VSTO plugin.” That’s true, but you only have 2 access keys to storage. It’s like giving the master key of your house to someone you don’t know. It’s encrypted, sure, but still, the master key is at the client and that’s a potential risk. The solution? Using a shared access signature!

Shared access signatures (SAS) allow us to separate the code that signs a request from the code that executes it. It basically is a set of query string parameters attached to a blob (or container!) URL that serves as the authentication ticket to blob storage. Of course, these parameters are signed using the real storage access key, so that no-one can change this signature without knowing the master key. And that’s the scenario we want to support…

On the service side, the place where you’ll be authenticating your user, you can create a Web API method (or ASMX or WCF or whatever you feel like) similar to this one:

public class UploadController : ApiController
{
    [Authorize]
    public string Put(string fileName)
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var blobClient = account.CreateCloudBlobClient();
        var blobContainer = blobClient.GetContainerReference("uploads");
        blobContainer.CreateIfNotExists();

        var blob = blobContainer.GetBlockBlobReference("customer1-" + fileName);

        var uriBuilder = new UriBuilder(blob.Uri);
        uriBuilder.Query = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,
            SharedAccessStartTime = DateTime.UtcNow,
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(5)
        }).Substring(1);

        return uriBuilder.ToString();
    }
}

This method does a couple of things:

  • Authenticate the client using your authentication mechanism
  • Create a blob reference (not the actual blob, just a URL)
  • Sign the blob URL with write access, allowed from now until now + 5 minutes. That should give the client 5 minutes to start the upload.

On the client side, in our VSTO plugin, the only thing to do now is call this method with a filename. The web service will create a shared access signature to a non-existing blob and returns that to the client. The VSTO plugin can then use this signed blob URL to perform the upload:

Uri url = new Uri("http://...../uploads/customer1-test.txt?sv=2012-02-12&st=2012-12-18T08%3A11%3A57Z&se=2012-12-18T08%3A16%3A57Z&sr=b&sp=w&sig=Rb5sHlwRAJp7mELGBiog%2F1t0qYcdA9glaJGryFocj88%3D");

var blob = new CloudBlockBlob(url);
blob.Properties.ContentType = "text/plain";

using (var data = new MemoryStream(Encoding.UTF8.GetBytes("Hello, world!")))
{
    blob.UploadFromStream(data);
}

Easy, secure and scalable. Enjoy!

Protecting your ASP.NET Web API using OAuth2 and the Windows Azure Access Control Service

An article I wrote a while ago has been posted on DeveloperFusion:

The world in which we live evolves at a vast speed. Today, many applications on the Internet expose an API which can be consumed by everyone using a web browser or a mobile application on their smartphone or tablet. How would you build your API if you want these apps to be a full-fledged front-end to your service without compromising security? In this article, I’ll dive into that. We’ll be using OAuth2 and the Windows Azure Access Control Service to secure our API yet provide access to all those apps out there.

Why would I need an API?

A couple of years ago, having a web-based application was enough. Users would navigate to it using their computer’s browser, do their dance and log out again. Nowadays, a web-based application isn’t enough anymore. People have smartphones, tablets and maybe even a refrigerator with Internet access on which applications can run. Applications or “apps”. We’re moving from the web towards apps.

If you want to expose your data and services to external third-parties, you may want to think about building an API. Having an API gives you a giant advantage on the Internet nowadays. Having an API will allow your web application to reach more users. App developers will jump onto your API and build their app around it. Other websites or apps will integrate with your services by consuming your API. The only thing you have to do is expose an API and get people to know it. Apps will come. Integration will come.

A great example of an API is Twitter. They have a massive data store containing tweets and data related to that. They have user profiles. And a web site. And an API. Are you using www.twitter.com to post tweets? I am using the website, maybe once a year. All other tweets come either from my Windows Phone 7’s Twitter application or through www.hootsuite.com, a third-party Twitter client which provides added value in the form of statistics and scheduling. Both the app on my phone as well as the third-party service are using the Twitter API. By exposing an API, Twitter has created a rich ecosystem which drives adoption of their service, reaches more users and adds to their real value: data which they can analyze and sell.

(…)

Getting to know OAuth2

If you decide that your API isn’t public or specific actions can only be done for a certain user (let that third party web site get me my tweets, Twitter!), you’ll be facing authentication and authorization problems. With ASP.NET Web API, this is simple: add an [Authorize] attribute on top of a controller or action method and you’re done, right? Well, sort of…

When using the out-of-the-box authentication/authorization mechanisms of ASP.NET Web API, you are relying on basic or Windows authentication. Both require the user to log in. While perfectly viable and a good way of securing your API, a good alternative may be to use delegation.

In many cases, typically with public API’s, your API user will not really be your user, but an application acting on behalf of that user. That means that the application should know the user’s credentials. In an ideal world, you would only give your username and password to the service you’re using rather than just trusting the third-party application or website with it. You’ll be delegating access to these third parties. If you look at Facebook for example, many apps and websites redirect you to Facebook to do the login there instead of through the app itself.

Head over to the original article for more! (I’ll also be doing a talk on this on some upcoming conferences)

Configuring IIS methods for ASP.NET Web API on Windows Azure Websites and elsewhere

That’s a pretty long title, I agree. When working on my implementation of RFC2324, also known as the HyperText Coffee Pot Control Protocol, I’ve been struggling with something that you will struggle with as well in your ASP.NET Web APIs: supporting additional HTTP methods like HEAD, PATCH or PROPFIND. ASP.NET Web API has no issue with those, but when hosting them on IIS you’ll find yourself in yellow-screen-of-death heaven.

The reason why IIS blocks these methods (or fails to route them to ASP.NET) is that your IIS installation may have some configuration leftovers from another API: WebDAV. WebDAV allows you to work with a virtual filesystem (and others) using an HTTP API. IIS of course supports this (because flagship product “SharePoint” uses it, probably) and gets in the way of your API.

Bottom line of the story: if you need those methods or want to provide your own HTTP methods, here’s the bit of configuration to add to your Web.config file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!-- ... -->
  <system.webServer>
    <validation validateIntegratedModeConfiguration="false" />
    <modules runAllManagedModulesForAllRequests="true">
      <remove name="WebDAVModule" />
    </modules>
    <security>
      <requestFiltering>
        <verbs applyToWebDAV="false">
          <add verb="XYZ" allowed="true" />
        </verbs>
      </requestFiltering>
    </security>
    <handlers>
      <remove name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" />
      <remove name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" />
      <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
      <add name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS,XYZ" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness32" responseBufferLimit="0" />
      <add name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS,XYZ" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness64" responseBufferLimit="0" />
      <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS,XYZ" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
    </handlers>
  </system.webServer>
  <!-- ... -->
</configuration>

Here’s what each part does:

  • Under modules, the WebDAVModule is being removed. Just to make sure that it’s not going to get in our way ever again.
  • The security/requestFiltering element I’ve added only applies if you want to define your own HTTP methods. So unless you need the XYZ method I’ve defined here, don’t add it to your config.
  • Under handlers, I’m removing the default handlers that route into ASP.NET. Then, I’m adding them again. The important part? The verb attribute. You can provide a list of comma-separated methods that you want to route into ASP.NET. Again, I’ve added my XYZ method but you probably don’t need it.

This will work on any IIS server as well as on Windows Azure Websites. It will make your API… happy.
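For completeness, on the Web API side an action can be bound to such a custom verb with the AcceptVerbs attribute. A minimal sketch, with a made-up controller name:

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class CoffeeController : ApiController
{
    // Handles the custom XYZ verb once IIS lets it through to ASP.NET
    [AcceptVerbs("XYZ")]
    public HttpResponseMessage Xyz()
    {
        return Request.CreateResponse(HttpStatusCode.OK, "XYZ handled.");
    }
}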

How I push GoogleAnalyticsTracker to NuGet

If you check my blog post Tracking API usage with Google Analytics, you’ll see that a small open-source component evolved from MyGet. This component, GoogleAnalyticsTracker, lives on GitHub and NuGet and has since evolved into something that supports Windows Phone and Windows RT as well. But let’s not focus on the open-source aspect.

It’s funny how things evolve. GoogleAnalyticsTracker started as a small component inside MyGet, and since a couple of weeks it uses MyGet to publish itself to NuGet. Say what? In this blog post, I’ll elaborate a bit on the development tools used on this tiny component.

Source code

Source code for GoogleAnalyticsTracker can be found on GitHub. This is the main entry point to all activity around this “project”. If you have a nice addition, feel free to fork it and send me a pull request.

Staging NuGet packages

Whenever I update the source code, I want to automatically build it and publish NuGet packages for it. Not directly to NuGet: I want to keep the regular version, the WinRT and WP version more or less in sync regarding version numbers. Also, I sometimes miss something which I fix again 5 minutes after. In the meanwhile, I like to have the generated package on some sort of “staging” feed, at MyGet. It’s even public, check http://www.myget.org/F/githubmaarten if you want to use my development artifacts.

When I decide it’s time for these packages to move to the “official NuGet package repository” at NuGet.org, I simply click the “push” button in my MyGet feed. Yes, that’s a manual step but I wanted to have some “gate” in the middle where I should explicitly do something. Here’s what happens after clicking “push”:

Push to NuGet

That’s right! You can use MyGet as a staging feed and from there push your packages onwards to any other feed out there. MyGet takes care of the uploading.

Building the package

There’s one thing which I forgot… How do I build these packages? Well… I don’t. I let MyGet Build Services do the heavy lifting. On your feed, you can simply click the “Add GitHub project” button and a list of all your GitHub repos will be shown:

Build GitHub project

Tick a box and you’re ready to roll. And if you look carefully, you’ll see a “Build hook URL” being shown:

MyGet build hook

Back on GitHub, there’s this concept of “service hooks”: basically small web hooks that fire whenever a new commit occurs on your repository. Wouldn’t it be awesome to trigger package creation on MyGet whenever I check in code to GitHub? Guess what…

GitHub build hook

That’s right! And MyGet even runs unit tests. Some sort of a continuous integration where I have the choice to promote packages to NuGet whenever I think they are stable.

A phone call from the cloud: Windows Azure, SignalR & Twilio

Note: this blog post used to be an article for the Windows Azure Roadtrip website. Since that one no longer exists, I decided to post the articles on my blog as well. Find the source code for this post here: 05 ConfirmPhoneNumberDemo.zip (1.32 mb).
It was written earlier this year, so some versions of the packages used (like jQuery or SignalR) may be outdated in this post. Live with it.

In the previous blog post we saw how you can send e-mails from Windows Azure. Why not take communication a step further and make a phone call from Windows Azure? I’ve already mentioned that Windows Azure is a platform which will run your code, topped with some awesomesauce in the form of a large number of components that will speed up development. One of those components is the API provided by Twilio, a third-party service.

Twilio is a telephony web-service API that lets you use your existing web languages and skills to build voice and SMS applications. Twilio Voice allows your applications to make and receive phone calls. Twilio SMS allows your applications to make and receive SMS messages. We’ll use Twilio Voice in conjunction with jQuery and SignalR to spice up a sign-up process.

The scenario

The idea is simple: we want users to sign up using a username and password. In addition, they’ll have to provide their phone number. The user will submit the sign-up form and will be displayed a confirmation code. In the background, the user will be called and asked to enter this confirmation code in order to validate his phone number. Once finished, the browser will automatically continue the sign-up process. Here’s a visual:

Phone number validation flow during sign-up

Sounds too good to be true? Get ready, as it’s relatively simple using Windows Azure and Twilio.

Let’s start…

Before we begin, make sure you have a Twilio account. Twilio offers some free credits, enough to test with. After registering, make sure that you enable international calls and that your phone number is registered as a developer. Twilio takes this step in order to ensure that their service isn’t misused for making abusive phone calls using free developer accounts.

Next, create a Windows Azure project containing an ASP.NET MVC 4 web role. Install the following NuGet packages in it (right-click, Library Package Manager, go wild):

  • jQuery
  • jQuery.UI.Combined
  • jQuery.Validation
  • json2
  • Modernizr
  • SignalR
  • Twilio
  • Twilio.Mvc
  • Twilio.TwiML

It may also be useful to develop some familiarity with the concepts behind SignalR.

The registration form

Let’s create our form. Using a simple model class, SignUpModel, create the following action method:

public ActionResult Index()
{
    return View(new SignUpModel());
}

This action method is accompanied with a view, a simple form requesting the required information from our user:

@using (Html.BeginForm("SignUp", "Home", FormMethod.Post))
{
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>Sign Up for this awesome service</legend>

        @* etc etc etc *@

        <div class="editor-label">
            @Html.LabelFor(model => model.Phone)
        </div>
        <div class="editor-field">
            @Html.EditorFor(model => model.Phone)
            @Html.ValidationMessageFor(model => model.Phone)
        </div>

        <p>
            <input type="submit" value="Sign up!" />
        </p>
    </fieldset>
}

We’ll spice up this form with a dialog first. Using jQuery UI, we can create a simple <div> element which will serve as the dialog’s content. Note the ui-helper-hidden class which is used to keep it hidden from view.

<div id="phoneDialog" class="ui-helper-hidden">
    <h1>Keep an eye on your phone...</h1>
    <p>Pick up the phone and follow the instructions.</p>
    <p>You will be asked to enter the following code:</p>
    <h2>1743</h2>
</div>

This is a simple dialog in which we’ll show a hardcoded confirmation code which the user will have to provide when called using Twilio.

Next, let’s code our JavaScript logic which will spice up this form. First, add the required JavaScript libraries for SignalR (more on that later):

<script src="@Url.Content("~/Scripts/jquery.signalR-0.5.0.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/signalr/hubs")" type="text/javascript"></script>

Next, capture the form’s submit event and, if the phone number has not been validated yet, cancel the submit event and show our dialog instead:

$('form:first').submit(function (e) {
    if ($(this).valid() && $('#Phone').data('validated') != true) {
        // Show a dialog
        $('#phoneDialog').dialog({
            title: '',
            modal: true,
            width: 400,
            height: 400,
            resizable: false,
            beforeClose: function () {
                if ($('#Phone').data('validated') != true) {
                    return false;
                }
            }
        });

        // Don't submit. Yet.
        e.preventDefault();
    }
});

Nothing fancy yet. If you now run this code, you’ll see that a dialog opens and remains open for eternity. Let’s craft some SignalR code now. SignalR uses a concept of Hubs to enable client-server communication, but also server-client communication. We’ll need the latter to inform our view whenever the user has confirmed his phone number. In the project, add the following class:

[HubName("phonevalidator")]
public class PhoneValidatorHub : Hub
{
    public void StartValidation(string phoneNumber)
    {
    }
}

This class defines a service that the client can call. SignalR will also keep the connection with the client open so that this PhoneValidatorHub can later send a message to the client as well. Let’s connect our view to this hub. In the form submit event handler, add the following line of code:

// Validate the phone number using Twilio
$.connection.phonevalidator.startValidation($('#Phone').val());

We’ve created a C# class with a StartValidation method and we’re calling the startValidation message from JavaScript. Coincidence? No. SignalR makes this possible. But we’re not finished yet. We can now call a method on the server side, but how would the server inform the client when the phone number has been validated? I’ll get to that point later. First, let’s make sure our JavaScript code can receive that call from the server. To do so, connect to the PhoneValidator hub and add a callback function to it:

var validatorHub = $.connection.phonevalidator;
validatorHub.validated = function (phoneNumber) {
    if (phoneNumber == $('#Phone').val()) {
        $('#Phone').data('validated', true);
        $('#phoneDialog').dialog('destroy');
        $('form:first').trigger('submit');
    }
};
$.connection.hub.start();

What we’re doing here is adding a client-side function named validated to the SignalR hub. We can call this function, sitting at the client side, from our server-side code later on. The function itself is easy: it checks whether the phone number that was validated matches the one the user entered and, if so, it submits the form and completes the signup.

All that’s left is calling the user and, when the confirmation succeeds, we’ll have to inform our client by calling the validated message on the hub.

Initiating a phone call

The phone call to our user will be initiated in the PhoneValidatorHub’s StartValidation method. Add the following code there:

var twilioClient = new TwilioRestClient("api user", "api password");

string url = "http://mas.cloudapp.net/Home/TwilioValidationMessage?passcode=1743"
    + "&phoneNumber=" + HttpContext.Current.Server.UrlEncode(phoneNumber);

// Instantiate the call options that are passed to the outbound call
CallOptions options = new CallOptions();
options.From = "+14155992671"; // Twilio's developer number
options.To = phoneNumber;
options.Url = url;

// Make the call.
twilioClient.InitiateOutboundCall(options);

Using the TwilioRestClient class, we create a request to Twilio. We also pass on a URL which points to our application. Twilio uses TwiML, an XML format to instruct their phone services. When calling the InitiateOutboundCall method, Twilio will issue a request to the URL we are hosting (http://.....cloudapp.net/Home/TwilioValidationMessage) to fetch the TwiML which tells Twilio what to say, ask, record, gather, … on the phone.

Next up: implementing the TwilioValidationMessage action method.

public ActionResult TwilioValidationMessage(string passcode, string phoneNumber)
{
    var response = new TwilioResponse();
    response.Say("Hi there, welcome to Maarten's Awesome Service.");
    response.Say("To validate your phone number, please enter the 4 digit"
        + " passcode displayed on your screen followed by the pound sign.");
    response.BeginGather(new
    {
        numDigits = 4,
        action = "http://mas.cloudapp.net/Home/TwilioValidationCallback?phoneNumber="
            + Server.UrlEncode(phoneNumber),
        method = "GET"
    });
    response.EndGather();

    return new TwiMLResult(response);
}

That’s right. We’re creating some TwiML here. Our ASP.NET MVC action method is telling Twilio to say some text and to gather 4 digits from his phone pad. These 4 digits will be posted to the TwilioValidationCallback action method by the Twilio service. Which is the next method we’ll have to implement.

public ActionResult TwilioValidationCallback(string phoneNumber)
{
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<PhoneValidatorHub>();
    hubContext.Clients.validated(phoneNumber);

    var response = new TwilioResponse();
    response.Say("Thank you! Your browser should automatically continue. Bye!");
    response.Hangup();

    return new TwiMLResult(response);
}

The TwilioValidationCallback action method does two things. First, it gets a reference to our SignalR hub and calls the validated function on it. As you may recall, we created this method on the hub’s client side, so in fact our ASP.NET MVC server application is calling a method on the client side. Doing this triggers the client to hide the validation dialog and complete the user sign-up process.

Another action we’re doing here is generating some more TwiML (it’s fun!). We thank the user for validating his phone number and, after that, we hang up the call.

You see? Working with voice (and text messages too, if you want) isn’t that hard. It enables additional scenarios that can make your application stand out from all the many others out there. Enjoy!

05 ConfirmPhoneNumberDemo.zip (1.32 mb)