Maarten Balliauw {blog}

ASP.NET, ASP.NET MVC, Windows Azure, PHP, ...


Working with Windows Azure from within PhpStorm

Working with Windows Azure and my new toy (PhpStorm), I wanted to have support for specific actions like creating a new web site or a new database right in the IDE. Since I’m not a Java guy, writing a plugin was not an option. Fortunately, PhpStorm (or WebStorm for that matter) provides support for issuing commands from the IDE, which led me to think that it may be possible to hook up the Windows Azure Command Line Tools in my IDE… Let’s see what we can do…

First of all, we’ll need the ‘azure’ tools. These are available for download for Windows or Mac. If you happen to have Node and NPM installed, simply issue npm install azure-cli -g and we’re good to go.

Next, we’ll have to configure PhpStorm with a custom command so that we can invoke these commands from within our IDE. From the File > Settings menu find the Command Line Tool Support pane and add a new framework:

PhpStorm custom framework

Next, enter the following details. Note that the tool path may be different on your machine. It should be the full path to the command line tools for Windows Azure, which on my machine is C:\Program Files (x86)\Microsoft SDKs\Windows Azure\CLI\0.6.9\wbin\azure.cmd.

PhpStorm custom framework settings

Click Ok, close the settings dialog and return to your working environment. From there, we can open a command line through the Tools > Run Command menu or by simply using the Ctrl+Shift+X keyboard combo. Let’s invoke the azure command:

Running the Windows Azure command line tools in PhpStorm / WebStorm

Cool aye? Let’s see if we can actually do some things. The first thing we’ll have to do before being able to do anything with these tools is making sure we can access the Windows Azure management service. Invoke the azure account download command and save the generated .publishsettings file somewhere on your system. Next, we’ll have to import that file using the azure account import <path to publishsettings file> command.
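
In practice that boils down to something like the following (the path to the downloaded .publishsettings file will of course differ on your machine):

azure account download
azure account import "C:\Users\Maarten\Downloads\credentials.publishsettings"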

If everything went according to plan, we’ll now be able to do some really interesting things from inside our PhpStorm IDE… How about we create a new Windows Azure Website named “GroovyBaby” in the West US datacenter, with Git support and a local clone linked to it? Here’s the command:

azure site create GroovyBaby --git --location "West US"

And here’s the result:

Create a new website in PhpStorm

I seriously love this stuff! For reference, here’s the complete list of available commands. And Glenn Block cooked up some cool commands too.

Windows Azure Websites and PhpStorm

In my new role as Technical Evangelist at JetBrains, I’ve been experimenting with one of our products a lot: PhpStorm. I was kind of curious how this tool would integrate with Windows Azure Web Sites. Now before you quit reading this post because of that acronym: if you are a Node-head you can also use WebStorm to do the same things I will describe in this post. Let’s see if we can get a simple PHP application running on Windows Azure right from within our IDE…

Setting up a Windows Azure Web Site

Let’s go through setting up a Windows Azure Web Site real quickly. If this is the first time you hear about Web Sites and want more detail on getting started, check the Windows Azure website for a detailed step-by-step explanation.

From the management portal, click the big “New” button and create a new web site. Use the “quick create” so you just have to specify a URL and select the datacenter location where you’ll be hosted. Click “Create” and endure the 4 second wait before everything is provisioned.

Create a Windows Azure web site

Next, make sure Git support is enabled. From your newly created web site, click “Enable Git publishing”. This will create a new Git repository for your web site.

Windows Azure git

From now on, we have a choice to make. We can opt to “deploy from GitHub”, which will link the web site to a project on GitHub and deploy fresh code on every change on a specific branch. It’s very easy to do that, but we’ll be taking the other option: let’s use our Windows Azure Web Site as a Git repository instead.

Creating a PhpStorm project

After starting PhpStorm, go to VCS > Checkout from Version Control > Git. For the repository URL, enter the repository that is listed in the Windows Azure management portal. It’s probably similar to https://<yourusername>@<your web site name>.scm.azurewebsites.net/stormy.git.

Windows Azure PHPStorm WebStorm

Once that’s done, simply click “Clone”. PhpStorm will ask for credentials, after which it will download the contents of your Windows Azure Web Site. For this post we started from an empty web site, but had we created a web site from the gallery, PhpStorm would simply download the entire web site’s contents. After the cloning finishes, this should be your PhpStorm project:

PHPStorm clone web site

Let’s add a new file by right-clicking the project and clicking New > File. Name the file “index.php” since that is one of the root documents recognized by Windows Azure Web Sites. If PhpStorm asks you whether you want to add the file to the Git repository, answer affirmatively. We want this file to end up being deployed some day.

The following code will do:

<?php echo "Hello world!";

Now let’s get this beauty online!

Publishing the application to Windows Azure

To commit the changes we’ve made earlier, press CTRL + K or use the menu VCS > Commit Changes. This will commit the created and modified files to our local copy of the remote Git repository.

Commit VCS changes PHPStorm

On the “Commit” button, click the little arrow and go with Commit and Push. This will make PhpStorm do two things at once: create a changeset containing our modifications and push it to Windows Azure Web Sites. We’ll be asked for a final confirmation:

Push to Windows Azure

After having clicked Push, PhpStorm will send our contents to Windows Azure Web Sites and create a new deployment as you can see from the management portal:

Windows Azure Web Sites deployment from PHPStorm

Guess what this all did? Our web site is now up and running at http://stormy.azurewebsites.net/.
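
For reference, what PhpStorm did behind the scenes roughly boils down to these plain Git commands (the remote and branch names depend on how the repository was cloned):

git add index.php
git commit -m "Add index.php"
git push origin master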


A non-Microsoft language on Windows Azure? A non-Microsoft IDE? It all works seamlessly together! Enjoy!

Storing user uploads in Windows Azure blob storage

On one of the mailing lists I follow, an interesting question came up: “We want to write a VSTO plugin for Outlook which copies attachments to blob storage. What’s the best way to do this? What about security?”. Shortly thereafter, an answer came around: “That can be done directly from the client. And storage credentials can be encrypted for use in your VSTO plugin.”

While that’s certainly a solution to the problem, it’s not the best. Let’s try and answer…

What’s the best way to upload data to blob storage directly from the client?

The first solution that comes to mind is implementing the following flow: the client authenticates and uploads data to your service which then stores the upload on blob storage.

Upload data to blob storage

While that is in fact a valid solution, think about the following: you are creating an expensive layer in your application that just sits there copying data from one network connection to another. If you have to scale this solution, you will have to scale out the service layer in between. If you want redundancy, you need at least two machines for doing this simple copy operation… A better approach would be one where the client authenticates with your service and then uploads the data directly to blob storage.

Upload data to blob storage using shared access signature

This approach allows you to have a “cheap” service layer: it can even run on the free version of Windows Azure Web Sites if you have a low traffic volume. You don’t have to scale out the service layer once your number of clients grows (at least, not for the uploading scenario). But how would you handle uploading to blob storage from a security point of view?

What about security? Shared access signatures!

The first suggested answer on the mailing list was this: “(…) storage credentials can be encrypted for use in your VSTO plugin.” That’s true, but you only have 2 access keys to storage. It’s like giving the master key of your house to someone you don’t know. It’s encrypted, sure, but still, the master key is at the client and that’s a potential risk. The solution? Using a shared access signature!

Shared access signatures (SAS) allow us to separate the code that signs a request from the code that executes it. It basically is a set of query string parameters attached to a blob (or container!) URL that serves as the authentication ticket to blob storage. Of course, these parameters are signed using the real storage access key, so that no-one can change this signature without knowing the master key. And that’s the scenario we want to support…

On the service side, the place where you’ll be authenticating your user, you can create a Web API method (or ASMX or WCF or whatever you feel like) similar to this one:

public class UploadController : ApiController
{
    [Authorize]
    public string Put(string fileName)
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var blobClient = account.CreateCloudBlobClient();
        var blobContainer = blobClient.GetContainerReference("uploads");
        blobContainer.CreateIfNotExists();

        var blob = blobContainer.GetBlockBlobReference("customer1-" + fileName);

        var uriBuilder = new UriBuilder(blob.Uri);
        uriBuilder.Query = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,
            SharedAccessStartTime = DateTime.UtcNow,
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(5)
        }).Substring(1);

        return uriBuilder.ToString();
    }
}

This method does a couple of things:

  • Authenticates the client using your authentication mechanism
  • Creates a blob reference (not the actual blob, just a URL)
  • Signs the blob URL with write access, allowed from now until now + 5 minutes. That should give the client 5 minutes to start the upload.

On the client side, in our VSTO plugin, the only thing to do now is call this method with a filename. The web service will create a shared access signature to a non-existing blob and return it to the client. The VSTO plugin can then use this signed blob URL to perform the upload:

Uri url = new Uri("http://...../uploads/customer1-test.txt?sv=2012-02-12&st=2012-12-18T08%3A11%3A57Z&se=2012-12-18T08%3A16%3A57Z&sr=b&sp=w&sig=Rb5sHlwRAJp7mELGBiog%2F1t0qYcdA9glaJGryFocj88%3D");

var blob = new CloudBlockBlob(url);
blob.Properties.ContentType = "text/plain";

using (var data = new MemoryStream(Encoding.UTF8.GetBytes("Hello, world!")))
{
    blob.UploadFromStream(data);
}

Easy, secure and scalable. Enjoy!

Protecting your ASP.NET Web API using OAuth2 and the Windows Azure Access Control Service

An article I wrote a while ago has been posted on DeveloperFusion:

The world in which we live evolves at a vast speed. Today, many applications on the Internet expose an API which can be consumed by everyone using a web browser or a mobile application on their smartphone or tablet. How would you build your API if you want these apps to be a full-fledged front-end to your service without compromising security? In this article, I’ll dive into that. We’ll be using OAuth2 and the Windows Azure Access Control Service to secure our API yet provide access to all those apps out there.

Why would I need an API?

A couple of years ago, having a web-based application was enough. Users would navigate to it using their computer’s browser, do their dance and log out again. Nowadays, a web-based application isn’t enough anymore. People have smartphones, tablets and maybe even a refrigerator with Internet access on which applications can run. Applications or “apps”. We’re moving from the web towards apps.

If you want to expose your data and services to external third-parties, you may want to think about building an API. Having an API gives you a giant advantage on the Internet nowadays. Having an API will allow your web application to reach more users. App developers will jump onto your API and build their app around it. Other websites or apps will integrate with your services by consuming your API. The only thing you have to do is expose an API and get people to know it. Apps will come. Integration will come.

A great example of an API is Twitter. They have a massive data store containing tweets and data related to that. They have user profiles. And a web site. And an API. Are you using www.twitter.com to post tweets? I am using the website, maybe once a year. All other tweets come either from my Windows Phone 7’s Twitter application or through www.hootsuite.com, a third-party Twitter client which provides added value in the form of statistics and scheduling. Both the app on my phone as well as the third-party service are using the Twitter API. By exposing an API, Twitter has created a rich ecosystem which drives adoption of their service, reaches more users and adds to their real value: data which they can analyze and sell.

(…)

Getting to know OAuth2

If you decide that your API isn’t public or specific actions can only be done for a certain user (let that third party web site get me my tweets, Twitter!), you’ll be facing authentication and authorization problems. With ASP.NET Web API, this is simple: add an [Authorize] attribute on top of a controller or action method and you’re done, right? Well, sort of…

When using the out-of-the-box authentication/authorization mechanisms of ASP.NET Web API, you are relying on basic or Windows authentication. Both require the user to log in. While perfectly viable and a good way of securing your API, a good alternative may be to use delegation.

In many cases, typically with public API’s, your API user will not really be your user, but an application acting on behalf of that user. That means that the application should know the user’s credentials. In an ideal world, you would only give your username and password to the service you’re using rather than just trusting the third-party application or website with it. You’ll be delegating access to these third parties. If you look at Facebook for example, many apps and websites redirect you to Facebook to do the login there instead of through the app itself.

Head over to the original article for more! (I’ll also be doing a talk on this at some upcoming conferences.)

Configuring IIS methods for ASP.NET Web API on Windows Azure Websites and elsewhere

That’s a pretty long title, I agree. When working on my implementation of RFC2324, also known as the HyperText Coffee Pot Control Protocol, I’ve been struggling with something that you will struggle with as well in your ASP.NET Web APIs: supporting additional HTTP methods like HEAD, PATCH or PROPFIND. ASP.NET Web API has no issue with those, but when hosting them on IIS you’ll find yourself in Yellow-screen-of-death heaven.

The reason why IIS blocks these methods (or fails to route them to ASP.NET) is that your IIS installation may have some configuration leftovers from another API: WebDAV. WebDAV allows you to work with a virtual filesystem (and others) using an HTTP API. IIS of course supports this (because flagship product “SharePoint” uses it, probably) and gets in the way of your API.

Bottom line of the story: if you need those methods or want to provide your own HTTP methods, here’s the bit of configuration to add to your Web.config file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!-- ... -->
  <system.webServer>
    <validation validateIntegratedModeConfiguration="false" />
    <modules runAllManagedModulesForAllRequests="true">
      <remove name="WebDAVModule" />
    </modules>
    <security>
      <requestFiltering>
        <verbs applyToWebDAV="false">
          <add verb="XYZ" allowed="true" />
        </verbs>
      </requestFiltering>
    </security>
    <handlers>
      <remove name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" />
      <remove name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" />
      <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
      <add name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS,XYZ" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness32" responseBufferLimit="0" />
      <add name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS,XYZ" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness64" responseBufferLimit="0" />
      <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS,XYZ" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
    </handlers>
  </system.webServer>
  <!-- ... -->
</configuration>

Here’s what each part does:

  • Under modules, the WebDAVModule is being removed. Just to make sure that it’s not going to get in our way ever again.
  • The security/requestFiltering element I’ve added only applies if you want to define your own HTTP methods. So unless you need the XYZ method I’ve defined here, don’t add it to your config.
  • Under handlers, I’m removing the default handlers that route into ASP.NET. Then, I’m adding them again. The important part? The verb attribute. You can provide a comma-separated list of methods that you want to route into ASP.NET. Again, I’ve added my XYZ method but you probably don’t need it. (A quick sketch of an action that accepts such a custom verb follows below.)
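
To illustrate that last point, here’s a minimal sketch of a Web API action that accepts such a custom verb — the controller name and the response are made up for this example:

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class CoffeeController : ApiController
{
    // Responds to requests made with the custom XYZ verb allowed in the configuration above.
    [AcceptVerbs("XYZ")]
    public HttpResponseMessage Xyz()
    {
        return Request.CreateResponse(HttpStatusCode.OK, "XYZ received");
    }
}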

This will work on any IIS server as well as on Windows Azure Websites. It will make your API… happy.

How I push GoogleAnalyticsTracker to NuGet

If you check my blog post Tracking API usage with Google Analytics, you’ll see that a small open-source component evolved from MyGet. This component, GoogleAnalyticsTracker, lives on GitHub and NuGet and has since evolved into something that supports Windows Phone and Windows RT as well. But let’s not focus on the open-source aspect.

It’s funny how things evolve. GoogleAnalyticsTracker started as a small component inside MyGet, and since a couple of weeks it uses MyGet to publish itself to NuGet. Say what? In this blog post, I’ll elaborate a bit on the development tools used on this tiny component.

Source code

Source code for GoogleAnalyticsTracker can be found on GitHub. This is the main entry point to all activity around this “project”. If you have a nice addition, feel free to fork it and send me a pull request.

Staging NuGet packages

Whenever I update the source code, I want to automatically build it and publish NuGet packages for it. Not directly to NuGet: I want to keep the regular version, the WinRT and WP version more or less in sync regarding version numbers. Also, I sometimes miss something which I fix again 5 minutes after. In the meanwhile, I like to have the generated package on some sort of “staging” feed, at MyGet. It’s even public, check http://www.myget.org/F/githubmaarten if you want to use my development artifacts.

When I decide it’s time for these packages to move to the “official NuGet package repository” at NuGet.org, I simply click the “push” button in my MyGet feed. Yes, that’s a manual step but I wanted to have some “gate” in the middle where I should explicitly do something. Here’s what happens after clicking “push”:

Push to NuGet

That’s right! You can use MyGet as a staging feed and from there push your packages onwards to any other feed out there. MyGet takes care of the uploading.
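
For comparison, doing that last step by hand with NuGet.exe would look something like this (the package file name is just an example):

nuget setApiKey <your nuget.org API key>
nuget push GoogleAnalyticsTracker.1.0.0.nupkg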

Building the package

There’s one thing which I forgot… How do I build these packages? Well… I don’t. I let MyGet Build Services do the heavy lifting. On your feed, you can simply click the “Add GitHub project” button and a list of all your GitHub repos will be shown:

Build GitHub project

Tick a box and you’re ready to roll. And if you look carefully, you’ll see a “Build hook URL” being shown:

MyGet build hook

Back on GitHub, there’s this concept of “service hooks”, basically small utilities that you can fire whenever a new commit occurs on your repository. Wouldn’t it be awesome to trigger package creation on MyGet whenever I check in code to GitHub? Guess what…

GitHub build hook

That’s right! And MyGet even runs unit tests. Some sort of a continuous integration where I have the choice to promote packages to NuGet whenever I think they are stable.

A phone call from the cloud: Windows Azure, SignalR & Twilio

Note: this blog post used to be an article for the Windows Azure Roadtrip website. Since that one no longer exists, I decided to post the articles on my blog as well. Find the source code for this post here: 05 ConfirmPhoneNumberDemo.zip (1.32 mb).
It has been written earlier this year, some versions of packages used (like jQuery or SignalR) may be outdated in this post. Live with it.

In the previous blog post we saw how you can send e-mails from Windows Azure. Why not take communication a step further and make a phone call from Windows Azure? I’ve already mentioned that Windows Azure is a platform which will run your code, topped with some awesomesauce in the form of a large number of components that will speed up development. One of those components is the API provided by Twilio, a third-party service.

Twilio is a telephony web-service API that lets you use your existing web languages and skills to build voice and SMS applications. Twilio Voice allows your applications to make and receive phone calls. Twilio SMS allows your applications to make and receive SMS messages. We’ll use Twilio Voice in conjunction with jQuery and SignalR to spice up a sign-up process.

The scenario

The idea is simple: we want users to sign up using a username and password. In addition, they’ll have to provide their phone number. The user will submit the sign-up form and will be displayed a confirmation code. In the background, the user will be called and asked to enter this confirmation code in order to validate his phone number. Once finished, the browser will automatically continue the sign-up process. Here’s a visual:

[Diagram: the sign-up flow with phone number confirmation]

Sounds too good to be true? Get ready, as it’s relatively simple using Windows Azure and Twilio.

Let’s start…

Before we begin, make sure you have a Twilio account. Twilio offers some free credits, enough to test with. After registering, make sure that you enable international calls and that your phone number is registered as a developer. Twilio takes this step in order to ensure that their service isn’t misused for making abusive phone calls using free developer accounts.

Next, create a Windows Azure project containing an ASP.NET MVC 4 web role. Install the following NuGet packages in it (right-click, Library Package Manager, go wild):

  • jQuery
  • jQuery.UI.Combined
  • jQuery.Validation
  • json2
  • Modernizr
  • SignalR
  • Twilio
  • Twilio.Mvc
  • Twilio.TwiML

It may also be useful to develop some familiarity with the concepts behind SignalR.

The registration form

Let’s create our form. Using a simple model class, SignUpModel, create the following action method:

public ActionResult Index()
{
    return View(new SignUpModel());
}

This action method is accompanied by a view, a simple form requesting the required information from our user:

@using (Html.BeginForm("SignUp", "Home", FormMethod.Post))
{
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>Sign Up for this awesome service</legend>

        @* etc etc etc *@

        <div class="editor-label">
            @Html.LabelFor(model => model.Phone)
        </div>
        <div class="editor-field">
            @Html.EditorFor(model => model.Phone)
            @Html.ValidationMessageFor(model => model.Phone)
        </div>

        <p>
            <input type="submit" value="Sign up!" />
        </p>
    </fieldset>
}
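
By the way, the SignUpModel class itself isn’t listed in this post. A minimal sketch could look like this — the Phone property is the one used in the form above, the other property names are assumptions based on the scenario:

using System.ComponentModel.DataAnnotations;

public class SignUpModel
{
    [Required]
    public string UserName { get; set; }

    [Required]
    public string Password { get; set; }

    // The phone number we'll validate through Twilio
    [Required]
    public string Phone { get; set; }
}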

We’ll spice up this form with a dialog first. Using jQuery UI, we can create a simple <div> element which will serve as the dialog’s content. Note the ui-helper-hidden class, which is used to keep it hidden from view.

<div id="phoneDialog" class="ui-helper-hidden">
    <h1>Keep an eye on your phone...</h1>
    <p>Pick up the phone and follow the instructions.</p>
    <p>You will be asked to enter the following code:</p>
    <h2>1743</h2>
</div>

This is a simple dialog in which we’ll show a hardcoded confirmation code which the user will have to provide when called using Twilio.

Next, let’s code our JavaScript logic which will spice up this form. First, add the required JavaScript libraries for SignalR (more on that later):

<script src="@Url.Content("~/Scripts/jquery.signalR-0.5.0.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/signalr/hubs")" type="text/javascript"></script>

Next, capture the form’s submit event and, if the phone number has not been validated yet, cancel the submit event and show our dialog instead:

$('form:first').submit(function (e) {
    if ($(this).valid() && $('#Phone').data('validated') != true) {
        // Show a dialog
        $('#phoneDialog').dialog({
            title: '',
            modal: true,
            width: 400,
            height: 400,
            resizable: false,
            beforeClose: function () {
                if ($('#Phone').data('validated') != true) {
                    return false;
                }
            }
        });

        // Don't submit. Yet.
        e.preventDefault();
    }
});

Nothing fancy yet. If you now run this code, you’ll see that a dialog opens and remains open for eternity. Let’s craft some SignalR code now. SignalR uses a concept of Hubs to enable client-server communication, but also server-client communication. We’ll need the latter to inform our view whenever the user has confirmed his phone number. In the project, add the following class:

[HubName("phonevalidator")]
public class PhoneValidatorHub : Hub
{
    public void StartValidation(string phoneNumber)
    {
    }
}

This class defines a service that the client can call. SignalR will also keep the connection with the client open so that this PhoneValidatorHub can later send a message to the client as well. Let’s connect our view to this hub. In the form submit event handler, add the following line of code:

// Validate the phone number using Twilio
$.connection.phonevalidator.startValidation($('#Phone').val());

We’ve created a C# class with a StartValidation method and we’re calling the startValidation message from JavaScript. Coincidence? No. SignalR makes this possible. But we’re not finished yet. We can now call a method on the server side, but how would the server inform the client when the phone number has been validated? I’ll get to that point later. First, let’s make sure our JavaScript code can receive that call from the server. To do so, connect to the PhoneValidator hub and add a callback function to it:

var validatorHub = $.connection.phonevalidator;
validatorHub.validated = function (phoneNumber) {
    if (phoneNumber == $('#Phone').val()) {
        $('#Phone').data('validated', true);
        $('#phoneDialog').dialog('destroy');
        $('form:first').trigger('submit');
    }
};
$.connection.hub.start();

What we’re doing here is adding a client-side function named validated to the SignalR hub. We can call this function, sitting at the client side, from our server-side code later on. The function itself is easy: it checks whether the phone number that was validated matches the one the user entered and, if so, it submits the form and completes the signup.

All that’s left is calling the user and, when the confirmation succeeds, we’ll have to inform our client by calling the validated message on the hub.

Initiating a phone call

The phone call to our user will be initiated in the PhoneValidatorHub’s StartValidation method. Add the following code there:

var twilioClient = new TwilioRestClient("api user", "api password");

string url = "http://mas.cloudapp.net/Home/TwilioValidationMessage?passcode=1743"
    + "&phoneNumber=" + HttpContext.Current.Server.UrlEncode(phoneNumber);

// Instantiate the call options that are passed to the outbound call
CallOptions options = new CallOptions();
options.From = "+14155992671"; // Twilio's developer number
options.To = phoneNumber;
options.Url = url;

// Make the call.
twilioClient.InitiateOutboundCall(options);

Using the TwilioRestClient class, we create a request to Twilio. We also pass on a URL which points to our application. Twilio uses TwiML, an XML format to instruct their phone services. When calling the InitiateOutboundCall method, Twilio will issue a request to the URL we are hosting (http://.....cloudapp.net/Home/TwilioValidationMessage) to fetch the TwiML which tells Twilio what to say, ask, record, gather, … on the phone.

Next up: implementing the TwilioValidationMessage action method.

public ActionResult TwilioValidationMessage(string passcode, string phoneNumber)
{
    var response = new TwilioResponse();
    response.Say("Hi there, welcome to Maarten's Awesome Service.");
    response.Say("To validate your phone number, please enter the 4 digit"
                 + " passcode displayed on your screen followed by the pound sign.");
    response.BeginGather(new
    {
        numDigits = 4,
        action = "http://mas.cloudapp.net/Home/TwilioValidationCallback?phoneNumber="
                 + Server.UrlEncode(phoneNumber),
        method = "GET"
    });
    response.EndGather();

    return new TwiMLResult(response);
}
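
For reference, the TwiML this action hands back to Twilio will look roughly like this (the exact markup depends on the Twilio.TwiML library version):

<Response>
  <Say>Hi there, welcome to Maarten's Awesome Service.</Say>
  <Say>To validate your phone number, please enter the 4 digit passcode displayed on your screen followed by the pound sign.</Say>
  <Gather numDigits="4" action="http://mas.cloudapp.net/Home/TwilioValidationCallback?phoneNumber=..." method="GET" />
</Response>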

That’s right. We’re creating some TwiML here. Our ASP.NET MVC action method is telling Twilio to say some text and to gather 4 digits from his phone pad. These 4 digits will be posted to the TwilioValidationCallback action method by the Twilio service. Which is the next method we’ll have to implement.

public ActionResult TwilioValidationCallback(string phoneNumber)
{
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<PhoneValidatorHub>();
    hubContext.Clients.validated(phoneNumber);

    var response = new TwilioResponse();
    response.Say("Thank you! Your browser should automatically continue. Bye!");
    response.Hangup();

    return new TwiMLResult(response);
}

The TwilioValidationCallback action method does two things. First, it gets a reference to our SignalR hub and calls the validated function on it. As you may recall, we created this method on the hub’s client side, so in fact our ASP.NET MVC server application is calling a method on the client side. Doing this triggers the client to hide the validation dialog and complete the user sign-up process.

Another action we’re doing here is generating some more TwiML (it’s fun!). We thank the user for validating his phone number and, after that, we hang up the call.

You see? Working with voice (and text messages too, if you want) isn’t that hard. It enables additional scenarios that can make your application stand out from all the many others out there. Enjoy!

05 ConfirmPhoneNumberDemo.zip (1.32 mb)

Sending e-mail from Windows Azure

Note: this blog post used to be an article for the Windows Azure Roadtrip website. Since that one no longer exists, I decided to post the articles on my blog as well. Find the source code for this post here: 04 SendingEmailsFromTheCloud.zip (922.27 kb).

When a user subscribes, you send him a thank-you e-mail. When his account expires, you send him a warning message containing a link to purchase a new subscription. When he places an order, you send him an order confirmation. I think you get the picture: a fairly common scenario in almost any application is sending out e-mails.

Now, why would I spend a blog post on sending out an e-mail? Well, for two reasons. First, Windows Azure doesn’t have a built-in mail server. No reason to panic! I’ll explain why and how to overcome this in a minute. Second, I want to demonstrate a technique that will make your applications a lot more scalable and resilient to errors.

E-mail services for Windows Azure

Windows Azure doesn’t have a built-in mail server. And for good reasons: if you deploy your application to Windows Azure, it will be hosted on an IP address which previously belonged to someone else. It’s a shared platform, after all. Now, what if some obscure person used Windows Azure to send out a number of spam messages? Chances are your newly-acquired IP address has already been blacklisted, and any e-mail you send from it ends up in people’s spam filters.

All that is fine, but of course, you still want to send out e-mails. If you have your own SMTP server, you can simply configure your .NET application hosted on Windows Azure to make use of your own mail server. There are a number of so-called SMTP relay services out there as well. Even the Belgian hosters like Combell, Hostbasket or OVH offer this service. Microsoft has also partnered with SendGrid to have an officially-supported service for sending out e-mails too. Windows Azure customers receive a special offer of 25,000 free e-mails per month from them. It’s a great service to get started with sending e-mails from your applications: after all, you’ll be able to send 25,000 free e-mails every month. I’ll be using SendGrid in this blog post.

Asynchronous operations

I said earlier that I wanted to show you two things: sending e-mails and building scalable and fault-resilient applications. This can be done using asynchronous operations. No, I don’t mean AJAX. What I mean is that you should create loosely-coupled applications.

Imagine that I was to send out an e-mail whenever a user registers. If the mail server is not available for that millisecond when I want to use it, the send fails and I might have to show an error message to my user (or even worse: a YSOD). Why would that happen? Because my application logic expects that it can communicate with a mail server in a synchronous manner.

[Diagram: the front-end talking to the mail server synchronously]

Now let’s remove that expectation. If we introduce a queue in between both services, the front-end can keep accepting registrations even when the mail server is down. And when it’s back up, the queue will be processed and e-mails will be sent out. Also, if you experience high loads, simply scale out the front-end and add more servers there. More e-mail messages will end up in the queue, but they are guaranteed to be processed in the future at the pace of the mail server. With synchronous communication, the mail service would probably experience high loads or even go down when a large number of front-end servers is added.

[Diagram: the front-end and the mail server decoupled by a queue]

Show me the code!

Let’s combine the two approaches described earlier in this post: sending out e-mails over an asynchronous service. Before we start, make sure you have a SendGrid account (free!). Next, familiarise yourself with Windows Azure storage queues using this simple tutorial.

In a fresh Windows Azure web role, I’ve created a quick-and-dirty user registration form:

[Screenshot: the user registration form]

Nothing fancy, just a form that takes a post to an ASP.NET MVC action method. This action method stores the user in a database and adds a message to a queue named emailconfirmation. Here’s the code for this action method:

[HttpPost, ActionName("Register")]
public ActionResult Register_Post(RegistrationModel model)
{
    if (ModelState.IsValid)
    {
        // ... store the user in the database ...

        // serialize the model
        var serializer = new JavaScriptSerializer();
        var modelAsString = serializer.Serialize(model);

        // emailconfirmation queue
        var account = CloudStorageAccount.FromConfigurationSetting("StorageConnection");
        var queueClient = account.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference("emailconfirmation");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(modelAsString));

        return RedirectToAction("Thanks");
    }

    return View(model);
}

As you can see, it’s not difficult to work with queues. You just enter some data in a message and push it onto the queue. In the code above, I’ve serialized the registration model containing my newly-created user’s name and e-mail to the JSON format (using JavaScriptSerializer). A message can contain binary or textual data: as long as it’s less than 64 KB in data size, the message can be added to a queue.
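
To give an idea of what actually ends up on the queue: for the model used here the message body is just a small JSON string, something along these lines (the exact properties depend on your RegistrationModel):

{"Name":"Maarten","Email":"maarten@example.com"}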

Being cheap with Web Workers

When boning up on Windows Azure, you’ve probably read about so-called Worker Roles, virtual machines that are able to run your back-end code. The problem I see with Worker Roles is that they are expensive to start with. If your application has 100 users and your back-end load is low, why would you reserve an entire server to run that back-end code? The cloud and Windows Azure are all about scalability and using a “Web Worker” will be much more cost-efficient to start with - until you have a large user base, that is.

A Worker Role consists of a class that inherits the RoleEntryPoint class. It looks something along these lines:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // ...

        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            // ...
        }
    }
}

You can run this same code in a Web Role too! And that’s what I mean by a Web Worker: by simply adding this class which inherits RoleEntryPoint to your Web Role, it will act as both a Web and Worker role in one machine.

Call me cheap, but I think this is a nice hidden gem. The best part about this is that whenever your application’s load requires a separate virtual machine running the worker role code, you can simply drag and drop this file from the Web Role to the Worker Role and scale out your application as it grows.

Did you send that e-mail already?

Now that we have a pending e-mail message in our queue and we know we can reduce costs using a web worker, let’s get our e-mail across the wire. First of all, using SendGrid as our external e-mail service offers us a giant development speed advantage, since they are distributing their API client as a NuGet package. In Visual Studio, right-click your web role project and click the “Library Package Manager” menu. In the dialog (shown below), search for Sendgrid and install the package found. This will take a couple of seconds: it will download the SendGrid API client and will add an assembly reference to your project.

[Screenshot: the Library Package Manager dialog with the SendGrid package]
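
If you prefer the Package Manager Console over the dialog, the equivalent would be something like:

Install-Package SendGrid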

All that’s left to do is write the code that reads out the messages from the queue and sends the e-mails using SendGrid. Here’s the queue reading:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
        {
            string value = "";
            if (RoleEnvironment.IsAvailable)
            {
                value = RoleEnvironment.GetConfigurationSettingValue(configName);
            }
            else
            {
                value = ConfigurationManager.AppSettings[configName];
            }
            configSetter(value);
        });

        return base.OnStart();
    }

    public override void Run()
    {
        // emailconfirmation queue
        var account = CloudStorageAccount.FromConfigurationSetting("StorageConnection");
        var queueClient = account.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference("emailconfirmation");
        queue.CreateIfNotExist();

        while (true)
        {
            var message = queue.GetMessage();
            if (message != null)
            {
                // ...

                // mark the message as processed
                queue.DeleteMessage(message);
            }
            else
            {
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }
    }
}

As you can see, reading from the queue is very straightforward. You use a storage account, get a queue reference from it and then, in an infinite loop, you fetch a message from the queue. If a message is present, process it. If not, sleep for 30 seconds. On a side note: why wait 30 seconds for every poll? Well, Windows Azure will bill you per 100,000 requests to your storage account. It’s a small amount, around 0.01 cent, but it may add up quickly if this code is polling your queue continuously on an 8 core machine… Bottom line: on any cloud platform, try to architect for cost as well.

Now that we have our message, we can deserialize it and create a new e-mail that can be sent out using SendGrid:

// deserialize the model
var serializer = new JavaScriptSerializer();
var model = serializer.Deserialize<RegistrationModel>(message.AsString);

// create a new email object using SendGrid
var email = SendGrid.GenerateInstance();
email.From = new MailAddress("maarten@example.com", "Maarten");
email.AddTo(model.Email);
email.Subject = "Welcome to Maarten's Awesome Service!";
email.Html = string.Format(
    "<html><p>Hello {0},</p><p>Welcome to Maarten's Awesome Service!</p>"
    + "<p>Best regards, <br />Maarten</p></html>", model.Name);

var transportInstance = REST.GetInstance(new NetworkCredential("username", "password"));
transportInstance.Deliver(email);

// mark the message as processed
queue.DeleteMessage(message);

Sending e-mail using SendGrid is in fact getting a new e-mail message instance from the SendGrid API client, passing the e-mail details (from, to, body, etc.) on to it and handing it your SendGrid username and password upon sending.

One last thing: you notice we’re only deleting the message from the queue after processing it has succeeded. This is to ensure the message is actually processed. If for some reason the worker role crashes during processing, the message will become visible again on the queue and will be processed by a new worker role which processes this specific queue. That way, messages are never lost and always guaranteed to be processed at least once.

(Almost) time for something new...

September 1st, 2005. Fresh from school, I got the opportunity to start at RealDolmen (Dolmen, back then). Not just a “welcome, here’s your customer, cya!”-start, but a start where my fresh colleagues and I got a 4 months deep dive from people in the industry. Entirely different from what school taught me, focused “on the job”. 4 months later, I started at my first customer, then the second, third, a project developed in-house, did some TFS customizations, some Windows Azure, …

During these past 7 years, I actually have never looked at a different job. Not once. Call me naïve, but I actually very much liked working at RealDolmen. Of course from time to time a project wasn’t as pleasant as you wanted it to be, but not everything can be rosy all the time, right? I’ve had a lot of opportunities (“Hey, would you mind diving in this Windows Azure thing?” – “Hey, public speaking, is that something you want to do?” – “Writing a book? Cool idea!”), all thanks to an awesome group of managers who really value personal growth in a direction you value yourself, not a direction which the company would value. I wasn’t planning on going away.

Until Hadi Hariri, who I met 4 years ago drinking wodka and beer water on one of those public speaking opportunities, asked me if I would like to join JetBrains as a technical evangelist. I found this a tough question. It was something I had in the back of my mind as “job I would love to do”, but I liked what I was doing at RealDolmen and all the opportunities that I’ve been able to jump on during the past years. I would even dare to say that this new opportunity at JetBrains is part the result of opportunities at RealDolmen in the past. RealDolmen is a great company to work with and I wouldn’t hesitate for a moment to go back.

Back to the question. It made my mind go in overdrive. Weighing pros and cons, both at the job level as well as at a practical level (from a local company to an international company, from consultancy to evangelism, from some travel to a lot of travel, …). It meant considering leaving a company which felt (and still feels) like home behind, with colleagues I’ve come to consider friends. After a lot of pondering, I decided that taking this plunge would be the next step. Leaving behind these 7 years still causes mixed feelings. On the other hand, an opportunity that checks off a box on your bucket list is one you can’t ignore. Hence…

I’m leaving RealDolmen. I will start working for JetBrains as a technical evangelist starting December 10, 2012.

I’m both excited and nervous about this change as it’s different from what I’ve been doing until now. For starters: my Twitter stream will contain less complaining about traffic. Why? Because my office is where my Internet connection is. At home, at my parents, while traveling, when parked near your house with an open WiFi, …

And then, of course, the job itself. Going from “technical consultant”, doing mainly the deep-tech parts in the architectural and project starting phases, I’m now going to be a “technical evangelist”, sharing technology and knowledge about .NET and PHP with others through various platforms. I’ll be gathering product feedback, doing tutorials, demos, screencasts and a lot of conferences, interacting with the community. I’ve been doing some of these as a “side job” (thank you, RealDolmen, for being able to do all this in the past!) but now it’ll become my main job. Something very, very appealing.

Does this mean I’ll no longer be involved with the Belgian or international community? On the contrary! I’m planning to keep doing the things I do today on Windows Azure and ASP.NET-related technologies and have a chance to renew my focus on PHP as well. They are at the heart of my interest and I’ll keep them there.

So in short… I’m leaving RealDolmen with mixed emotions, changing a great partnership. Thank you for these past 7 years. And let’s go for at least 7 years at JetBrains. I’m confident that this will be a great partnership as well.

From API key to user with ASP.NET Web API

ASP.NET Web API is a great tool to build an API with. Or as my buddy Kristof Rennen (and the French) always say: “it makes you ‘api”. One of the things I like a lot is the fact that you can do very powerful things that you know and love from the ASP.NET MVC stack, like, for example, using filter attributes. Action filters, result filters and… authorization filters.

Say you wanted to protect your API and make use of the controller’s User property to return user-specific information. You probably will add an [Authorize] attribute (to ensure the user is authenticated) to either the entire API controller or to one of its action methods, like this:

[Authorize]
public class SuperSecretController : ApiController
{
    public string Get()
    {
        return string.Format("Hello, {0}", User.Identity.Name);
    }
}

Great! But… How will your application know who’s calling? Forms authentication doesn’t really make sense for a lot of API’s. Configuring IIS and switching to Windows authentication or basic authentication may be an option. But not every ASP.NET Web API will live in IIS, right? And maybe you want to use some other form of authentication for your API, for example one that uses a custom HTTP header containing an API key? Let’s see how you can do that…

Our API authentication? An API key

API keys may make sense for your API. They provide an easy means of authenticating your API consumers based on a simple token that is passed around in a custom header. OAuth2 may make sense as well, but even that one boils down to a custom Authorization header at the HTTP level. (hint: the approach outlined in this post can be used for OAuth2 tokens as well)

Let’s build our API and require every API consumer to pass in a custom header, named “X-ApiKey”. Calls to our API will look like this:

GET http://localhost:60573/api/v1/SuperSecret HTTP/1.1
Host: localhost:60573
X-ApiKey: 12345

In our SuperSecretController above, we want to make sure that we’re working with a traditional IPrincipal which we can query for username, roles and possibly even claims if needed. How do we get that identity there?

Translating the API key using a DelegatingHandler

The title already gives you a pointer. We want to add a plugin into ASP.NET Web API’s pipeline which replaces the current thread’s IPrincipal with one that is mapped from the incoming API key. That plugin will come in the form of a DelegatingHandler, a class that’s plugged in really early in the ASP.NET Web API pipeline. I’m not going to elaborate on what DelegatingHandler does and where it fits, there’s a perfect post on that to be found here.

Our handler, which I’ll call AuthorizationHeaderHandler will be inheriting ASP.NET Web API’s DelegatingHandler. The method we’re interested in is SendAsync, which will be called on every request into our API.

public class AuthorizationHeaderHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // ...
    }
}

This method offers access to the HttpRequestMessage, which contains everything you’ll probably be needing such as… HTTP headers! Let’s read out our X-ApiKey header, convert it to a ClaimsIdentity (so we can add additional claims if needed) and assign it to the current thread:

public class AuthorizationHeaderHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        IEnumerable<string> apiKeyHeaderValues = null;
        if (request.Headers.TryGetValues("X-ApiKey", out apiKeyHeaderValues))
        {
            var apiKeyHeaderValue = apiKeyHeaderValues.First();

            // ... your authentication logic here ...

            var username = (apiKeyHeaderValue == "12345" ? "Maarten" : "OtherUser");
            var usernameClaim = new Claim(ClaimTypes.Name, username);
            var identity = new ClaimsIdentity(new[] { usernameClaim }, "ApiKey");
            var principal = new ClaimsPrincipal(identity);

            Thread.CurrentPrincipal = principal;
        }

        return base.SendAsync(request, cancellationToken);
    }
}

Easy, no? The only thing left to do is registering this handler in the pipeline during your application’s start:

GlobalConfiguration.Configuration.MessageHandlers.Add(new AuthorizationHeaderHandler());
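
Typically that registration goes in Application_Start, in Global.asax.cs — a minimal sketch (the class name will differ depending on your project template):

using System.Web;
using System.Web.Http;

public class WebApiApplication : HttpApplication
{
    protected void Application_Start()
    {
        // Plug our API key handler into the Web API pipeline as early as possible
        GlobalConfiguration.Configuration.MessageHandlers.Add(new AuthorizationHeaderHandler());

        // ... route registration, filters, bundles, ...
    }
}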

From now on, any request coming in with the X-ApiKey header will be translated into an IPrincipal which you can easily use throughout your web API. Enjoy!

PS: if you’re looking into OAuth2, I’ve used a similar approach in “ASP.NET Web API OAuth2 delegation with Windows Azure Access Control Service” to handle OAuth2 tokens.