Maarten Balliauw {blog}

ASP.NET MVC, Microsoft Azure, PHP, web development ...


How I push GoogleAnalyticsTracker to NuGet

If you check my blog post Tracking API usage with Google Analytics, you’ll see that a small open-source component evolved from MyGet. This component, GoogleAnalyticsTracker, lives on GitHub and NuGet and has since evolved into something that supports Windows Phone and Windows RT as well. But let’s not focus on the open-source aspect.

It’s funny how things evolve. GoogleAnalyticsTracker started as a small component inside MyGet, and for a couple of weeks now it has been using MyGet to publish itself to NuGet. Say what? In this blog post, I’ll elaborate a bit on the development tools used for this tiny component.

Source code

Source code for GoogleAnalyticsTracker can be found on GitHub. This is the main entry point to all activity around this “project”. If you have a nice addition, feel free to fork it and send me a pull request.

Staging NuGet packages

Whenever I update the source code, I want it built automatically, with NuGet packages published for it. Not directly to NuGet: I want to keep the regular, WinRT and WP versions more or less in sync regarding version numbers. Also, I sometimes miss something and fix it again five minutes later. In the meantime, I like to have the generated package on some sort of “staging” feed, at MyGet. It’s even public; check http://www.myget.org/F/githubmaarten if you want to use my development artifacts.

When I decide it’s time for these packages to move to the “official NuGet package repository” at NuGet.org, I simply click the “push” button on my MyGet feed. Yes, that’s a manual step, but I wanted some “gate” in the middle where I have to explicitly do something. Here’s what happens after clicking “push”:

Push to NuGet

That’s right! You can use MyGet as a staging feed and from there push your packages onwards to any other feed out there. MyGet takes care of the uploading.

Building the package

There’s one thing which I forgot… How do I build these packages? Well… I don’t. I let MyGet Build Services do the heavy lifting. On your feed, you can simply click the “Add GitHub project” button and a list of all your GitHub repos will be shown:

Build GitHub project

Tick a box and you’re ready to roll. And if you look carefully, you’ll see a “Build hook URL” being shown:

MyGet build hook

Back on GitHub, there’s this concept of “service hooks”, basically small utilities that you can fire whenever a new commit occurs on your repository. Wouldn’t it be awesome to trigger package creation on MyGet whenever I check in code to GitHub? Guess what…

GitHub build hook

That’s right! And MyGet even runs unit tests. It’s a form of continuous integration where I have the choice to promote packages to NuGet whenever I think they are stable.

A phone call from the cloud: Windows Azure, SignalR & Twilio

Note: this blog post used to be an article for the Windows Azure Roadtrip website. Since that one no longer exists, I decided to post the articles on my blog as well. Find the source code for this post here: 05 ConfirmPhoneNumberDemo.zip (1.32 mb).
It was written earlier this year, so some versions of the packages used in this post (like jQuery or SignalR) may be outdated. Live with it.

In the previous blog post we saw how you can send e-mails from Windows Azure. Why not take communication a step further and make a phone call from Windows Azure? I’ve already mentioned that Windows Azure is a platform which will run your code, topped with some awesomesauce in the form of a large number of components that will speed up development. One of those components is the API provided by Twilio, a third-party service.

Twilio is a telephony web-service API that lets you use your existing web languages and skills to build voice and SMS applications. Twilio Voice allows your applications to make and receive phone calls. Twilio SMS allows your applications to make and receive SMS messages. We’ll use Twilio Voice in conjunction with jQuery and SignalR to spice up a sign-up process.

The scenario

The idea is simple: we want users to sign up using a username and password. In addition, they’ll have to provide their phone number. The user will submit the sign-up form and will be shown a confirmation code. In the background, the user will be called and asked to enter this confirmation code in order to validate his phone number. Once finished, the browser will automatically continue the sign-up process. Here’s a visual:

clip_image002

Sounds too good to be true? Get ready, as it’s relatively simple using Windows Azure and Twilio.

Let’s start…

Before we begin, make sure you have a Twilio account. Twilio offers some free credits, enough to test with. After registering, make sure that you enable international calls and that your phone number is registered as a developer. Twilio takes this step in order to ensure that their service isn’t misused for making abusive phone calls using free developer accounts.

Next, create a Windows Azure project containing an ASP.NET MVC 4 web role. Install the following NuGet packages in it (right-click, Library Package Manager, go wild; a Package Manager Console equivalent follows the list):

  • jQuery
  • jQuery.UI.Combined
  • jQuery.Validation
  • json2
  • Modernizr
  • SignalR
  • Twilio
  • Twilio.Mvc
  • Twilio.TwiML
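
If you prefer the Package Manager Console over the dialog, the same packages can be installed with these commands (package IDs taken from the list above):

Install-Package jQuery
Install-Package jQuery.UI.Combined
Install-Package jQuery.Validation
Install-Package json2
Install-Package Modernizr
Install-Package SignalR
Install-Package Twilio
Install-Package Twilio.Mvc
Install-Package Twilio.TwiML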

It may also be useful to develop some familiarity with the concepts behind SignalR.

The registration form

Let’s create our form. Using a simple model class, SignUpModel, create the following action method (a sketch of the model class itself follows the code):

public ActionResult Index()
{
    return View(new SignUpModel());
}
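
The SignUpModel class itself isn’t listed in this post; a minimal sketch could look like this (only the Phone property is actually referenced below, the other properties are illustrative):

// requires System.ComponentModel.DataAnnotations
public class SignUpModel
{
    [Required]
    public string UserName { get; set; }

    [Required, DataType(DataType.Password)]
    public string Password { get; set; }

    [Required]
    public string Phone { get; set; }
}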

This action method is accompanied with a view, a simple form requesting the required information from our user:

@using (Html.BeginForm("SignUp", "Home", FormMethod.Post))
{
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>Sign Up for this awesome service</legend>

        @* etc etc etc *@

        <div class="editor-label">
            @Html.LabelFor(model => model.Phone)
        </div>
        <div class="editor-field">
            @Html.EditorFor(model => model.Phone)
            @Html.ValidationMessageFor(model => model.Phone)
        </div>

        <p>
            <input type="submit" value="Sign up!" />
        </p>
    </fieldset>
}

We’ll spice up this form with a dialog first. Using jQuery UI, we can create a simple <div> element which will serve as the dialog’s content. Note the ui-helper-hidden class, which keeps it hidden from view.

<div id="phoneDialog" class="ui-helper-hidden"> <h1>Keep an eye on your phone...</h1> <p>Pick up the phone and follow the instructions.</p> <p>You will be asked to enter the following code:</p> <h2>1743</h2> </div>

This is a simple dialog in which we’ll show a hardcoded confirmation code which the user will have to provide when called using Twilio.

Next, let’s code our JavaScript logic which will spice up this form. First, add the required JavaScript libraries for SignalR (more on that later):

<script src="@Url.Content("~/Scripts/jquery.signalR-0.5.0.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/signalr/hubs")" type="text/javascript"></script>

Next, capture the form’s submit event and, if the phone number has not been validated yet, cancel the submit event and show our dialog instead:

$('form:first').submit(function (e) {
    if ($(this).valid() && $('#Phone').data('validated') != true) {
        // Show a dialog
        $('#phoneDialog').dialog({
            title: '',
            modal: true,
            width: 400,
            height: 400,
            resizable: false,
            beforeClose: function () {
                if ($('#Phone').data('validated') != true) {
                    return false;
                }
            }
        });

        // Don't submit. Yet.
        e.preventDefault();
    }
});

Nothing fancy yet. If you now run this code, you’ll see that a dialog opens and remains open for eternity. Let’s craft some SignalR code now. SignalR uses the concept of Hubs to enable not only client-to-server communication, but also server-to-client communication. We’ll need the latter to inform our view whenever the user has confirmed his phone number. In the project, add the following class:

[HubName("phonevalidator")] public class PhoneValidatorHub : Hub { public void StartValidation(string phoneNumber) { } }

This class defines a service that the client can call. SignalR will also keep the connection with the client open so that this PhoneValidatorHub can later send a message to the client as well. Let’s connect our view to this hub. In the form submit event handler, add the following line of code:

// Validate the phone number using Twilio
$.connection.phonevalidator.startValidation($('#Phone').val());

We’ve created a C# class with a StartValidation method and we’re calling the startValidation message from JavaScript. Coincidence? No. SignalR makes this possible. But we’re not finished yet. We can now call a method on the server side, but how would the server inform the client when the phone number has been validated? I’ll get to that point later. First, let’s make sure our JavaScript code can receive that call from the server. To do so, connect to the PhoneValidator hub and add a callback function to it:

var validatorHub = $.connection.phonevalidator;
validatorHub.validated = function (phoneNumber) {
    if (phoneNumber == $('#Phone').val()) {
        $('#Phone').data('validated', true);
        $('#phoneDialog').dialog('destroy');
        $('form:first').trigger('submit');
    }
};
$.connection.hub.start();

What we’re doing here is adding a client-side function named validated to the SignalR hub. We can call this function, sitting at the client side, from our server-side code later on. The function itself is easy: it checks whether the phone number that was validated matches the one the user entered and, if so, it submits the form and completes the signup.

All that’s left is calling the user and, when the confirmation succeeds, we’ll have to inform our client by calling the validated message on the hub.

Initiating a phone call

The phone call to our user will be initiated in the PhoneValidatorHub’s StartValidation method. Add the following code there:

var twilioClient = new TwilioRestClient("api user", "api password");

string url = "http://mas.cloudapp.net/Home/TwilioValidationMessage?passcode=1743"
    + "&phoneNumber=" + HttpContext.Current.Server.UrlEncode(phoneNumber);

// Instantiate the call options that are passed to the outbound call
CallOptions options = new CallOptions();
options.From = "+14155992671"; // Twilio's developer number
options.To = phoneNumber;
options.Url = url;

// Make the call.
twilioClient.InitiateOutboundCall(options);

Using the TwilioRestClient class, we create a request to Twilio. We also pass on a URL which points to our application. Twilio uses TwiML, an XML format to instruct their phone services. When calling the InitiateOutboundCall method, Twilio will issue a request to the URL we are hosting (http://.....cloudapp.net/Home/TwilioValidationMessage) to fetch the TwiML which tells Twilio what to say, ask, record, gather, … on the phone.

Next up: implementing the TwilioValidationMessage action method.

public ActionResult TwilioValidationMessage(string passcode, string phoneNumber)
{
    var response = new TwilioResponse();
    response.Say("Hi there, welcome to Maarten's Awesome Service.");
    response.Say("To validate your phone number, please enter the 4 digit"
        + " passcode displayed on your screen followed by the pound sign.");
    response.BeginGather(new
    {
        numDigits = 4,
        action = "http://mas.cloudapp.net/Home/TwilioValidationCallback?phoneNumber="
            + Server.UrlEncode(phoneNumber),
        method = "GET"
    });
    response.EndGather();

    return new TwiMLResult(response);
}

That’s right. We’re creating some TwiML here. Our ASP.NET MVC action method is telling Twilio to say some text and to gather 4 digits from his phone pad. These 4 digits will be posted to the TwilioValidationCallback action method by the Twilio service, which is the next method we’ll have to implement.
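
For reference, the TwiML document returned by this action method will look roughly like this (a sketch; the exact markup is produced by the Twilio.TwiML library, and the encoded phone number is abbreviated):

<Response>
  <Say>Hi there, welcome to Maarten's Awesome Service.</Say>
  <Say>To validate your phone number, please enter the 4 digit passcode displayed on your screen followed by the pound sign.</Say>
  <Gather numDigits="4" method="GET"
          action="http://mas.cloudapp.net/Home/TwilioValidationCallback?phoneNumber=..." />
</Response>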

public ActionResult TwilioValidationCallback(string phoneNumber)
{
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<PhoneValidatorHub>();
    hubContext.Clients.validated(phoneNumber);

    var response = new TwilioResponse();
    response.Say("Thank you! Your browser should automatically continue. Bye!");
    response.Hangup();

    return new TwiMLResult(response);
}

The TwilioValidationCallback action method does two things. First, it gets a reference to our SignalR hub and calls the validated function on it. As you may recall, we created this method on the hub’s client side, so in fact our ASP.NET MVC server application is calling a method on the client side. Doing this triggers the client to hide the validation dialog and complete the user sign-up process.

The other thing we’re doing here is generating some more TwiML (it’s fun!). We thank the user for validating his phone number and, after that, we hang up the call.

You see? Working with voice (and text messages too, if you want) isn’t that hard. It enables additional scenarios that can make your application stand out from all the many others out there. Enjoy!

05 ConfirmPhoneNumberDemo.zip (1.32 mb)

Sending e-mail from Windows Azure

Note: this blog post used to be an article for the Windows Azure Roadtrip website. Since that one no longer exists, I decided to post the articles on my blog as well. Find the source code for this post here: 04 SendingEmailsFromTheCloud.zip (922.27 kb).

When a user subscribes, you send him a thank-you e-mail. When his account expires, you send him a warning message containing a link to purchase a new subscription. When he places an order, you send him an order confirmation. I think you get the picture: a fairly common scenario in almost any application is sending out e-mails.

Now, why would I spend a blog post on sending out an e-mail? Well, for two reasons. First, Windows Azure doesn’t have a built-in mail server. No reason to panic! I’ll explain why and how to overcome this in a minute. Second, I want to demonstrate a technique that will make your applications a lot more scalable and resilient to errors.

E-mail services for Windows Azure

Windows Azure doesn’t have a built-in mail server. And for good reasons: if you deploy your application to Windows Azure, it will be hosted on an IP address which previously belonged to someone else. It’s a shared platform, after all. Now, what if some obscure person used Windows Azure to send out a number of spam messages? Chances are your newly-acquired IP address has already been blacklisted, and any e-mail you send from it ends up in people’s spam filters.

All that is fine, but of course you still want to send out e-mails. If you have your own SMTP server, you can simply configure your .NET application hosted on Windows Azure to make use of it. There are a number of so-called SMTP relay services out there as well; even Belgian hosters like Combell, Hostbasket or OVH offer this service. Microsoft has also partnered with SendGrid to have an officially supported service for sending out e-mails, with a special offer for Windows Azure customers: 25,000 free e-mails per month. That makes it a great service to get started with. I’ll be using SendGrid in this blog post.

Asynchronous operations

I said earlier that I wanted to show you two things: sending e-mails and building scalable and fault-resilient applications. This can be done using asynchronous operations. No, I don’t mean AJAX. What I mean is that you should create loosely-coupled applications.

Imagine that I were to send out an e-mail whenever a user registers. If the mail server is not available for that millisecond when I want to use it, the send fails and I might have to show an error message to my user (or even worse: a YSOD). Why would that happen? Because my application logic expects that it can communicate with a mail server in a synchronous manner.

clip_image002

Now let’s remove that expectation. If we introduce a queue in between both services, the front-end can keep accepting registrations even when the mail server is down. And when it’s back up, the queue will be processed and e-mails will be sent out. Also, if you experience high loads, simply scale out the front-end and add more servers there. More e-mail messages will end up in the queue, but they are guaranteed to be processed in the future at the pace of the mail server. With synchronous communication, the mail service would probably experience high loads or even go down when a large number of front-end servers is added.

clip_image004

Show me the code!

Let’s combine the two approaches described earlier in this post: sending out e-mails over an asynchronous service. Before we start, make sure you have a SendGrid account (free!). Next, familiarise yourself with Windows Azure storage queues using this simple tutorial.

In a fresh Windows Azure web role, I’ve created a quick-and-dirty user registration form:

clip_image006

Nothing fancy, just a form that takes a post to an ASP.NET MVC action method. This action method stores the user in a database and adds a message to a queue named emailconfirmation. Here’s the code for this action method:

[HttpPost, ActionName("Register")]
public ActionResult Register_Post(RegistrationModel model)
{
    if (ModelState.IsValid)
    {
        // ... store the user in the database ...

        // serialize the model
        var serializer = new JavaScriptSerializer();
        var modelAsString = serializer.Serialize(model);

        // emailconfirmation queue
        var account = CloudStorageAccount.FromConfigurationSetting("StorageConnection");
        var queueClient = account.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference("emailconfirmation");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(modelAsString));

        return RedirectToAction("Thanks");
    }
    return View(model);
}

As you can see, it’s not difficult to work with queues. You just enter some data in a message and push it onto the queue. In the code above, I’ve serialized the registration model containing my newly-created user’s name and e-mail to JSON (using JavaScriptSerializer). A message can contain binary or textual data: as long as it’s less than 64 KB in size, the message can be added to a queue.
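
The RegistrationModel class isn’t shown in full in this post; based on the properties used here and when sending the e-mail later on (Name and Email), a minimal sketch could be:

public class RegistrationModel
{
    public string Name { get; set; }

    public string Email { get; set; }

    // ... password and any other sign-up fields would go here ...
}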

Being cheap with Web Workers

When boning up on Windows Azure, you’ve probably read about so-called Worker Roles, virtual machines that are able to run your back-end code. The problem I see with Worker Roles is that they are expensive to start with. If your application has 100 users and your back-end load is low, why would you reserve an entire server to run that back-end code? The cloud and Windows Azure are all about scalability and using a “Web Worker” will be much more cost-efficient to start with - until you have a large user base, that is.

A Worker Role consists of a class that inherits the RoleEntryPoint class. It looks something along these lines:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // ...
        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            // ...
        }
    }
}

You can run this same code in a Web Role too! And that’s what I mean by a Web Worker: by simply adding this class which inherits RoleEntryPoint to your Web Role, it will act as both a Web and Worker role in one machine.

Call me cheap, but I think this is a nice hidden gem. The best part about this is that whenever your application’s load requires a separate virtual machine running the worker role code, you can simply drag and drop this file from the Web Role to the Worker Role and scale out your application as it grows.

Did you send that e-mail already?

Now that we have a pending e-mail message in our queue and we know we can reduce costs using a web worker, let’s get our e-mail across the wire. First of all, using SendGrid as our external e-mail service offers us a giant development speed advantage, since they are distributing their API client as a NuGet package. In Visual Studio, right-click your web role project and click the “Library Package Manager” menu. In the dialog (shown below), search for Sendgrid and install the package found. This will take a couple of seconds: it will download the SendGrid API client and will add an assembly reference to your project.

clip_image008

All that’s left to do is write the code that reads out the messages from the queue and sends the e-mails using SendGrid. Here’s the queue reading:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
        {
            string value = "";
            if (RoleEnvironment.IsAvailable)
            {
                value = RoleEnvironment.GetConfigurationSettingValue(configName);
            }
            else
            {
                value = ConfigurationManager.AppSettings[configName];
            }
            configSetter(value);
        });

        return base.OnStart();
    }

    public override void Run()
    {
        // emailconfirmation queue
        var account = CloudStorageAccount.FromConfigurationSetting("StorageConnection");
        var queueClient = account.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference("emailconfirmation");
        queue.CreateIfNotExist();

        while (true)
        {
            var message = queue.GetMessage();
            if (message != null)
            {
                // ...

                // mark the message as processed
                queue.DeleteMessage(message);
            }
            else
            {
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }
    }
}

As you can see, reading from the queue is very straightforward. You use a storage account, get a queue reference from it and then, in an infinite loop, fetch a message from the queue. If a message is present, process it. If not, sleep for 30 seconds. On a side note: why wait 30 seconds between polls? Well, Windows Azure will bill you per 100,000 requests to your storage account. It’s a small amount, around 0.01 cent, but it may add up quickly if this code is polling your queue continuously on an 8-core machine… Bottom line: on any cloud platform, try to architect for cost as well.

Now that we have our message, we can deserialize it and create a new e-mail that can be sent out using SendGrid:

// deserialize the model
var serializer = new JavaScriptSerializer();
var model = serializer.Deserialize<RegistrationModel>(message.AsString);

// create a new email object using SendGrid
var email = SendGrid.GenerateInstance();
email.From = new MailAddress("maarten@example.com", "Maarten");
email.AddTo(model.Email);
email.Subject = "Welcome to Maarten's Awesome Service!";
email.Html = string.Format(
    "<html><p>Hello {0},</p><p>Welcome to Maarten's Awesome Service!</p>"
    + "<p>Best regards, <br />Maarten</p></html>", model.Name);

var transportInstance = REST.GetInstance(new NetworkCredential("username", "password"));
transportInstance.Deliver(email);

// mark the message as processed
queue.DeleteMessage(message);

Sending e-mail using SendGrid boils down to getting a new e-mail message instance from the SendGrid API client, passing the e-mail details (from, to, body, etc.) to it, and handing it your SendGrid username and password upon sending.

One last thing: you’ll notice we only delete the message from the queue after processing has succeeded. This is to ensure the message is actually processed. If for some reason the worker role crashes during processing, the message will become visible again on the queue and will be picked up by another worker role instance processing this queue. That way, messages are never lost and are guaranteed to be processed at least once.
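
One caveat with at-least-once processing: a message that consistently crashes the worker will reappear forever, a so-called “poison message”. A common guard, sketched below (not part of the sample download), is to check the message’s DequeueCount property and discard it after a few attempts:

var message = queue.GetMessage();
if (message != null && message.DequeueCount > 5)
{
    // poison message: it already failed processing several times, discard it
    // (or move it to a dead-letter queue / log it for inspection)
    queue.DeleteMessage(message);
}
else if (message != null)
{
    // ... process the message and delete it afterwards, as shown above ...
}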

From API key to user with ASP.NET Web API

ASP.NET Web API is a great tool to build an API with. Or as my buddy Kristof Rennen (and the French) always say: “it makes you ‘api”. One of the things I like a lot is the fact that you can do very powerful things that you know and love from the ASP.NET MVC stack, like, for example, using filter attributes. Action filters, result filters and… authorization filters.

Say you wanted to protect your API and make use of the controller’s User property to return user-specific information. You probably will add an [Authorize] attribute (to ensure the user is authenticated) to either the entire API controller or to one of its action methods, like this:

[Authorize]
public class SuperSecretController : ApiController
{
    public string Get()
    {
        return string.Format("Hello, {0}", User.Identity.Name);
    }
}

Great! But… how will your application know who’s calling? Forms authentication doesn’t really make sense for a lot of APIs. Configuring IIS and switching to Windows authentication or basic authentication may be an option. But not every ASP.NET Web API will live in IIS, right? And maybe you want to use some other form of authentication for your API, for example one that uses a custom HTTP header containing an API key? Let’s see how you can do that…

Our API authentication? An API key

API keys may make sense for your API. They provide an easy means of authenticating your API consumers based on a simple token that is passed around in a custom header. OAuth2 may make sense as well, but even that one boils down to a custom Authorization header at the HTTP level. (hint: the approach outlined in this post can be used for OAuth2 tokens as well)

Let’s build our API and require every API consumer to pass in a custom header, named “X-ApiKey”. Calls to our API will look like this:

GET http://localhost:60573/api/v1/SuperSecret HTTP/1.1
Host: localhost:60573
X-ApiKey: 12345

In our SuperSecretController above, we want to make sure that we’re working with a traditional IPrincipal which we can query for username, roles and possibly even claims if needed. How do we get that identity there?

Translating the API key using a DelegatingHandler

The title already gives you a pointer. We want to add a plugin to ASP.NET Web API’s pipeline which replaces the current thread’s IPrincipal with one that is mapped from the incoming API key. That plugin will come in the form of a DelegatingHandler, a class that’s plugged in really early in the ASP.NET Web API pipeline. I’m not going to elaborate on what DelegatingHandler does and where it fits; there’s a perfect post on that to be found here.

Our handler, which I’ll call AuthorizationHeaderHandler, inherits ASP.NET Web API’s DelegatingHandler. The method we’re interested in is SendAsync, which will be called on every request into our API.

public class AuthorizationHeaderHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // ...
    }
}

This method offers access to the HttpRequestMessage, which contains everything you’ll probably be needing such as… HTTP headers! Let’s read out our X-ApiKey header, convert it to a ClaimsIdentity (so we can add additional claims if needed) and assign it to the current thread:

public class AuthorizationHeaderHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        IEnumerable<string> apiKeyHeaderValues = null;
        if (request.Headers.TryGetValues("X-ApiKey", out apiKeyHeaderValues))
        {
            var apiKeyHeaderValue = apiKeyHeaderValues.First();

            // ... your authentication logic here ...
            var username = (apiKeyHeaderValue == "12345" ? "Maarten" : "OtherUser");
            var usernameClaim = new Claim(ClaimTypes.Name, username);
            var identity = new ClaimsIdentity(new[] { usernameClaim }, "ApiKey");
            var principal = new ClaimsPrincipal(identity);
            Thread.CurrentPrincipal = principal;
        }

        return base.SendAsync(request, cancellationToken);
    }
}

Easy, no? The only thing left to do is registering this handler in the pipeline during your application’s start:

GlobalConfiguration.Configuration.MessageHandlers.Add(new AuthorizationHeaderHandler());

From now on, any request coming in with the X-ApiKey header will be translated into an IPrincipal which you can easily use throughout your web API. Enjoy!
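
Because the handler assigns a ClaimsIdentity, your action methods can also read any additional claims you add in the handler. A minimal sketch (assuming System.Security.Claims from .NET 4.5, matching the Claim and ClaimsIdentity types used above, plus System.Linq):

public string Get()
{
    // cast back to ClaimsIdentity to enumerate the claims set by AuthorizationHeaderHandler
    var identity = (ClaimsIdentity)User.Identity;
    var claims = string.Join(", ",
        identity.Claims.Select(c => c.Type + "=" + c.Value));

    return string.Format("Hello, {0}. Your claims: {1}",
        identity.Name, claims);
}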

PS: if you’re looking into OAuth2, I’ve used a similar approach in “ASP.NET Web API OAuth2 delegation with Windows Azure Access Control Service” to handle OAuth2 tokens.

Get your Windows 8 up to speed fast

With the release of Windows 8 on MSDN yesterday, I have a gut feeling that today, around the globe, people are installing this fresh operating system on their machines. I’ve done so too, and I want to share two tools with you: one that helped me get up to speed fast, and one that will get me up to speed even faster the next time I want to reset my PC.

Chocolatey

One of the best things created for Windows, ever, is Chocolatey. If you are familiar with Ninite, you will find that both serve the same purpose; however, Chocolatey is more developer-focused.

Chocolatey provides a catalog of software packages like Notepad++, ReSharper, Paint.NET and a whole lot more. After installing Chocolatey, all you have to do to install such a package is invoke “cinst <package>” from the command line. The words “command line” are pretty important: what if you could just create a batch file containing all the packages you need, like I did here?
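
Such a batch file is nothing more than a series of cinst calls. A sketch (the package IDs are illustrative; look up the exact names in the Chocolatey gallery):

@echo off
cinst notepadplusplus
cinst 7zip
cinst git
cinst googlechrome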

Batch files are great, but even easier is creating a custom Chocolatey feed on www.myget.org (create a feed, go to package sources, add Chocolatey): you can simply add whatever you need on a fresh system to this feed and whenever you want to install every package from your custom feed, like I did yesterday evening, you invoke

cinst All -source "http://www.myget.org/F/chocolateymaarten"

and go to bed. In the morning, everything is on your PC.

Windows 8 - Reset Your PC

There’s a new feature in Windows 8 called “Refresh/reset Your PC”. What it does is revert to a certain baseline whenever you feel the need for a format C: coming up. This baseline, by default, is a fresh install. Now what if you could set your own baseline and revert back to that one the next time you need a reinstall? The good news: you can do this!

  • Configure your PC at will
  • From an elevated command prompt, issue:
    mkdir C:\SoFreshThatItSmellsGreat
    recimg -CreateImage C:\SoFreshThatItSmellsGreat

Done!

ASP.NET Web API OAuth2 delegation with Windows Azure Access Control Service

If you are familiar with OAuth2’s protocol flow, you know there’s a lot of things you should implement if you want to protect your ASP.NET Web API using OAuth2. To refresh your memory, here’s what’s required (at least):

  • OAuth authorization server
  • Keep track of consuming applications
  • Keep track of user consent (yes, I allow application X to act on my behalf)
  • OAuth token expiration & refresh token handling
  • Oh, and your API

That’s a lot to build there. Wouldn’t it be great to outsource part of that list to a third party? A little-known feature of the Windows Azure Access Control Service is that you can use it to keep track of applications, user consent and token expiration & refresh token handling. That leaves you with implementing:

  • OAuth authorization server
  • Your API

Let’s do it!

On a side note: I’m aware of the road-to-hell post released last week on OAuth2. I still think that whoever offers OAuth2 should be responsible enough to implement the protocol in a secure fashion. The protocol gives you the options to do so, and, as with regular web page logins, you as the implementer should think about security.

Building a simple API

I’ve been doing some demos lately using www.brewbuddy.net, a sample application (sources here) which enables hobby beer brewers to keep track of their recipes and current brews. There are a lot of applications out there that may benefit from being able to consume my recipes. I love the smell of a fresh API in the morning!

Here’s an API which would enable access to my BrewBuddy recipes:

[Authorize]
public class RecipesController : ApiController
{
    protected IRecipeService RecipeService { get; private set; }

    public RecipesController(IRecipeService recipeService)
    {
        RecipeService = recipeService;
    }

    public IQueryable<RecipeViewModel> Get()
    {
        var recipes = RecipeService.GetRecipes(User.Identity.Name);
        var model = AutoMapper.Mapper.Map(recipes, new List<RecipeViewModel>());

        return model.AsQueryable();
    }
}

Nothing special, right? We’re just querying our RecipeService for the current user’s recipes. And the current user should be logged in, as specified using the [Authorize] attribute. Wait a minute! The current user?

I’ve built this API on the standard ASP.NET Web API features such as the [Authorize] attribute and the expectation that the User.Identity.Name property is populated. The reason for that is simple: my API requires a user and should not care how that user is populated. If someone wants to consume my API by authenticating over Forms authentication, fine by me. If someone configures IIS to use Windows authentication or even hacks in basic authentication, fine by me. My API shouldn’t care about that.

OAuth2 is a different state of mind

OAuth2 adds a layer of complexity. Mental complexity that is. Your API consumer is not your end user. Your API consumer is acting on behalf of your end user. That’s a huge difference! Here’s what really happens:

OAuth2 protocol flow

The end user loads a consuming application (a mobile app or a web app that doesn’t really matter). That application requests a token from an authorization server trusted by your application. The user has to login, and usually accept the fact that the app can perform actions on the user’s behalf (think of Twitter’s “Allow/Deny” screen). If successful, the authorization server returns a code to the app which the app can then exchange for an access token containing the user’s username and potentially other claims.

Now remember what we started this post with? We want to get rid of part of the OAuth2 implementation. We don’t want to be bothered by too much of this. Let’s try to accomplish the following:

OAuth2 protocol flow with Windows Azure

Let’s introduce you to…

WindowsAzure.Acs.Oauth2

“That looks like an assembly name. Heck, even like a NuGet package identifier!” You’re right about that. I’ve done a lot of the integration work for you (sources / NuGet package).

WindowsAzure.Acs.Oauth2 is currently in alpha status, so you’ll have to register this package in your ASP.NET MVC Web API project using the Package Manager Console, issuing the following command:

Install-Package WindowsAzure.Acs.Oauth2 -IncludePrerelease

This command will bring some dependencies to your project and install the following source files:

  • App_Start/AppStart_OAuth2API.cs - Makes sure that OAuth2-signed SWT tokens are transformed into a ClaimsIdentity for use in your API. Remember where I used User.Identity.Name in my API? Populating that is performed by this guy.

  • Controllers/AuthorizeController.cs - A standard authorization server implementation which is configured by the Web.config settings. You can override certain methods here, for example if you want to show additional application information on the consent page.

  • Views/Shared/_AuthorizationServer.cshtml - A default consent page. This can be customized at will.

Next to these files, the following entries are added to your Web.config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="WindowsAzure.OAuth.SwtSigningKey" value="[your 256-bit symmetric key configured in the ACS]" />
    <add key="WindowsAzure.OAuth.RelyingPartyName" value="[your relying party name configured in the ACS]" />
    <add key="WindowsAzure.OAuth.RelyingPartyRealm" value="[your relying party realm configured in the ACS]" />
    <add key="WindowsAzure.OAuth.ServiceNamespace" value="[your ACS service namespace]" />
    <add key="WindowsAzure.OAuth.ServiceNamespaceManagementUserName" value="ManagementClient" />
    <add key="WindowsAzure.OAuth.ServiceNamespaceManagementUserKey" value="[your ACS service management key]" />
  </appSettings>
</configuration>

These settings should be configured based on your Windows Azure Access Control settings. Details on this can be found on the GitHub page.

Consuming the API

After populating Windows Azure Access Control Service with a client_id and client_secret for my consuming app (which you can do using the excellent FluentACS package or manually, as shown in the following screenshot), you’re good to go.

ACS OAuth2 Service Identity

The WindowsAzure.Acs.Oauth2 package adds additional functionality to your application: it provides your ASP.NET Web API with the current user’s details (after a successful OAuth2 authorization flow took place) and it adds a controller and view to your app which provides a simple consent page (that can be customized):

image

After granting access, WindowsAzure.Acs.Oauth2 will store the user’s choice in Windows Azure ACS and redirect you back to the application. From there on, the application can ask Windows Azure ACS for an access token and refresh that access token once it expires, without your application having to interfere with that process ever again. WindowsAzure.Acs.Oauth2 transforms the incoming OAuth2 token into a ClaimsIdentity which your API can use to determine which user is accessing your API. Focus on your API, not on OAuth.

Enjoy!

Protecting Windows Azure Web and Worker roles from malware

Most IT administrators will install some sort of virus scanner on their precious servers. Since the cloud, from a technical perspective, is just a server, why not follow that security best practice on Windows Azure too? It has gone by almost unnoticed, but last week Microsoft released the Microsoft Endpoint Protection for Windows Azure Customer Technology Preview. For the sake of bandwidth, I’ll be referring to it as EP.

EP offers real-time protection, scheduled scanning, malware remediation (a fancy word for quarantining), active protection and automatic signature updates. Sounds a lot like Microsoft Endpoint Protection or Microsoft Security Essentials? That’s no coincidence: EP is a Windows Azurified version of it.

Enabling anti-malware on Windows Azure

After installing the Microsoft Endpoint Protection for Windows Azure Customer Technology Preview, sorry, EP, a new Windows Azure import will be available. As with remote desktop or diagnostics, EP can be enabled by a simple XML one-liner:

<Import moduleName="Antimalware" />

Here’s a sample web role ServiceDefinition.csdef file containing this new import:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="ChuckProject"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="ChuckNorris" vmsize="Small">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
    <Imports>
      <Import moduleName="Antimalware" />
      <Import moduleName="Diagnostics" />
    </Imports>
  </WebRole>
</ServiceDefinition>

That’s it! When you now deploy your Windows Azure solution, Microsoft Endpoint Protection will be installed, enabled and configured on your Windows Azure virtual machines.

Now since I started this blog post with “IT administrators”, chances are you want to fine-tune this plugin a little. No problem! The ServiceConfiguration.cscfg file has some options waiting to be eh, touched. And since these are in the service configuration, you can also modify them through the management portal, the management API, or sysadmin-style using PowerShell. Anyway, the following options are available (a sample configuration fragment follows the list):

  • Microsoft.WindowsAzure.Plugins.Antimalware.ServiceLocation – Specify the datacenter region where your application is deployed, for example “West Europe” or “East Asia”. This will speed up deployment time.
  • Microsoft.WindowsAzure.Plugins.Antimalware.EnableAntimalware – Should EP be enabled or not?
  • Microsoft.WindowsAzure.Plugins.Antimalware.EnableRealtimeProtection – Should real-time protection be enabled?
  • Microsoft.WindowsAzure.Plugins.Antimalware.EnableWeeklyScheduledScans – Weekly scheduled scans enabled?
  • Microsoft.WindowsAzure.Plugins.Antimalware.DayForWeeklyScheduledScans – Which day of the week (0 – 7 where 0 means daily)
  • Microsoft.WindowsAzure.Plugins.Antimalware.TimeForWeeklyScheduledScans – What time should the scheduled scan run?
  • Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedExtensions – Specify file extensions to exclude from scanning (pipe-delimited)
  • Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedPaths – Specify paths to exclude from scanning (pipe-delimited)
  • Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedProcesses – Specify processes to exclude from scanning (pipe-delimited)
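
As mentioned above, here’s a hypothetical ServiceConfiguration.cscfg fragment (inside your role’s ConfigurationSettings element) wiring up a few of these options; the values shown are examples only:

<ConfigurationSettings>
  <!-- illustrative values: adjust region, schedule and exclusions to your needs -->
  <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableAntimalware" value="true" />
  <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableRealtimeProtection" value="true" />
  <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ServiceLocation" value="West Europe" />
  <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedExtensions" value="log|tmp" />
</ConfigurationSettings>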

Monitoring anti-malware on Windows Azure

How will you know if a threat has been detected? Well, luckily for us, Microsoft Endpoint Protection writes its logs to the System event log. This means that you can simply add a specific data source to your diagnostics monitor and you’re done:

var configuration = DiagnosticMonitor.GetDefaultInitialConfiguration();

// Note: if you need informational / verbose, also subscribe to levels 4 and 5
configuration.WindowsEventLog.DataSources.Add(
    "System!*[System[Provider[@Name='Microsoft Antimalware'] and (Level=1 or Level=2 or Level=3)]]");

configuration.WindowsEventLog.ScheduledTransferPeriod
    = System.TimeSpan.FromMinutes(1);

DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
    configuration);

In addition, EP also logs its inner workings to its installation folders. You can also include these in your diagnostics configuration:

var configuration = DiagnosticMonitor.GetDefaultInitialConfiguration();

// ...add the event logs like in the previous code sample...

var mep1 = new DirectoryConfiguration();
mep1.Container = "wad-endpointprotection-container";
mep1.DirectoryQuotaInMB = 5;
mep1.Path = @"%programdata%\Microsoft Endpoint Protection";

var mep2 = new DirectoryConfiguration();
mep2.Container = "wad-endpointprotection-container";
mep2.DirectoryQuotaInMB = 5;
mep2.Path = @"%programdata%\Microsoft\Microsoft Security Client";

configuration.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);
configuration.Directories.DataSources.Add(mep1);
configuration.Directories.DataSources.Add(mep2);

DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
    configuration);

From this moment on, you can use a tool like Cerebrata’s Diagnostics Monitor to check the event logs of all your Windows Azure instances that have anti-malware enabled.

TechDays Finland - Architectural Patterns for the Cloud - NuGet

As promised, here are the slide decks for the two sessions delivered at TechDays Finland last week.

Architectural Patterns for the Cloud

The promise of all cloud vendors out there is that they can run your applications without changes. While that claim is true, it’s better to optimize existing software or design specifically for the cloud when moving or building an application. Architectural optimization will speed up your application, make it more scalable and even make it cheaper to run on Windows Azure. This session takes you through some common patterns that are easy to implement and will make your cloud more sunny.

Organize your Chickens - NuGet for the Enterprise

Managing software dependencies, whether created in-house or by third parties, can be a pain in the behind. Whether dependencies feel like wild chickens or people run around like chickens dealing with dependencies, the NuGet package manager can be a cure. Let us guide you through creating enterprise (chicken) NuGets and dealing with them in a structured, easy-to-maintain manner. From developer workstation to build server, NuGet tastes great! We’ll provide you the dip sauce.

Enjoy! And if there’s any feedback or questions, I would love to hear it.

Introducing MyGet package source proxy (beta)

My blog already has quite a number of posts around MyGet, the NuGet-as-a-Service solution which my colleague Xavier and I are running. There are a lot of reasons to host your own personal NuGet feed (such as protecting your intellectual property or only adding approved packages to the feed; there are many more, as you can <plug>read in our book</plug>). We’ve added support for another scenario: MyGet now supports proxying remote feeds.

Up until now, MyGet required you to upload your own NuGet packages and to copy in the packages you need from the official NuGet feed. The problem with this is that you either had your team register multiple NuGet feeds in Visual Studio (which still is a good option) or register just your MyGet feed and add every package your team uses to it. Which, again, is also a good option.

With our package source proxy in place, we now provide a third option: MyGet can proxy upstream NuGet feeds. Let’s start with a quick diagram and then walk through a scenario elaborating on this:

MyGet Feed Proxy Aggregate Feed Connector

You are seeing this correctly: you can now register just your MyGet feed in Visual Studio and we’ll add upstream packages to your feed automatically, optionally filtered as well.

Enabling MyGet package source proxy

Enabling the MyGet package source proxy is very straightforward. Navigate to your feed of choice (or create a new one) and click the Package Sources item. This will present you with a screen similar to this:

MyGet hosted package source

From there, you can add external (or MyGet) feeds to your personal feed and add packages directly from them using the Add package dialog. More on that in Xavier’s blog post. What’s more: with the tick of a checkbox, these external feeds can also be aggregated with your feed in Visual Studio’s search results. Here’s the magical add dialog with the proxy checkbox:

Add package source proxy

As you can see, we also offer the option to filter upstream packages. For example, the filter string substringof('wp7', Tags) eq true that we used will select all upstream packages whose tags contain “wp7”.

What will Visual Studio display? Just the Windows Phone 7 packages from NuGet, served through our single-endpoint MyGet feed.

Conclusion

Instead of working with a number of NuGet feeds, your development team will just work with one feed that is aggregating packages from both MyGet and other package sources out there (NuGet, Orchard Gallery, Chocolatey, …). This centralizes managing external packages and makes it easier for your team members to find the packages they can use in your projects.

Do let us know what you think of this feature! Our UserVoice is there for you, and in fact, that’s where we got the idea for this feature from in the first place. Your voice is heard!

Tracking API usage with Google Analytics

So you have an API. Congratulations! You should have one. But how do you track who uses it, what client software they use and so on? You may be logging API calls yourself. You may be relying on services like Apigee.com who make you pay (for a great service, though!). Being cheap, we thought about another approach for MyGet. We’re already using Google Analytics to track pageviews and so on, why not use Google Analytics for tracking API calls as well?

Meet GoogleAnalyticsTracker. It is a three-class assembly which allows you to track requests to Google Analytics from within C#.

Go and fork this thing and add out-of-the-box support for WCF Web API, Nancy or even “plain old” WCF or ASMX!

Using GoogleAnalyticsTracker

Using GoogleAnalyticsTracker in your projects is simple. Simply Install-Package GoogleAnalyticsTracker and be an API-tracking bad-ass! Two things are required: a Google Analytics tracking ID (something in the form of UA-XXXXXXX-X) and the domain you wish to track, preferably the same domain as the one registered with Google Analytics.

After installing GoogleAnalyticsTracker into your project, you currently have two options to track your API calls: use the Tracker class or use the included ASP.NET MVC Action Filter.

Here’s a quick demo of using the Tracker class:

Tracker tracker = new Tracker("UA-XXXXXX-XX", "www.example.org");
tracker.TrackPageView("My API - Create", "api/create");

Unfortunately, this class has no notion of a web request. This means that if you want to track user agents and user languages, you’ll have to add some more code:

Tracker tracker = new Tracker("UA-XXXXXX-XX", "www.example.org");

var request = HttpContext.Request;
tracker.Hostname = request.UserHostName;
tracker.UserAgent = request.UserAgent;
tracker.Language = request.UserLanguages != null ? string.Join(";", request.UserLanguages) : "";

tracker.TrackPageView("My API - Create", "api/create");

Whaah! No worries though: there’s an extension method which does just that:

Tracker tracker = new Tracker("UA-XXXXXX-XX", "www.example.org");
tracker.TrackPageView(HttpContext, "My API - Create", "api/create");

The sad part is: this code quickly clutters all your action methods. No worries! There’s an ActionFilter for that!

[ActionTracking("UA-XXXXXX-XX", "www.example.org")]
public class ApiController : Controller
{
    public JsonResult Create()
    {
        return Json(true);
    }
}

And what’s better: you can register it globally and optionally filter it to only track specific controllers and actions!

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(new ActionTrackingAttribute(
            "UA-XXXXXX-XX", "www.example.org",
            action => action.ControllerDescriptor.ControllerName == "Api")
        );
    }
}

And here’s what it could look like (we’re only tracking for the second day now…):

WCF Web API analytics google

We even have stats about the versions of the NuGet Command Line used to access our API!

NuGet API tracking Google

Enjoy! And fork this thing and add out-of-the-box support for WCF Web API, Nancy or even “plain old” WCF or ASMX!