Maarten Balliauw {blog}

ASP.NET, ASP.NET MVC, Windows Azure, PHP, ...


Using Amazon Login (and LinkedIn and …) with Windows Azure Access Control

One of the services provided by the Windows Azure cloud computing platform is the Windows Azure Access Control Service (ACS). It is a service that provides federated authentication and rules-driven, claims-based authorization. It supports a number of social identity providers like Microsoft Account, Google Account, Yahoo! and Facebook. But what about the other social identity providers out there? For example, the newly introduced Login with Amazon, or LinkedIn? As they are OAuth2 implementations, they don’t really fit into ACS.

Meet SocialSTS.com. It’s a service I created which does a protocol conversion and allows integrating ACS with other social identities. Currently it has support for integrating ACS with Twitter, GitHub, LinkedIn, BitBucket, StackExchange and Amazon. Let’s see how this works. There are 2 steps we have to take:

  • Link SocialSTS with the social identity provider
  • Link our ACS namespace with SocialSTS

Link SocialSTS with the social identity provider

Once an account has been created through www.socialsts.com, we are presented with a dashboard in which we can configure the social identities. Most of them require that you register your application with them and in turn, you will receive some identifiers which will allow integration.

SocialSTS - Register social identity provider

As you can see, instructions for registering with the social identity provider are listed on the configuration page. For Amazon, for example, we have to register an application with Amazon and configure it as described there.

If we do this, Amazon will give us a client ID and client secret in return, which we can enter in the SocialSTS dashboard.

Amazon Login with Access Control on Windows Azure

That’s basically all configuration there is to it. We can now add our Amazon, LinkedIn, Twitter or GitHub login page to Windows Azure Access Control Service!

Link our ACS namespace with SocialSTS

In the Windows Azure Access Control Service management dashboard, we can register SocialSTS as an identity provider. SocialSTS will provide us with a FederationMetadata.xml URL which we can copy into ACS:

Add LinkedIn to ACS

We can now save this new identity provider, add some claims transformation rules through the rule groups (important!) and then start using it in our application:

Windows Identity Foundation claims from Amazon, LinkedIn and so on
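Once the token arrives in the application, the social identity is just a set of regular WIF claims. As a minimal sketch (assuming the site uses Microsoft.IdentityModel / Windows Identity Foundation, like the other examples on this blog; the exact claim types depend on the rules you configured in ACS), they can be listed like this:

// Minimal sketch: list the claims issued through ACS/SocialSTS for the current user.
// Assumes Microsoft.IdentityModel (WIF) is configured for this ASP.NET application.
var claimsIdentity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
if (claimsIdentity != null)
{
    foreach (var claim in claimsIdentity.Claims)
    {
        Response.Write(Server.HtmlEncode(claim.ClaimType + ": " + claim.Value) + "<br/>");
    }
}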

Enjoy! And let me know your thoughts on this service.

Throttling ASP.NET Web API calls

Many APIs out there, such as GitHub’s API, have a concept called “rate limiting” or “throttling” in place. Rate limiting is used to prevent clients from issuing too many requests over a short amount of time to your API. For example, we can limit anonymous API clients to a maximum of 60 requests per hour, whereas we can allow more requests to authenticated clients. But how can we implement this?

Intercepting API calls to enforce throttling

Just like ASP.NET MVC, ASP.NET Web API allows us to write action filters. An action filter is an attribute that you can apply to a controller action, an entire controller and even to all controllers in a project. The attribute modifies the way in which the action is executed by intercepting calls to it. Sounds like a great approach, right?

Well… yes! Implementing throttling as an action filter would make sense, although in my opinion it has some disadvantages:

  • We have to implement it as an IAuthorizationFilter to make sure it hooks into the pipeline before most other action filters. This feels kind of dirty but it would do the trick as throttling is some sort of “authorization” to make a number of requests to the API.
  • It gets executed quite late in the overall ASP.NET Web API pipeline. While not a big problem, perhaps we want to skip executing certain portions of code whenever throttling occurs.

So while it makes sense to implement throttling as an action filter, I would prefer plugging it earlier in the pipeline. Luckily for us, ASP.NET Web API also provides the concept of message handlers. They accept an HTTP request and return an HTTP response and plug into the pipeline quite early. Here’s a sample throttling message handler:

public class ThrottlingHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Identify the client by IP address.
        var identifier = request.GetClientIpAddress();

        long currentRequests = 1;
        long maxRequestsPerHour = 60;

        // Keep a per-client request counter in the ASP.NET cache, with a sliding one-hour expiration.
        if (HttpContext.Current.Cache[string.Format("throttling_{0}", identifier)] != null)
        {
            currentRequests = (long)System.Web.HttpContext.Current.Cache[string.Format("throttling_{0}", identifier)] + 1;
            HttpContext.Current.Cache[string.Format("throttling_{0}", identifier)] = currentRequests;
        }
        else
        {
            HttpContext.Current.Cache.Add(string.Format("throttling_{0}", identifier), currentRequests,
                null, Cache.NoAbsoluteExpiration, TimeSpan.FromHours(1),
                CacheItemPriority.Low, null);
        }

        // Over the limit? Short-circuit with an error response instead of calling the rest of the pipeline.
        Task<HttpResponseMessage> response = null;
        if (currentRequests > maxRequestsPerHour)
        {
            response = CreateResponse(request, HttpStatusCode.Conflict, "You are being throttled.");
        }
        else
        {
            response = base.SendAsync(request, cancellationToken);
        }

        return response;
    }

    protected Task<HttpResponseMessage> CreateResponse(HttpRequestMessage request, HttpStatusCode statusCode, string message)
    {
        var tsc = new TaskCompletionSource<HttpResponseMessage>();
        var response = request.CreateResponse(statusCode);
        response.ReasonPhrase = message;
        response.Content = new StringContent(message);
        tsc.SetResult(response);
        return tsc.Task;
    }
}

We have to register it as well, which we can do when our application starts:

config.MessageHandlers.Add(new ThrottlingHandler());

The throttling handler above isn’t ideal. It’s not very extensible nor does it allow scaling out on a web farm. And it’s bound to being hosted in ASP.NET on IIS. It’s bad! Since there’s already a great project called WebApiContrib, I decided to contribute a better throttling handler to it.

Using the WebApiContrib ThrottlingHandler

The easiest way of using the ThrottlingHandler is by registering it using simple parameters like the following, which throttles every user at 60 requests per hour:

config.MessageHandlers.Add(new ThrottlingHandler(
    new InMemoryThrottleStore(),
    id => 60,
    TimeSpan.FromHours(1)));

The IThrottleStore interface stores id + current number of requests. There’s only an in-memory store available but you can easily extend it to write this in a distributed cache or a database.

What’s interesting is we can change how our ThrottlingHandler behaves quite easily. Let’s give a specific IP address a better rate limit:

config.MessageHandlers.Add(new ThrottlingHandler(
    new InMemoryThrottleStore(),
    id =>
    {
        if (id == "10.0.0.1")
        {
            return 5000;
        }
        return 60;
    },
    TimeSpan.FromHours(1)));

Wait… Are you telling me this is all IP based? Well yes, by default. But overriding the ThrottlingHandler allows you to do funky things! Here’s a wireframe:

public class MyThrottlingHandler : ThrottlingHandler
{
    // ...

    protected override string GetUserIdentifier(HttpRequestMessage request)
    {
        // your user id generation logic here
    }
}

By implementing the GetUserIdentifier method, we can for example return an IP address for unauthenticated users and their username for authenticated users. We can then decide on the throttling quota based on username.
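Here’s a sketch of what that could look like. This is my own illustration, not code shipped with WebApiContrib: the “user:”/“ip:” prefixes and the quotas are arbitrary assumptions, GetClientIpAddress is the same extension method used in the handler earlier in this post, and it assumes authentication has already happened at the host level by the time the message handler runs:

public class MyThrottlingHandler : ThrottlingHandler
{
    public MyThrottlingHandler()
        : base(new InMemoryThrottleStore(),
               // Hypothetical quotas: authenticated users get a higher limit than anonymous callers.
               id => id.StartsWith("user:") ? 5000 : 60,
               TimeSpan.FromHours(1))
    {
    }

    protected override string GetUserIdentifier(HttpRequestMessage request)
    {
        // Authenticated callers are identified by username, anonymous callers by IP address.
        var identity = Thread.CurrentPrincipal != null ? Thread.CurrentPrincipal.Identity : null;
        if (identity != null && identity.IsAuthenticated)
        {
            return "user:" + identity.Name;
        }

        return "ip:" + request.GetClientIpAddress();
    }
}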

Once in use, the ThrottlingHandler will inject two HTTP headers into every response, informing the client about the rate limit.


Enjoy! And do check out WebApiContrib, it contains almost all extensions to ASP.NET Web API you will ever need!

Storing user uploads in Windows Azure blob storage

On one of the mailing lists I follow, an interesting question came up: “We want to write a VSTO plugin for Outlook which copies attachments to blob storage. What’s the best way to do this? What about security?”. Shortly thereafter, an answer came around: “That can be done directly from the client. And storage credentials can be encrypted for use in your VSTO plugin.”

While that’s certainly a solution to the problem, it’s not the best. Let’s try and answer…

What’s the best way to upload data to blob storage directly from the client?

The first solution that comes to mind is implementing the following flow: the client authenticates and uploads data to your service which then stores the upload on blob storage.

Upload data to blob storage

While that is in fact a valid solution, think about the following: you are creating an expensive layer in your application that just sits there copying data from one network connection to another. If you have to scale this solution, you will have to scale out the service layer in between. If you want redundancy, you need at least two machines for doing this simple copy operation… A better approach would be one where the client authenticates with your service and then uploads the data directly to blob storage.

Upload data to blob storage using shared access signature

This approach allows you to have a “cheap” service layer: it can even run on the free version of Windows Azure Web Sites if you have a low traffic volume. You don’t have to scale out the service layer once your number of clients grows (at least, not for the uploading scenario). But how would you handle uploading to blob storage from a security point of view?

What about security? Shared access signatures!

The first suggested answer on the mailing list was this: “(…) storage credentials can be encrypted for use in your VSTO plugin.” That’s true, but you only have 2 access keys to storage. It’s like giving the master key of your house to someone you don’t know. It’s encrypted, sure, but still, the master key is at the client and that’s a potential risk. The solution? Using a shared access signature!

Shared access signatures (SAS) allow us to separate the code that signs a request from the code that executes it. It basically is a set of query string parameters attached to a blob (or container!) URL that serves as the authentication ticket to blob storage. Of course, these parameters are signed using the real storage access key, so that no-one can change this signature without knowing the master key. And that’s the scenario we want to support…

On the service side, the place where you’ll be authenticating your user, you can create a Web API method (or ASMX or WCF or whatever you feel like) similar to this one:

public class UploadController : ApiController
{
    [Authorize]
    public string Put(string fileName)
    {
        // Create a reference to the target blob (the blob itself is not created here).
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var blobClient = account.CreateCloudBlobClient();
        var blobContainer = blobClient.GetContainerReference("uploads");
        blobContainer.CreateIfNotExists();

        var blob = blobContainer.GetBlockBlobReference("customer1-" + fileName);

        // Sign the blob URL with write access, valid for the next five minutes.
        var uriBuilder = new UriBuilder(blob.Uri);
        uriBuilder.Query = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,
            SharedAccessStartTime = DateTime.UtcNow,
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(5)
        }).Substring(1);

        return uriBuilder.ToString();
    }
}

This method does a couple of things:

  • Authenticates the client using your authentication mechanism
  • Creates a blob reference (not the actual blob, just a URL)
  • Signs the blob URL with write access, allowed from now until now + 5 minutes. That should give the client 5 minutes to start the upload.

On the client side, in our VSTO plugin, the only thing to do now is call this method with a filename. The web service will create a shared access signature to a non-existing blob and return it to the client. The VSTO plugin can then use this signed blob URL to perform the upload:

Uri url = new Uri("http://...../uploads/customer1-test.txt?sv=2012-02-12&st=2012-12-18T08%3A11%3A57Z&se=2012-12-18T08%3A16%3A57Z&sr=b&sp=w&sig=Rb5sHlwRAJp7mELGBiog%2F1t0qYcdA9glaJGryFocj88%3D");

var blob = new CloudBlockBlob(url);
blob.Properties.ContentType = "text/plain";

using (var data = new MemoryStream(Encoding.UTF8.GetBytes("Hello, world!")))
{
    blob.UploadFromStream(data);
}

Easy, secure and scalable. Enjoy!

ASP.NET Web API OAuth2 delegation with Windows Azure Access Control Service

If you are familiar with OAuth2’s protocol flow, you know there are a lot of things you should implement if you want to protect your ASP.NET Web API using OAuth2. To refresh your mind, here’s what’s required (at least):

  • OAuth authorization server
  • Keep track of consuming applications
  • Keep track of user consent (yes, I allow application X to act on my behalf)
  • OAuth token expiration & refresh token handling
  • Oh, and your API

That’s a lot to build there. Wouldn’t it be great to outsource part of that list to a third party? A little-known feature of the Windows Azure Access Control Service is that you can use it to keep track of applications, user consent and token expiration & refresh token handling. That leaves you with implementing:

  • OAuth authorization server
  • Your API

Let’s do it!

On a side note: I’m aware of the road-to-hell post released last week on OAuth2. I still think that whoever offers OAuth2 should be responsible enough to implement the protocol in a secure fashion. The protocol gives you the options to do so, and, as with regular web page logins, you as the implementer should think about security.

Building a simple API

I’ve been doing some demos lately using www.brewbuddy.net, a sample application (sources here) which enables hobby beer brewers to keep track of their recipes and current brews. There are a lot of applications out there that may benefit from being able to consume my recipes. I love the smell of a fresh API in the morning!

Here’s an API which would enable access to my BrewBuddy recipes:

[Authorize]
public class RecipesController : ApiController
{
    protected IRecipeService RecipeService { get; private set; }

    public RecipesController(IRecipeService recipeService)
    {
        RecipeService = recipeService;
    }

    public IQueryable<RecipeViewModel> Get()
    {
        var recipes = RecipeService.GetRecipes(User.Identity.Name);
        var model = AutoMapper.Mapper.Map(recipes, new List<RecipeViewModel>());

        return model.AsQueryable();
    }
}

Nothing special, right? We’re just querying our RecipeService for the current user’s recipes. And the current user should be logged in as specified using the [Authorize] attribute.  Wait a minute! The current user?

I’ve built this API on the standard ASP.NET Web API features such as the [Authorize] attribute and the expectation that the User.Identity.Name property is populated. The reason for that is simple: my API requires a user and should not care how that user is populated. If someone wants to consume my API by authenticating over Forms authentication, fine by me. If someone configures IIS to use Windows authentication or even hacks in basic authentication, fine by me. My API shouldn’t care about that.

OAuth2 is a different state of mind

OAuth2 adds a layer of complexity. Mental complexity that is. Your API consumer is not your end user. Your API consumer is acting on behalf of your end user. That’s a huge difference! Here’s what really happens:

OAuth2 protocol flow

The end user loads a consuming application (a mobile app or a web app, it doesn’t really matter). That application requests a token from an authorization server trusted by your application. The user has to log in, and usually accept the fact that the app can perform actions on the user’s behalf (think of Twitter’s “Allow/Deny” screen). If successful, the authorization server returns a code to the app, which the app can then exchange for an access token containing the user’s username and potentially other claims.
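To make that last step concrete, here’s a rough sketch of the code-for-token exchange as seen from the consuming application. The parameter names come from the OAuth2 specification; the token endpoint URL, client credentials and use of HttpClient are placeholders for illustration (with ACS, the token endpoint is typically https://<namespace>.accesscontrol.windows.net/v2/OAuth2-13):

// Rough sketch of exchanging the authorization code for an access token.
// Parameter names are from the OAuth2 spec; endpoint URL and credentials are placeholders.
public static async Task<string> ExchangeCodeForToken(string code)
{
    using (var client = new HttpClient())
    {
        var response = await client.PostAsync(
            "https://yournamespace.accesscontrol.windows.net/v2/OAuth2-13",
            new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "authorization_code" },
                { "code", code },
                { "client_id", "yourclientid" },
                { "client_secret", "yourclientsecret" },
                { "redirect_uri", "https://yourapp/oauth-callback" }
            }));

        // The response body contains the access token (and a refresh token) as JSON.
        return await response.Content.ReadAsStringAsync();
    }
}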

Now remember what we started this post with? We want to get rid of part of the OAuth2 implementation. We don’t want to be bothered by too much of this. Let’s try to accomplish the following:

OAuth2 protocol flow with Windows Azure

Let’s introduce you to…

WindowsAzure.Acs.Oauth2

“That looks like an assembly name. Heck, even like a NuGet package identifier!” You’re right about that. I’ve done a lot of the integration work for you (sources / NuGet package).

WindowsAzure.Acs.Oauth2 is currently in alpha status, so you’ll have to install this package in your ASP.NET MVC Web API project using the package manager console, issuing the following command:

Install-Package WindowsAzure.Acs.Oauth2 -IncludePrerelease

This command will bring some dependencies to your project and install the following source files:

  • App_Start/AppStart_OAuth2API.cs - Makes sure that OAuth2-signed SWT tokens are transformed into a ClaimsIdentity for use in your API. Remember where I used User.Identity.Name in my API? Populating that is performed by this guy.

  • Controllers/AuthorizeController.cs - A standard authorization server implementation which is configured by the Web.config settings. You can override certain methods here, for example if you want to show additional application information on the consent page.

  • Views/Shared/_AuthorizationServer.cshtml - A default consent page. This can be customized at will.

Next to these files, the following entries are added to your Web.config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="WindowsAzure.OAuth.SwtSigningKey" value="[your 256-bit symmetric key configured in the ACS]" />
    <add key="WindowsAzure.OAuth.RelyingPartyName" value="[your relying party name configured in the ACS]" />
    <add key="WindowsAzure.OAuth.RelyingPartyRealm" value="[your relying party realm configured in the ACS]" />
    <add key="WindowsAzure.OAuth.ServiceNamespace" value="[your ACS service namespace]" />
    <add key="WindowsAzure.OAuth.ServiceNamespaceManagementUserName" value="ManagementClient" />
    <add key="WindowsAzure.OAuth.ServiceNamespaceManagementUserKey" value="[your ACS service management key]" />
  </appSettings>
</configuration>

These settings should be configured based on the Windows Azure Access Control settings. Details on this can be found on the GitHub page.

Consuming the API

After populating Windows Azure Access Control Service with a client_id and client_secret for my consuming app (which you can do using the excellent FluentACS package or manually, as shown in the following screenshot), you’re good to go.

ACS OAuth2 Service Identity

The WindowsAzure.Acs.Oauth2 package adds additional functionality to your application: it provides your ASP.NET Web API with the current user’s details (after a successful OAuth2 authorization flow has taken place) and it adds a controller and view to your app which provide a simple consent page (that can be customized).


After granting access, WindowsAzure.Acs.Oauth2 will store the choice of the user in Windows Azure ACS and redirect you back to the application. From there on, the application can ask Windows Azure ACS for an access token and refresh the access token once it expires, without your application having to interfere with that process ever again. WindowsAzure.Acs.Oauth2 transforms the incoming OAuth2 token into a ClaimsIdentity which your API can use to determine which user is accessing your API. Focus on your API, not on OAuth.

Enjoy!

Authenticate Orchard users with AppFabric Access Control Service

From the initial release of Orchard, the new .NET CMS, I have been wondering how difficult (or easy) it would be to integrate external (“federated”) authentication like Windows Azure AppFabric Access Control Service with it. After a few attempts, I managed to wrap up a module for Orchard which does that: Authentication.Federated.

After installing, configuring and enabling this module, Orchard’s logon page is replaced with any SAML 2.0 STS that you configure. To give you a quick idea of what this looks like, here are a few screenshots:

Orchard Log On link is being overridden / Orchard authentication via AppFabric / Orchard authenticated via SAML - Username is from the username claim

As you can see from the sequence above, Authentication.Federated does the following:

  • Override the default logon link
  • Redirect to the configured STS issuer URL
  • Use claims like username or nameidentifier to register the external user with Orchard. Optionally, it is also possible to configure roles through claims.

Just as a reference, I’ll show you how to configure the module.

Configuring Authentication.Federated – Windows Azure AppFabric side

In my tests, I’ve been using the AppFabric LABS release, over at https://portal.appfabriclabs.com. From there, create a new namespace and configure Access Control Service with the following settings:

Identity Providers

  • Pick the ones you want… I chose Windows Live ID and Google

Relying Party Applications

Add your application here, using the following settings:

  • Name: pick one :-)
  • Realm: The http(s) root URL for your site. When using a local Orchard CMS installation on localhost, enter a non-localhost URL here, e.g. https://www.example.org
  • Return URL: The root URL of your site. I chose http://localhost:12758/ here to test my local Orchard CMS installation
  • Error URL: anything you want
  • Token format: SAML 2.0
  • Token encryption: none
  • Token lifetime: anything you want
  • Identity providers: the ones you want
  • Rule groups: Create new rule group
  • Token signing certificate: create a Service Namespace token and upload a certificate for it. This can be self-signed (a MakeCert command like the sketch after this list will do). Ensure you know the certificate thumbprint as we will need this later on.
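If you need a self-signed certificate for testing, something along these lines works from a Visual Studio command prompt (the subject name is just a placeholder, not a value ACS requires):

MakeCert.exe -r -pe -n "CN=yourorchardsite.example.org" -sky exchange -ss my

This creates the certificate in your personal certificate store; export it from there (certmgr.msc) to obtain the .cer/.pfx files and to copy the thumbprint from the certificate details.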

Edit Rule Group

Edit the newly created rule group. Click “generate” to generate some default rules for the identity providers chosen, so that nameidentifier and email claims are passed to Orchard CMS. Also, if you want to be the site administrator later on, ensure you issue a roles claim for your Google/Windows Live ID, like so:

Add a role claim for your administrator

Configuring Authentication.Federated – Orchard side

In Orchard, download Authentication.Federated from the modules gallery and enable it. After that, you’ll find the configuration settings under the general “Settings” menu item in the Orchard dashboard:

Authentication.Federated configuration

These settings speak for themselves mostly, but I want to give you some pointers:

  • Enable federated authentication? – Enables the module. Ensure you’ve first tested the configuration before enabling it. If you don’t, you may lose access to your Orchard installation unless you do some database fiddling…
  • Translate claims to Orchard user properties? – Will use claims values to enrich user data.
  • Translate claims to Orchard roles? – Will assign Orchard roles based on the Roles claim
  • Prefix for federated usernames (e.g. "federated_") – Just a prefix for federated users.
  • STS issuer URL – The STS issuer URL, most likely the root for your STS, e.g. https://<account>.accesscontrol.appfabriclabs.com
  • STS login page URL – The STS’ login page, e.g. https://<account>.accesscontrol.appfabriclabs.com:443/v2/wsfederation
  • Realm – The realm configured in the Windows Azure AppFabric Access Control Service settings
  • Return URL base – The root URL for your website
  • Audience URL – Best to set this identical to the realm URL
  • X509 certificate thumbprint (used for issuer URL token signing) – The token signing certificate thumbprint

BlogEngine.NET comment spam filtering

It’s been a month or three since I was utterly fed up with comment spam on my blog. Sure, I did turn on comment moderation so you, as a visitor, would not notice this spam if I did not approve it as a valid comment. However, I found myself cleaning up comment spam from in between legitimate comments in the BlogEngine.NET admin interface.

In an effort of trying to reduce comment spam, I tried the following:

  • Close comments after 90 days – This effort worked for a few days, but afterwards I was just seeing more comment spam on the topics that were still open to comments.
  • Use a CAPTCHA – This effort reduced some comment spam, but not all. Which makes me believe there are people actually making a living by just sending out comment spam and filling out CAPTCHAs out there.
  • Whining and cursing while again cleaning out comments manually – This effort worked, until I found out that this was what I’ve been doing before the other 2 efforts. Back to start…

Luckily, the latest version of BlogEngine.NET (and also earlier versions if you go down the hacky road) featured a new comment system, including spam filtering. After using it for a few months, I must say I’m very close to zero comment spam!

The results

I have configured BlogEngine.NET as follows:

  • Comments enabled, never closed
  • Comment moderation: “on” and “automatic”
  • Whitelisting rules enabled (if you have 5 legitimate comments, you are probably OK)
  • Spam filters enabled: AkismetFilter, StopForumSpam and TypePadFilter

Now if you look at the results, there’s an interesting difference between the spam filter services being used.


The accuracy of the spam filters is mostly above 90%; for Akismet it’s even 97.30%. That matches my experience: a quick weekly check for spam filter mistakes is quite enough. Only the TypePadFilter is letting me down there, and I will probably disable that one and rely on only two filters.

Simplified access control using Windows Azure AppFabric Labs

Earlier this week, Zane Adam announced the availability of the new AppFabric Access Control service in LABS. The highlights for this release (and I quote):

  • Expanded Identity provider support - allowing developers to build applications and services that accept both enterprise identities (through integration with Active Directory Federation Services 2.0), and a broad range of web identities (through support of Windows Live ID, Open ID, Google, Yahoo, Facebook identities) using a single code base.
  • WS-Trust and WS-Federation protocol support – Interoperable WS-* support is important to many of our enterprise customers.
  • Full integration with Windows Identity Foundation (WIF) - developers can apply the familiar WIF identity programming model and tooling for cloud applications and services.
  • A new management web portal -  gives simple, complete control over all Access Control settings.

Wow! This just *has* to be good! Let’s see how easy it is to work with claims based authentication and the AppFabric Labs Access Control Service, which I’ll abbreviate to “ACS” throughout this post.


What are you doing?

In essence, I’ll be “outsourcing” the access control part of my application to the ACS. When a user comes to the application, he will be asked to present certain “claims”, for example a claim that tells what the user’s role is. Of course, the application will only trust claims that have been signed by a trusted party, which in this case will be the ACS.

Fun thing is: my application only has to know about the ACS. As an administrator, I can then tell the ACS to trust claims provided by Windows Live ID or Google Accounts, which will be reflected in my application automatically: users will be able to authenticate through any service I configure in the ACS, without my application having to know. Very flexible, as I can tell the ACS to trust, for example, my company’s Active Directory and perhaps also the Active Directory of a customer who uses the application.

Prerequisites

Before you start, make sure you have the latest version of Windows Identity Foundation installed. This will make things easy, I promise! Other prerequisites, of course, are Visual Studio and an account on https://portal.appfabriclabs.com. Note that, since it’s still a “preview” version, this is free to use.

In the labs account, create a project and in that project create a service namespace. This is what you should be seeing (or at least: something similar):

AppFabric labs project

Getting started: setting up the application side

Before starting, we will require a certificate for signing tokens and things like that. Let’s just start with creating one so we don’t have to worry about that further down the road. Issue the following command in a Visual Studio command prompt:

MakeCert.exe -r -pe -n "CN=<your service namespace>.accesscontrol.appfabriclabs.com" -sky exchange -ss my

This will create a certificate that is valid for your ACS project. It will be installed in the local certificate store on your computer. Make sure to export both the public and private key (.cer and .pfx).

That being said and done: let’s add claims-based authentication to a new ASP.NET Website. Simply fire up Visual Studio, create a new ASP.NET application. I called it “MyExternalApp” but in fact the name is all up to you. Next, edit the Default.aspx page and paste in the following code:

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
    CodeBehind="Default.aspx.cs" Inherits="MyExternalApp._Default" %>

<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
</asp:Content>
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
    <p>Your claims:</p>
    <asp:GridView ID="gridView" runat="server" AutoGenerateColumns="False">
        <Columns>
            <asp:BoundField DataField="ClaimType" HeaderText="ClaimType" ReadOnly="True" />
            <asp:BoundField DataField="Value" HeaderText="Value" ReadOnly="True" />
        </Columns>
    </asp:GridView>
</asp:Content>

Next, edit Default.aspx.cs and add the following Page_Load event handler:

protected void Page_Load(object sender, EventArgs e)
{
    IClaimsIdentity claimsIdentity =
        ((IClaimsPrincipal)(Thread.CurrentPrincipal)).Identities.FirstOrDefault();

    if (claimsIdentity != null)
    {
        gridView.DataSource = claimsIdentity.Claims;
        gridView.DataBind();
    }
}

So far, so good. If we had everything configured, Default.aspx would simply show us the claims we received from ACS once we have everything running. Now in order to configure the application to use the ACS, there are two steps left to do:

  • Add a reference to Microsoft.IdentityModel (located somewhere at C:\Program Files\Reference Assemblies\Microsoft\Windows Identity Foundation\v3.5\Microsoft.IdentityModel.dll)
  • Add an STS reference…

That first step should be easy: add a reference to Microsoft.IdentityModel in your ASP.NET application. The second step is almost equally easy: right-click the project and select “Add STS reference…”, like so:

Add STS reference

A wizard will pop up. Here’s a secret: this wizard will do a lot for us! On the first screen, enter the full URL to your application. I have mine hosted on IIS with SSL enabled, hence the following screenshot:

Specify application URI

In the next step, enter the URL to the STS federation metadata. To the what where? Well, to the metadata provided by ACS. This metadata contains the types of claims offered, the certificates used for signing, … The URL to enter will be something like https://<your service namespace>.accesscontrol.appfabriclabs.com:443/FederationMetadata/2007-06/FederationMetadata.xml:

Security Token Service

In the next step, select “Disable security chain validation”. Because we are using self-signed certificates, selecting the second option would lead us to doom because all infrastructure would require a certificate provided by a valid certificate authority.

From now on, it’s just “Next”, “Next”, “Finish”. If you now have a look at your Web.config file, you’ll see that the wizard has configured the application to use ACS as the federation authentication provider. Furthermore, a new folder called “FederationMetadata” has been created, which contains an XML file that specifies which claims this application requires. Oh, and some other details on the application, but nothing to worry about at this point.
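For reference, the relevant part of Web.config will look roughly like the snippet below. This is a sketch only: the exact content depends on the wizard version, and the values shown are placeholders for your own namespace, realm and certificate thumbprint.

<microsoft.identityModel>
  <service>
    <audienceUris>
      <add value="https://localhost/MyExternalApp/" />
    </audienceUris>
    <federatedAuthentication>
      <wsFederation passiveRedirectEnabled="true"
                    issuer="https://yournamespace.accesscontrol.appfabriclabs.com/v2/wsfederation"
                    realm="https://localhost/MyExternalApp/"
                    requireHttps="true" />
      <cookieHandler requireSsl="true" />
    </federatedAuthentication>
    <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel">
      <trustedIssuers>
        <add thumbprint="[thumbprint of the ACS signing certificate]" name="https://yournamespace.accesscontrol.appfabriclabs.com/" />
      </trustedIssuers>
    </issuerNameRegistry>
  </service>
</microsoft.identityModel>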

Our application has now been configured: off to the ACS side!

Getting started: setting up the ACS side

First of all, we need to register our application with the Windows Azure AppFabric ACS. This can be done by clicking “Manage” on the management portal over at https://portal.appfabriclabs.com. Next, click “Relying Party Applications” and “Add Relying Party Application”. The following screen will be presented:

Add Relying Party Application

Fill out the form as follows:

  • Name: a descriptive name for your application.
  • Realm: the URI that the issued token will be valid for. This can be a complete domain (e.g. www.example.com) or the full path to your application. For now, enter the full URL to your application, which will be something like https://localhost/MyApp.
  • Return URL: where to return after successful sign-in
  • Token format: we’ll be using the defaults in WIF, so go for SAML 2.0.
  • For the token encryption certificate, select X.509 certificate and upload the certificate file (.cer) we’ve been using before
  • Rule groups: pick one, best is to create a new one specific to the application we are registering

Afterwards click “Save”. Your application is now registered with ACS.

The next step is to select the Identity Providers we want to use. I selected Windows Live ID and Google Accounts as shown in the next screenshot:

Identity Providers

One thing left: since we are using Windows Identity Foundation, we have to upload a token signing certificate to the portal. Export the private key of the previously created certificate and upload that to the “Certificates and Keys” part of the management portal. Make sure to specify that the certificate is to be used for token signing.

Signing certificate Windows Identity Foundation WIF

Alright, we’re almost done. Well, in fact: we are done! An optional next step would be to edit the rule group we’ve created before. This rule group will describe the claims that will be presented to the application asking for the user’s claims. Which is very powerful, because it also supports so-called claim transformations: if an identity provider provides ACS with a claim that says “the user is part of a group named Administrators”, the rules can then transform the claim into a new claim stating “the user has administrative rights”.
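To give an idea of what such a transformed claim buys you in code, here’s a sketch of checking it in the application. It assumes the rule emits the standard role claim type (which WIF maps onto IsInRole) and that “Administrators” is the role value your rule produces; adjust both to whatever your rule group actually issues.

// Sketch: checking a role issued through an ACS claims transformation rule.
// Assumes the rule emits the standard role claim type, which WIF uses for IsInRole().
var principal = Thread.CurrentPrincipal as IClaimsPrincipal;
if (principal != null && principal.IsInRole("Administrators"))
{
    // Show administrative functionality...
}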

Testing our setup

With all this information and configuration in place, press F5 inside Visual Studio and behold… Your application now redirects to the STS in the form of ACS’ login page.

Sign in using AppFabric

So far so good. Now sign in using one of the identity providers listed. After a successful sign-in, you will be redirected back to ACS, which will in turn redirect you back to your application. And then: misery :-)

Request validation

ASP.NET request validation kicked in since it detected potentially dangerous content in the sign-in form post. Let’s fix that. Two possible approaches:

  • Disable request validation, but I’d prefer not to do that
  • Create a custom RequestValidator

Let’s go with the latter option… Here’s a class that you can copy-paste in your application:

public class WifRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value, RequestValidationSource requestValidationSource, string collectionKey, out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        if (requestValidationSource == RequestValidationSource.Form && collectionKey.Equals(WSFederationConstants.Parameters.Result, StringComparison.Ordinal))
        {
            SignInResponseMessage message = WSFederationMessage.CreateFromFormPost(context.Request) as SignInResponseMessage;

            if (message != null)
            {
                return true;
            }
        }

        return base.IsValidRequestString(context, value, requestValidationSource, collectionKey, out validationFailureIndex);
    }
}

Basically, it’s just validating the request and returning true to ASP.NET request validation if a SignInResponseMessage is in the request. One thing left to do: register this provider with ASP.NET. Add the following line of code in the <system.web> section of Web.config:

<httpRuntime requestValidationType="MyExternalApp.Modules.WifRequestValidator" />

If you now try loading the application again, chances are you will actually see claims provided by ACS:

Claims output from Windows Azure AppFabric Access Control Service

There, that’s it. We now have successfully delegated access control to ACS. Obviously the next step would be to specify which claims are required for specific actions in your application, provide the necessary claims transformations in ACS, … All of that can easily be found on Google Bing.

Conclusion

To be honest: I’ve always found claims-based authentication and Windows Azure AppFabric Access Control a good match in theory, but an ugly and cumbersome beast to work with. With this labs release, things get interesting and almost self-explaining, allowing for easier implementation of it in your own application. As an extra bonus to this blog post, I also decided to link my ADFS server to ACS: it took me literally 5 minutes to do so and works like a charm!

Final conclusion: AppFabric team, please ship this soon :-) I really like the way this labs release works, and I think many users who currently find ACS too big a step up will happily take that step once they can use ACS in the easy manner this labs release provides.

By the way: more information can be found on http://acs.codeplex.com.


ASP.NET MVC - Upcoming preview 4 release

ScottGu just posted that there's an upcoming preview 4 release of the ASP.NET MVC framework. What I immediately noticed is that there are actually some community concepts being integrated in the framework, yay! And what's even cooler: 2 of these new features are things that I've already contributed to the community (the fact that these are included in the MVC framework now could be coincidence, though...).

Thank you, ASP.NET MVC team! This preview 4 release seems like a great step in the evolution of the ASP.NET MVC framework. Thumbs up!


To all BlogEngine.NET users... Go patch!

This morning, I read about a serious security issue in BlogEngine.NET. The security issue is in the JavaScript HTTP handler, which lets all files pass through... In short: if you open http://your.blog.com/js.axd?path=app_data\users.xml, anyone can see your usernames/passwords! None of the other HttpHandlers are affected by this security hole.

My recommendation: if you are using BlogEngine.NET: go patch!

(and yes, I patched it: http://blog.maartenballiauw.be/js.axd?path=app_data\users.xml)
