Maarten Balliauw {blog}

ASP.NET MVC, Microsoft Azure, PHP, web development ...


From API key to user with ASP.NET Web API

ASP.NET Web API is a great tool to build an API with. Or as my buddy Kristof Rennen (and the French) always say: “it makes you ‘api”. One of the things I like a lot is that you can do very powerful things that you know and love from the ASP.NET MVC stack, such as using filter attributes. Action filters, result filters and… authorization filters.

Say you wanted to protect your API and make use of the controller’s User property to return user-specific information. You will probably add an [Authorize] attribute (to ensure the user is authenticated) to either the entire API controller or to one of its action methods, like this:

[Authorize]
public class SuperSecretController : ApiController
{
    public string Get()
    {
        return string.Format("Hello, {0}", User.Identity.Name);
    }
}

Great! But… How will your application know who’s calling? Forms authentication doesn’t really make sense for a lot of APIs. Configuring IIS and switching to Windows authentication or basic authentication may be an option. But not every ASP.NET Web API will live in IIS, right? And maybe you want to use some other form of authentication for your API, for example one that uses a custom HTTP header containing an API key? Let’s see how you can do that…

Our API authentication? An API key

API keys may make sense for your API. They provide an easy means of authenticating your API consumers based on a simple token that is passed around in a custom header. OAuth2 may make sense as well, but even that one boils down to a custom Authorization header at the HTTP level. (hint: the approach outlined in this post can be used for OAuth2 tokens as well)

Let’s build our API and require every API consumer to pass in a custom header, named “X-ApiKey”. Calls to our API will look like this:

GET http://localhost:60573/api/v1/SuperSecret HTTP/1.1
Host: localhost:60573
X-ApiKey: 12345

In our SuperSecretController above, we want to make sure that we’re working with a traditional IPrincipal which we can query for username, roles and possibly even claims if needed. How do we get that identity there?

Translating the API key using a DelegatingHandler

The title already gives you a pointer. We want to add a plugin into ASP.NET Web API’s pipeline which replaces the current thread’s IPrincipal with one that is mapped from the incoming API key. That plugin will come in the form of a DelegatingHandler, a class that’s plugged in really early in the ASP.NET Web API pipeline. I’m not going to elaborate on what DelegatingHandler does and where it fits; there’s a perfect post on that to be found here.

Our handler, which I’ll call AuthorizationHeaderHandler, inherits from ASP.NET Web API’s DelegatingHandler. The method we’re interested in is SendAsync, which is called on every request into our API.

public class AuthorizationHeaderHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // ...
    }
}

This method offers access to the HttpRequestMessage, which contains everything you’ll probably need, such as… HTTP headers! Let’s read out our X-ApiKey header, convert it to a ClaimsIdentity (so we can add additional claims if needed) and assign it to the current thread:

public class AuthorizationHeaderHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        IEnumerable<string> apiKeyHeaderValues = null;
        if (request.Headers.TryGetValues("X-ApiKey", out apiKeyHeaderValues))
        {
            var apiKeyHeaderValue = apiKeyHeaderValues.First();

            // ... your authentication logic here ...
            var username = (apiKeyHeaderValue == "12345" ? "Maarten" : "OtherUser");
            var usernameClaim = new Claim(ClaimTypes.Name, username);
            var identity = new ClaimsIdentity(new[] { usernameClaim }, "ApiKey");
            var principal = new ClaimsPrincipal(identity);
            Thread.CurrentPrincipal = principal;
        }

        return base.SendAsync(request, cancellationToken);
    }
}

Easy, no? The only thing left to do is to register this handler in the pipeline during your application’s start:

GlobalConfiguration.Configuration.MessageHandlers.Add(new AuthorizationHeaderHandler());

From now on, any request coming in with the X-ApiKey header will be translated into an IPrincipal which you can easily use throughout your web API. Enjoy!
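To illustrate, here’s a minimal sketch (the controller is hypothetical, not part of the sample above) of how that principal surfaces in any action method once the handler is registered:

[Authorize]
public class ProfileController : ApiController
{
    public string Get()
    {
        // User is the principal our handler assigned to Thread.CurrentPrincipal
        var identity = (ClaimsIdentity)User.Identity;

        return string.Format("You are authenticated as {0}", identity.Name);
    }
}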

PS: if you’re looking into OAuth2, I’ve used a similar approach in  “ASP.NET Web API OAuth2 delegation with Windows Azure Access Control Service” to handle OAuth2 tokens.

Get your Windows 8 up to speed fast

With the release of Windows 8 on MSDN yesterday, I have a gut feeling that today, around the globe, people are installing this fresh operating system on their machines. I’ve done so too, and I wanted to share with you two tools: one that helped me get up to speed fast, and one that will help me get up to speed even faster the next time I want to reset my PC.

Chocolatey

One of the best things created for Windows, ever, is Chocolatey. If you are familiar with Ninite, you will find that both serve the same purpose; however, Chocolatey is more developer-focused.

Chocolatey provides a catalog of software packages like Notepad++, ReSharper, Paint.Net and a whole lot more. After installing Chocolatey, all you have to do to install such a package is invoke, from the command line, “cinst <package>”. Those words, command line, are pretty important: what if you could just create a batch file containing all the packages you need, like I did here?
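A minimal sketch of such a batch file (the package IDs below are just examples; swap in whatever your own setup needs):

cinst notepadplusplus
cinst 7zip
cinst fiddler
cinst tortoisehg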

Batch files are great, but creating a custom Chocolatey feed on www.myget.org is even easier (create a feed, go to package sources, add Chocolatey): you simply add whatever you need on a fresh system to this feed. Whenever you want to install every package from your custom feed, like I did yesterday evening, you invoke

cinst All -source "http://www.myget.org/F/chocolateymaarten"

and go to bed. In the morning, everything is on your PC.

Windows 8 - Reset Your PC

There’s a new feature in Windows 8 called “Refresh/reset Your PC”. What it does is revert to a certain baseline whenever you feel the need of a format C: coming up. This baseline, by default, is a fresh install. Now what if you could just set your own baseline and revert back to that one next time you need a reinstall? The good news: you can do this!

  • Configure your PC at will
  • From an elevated command prompt, issue:
    mkdir C:\SoFreshThatItSmellsGreat
    recimg -CreateImage C:\SoFreshThatItSmellsGreat

Done!

ASP.NET Web API OAuth2 delegation with Windows Azure Access Control Service

If you are familiar with OAuth2’s protocol flow, you know there are a lot of things you should implement if you want to protect your ASP.NET Web API using OAuth2. To refresh your mind, here’s what’s required (at least):

  • OAuth authorization server
  • Keep track of consuming applications
  • Keep track of user consent (yes, I allow application X to act on my behalf)
  • OAuth token expiration & refresh token handling
  • Oh, and your API

That’s a lot to build there. Wouldn’t it be great to outsource part of that list to a third party? A little-known feature of the Windows Azure Access Control Service is that you can use it to keep track of applications, user consent and token expiration & refresh token handling. That leaves you with implementing:

  • OAuth authorization server
  • Your API

Let’s do it!

On a side note: I’m aware of the road-to-hell post released last week on OAuth2. I still think that whoever offers OAuth2 should be responsible enough to implement the protocol in a secure fashion. The protocol gives you the options to do so, and, as with regular web page logins, you as the implementer should think about security.

Building a simple API

I’ve been doing some demos lately using www.brewbuddy.net, a sample application (sources here) which enables hobby beer brewers to keep track of their recipes and current brews. There are a lot of applications out there that may benefit from being able to consume my recipes. I love the smell of a fresh API in the morning!

Here’s an API which would enable access to my BrewBuddy recipes:

[Authorize]
public class RecipesController : ApiController
{
    protected IRecipeService RecipeService { get; private set; }

    public RecipesController(IRecipeService recipeService)
    {
        RecipeService = recipeService;
    }

    public IQueryable<RecipeViewModel> Get()
    {
        var recipes = RecipeService.GetRecipes(User.Identity.Name);
        var model = AutoMapper.Mapper.Map(recipes, new List<RecipeViewModel>());

        return model.AsQueryable();
    }
}

Nothing special, right? We’re just querying our RecipeService for the current user’s recipes. And the current user should be logged in, as specified by the [Authorize] attribute. Wait a minute! The current user?

I’ve built this API on the standard ASP.NET Web API features such as the [Authorize] attribute and the expectation that the User.Identity.Name property is populated. The reason for that is simple: my API requires a user and should not care how that user is populated. If someone wants to consume my API by authenticating over Forms authentication, fine by me. If someone configures IIS to use Windows authentication or even hacks in basic authentication, fine by me. My API shouldn’t care about that.

OAuth2 is a different state of mind

OAuth2 adds a layer of complexity. Mental complexity that is. Your API consumer is not your end user. Your API consumer is acting on behalf of your end user. That’s a huge difference! Here’s what really happens:

OAuth2 protocol flow

The end user loads a consuming application (a mobile app or a web app, it doesn’t really matter). That application requests a token from an authorization server trusted by your application. The user has to log in, and usually accept the fact that the app can perform actions on the user’s behalf (think of Twitter’s “Allow/Deny” screen). If successful, the authorization server returns a code to the app, which the app can then exchange for an access token containing the user’s username and potentially other claims.
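In raw HTTP terms, that flow boils down to something like the following sketch (the paths, client id and code values are purely illustrative):

# 1. The app sends the user to the authorization server
GET /authorize?response_type=code&client_id=consumerapp&redirect_uri=http://consumerapp/callback HTTP/1.1

# 2. After login and consent, the user is redirected back carrying a code
GET http://consumerapp/callback?code=abc123 HTTP/1.1

# 3. The app exchanges that code for an access token
POST /token HTTP/1.1
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=abc123&client_id=consumerapp&client_secret=...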

Now remember what we started this post with? We want to get rid of part of the OAuth2 implementation. We don’t want to be bothered by too much of this. Let’s try to accomplish the following:

OAuth2 protocol flow with Windows Azure

Let’s introduce you to…

WindowsAzure.Acs.Oauth2

“That looks like an assembly name. Heck, even like a NuGet package identifier!” You’re right about that. I’ve done a lot of the integration work for you (sources / NuGet package).

WindowsAzure.Acs.Oauth2 is currently in alpha status, so you’ll have to install this package in your ASP.NET Web API project using the Package Manager Console, issuing the following command:

Install-Package WindowsAzure.Acs.Oauth2 -IncludePrerelease

This command brings some dependencies into your project and installs the following source files:

  • App_Start/AppStart_OAuth2API.cs - Makes sure that OAuth2-signed SWT tokens are transformed into a ClaimsIdentity for use in your API. Remember where I used User.Identity.Name in my API? Populating that is performed by this guy.

  • Controllers/AuthorizeController.cs - A standard authorization server implementation which is configured by the Web.config settings. You can override certain methods here, for example if you want to show additional application information on the consent page.

  • Views/Shared/_AuthorizationServer.cshtml - A default consent page. This can be customized at will.

Next to these files, the following entries are added to your Web.config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="WindowsAzure.OAuth.SwtSigningKey" value="[your 256-bit symmetric key configured in the ACS]" />
    <add key="WindowsAzure.OAuth.RelyingPartyName" value="[your relying party name configured in the ACS]" />
    <add key="WindowsAzure.OAuth.RelyingPartyRealm" value="[your relying party realm configured in the ACS]" />
    <add key="WindowsAzure.OAuth.ServiceNamespace" value="[your ACS service namespace]" />
    <add key="WindowsAzure.OAuth.ServiceNamespaceManagementUserName" value="ManagementClient" />
    <add key="WindowsAzure.OAuth.ServiceNamespaceManagementUserKey" value="[your ACS service management key]" />
  </appSettings>
</configuration>

These settings should be configured based on your Windows Azure Access Control settings. Details on this can be found on the GitHub page.

Consuming the API

After populating Windows Azure Access Control Service with a client_id and client_secret for my consuming app (which you can do using the excellent FluentACS package or manually, as shown in the following screenshot), you’re good to go.

ACS OAuth2 Service Identity

The WindowsAzure.Acs.Oauth2 package adds two things to your application: it provides your ASP.NET Web API with the current user’s details (after a successful OAuth2 authorization flow has taken place) and it adds a controller and view to your app which provide a simple consent page (that can be customized):

[Screenshot: the default consent page]

After granting access, WindowsAzure.Acs.Oauth2 will store the user’s choice in Windows Azure ACS and redirect you back to the application. From there on, the application can ask Windows Azure ACS for an access token and refresh the access token once it expires, without your application having to interfere with that process ever again. WindowsAzure.Acs.Oauth2 transforms the incoming OAuth2 token into a ClaimsIdentity which your API can use to determine which user is accessing your API. Focus on your API, not on OAuth.

Enjoy!

Protecting Windows Azure Web and Worker roles from malware

Most IT administrators will install some sort of virus scanner on your precious servers. Since the cloud, from a technical perspective, is just a server, why not follow that security best practice on Windows Azure too? It has gone by almost unnoticed, but last week Microsoft released the Microsoft Endpoint Protection for Windows Azure Customer Technology Preview. For the sake of bandwidth, I’ll be referring to it as EP.

EP offers real-time protection, scheduled scanning, malware remediation (a fancy word for quarantining), active protection and automatic signature updates. Sounds a lot like Microsoft Endpoint Protection or Microsoft Security Essentials? That’s no coincidence: EP is a Windows Azurified version of it.

Enabling anti-malware on Windows Azure

After installing the Microsoft Endpoint Protection for Windows Azure Customer Technology Preview, sorry, EP, a new Windows Azure import will be available. As with remote desktop or diagnostics, EP can be enabled by a simple XML one-liner:

<Import moduleName="Antimalware" />

Here’s a sample web role ServiceDefinition.csdef file containing this new import:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="ChuckProject"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="ChuckNorris" vmsize="Small">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
    <Imports>
      <Import moduleName="Antimalware" />
      <Import moduleName="Diagnostics" />
    </Imports>
  </WebRole>
</ServiceDefinition>

That’s it! When you now deploy your Windows Azure solution, Microsoft Endpoint Protection will be installed, enabled and configured on your Windows Azure virtual machines.

Now since I started this blog post with “IT administrators”, chances are you want to fine-tune this plugin a little. No problem! The ServiceConfiguration.cscfg file has some options waiting to be eh, touched. And since these are in the service configuration, you can also modify them through the management portal, the management API, or sysadmin-style using PowerShell. Anyway, the following options are available (you’ll find a sample configuration after the list):

  • Microsoft.WindowsAzure.Plugins.Antimalware.ServiceLocation – Specify the datacenter region where your application is deployed, for example “West Europe” or “East Asia”. This will speed up deployment time.
  • Microsoft.WindowsAzure.Plugins.Antimalware.EnableAntimalware – Should EP be enabled or not?
  • Microsoft.WindowsAzure.Plugins.Antimalware.EnableRealtimeProtection – Should real-time protection be enabled?
  • Microsoft.WindowsAzure.Plugins.Antimalware.EnableWeeklyScheduledScans – Weekly scheduled scans enabled?
  • Microsoft.WindowsAzure.Plugins.Antimalware.DayForWeeklyScheduledScans – Which day of the week (0 – 7 where 0 means daily)
  • Microsoft.WindowsAzure.Plugins.Antimalware.TimeForWeeklyScheduledScans – What time should the scheduled scan run?
  • Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedExtensions – Specify file extensions to exclude from scanning (pipe-delimited)
  • Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedPaths – Specify paths to exclude from scanning (pipe-delimited)
  • Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedProcesses – Specify processes to exclude from scanning (pipe-delimited)
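Here’s a sketch of what those settings might look like in ServiceConfiguration.cscfg. The role name and values are illustrative only, and I’m assuming the scan time is expressed in minutes after midnight; double-check the EP documentation for the exact formats:

<Role name="ChuckNorris">
  <ConfigurationSettings>
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ServiceLocation" value="West Europe" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableAntimalware" value="true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableRealtimeProtection" value="true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableWeeklyScheduledScans" value="true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.DayForWeeklyScheduledScans" value="7" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.TimeForWeeklyScheduledScans" value="120" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedExtensions" value="log|tmp" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedPaths" value="" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedProcesses" value="" />
  </ConfigurationSettings>
</Role>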

Monitoring anti-malware on Windows Azure

How will you know if a threat has been detected? Well, luckily for us, Microsoft Endpoint Protection writes its logs to the System event log. This means that you can simply add a specific data source to your diagnostics monitor and you’re done:

var configuration = DiagnosticMonitor.GetDefaultInitialConfiguration();

// Note: if you need informational / verbose, also subscribe to levels 4 and 5
configuration.WindowsEventLog.DataSources.Add(
    "System!*[System[Provider[@Name='Microsoft Antimalware'] and (Level=1 or Level=2 or Level=3)]]");

configuration.WindowsEventLog.ScheduledTransferPeriod
    = System.TimeSpan.FromMinutes(1);

DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
    configuration);

In addition, EP also logs its inner workings to its installation folders. You can also include these in your diagnostics configuration:

var configuration = DiagnosticMonitor.GetDefaultInitialConfiguration();

// ...add the event logs like in the previous code sample...

// Note: verbatim strings (@) so the backslashes aren't treated as escape sequences
var mep1 = new DirectoryConfiguration();
mep1.Container = "wad-endpointprotection-container";
mep1.DirectoryQuotaInMB = 5;
mep1.Path = @"%programdata%\Microsoft Endpoint Protection";

var mep2 = new DirectoryConfiguration();
mep2.Container = "wad-endpointprotection-container";
mep2.DirectoryQuotaInMB = 5;
mep2.Path = @"%programdata%\Microsoft\Microsoft Security Client";

configuration.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);
configuration.Directories.DataSources.Add(mep1);
configuration.Directories.DataSources.Add(mep2);

DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
    configuration);

From this moment on, you can use a tool like Cerebrata’s Diagnostics Monitor to check the event logs of all your Windows Azure instances that have anti-malware enabled.

TechDays Finland - Architectural Patterns for the Cloud - NuGet

As promised, here are the slide decks for the two sessions delivered at TechDays Finland last week.

Architectural Patterns for the Cloud

The promise of all cloud vendors out there is that they can run your applications without changes. While that claim is true, it’s better to optimize existing software or design specifically for the cloud when moving or building an application. Architectural optimization will speed up your application, make it more scalable and even make it cheaper to run on Windows Azure. This session will take you along some common patterns that are easy to implement and will make your cloud more sunny.

Organize your Chickens - NuGet for the Enterprise

Managing software dependencies, whether created in-house or obtained from third parties, can be a pain in the behind. Whether dependencies feel like wild chickens or people run around like chickens dealing with dependencies, the NuGet package manager can be a cure. Let us guide you to creating enterprise (chicken) NuGets and dealing with them in a structured, easy-to-maintain manner. From developer workstation to build server, NuGet tastes great! We'll provide you the dip sauce.

Enjoy! And if there’s any feedback or questions, I would love to hear it.

Introducing MyGet package source proxy (beta)

My blog already has quite the number of blog posts around MyGet, our NuGet-as-a-Service solution which my colleague Xavier and I are running. There are a lot of reasons to host your own personal NuGet feed (such as protecting your intellectual property or only adding approved packages to the feed; there are many more, as you can <plug>read in our book</plug>). We’ve added support for another scenario: MyGet now supports proxying remote feeds.

Up until now, MyGet required you to upload your own NuGet packages and to include packages from the NuGet feed. The problem with this is that you either had your team register multiple NuGet feeds in Visual Studio (which is still a good option) or registered just your MyGet feed and added every package your team uses to it. Which, again, is also a good option.

With our package source proxy in place, we now provide a third option: MyGet can proxy upstream NuGet feeds. Let’s start with a quick diagram and afterwards walk you through a scenario elaborating on this:

MyGet Feed Proxy Aggregate Feed Connector

You are seeing this correctly: you can now register just your MyGet feed in Visual Studio and we’ll add upstream packages to your feed automatically, optionally filtered as well.
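If you prefer the command line over Visual Studio’s dialogs, registering that single feed could look like this (the feed name and URL are placeholders for your own):

nuget sources add -Name "MyMyGetFeed" -Source "http://www.myget.org/F/yourfeedname"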

Enabling MyGet package source proxy

Enabling the MyGet package source proxy is very straightforward. Navigate to your feed of choice (or create a new one) and click the Package Sources item. This will present you with a screen similar to this:

MyGet hosted package source

From there, you can add external (or MyGet) feeds to your personal feed and add packages directly from them using the Add package dialog. More on that in Xavier’s blog post. What’s more: with the tick of a checkbox, these external feeds can also be aggregated with your feed in Visual Studio’s search results. Here’s the magical add dialog and the proxy checkbox:

Add package source proxy

As you can see, we also offer the option to filter upstream packages. For example, the filter string substringof('wp7', Tags) eq true that we used will filter all upstream packages whose tags contain “wp7”.

What will Visual Studio show us? Well, just the Windows Phone 7 packages from NuGet, served through our single-endpoint MyGet feed.

Conclusion

Instead of working with a number of NuGet feeds, your development team will just work with one feed that is aggregating packages from both MyGet and other package sources out there (NuGet, Orchard Gallery, Chocolatey, …). This centralizes managing external packages and makes it easier for your team members to find the packages they can use in your projects.

Do let us know what you think of this feature! Our UserVoice is there for you, and in fact, that’s where we got the idea for this feature from in the first place. Your voice is heard!

Tracking API usage with Google Analytics

So you have an API. Congratulations! You should have one. But how do you track who uses it, what client software they use and so on? You may be logging API calls yourself. You may be relying on services like Apigee.com, which make you pay (for a great service, though!). Being cheap, we thought about another approach for MyGet. We’re already using Google Analytics to track pageviews and so on, why not use Google Analytics for tracking API calls as well?

Meet GoogleAnalyticsTracker. It is a three-class assembly which allows you to track requests from within C# with Google Analytics.

Go and fork this thing and add out-of-the-box support for WCF Web API, Nancy or even “plain old” WCF or ASMX!

Using GoogleAnalyticsTracker

Using GoogleAnalyticsTracker in your projects is simple. Simply Install-Package GoogleAnalyticsTracker and be an API tracking bad-ass! There are two things required: a Google Analytics tracking ID (something in the form of UA-XXXXXXX-X) and the domain you wish to track, preferably the same domain as the one registered with Google Analytics.

After installing GoogleAnalyticsTracker into your project, you currently have two options to track your API calls: use the Tracker class or use the included ASP.NET MVC Action Filter.

Here’s a quick demo of using the Tracker class:

Tracker tracker = new Tracker("UA-XXXXXX-XX", "www.example.org");
tracker.TrackPageView("My API - Create", "api/create");

Unfortunately, this class has no notion of a web request. This means that if you want to track user agents and user languages, you’ll have to add some more code:

Tracker tracker = new Tracker("UA-XXXXXX-XX", "www.example.org");

var request = HttpContext.Request;
tracker.Hostname = request.UserHostName;
tracker.UserAgent = request.UserAgent;
tracker.Language = request.UserLanguages != null ? string.Join(";", request.UserLanguages) : "";

tracker.TrackPageView("My API - Create", "api/create");

Whaah! No worries though: there’s an extension method which does just that:

Tracker tracker = new Tracker("UA-XXXXXX-XX", "www.example.org");
tracker.TrackPageView(HttpContext, "My API - Create", "api/create");

The sad part is: this code quickly clutters all your action methods. No worries! There’s an ActionFilter for that!

[ActionTracking("UA-XXXXXX-XX", "www.example.org")]
public class ApiController : Controller
{
    public JsonResult Create()
    {
        return Json(true);
    }
}

And what’s better: you can register it globally and optionally filter it to only track specific controllers and actions!

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(new ActionTrackingAttribute(
            "UA-XXXXXX-XX", "www.example.org",
            action => action.ControllerDescriptor.ControllerName == "Api")
        );
    }
}

And here’s what it could look like (we’re only tracking for the second day now…):

WCF Web API analytics google

We even have stats about the versions of the NuGet Command Line used to access our API!

NuGet API tracking Google

Enjoy! And fork this thing and add out-of-the-box support for WCF Web API, Nancy or even “plain old” WCF or ASMX!

Techniques for real-time client-server communication on the web (SignalR to the rescue)

When building web applications, you often face the fact that HTTP, the foundation of the web, is a request/response protocol. A client issues a request, a server handles this request and sends back a response. All the time, with no relation between the first request and subsequent requests. Also, since it’s request-based, there is no way to send messages from the server to the client without having the client create a request first.

Today users expect that in their projects, sorry, “experiences”, a form of “real time” is available. Think requirements like “I want this stock ticker to update whenever the price changes” or “I want to view real-time GPS locations of my vehicles on this map”. Or even better: experiences where people collaborate often require live notifications and changes in the browser, so that whenever a user triggers a task or event, the other collaborating users are immediately notified. Think Google Spreadsheets where you can work together. Think Facebook chat. Think Twitter where new messages automatically appear. Think your apps with a sprinkle of real-time sauce.

How would you implement this?

But what if the server wants to communicate with the client?

Over the years, web developers have been very inventive working around the request/response nature of the web. Two techniques are being used on different platforms and provide a relatively easy workaround to the “problem” of HTTP’s paradigm where the client initiates any connection: simple polling using Ajax and a variant of that, long polling.

Simple Ajax polling is, well, simple: the client “polls” the server via an Ajax request; the server answers if there is data. The client waits for a while and goes through this process again. Schematically, this would be the following:

[Diagram: simple Ajax polling]
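In client-side JavaScript, a naive polling loop could look like this sketch (the /updates endpoint and handleUpdates function are made up for illustration):

// ask the server for new data every 2 seconds (hypothetical endpoint)
setInterval(function () {
    $.get("/updates", function (data) {
        if (data) {
            handleUpdates(data); // your own handler
        }
    });
}, 2000);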

A problem with this is that the server still relies on the client initiating the connection. Whenever during the polling interval the server has data for the client, this data can only be sent to the client when the next polling cycle occurs. This is probably no problem when the polling interval is in the 100ms range, apart from the fact that you’ll be hammering your servers when doing that. From the moment your polling interval goes up to say 2 seconds or even 30 seconds, you lose part of the “real time” communication idea.

This problem has been solved by using a technique called “long polling”. The idea is that the client opens an Ajax-based connection to the server; the server does not reply until it has data. The client just has the false feeling that the request is taking a while, and eventually will have some data coming back from the server. Whenever data is returned, the client immediately opens up a “long polling” connection again. Schematically:

[Diagram: long polling]
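A long-polling client is a small variation on the same idea: re-open the connection as soon as the previous one completes. Again a sketch, with a made-up endpoint and timeout:

function poll() {
    $.ajax({
        url: "/updates",         // hypothetical endpoint
        timeout: 30000,          // give up after 30 seconds and reconnect
        success: function (data) {
            handleUpdates(data); // your own handler
        },
        complete: function () {
            poll();              // immediately open the next long poll
        }
    });
}
poll();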

There’s no polling interval: as long as the connection is open, the server has the ability to send data back to the client. Awesome, right? Not really… Your servers will not be hammered with billions of requests, but the requests it handles will take a while to complete (the “long poll”). Imagine what your ASP.NET thread pool will do in such case… Well, unless you implement your server-side using an IAsyncHttpHandler or similar. Otherwise, your servers will simply stop accepting requests.

HTML5 to the rescue?

As we’ve seen, both techniques that exist work to simulate full-duplex communication between client and server. However, both of them have some disadvantages if you don’t know what you are doing. Also, the techniques described before are just simulating bi-directional communication. Wouldn’t it be nice to have a solution that works great and was intended to do this? Meet HTML5 WebSockets.

WebSockets offer a real, bi-directional TCP connection between the client and the server. That’s right, a TCP (non-HTTP) connection. To establish a WebSocket connection, the client sends a WebSocket handshake request over HTTP, the server sends a WebSocket handshake response with details on how to open the actual TCP connection. Easy as that! Schematically:

[Diagram: WebSocket handshake]
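Using the browser’s WebSocket API directly looks like this (a sketch; the ws:// URL is made up):

var socket = new WebSocket("ws://example.org/updates"); // hypothetical endpoint

socket.onopen = function () {
    socket.send("hello server");   // client -> server, any time
};
socket.onmessage = function (event) {
    handleUpdates(event.data);     // server -> client, any time
};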

Unfortunately, the web does not evolve that fast… WebSockets are still a draft specification (“CTP” or “alpha”, if you will). Not all browsers support them. And because they use a raw TCP connection, many proxy servers used at companies don’t support them yet. We’ll get there, just not today. Also, if you want to use WebSockets with ASP.NET, you’ll be forced to use the preview release of .NET 4.5.

So what to do in this messed up world? How to achieve real-time, bi-directional communication over the interwebs, today?

SignalR to the rescue!

SignalR is “an asynchronous signaling library for ASP.NET that Microsoft is working on to help build real-time multi-user web applications”.  Let me rephrase that: SignalR is an ASP.NET library which leverages the three techniques I described before to create a seamless experience between client and server.

The main idea of SignalR is that the boundary between client and server should become easier to tackle. A really quick example consists of two pieces of code. The client side:

var helloConnection = $.connection.hello;

helloConnection.sayHelloToMe = function (message) {
    alert(message);
};

$.connection.hub.start(function() {
    helloConnection.sayHelloToAll("Hello all!");
});

The server side:

public class Hello : Hub
{
    public void SayHelloToAll(string message)
    {
        Clients.sayHelloToMe(message);
    }
}

Are you already seeing the link? The JavaScript client calls the C# method “SayHelloToAll” as if it were a JavaScript function. The C# side calls the JavaScript method “sayHelloToMe” on all of its clients (meaning the 200.000 browser windows connecting to this service :-)) as if it were a C# method.

If I add that not only JavaScript clients are supported but also Windows Phone, Silverlight and plain .NET, does this sound interesting? If I add that SignalR can use any of the three techniques described earlier in this post based on what the client and the server support, without you even having to care… does this sound interesting? If the answer is yes, stay tuned for some follow-up posts…

Repaving your PC: the easier way

It’s been a while since I had to repave my laptop. I have a Windows Home Server (WHS) at home which images my PC almost daily and allows restoring it to a given point in time in less than 30 minutes. Which is awesome! And which is how I usually “restore” my PC into a stable state. Over the past year some hardware changes have been made, of which the most noteworthy is the replacement of the existing hard drive with an SSD. A great addition, and it was easy to restore as well: swap the disks and restore the image from WHS. SSD and full system install? 30 minutes.

The downside of restoring an image which came from a non-SSD drive has been bugging me for a while though. My SSD did not feel as fast as it should have felt, resulting in me reinstalling Windows on it just to check if that led to any speed improvements. And it did. And I knew I was in trouble: that would be a load of software to re-install and reconfigure. Here’s a list of what I had on my system before and is absolutely required for me to be able to do my job:

  • Telnet client
  • PDFCreator
  • ZoomIt
  • Win7 SP1
  • Virtual CloneDrive
  • HP Printer Corporate Edition
  • Ccleaner
  • Virus scanner
  • Adobe Flash
  • Adobe PDF
  • Silverlight
  • Office 2010
  • Windows Live Writer
  • Windows Live Mesh
  • WinRAR
  • Office Live Meeting & Communicator
  • VS 2010
  • VS 2010 SP1
  • GhostDoc
  • Resharper
  • Windows Azure Tools
  • WIF tools
  • MVC 3 tools
  • SQL Express R2
  • SQL Express Management Tools
  • Webmatrix
  • IIS Express
  • Firefox
  • Chrome
  • Notepad++
  • NuGet Package Explorer
  • Paint.net
  • Skype
  • TortoiseHg
  • TortoiseSVN
  • Fiddler2
  • Java (sorry :-))
  • Zune

Oh boy… Knowing how “fast” some of these can be installed, that would cost me a day of clicking and waiting.

[edit]Also check out https://github.com/chocolatey/chocolatey/issues/46[/edit]

Three tools can save you a lot of that work

Fortunately, we live in this time of computers. A time where some things can be automated and it seems like a PC repave should be relatively easy to do. There are three tools that will save you time:

  • Ninite, which you can find at www.ninite.com. Ninite allows you to download and install some items from the list above in one go. I’ve packaged Flash, Acrobat Reader, Chrome, Firefox, Java, … using Ninite and was able to install these items in one go. Great!
  • Web Platform Installer (Web PI) – command line version. A small executable which is able to pull a lot of software from Microsoft and install it in one go. Things like .NET 4, Silverlight, the ASP.NET MVC 3 tooling, … are all on the Web PI feed and can be downloaded in one go.
  • Chocolatey, available at www.chocolatey.org. Chocolatey is a tool based on NuGet which uses a feed of known software and can install it from the command line. For example, “cinst notepadplusplus” is enough to get Notepad++ running on your system.

Using these three tools, I have created a script which you have to run from a PowerShell administrative console. The script consists of calls to the Web PI, Ninite and Chocolatey. I’ll give you an example:

# Windows Installer
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:WindowsInstaller31"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:WindowsInstaller45"

# Powershell
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:PowerShell"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:PowerShell2"

# .NET
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:NETFramework20SP2"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:NETFramework35"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:NETFramework4"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:JUNEAUNETFX4"

# Ninite stuff
cmd /C "ninite\ninite.exe"

# Chocolatey stuff
iex ((new-object net.webclient).DownloadString("http://bit.ly/psChocInstall"))

cinst windowstelnet
cinst virtualclonedrive
cinst sysinternals
cinst notepadplusplus
cinst adobereader
cinst msysgit
cinst fiddler
cinst filezilla
cinst skype
cinst paint.net
cinst ccleaner
cinst tortoisesvn
cinst tortoisehg

# IIS
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:IIS7"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:ASPNET"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:BasicAuthentication"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:DefaultDocument"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:DigestAuthentication"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:DirectoryBrowse"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:HTTPErrors"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:HTTPLogging"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:HTTPRedirection"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:IIS7_ExtensionLessURLs"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:IISManagementConsole"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:IPSecurity"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:ISAPIExtensions"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:ISAPIFilters"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:LoggingTools"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:MetabaseAndIIS6Compatibility"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:NETExtensibility"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:RequestFiltering"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:RequestMonitor"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:StaticContent"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:StaticContentCompression"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:Tracing"
cmd /C "webpicmdline\webpicmdline.exe /AcceptEula /SuppressReboot /Products:WindowsAuthentication"

For those interested, here’s the set of scripts I have used: Repave.zip (986.66 kb). These contain a number of commands that use the tools mentioned above to do 75% of the install work on my PC. All I had to do was install Office 2010 and VS2010; my scripts did the rest. Not the holy grail yet, but certainly a big relief from a lot of the frustration of finding software and clicking next-next-finish. And now that my PC has been repaved, it’s time for a WHS image again. Enjoy!

Publishing symbol packages for a MyGet feed

Ever since NuGet 1.2, there is a great way for NuGet package authors to let their users debug into the package’s binaries. With almost no additional effort, package authors can publish their symbols and sources, and package consumers can debug into them from Visual Studio, simply by pushing a symbols package in addition to the standard NuGet package.

Today, we’re proud to announce MyGet has partnered with SymbolSource.org to offer an easy workflow to publish symbol packages for a private MyGet feed. This means from now on you can publish symbol packages for your private feeds as well!

On a sidenote: we're sharing API keys between both services. If you also want to share the same password with both services, simply go to your MyGet profile page and re-enter your password. We'll keep it in sync after that.

Publishing a symbols package for use with MyGet

As I will assume you are used to publishing packages to NuGet and SymbolSource, here’s what changes. First of all, you will need the URLs to publish to. Log in to MyGet and browse to your feed details. The Feed Details tab will give you all the information you need, as you can see in the following screenshot:

[Screenshot: the Feed Details tab]

In short, your feed URL remains the same. If you want to consume your private feed in Visual Studio or using the NuGet Package Manager Console, simply add http://www.myget.org/F/yourfeedname as the source. The thing that changed is the publish URL: if you want to publish your packages to MyGet, use the URL http://www.myget.org/F/yourfeedname/api/v1 as the publish URL. For symbol packages, your URL will be in the form of http://nuget.gw.symbolsource.org/MyGet/yourfeedname.

The publish workflow to publish the SamplePackage.1.0.0.nupkg to a MyGet feed, including symbols, would be issuing the following two commands from the console:

nuget push SamplePackage.1.0.0.nupkg 00000000-0000-0000-0000-00000000000 -Source http://www.myget.org/F/somefeed/api/v1

nuget push SamplePackage.1.0.0.Symbols.nupkg 00000000-0000-0000-0000-00000000000 -Source http://nuget.gw.symbolsource.org/MyGet/somefeed

An example of these commands can also be found on the Feed Details tab for your MyGet feed.

Consuming symbol packages in Visual Studio

When logging in to MyGet, you can find the symbols URL compatible with Visual Studio under the Feed Details tab for your MyGet feed. This URL will be the same for all feeds you are allowed to consume, so no need to configure 10+ symbol servers in Visual Studio. Here’s how to configure it.

First of all, Visual Studio typically will only debug your own source code, the source code of the project or projects that are currently opened in Visual Studio. To disable this behavior and to instruct Visual Studio to also try to debug code other than the projects that are currently opened, open the Options dialog (under the menu Tools > Options). Find the Debugging node on the left and click the General node underneath. Turn off the option Enable Just My Code. Also turn on the option Enable source server support. This usually triggers a warning message but it is safe to just click Yes and continue with the settings specified.

MyGet symbol server in Visual Studio

Keep the Options dialog open and find the Symbols node under the Debugging node on the left. There, add the symbol server URL for your MyGet feed: http://srv.symbolsource.org/pdb/MyGet/username/11111111-1111-1111-1111-11111111111. After that, click OK to confirm the configuration changes and consume symbols for NuGet packages.

Enjoy!