Maarten Balliauw {blog}

ASP.NET, ASP.NET MVC, Windows Azure, PHP, ...


Running unit tests when deploying ASP.NET to Windows Azure Web Sites

Deployment failed

One of the well-loved features of Windows Azure Web Sites is the fact that you can simply push your ASP.NET application’s source code to the platform using Git (or TFS, or Dropbox) and have it compiled and deployed to your Windows Azure Web Site. If you’ve checked the management portal, you may have noticed that a number of deployment steps are executed: the deployment process searches for the project file to compile, compiles it, copies the build artifacts to the web root and has your website running. But did you know you can customize this process?

[update] MSTest seems to work now as well, using the console runner from VS2012.
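If you want to try that route once you have a deployment script like the one below, here’s a minimal, untested sketch (it assumes the VS2012 tools are available on the build machine, and the test assembly path is hypothetical):

:: Run unit tests using the MSTest console runner that ships with VS2012
:: (%VS110COMNTOOLS% resolves to VS2012's Common7\Tools folder - an assumption on my part)
"%VS110COMNTOOLS%..\IDE\mstest.exe" /testcontainer:"%DEPLOYMENT_SOURCE%\SampleApp.Tests\bin\Release\SampleApp.Tests.dll"
IF !ERRORLEVEL! NEQ 0 goto error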

Customizing the build process

To get an understanding of how to customize the build process, let me first explain how it works. In the root of your repository, you can add a .deployment file containing a simple directive: the command that should be run upon deployment.

[config]
command = build.bat

This command can be a batch file, a PHP file, a bash file and so on, as long as it tells Windows Azure Web Sites what to execute. Let’s go with a batch file.

@echo off
echo This is a custom deployment script, yay!

When pushing this to Windows Azure Web Sites, here’s what you’ll see:

Windows Azure Web Sites custom build

In this batch file, we can use some environment variables to further customize the script:

  • DEPLOYMENT_SOURCE - The initial "working directory"
  • DEPLOYMENT_TARGET - The wwwroot path (deployment destination)
  • DEPLOYMENT_TEMP - Path to a temporary directory (removed after the deployment)
  • MSBUILD_PATH - Path to msbuild

After compiling, you can simply xcopy your application to the path in the %DEPLOYMENT_TARGET% variable and have your website live.
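To make that concrete, here’s a minimal build.bat sketch; the project name and folders are hypothetical, so adapt them to your own solution layout:

@echo off

:: Compile the web application project using the msbuild path provided by the platform
"%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\MyWebApp\MyWebApp.csproj" /nologo /verbosity:m /p:Configuration=Release
IF %ERRORLEVEL% NEQ 0 exit /b 1

:: Copy the application to the deployment target (wwwroot)
xcopy "%DEPLOYMENT_SOURCE%\MyWebApp" "%DEPLOYMENT_TARGET%" /E /Y /I
IF %ERRORLEVEL% NEQ 0 exit /b 1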

Generating deployment scripts

Creating deployment scripts can be a tedious job. Good thing the azure-cli tools are there! Once those are installed, simply invoke the following command and have both the .deployment file and a batch or bash file generated:

azure site deploymentscript --aspWAP "path\to\project.csproj"

For reference, here’s what is generated:

@echo off

:: ----------------------
:: KUDU Deployment Script
:: ----------------------

:: Prerequisites
:: -------------

:: Verify node.js installed
where node 2>nul >nul
IF %ERRORLEVEL% NEQ 0 (
  echo Missing node.js executable, please install node.js, if already installed make sure it can be reached from current environment.
  goto error
)

:: Setup
:: -----

setlocal enabledelayedexpansion

SET ARTIFACTS=%~dp0%artifacts

IF NOT DEFINED DEPLOYMENT_SOURCE (
  SET DEPLOYMENT_SOURCE=%~dp0%.
)

IF NOT DEFINED DEPLOYMENT_TARGET (
  SET DEPLOYMENT_TARGET=%ARTIFACTS%\wwwroot
)

IF NOT DEFINED NEXT_MANIFEST_PATH (
  SET NEXT_MANIFEST_PATH=%ARTIFACTS%\manifest

  IF NOT DEFINED PREVIOUS_MANIFEST_PATH (
    SET PREVIOUS_MANIFEST_PATH=%ARTIFACTS%\manifest
  )
)

IF NOT DEFINED KUDU_SYNC_COMMAND (
  :: Install kudu sync
  echo Installing Kudu Sync
  call npm install kudusync -g --silent
  IF !ERRORLEVEL! NEQ 0 goto error

  :: Locally just running "kuduSync" would also work
  SET KUDU_SYNC_COMMAND=node "%appdata%\npm\node_modules\kuduSync\bin\kuduSync"
)
IF NOT DEFINED DEPLOYMENT_TEMP (
  SET DEPLOYMENT_TEMP=%temp%\___deployTemp%random%
  SET CLEAN_LOCAL_DEPLOYMENT_TEMP=true
)

IF DEFINED CLEAN_LOCAL_DEPLOYMENT_TEMP (
  IF EXIST "%DEPLOYMENT_TEMP%" rd /s /q "%DEPLOYMENT_TEMP%"
  mkdir "%DEPLOYMENT_TEMP%"
)

IF NOT DEFINED MSBUILD_PATH (
  SET MSBUILD_PATH=%WINDIR%\Microsoft.NET\Framework\v4.0.30319\msbuild.exe
)

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: Deployment
:: ----------

echo Handling .NET Web Application deployment.

:: 1. Build to the temporary path
%MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\path.csproj" /nologo /verbosity:m /t:pipelinePreDeployCopyAllFilesToOneFolder /p:_PackageTempDir="%DEPLOYMENT_TEMP%";AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release
IF !ERRORLEVEL! NEQ 0 goto error

:: 2. KuduSync
echo Kudu Sync from "%DEPLOYMENT_TEMP%" to "%DEPLOYMENT_TARGET%"
call %KUDU_SYNC_COMMAND% -q -f "%DEPLOYMENT_TEMP%" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.deployment;deploy.cmd" 2>nul
IF !ERRORLEVEL! NEQ 0 goto error

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

goto end

:error
echo An error has occured during web site deployment.
exit /b 1

:end
echo Finished successfully.

This script does a couple of things:

  • Ensures node.js is installed on Windows Azure Web Sites (needed later on for synchronizing files)
  • Sets up a bunch of environment variables
  • Runs msbuild on the project file we specified
  • Uses kudusync (a node.js-based tool, hence the node.js check) to synchronize modified files to the wwwroot of our site

Try it: after pushing this to Windows Azure Web Sites, you’ll see the custom script being used. There’s not much added value so far, but now we have a script of our own that we can extend.

Unit testing before deploying

Unit tests would be nice! All you need is a couple of unit tests and a test runner. You can add the runner to your repository and store it there, or simply download it during the deployment. In my example, I’m using the Gallio test runner because it runs almost all test frameworks, but feel free to use the runner for NUnit or xUnit instead.

Somewhere before the line that invokes msbuild, ideally in the “setup” region of the deployment script, add the following:

IF NOT DEFINED GALLIO_COMMAND (
  IF NOT EXIST "%appdata%\Gallio\bin\Gallio.Echo.exe" (
    :: Downloading unzip
    echo Downloading unzip
    curl -O http://stahlforce.com/dev/unzip.exe
    IF !ERRORLEVEL! NEQ 0 goto error

    :: Downloading Gallio
    echo Downloading Gallio
    curl -O http://mb-unit.googlecode.com/files/GallioBundle-3.4.14.0.zip
    IF !ERRORLEVEL! NEQ 0 goto error

    :: Extracting Gallio
    echo Extracting Gallio
    unzip -q -n GallioBundle-3.4.14.0.zip -d %appdata%\Gallio
    IF !ERRORLEVEL! NEQ 0 goto error
  )

  :: Set Gallio runner path
  SET GALLIO_COMMAND=%appdata%\Gallio\bin\Gallio.Echo.exe
)

See what happens there? We check if the local system on which your files are stored in Windows Azure Web Sites already has a copy of the Gallio.Echo.exe test runner. If not, we download a tool which allows us to unzip. Next, the entire Gallio test runner is downloaded and extracted. As a final step, the %GALLIO_COMMAND% variable is populated with the full path to the test runner executable.

Right before the line that calls “kudusync”, add the following:

echo Running unit tests
"%GALLIO_COMMAND%" "%DEPLOYMENT_SOURCE%\SampleApp.Tests\bin\Release\SampleApp.Tests.dll"
IF !ERRORLEVEL! NEQ 0 goto error

The name of your test assembly will be different, of course; change it accordingly. What happens here? Well, we’re invoking the test runner on our unit tests. If it fails, we abort the deployment. Push it to Windows Azure and see for yourself. Here’s what is displayed on success:

Windows Azure Web Site unit tests

All green! And on failure, we get:

Gallio test runner Windows Azure

In the portal, you can clearly see that deployment was aborted:

Deployment fail when unit tests fail

That’s it. Enjoy!

NuGet Package Source Discovery

It’s already been two years since NuGet was introduced. This .NET package manager features the concept of feeds, or “package sources”, on which packages containing .NET libraries and tools can be hosted. In fact, support for feeds inspired us to build www.myget.org. While not all people are aware of this, Microsoft started out with two feeds as well: one for www.nuget.org, the other one for the Orchard CMS.

More and more feeds are being created daily, both by Microsoft and by others. Microsoft alone runs quite a few feeds that I know of, and there are probably more.

Wouldn’t it be nice if we could add them all to our Visual Studio package sources without having to know these URLs? Meet the NuGet Package Source Discovery specification, or in short PSD, a specification Xavier, Scott, Phil, Jeff, Howard and myself have been working on (thanks guys!).

Package Source Discovery

Because PowerShell says more than words, try the following. Open Visual Studio and open any solution. Then issue the following in the Package Manager Console:

Install-Package DiscoverPackageSources
Discover-PackageSources -Url "http://blog.maartenballiauw.be"

While we’re at it, perhaps the Glimpse project has something to discover as well.

Discover-PackageSources -Url "http://getglimpse.com"

Close and re-open Visual Studio and check your package sources. Notice anything new? My blog has provided you with two feeds. And you’ve also been subscribed to Glimpse’s nightly builds feed.

But there’s more. If you had been authenticated when connecting to my blog, it would have yielded API keys as well. This allows the PSD client to set up everything needed for me to work with my personal feeds, both consuming and producing, just by remembering the URL of my blog.

Package Source Discovery boils down to trust. Since you apparently trust me, you can discover feeds from my blog. If you trust Microsoft, discover feeds from www.microsoft.com. Do you trust Windows Azure? Get their packages by discovering feeds at www.windowsazure.com. Need your company feeds? Discover them at http://nuget. A lot of options and possibilities there!

Recycling standards

If you are a blogger and are using Windows Live Writer, you’ve already used this before. We’ve written the NuGet Package Source Discovery specification based on what happens with blogs: when a simple <link /> element is added to your HTML, you are compatible with feed discovery. Here are the two elements that are listed in the source code for my blog:

1 <link rel="nuget" type="application/atom+xml" title="Maarten Balliauw NuGet feed" href="http://www.myget.org/F/maartenballiauw" /> 2 <link rel="nuget" type="application/rsd+xml" href="http://www.myget.org/Discovery/Feed/googleanalyticstracker" />

The first one points directly to a feed. Using the URL and the title attribute, we can add this one to our NuGet package sources with ease. The second one points to an RSD file, known for ages as the Really Simple Discovery format described at https://github.com/danielberlinger/rsd. We’ve recycled it to allow a lot of things at the client side. Since not all required metadata can be obtained from the RSD format, the Dublin Core schema is present in the PSD response as well.

Here’s an example:

<?xml version="1.0" encoding="utf-8"?>
<rsd version="1.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <service>
    <engineName>MyGet</engineName>
    <engineLink>http://www.myget.org</engineLink>

    <dc:identifier>http://www.myget.org/F/googleanalyticstracker</dc:identifier>
    <dc:creator>maartenba</dc:creator>
    <dc:owner>maartenba</dc:owner>
    <dc:title>Staging feed for GoogleAnalyticsTracker</dc:title>
    <dc:description>Staging feed for GoogleAnalyticsTracker</dc:description>
    <homePageLink>http://www.myget.org/gallery/googleanalyticstracker</homePageLink>

    <apis>
      <api name="nuget-v2-packages" preferred="true" apiLink="http://www.myget.org/F/googleanalyticstracker/api/v2" blogID="" />
      <api name="nuget-v2-push" preferred="true" apiLink="http://www.myget.org/F/googleanalyticstracker/api/v2/package" blogID="">
        <settings>
          <setting name="apiKey">abcdefghijkl</setting>
        </settings>
      </api>
      <api name="nuget-v1-packages" preferred="false" apiLink="http://www.myget.org/F/googleanalyticstracker/api/v1" blogID="" />
    </apis>
  </service>
</rsd>

As you can see, using RSD we can embed a lot more information about a feed in this document. If we wanted to add a link to someone’s GitHub repository and have a client that makes use of it, we could simply add another <api /> element in here, as sketched below.
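For example, a purely hypothetical element (not something the spec defines today) advertising a project’s GitHub repository could look like this:

<!-- Hypothetical: advertise the project's GitHub repository to interested clients -->
<api name="github" preferred="true" apiLink="https://github.com/maartenba/GoogleAnalyticsTracker" blogID="" />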

Who is using this?

I am (http://blog.maartenballiauw.be), Xavier is (http://www.xavierdecoster.com), Glimpse is (http://getglimpse.com), NancyFX is (http://www.nancyfx.org) and MyGet has implemented several endpoints as well. Why don't you join the wonderful world of package source discovery?

Feedback needed!

This is not part of NuGet out of the box yet. We need your feedback, comments, implementations and so on. Head over to our GitHub repository, read through the spec and all examples and provide us with your thoughts. Try the two clients we’ve crafted (more on Xavier's blog) and make your NuGet repositories discoverable. Feel free to post a link to your blog below.

Enjoy and let the commenting begin!

How I push GoogleAnalyticsTracker to NuGet

If you check my blog post Tracking API usage with Google Analytics, you’ll see that a small open-source component evolved from MyGet. This component, GoogleAnalyticsTracker, lives on GitHub and NuGet and has since evolved into something that supports Windows Phone and Windows RT as well. But let’s not focus on the open-source aspect.

It’s funny how things evolve. GoogleAnalyticsTracker started as a small component inside MyGet, and for a couple of weeks now it has been using MyGet to publish itself to NuGet. Say what? In this blog post, I’ll elaborate a bit on the development tools used on this tiny component.

Source code

Source code for GoogleAnalyticsTracker can be found on GitHub. This is the main entry point to all activity around this “project”. If you have a nice addition, feel free to fork it and send me a pull request.

Staging NuGet packages

Whenever I update the source code, I want to automatically build it and publish NuGet packages for it. Not directly to NuGet: I want to keep the regular version and the WinRT and WP versions more or less in sync regarding version numbers. Also, I sometimes miss something and fix it again five minutes later. In the meantime, I like to have the generated package on some sort of “staging” feed, at MyGet. It’s even public; check http://www.myget.org/F/githubmaarten if you want to use my development artifacts.

When I decide it’s time for these packages to move to the “official NuGet package repository” at NuGet.org, I simply click the “push” button in my MyGet feed. Yes, that’s a manual step, but I wanted to have some “gate” in the middle where I have to explicitly do something. Here’s what happens after clicking “push”:

Push to NuGet

That’s right! You can use MyGet as a staging feed and from there push your packages onwards to any other feed out there. MyGet takes care of the uploading.

Building the package

There’s one thing which I forgot… How do I build these packages? Well… I don’t. I let MyGet Build Services do the heavy lifting. On your feed, you can simply click the “Add GitHub project” button and a list of all your GitHub repos will be shown:

Build GitHub project

Tick a box and you’re ready to roll. And if you look carefully, you’ll see a “Build hook URL” being shown:

MyGet build hook

Back on GitHub, there’s this concept of “service hooks”: small callbacks that fire whenever a new commit occurs on your repository. Wouldn’t it be awesome to trigger package creation on MyGet whenever I check in code to GitHub? Guess what…

GitHub build hook

That’s right! And MyGet even runs unit tests. It’s some sort of continuous integration where I have the choice to promote packages to NuGet whenever I think they are stable.

MyGet Build Services - Join the private beta!

Good news! Over the past four weeks we’ve been sending out tweets about our secret MyGet project “wonka”. Today is the day Wonka shows his great stuff to the world… In short: MyGet Build Services enables you to add packages to your feed by just giving us your GitHub repo. We build it, we package it, we publish it.

Our build server searches for a file called MyGet.sln and builds that. No problem if it’s not there: we’ll try to build other projects then. We’ll run unit tests (NUnit, xUnit, MSTest and some more) and fail when those fail. We’ll search for packages generated by your solution and, if none are generated, we take a wild guess and create them for you.

To make it more visual, here are some screenshots. First, you have to add a build source, for example a GitHub repository (in fact, GitHub is all we currently support):

MyGet Add build source

After that, you simply click “Build”. A couple of seconds or minutes later, your fresh package is available on your feed:

MyGet build package

MyGet package result

If you want to see what happened, the build log is available for review as well:

MyGet build log

Enroll now!

Starting today, you can enroll for our private beta. You’ll get on a waiting list and, as we improve build capacity, you will be granted access to the beta. If you’re in, tell us how it behaves: what works, what doesn’t, what you would like to see improved. Enroll for this private beta now via http://www.myget.org/buildservices. Limited seats!

Do note it’s still a beta, and as Willy Wonka would say… “Little surprises around every corner, but nothing dangerous.”

Happy packaging!

Pro NuGet is finally there!

Short version: Install-Package ProNuget or http://amzn.to/pronuget

Pro NuGet - Continuous integration Package Restore

It’s been a while since I wrote my first book. After telling everyone that writing a book is horrendous (try writing a chapter per week after your office hours…) and that I would never write one again, my partner-in-crime Xavier Decoster and I had the same idea at the same time: what about a book on NuGet? So here it is: Pro NuGet is fresh off the presses (or on Kindle).

Special thanks go out to Scott Hanselman and Phil Haack for writing our foreword. Also big kudos to all who’ve helped us out now and then and did some small reviews. Yes Rob, Paul, David, Phil, Hadi: that’s you guys.

Why a book on NuGet?

Why not? At the time we decided we would start writing a book (September 2011), NuGet had been out there for a while already. Yet most users then (and still today) were using NuGet only as a means of installing packages, with some creating packages. But NuGet is much more! And that’s what we wanted to write about. We did not want to create a reference guide on what NuGet commands were available. We wanted to focus on the best practices we’ve learned over the past few months using NuGet.

Some scenarios covered in our book:

  • What’s the big picture on package management?
  • Flashback last week: NuGet.org was down. How do you keep your team working if you depend on that external resource?
  • Is it a good idea to auto-update NuGet packages in a continuous integration process?
  • Use the PowerShell console in VS2010/11. How do I write my own NuGet PowerShell Cmdlets? What can I do in there?
  • Why would you host your own NuGet repository?
  • Using NuGet for continuous delivery
  • More!

I feel we’ve managed to cover a lot of concepts that go beyond “how to use NuGet vX” and instead have given as much guidance as possible. Questions, suggestions, remarks, … are all welcome. And a click on “Add to cart” is also a good idea ;-)

Introducing MyGet package source proxy (beta)

My blog already has quite a number of posts around MyGet, the NuGet-as-a-Service solution which my colleague Xavier and I are running. There are a lot of reasons to host your own personal NuGet feed (such as protecting your intellectual property or adding only approved packages to the feed; there are many more, as you can <plug>read in our book</plug>). We’ve added support for another scenario: MyGet now supports proxying remote feeds.

Up until now, MyGet required you to upload your own NuGet packages and to include packages from the NuGet feed. The problem with this is that you either had your team register multiple NuGet feeds in Visual Studio (which still is a good option), or register just your MyGet feed and add all packages your team is using to it. Which, again, is also a good option.

With our package source proxy in place, we now provide a third option: MyGet can proxy upstream NuGet feeds. Let’s start with a quick diagram and afterwards walk through a scenario elaborating on this:

MyGet Feed Proxy Aggregate Feed Connector

You are seeing this correctly: you can now register just your MyGet feed in Visual Studio and we’ll add upstream packages to your feed automatically, optionally filtered as well.

Enabling MyGet package source proxy

Enabling the MyGet package source proxy is very straightforward. Navigate to your feed of choice (or create a new one) and click the Package Sources item. This will present you with a screen similar to this:

MyGet hosted package source

From there, you can add external (or MyGet) feeds to your personal feed and add packages directly from them using the Add package dialog. More on that in Xavier’s blog post. What’s more: with the tick of a checkbox, these external feeds can also be aggregated with your feed in Visual Studio’s search results. Here’s the magical add dialog and the proxy checkbox:

Add package source proxy

As you may see, we also offer the option to filter upstream packages. For example, the filter string substringof('wp7', Tags) eq true that we used will only let through upstream packages whose tags contain “wp7”.

What will Visual Studio display? Just the Windows Phone 7 packages from NuGet, served through our single-endpoint MyGet feed.

Conclusion

Instead of working with a number of NuGet feeds, your development team will just work with one feed that is aggregating packages from both MyGet and other package sources out there (NuGet, Orchard Gallery, Chocolatey, …). This centralizes managing external packages and makes it easier for your team members to find the packages they can use in your projects.

Do let us know what you think of this feature! Our UserVoice is there for you, and in fact, that’s where we got the idea for this feature from in the first place. Your voice is heard!

Tracking API usage with Google Analytics

So you have an API. Congratulations! You should have one. But how do you track who uses it, what client software they use, and so on? You may be logging API calls yourself. You may be relying on services like Apigee.com, which make you pay (for a great service, though!). Being cheap, we thought about another approach for MyGet. We’re already using Google Analytics to track pageviews and so on, so why not use Google Analytics for tracking API calls as well?

Meet GoogleAnalyticsTracker. It is a three-class assembly which allows you to track requests from within C# to Google Analytics.

Go and fork this thing and add out-of-the-box support for WCF Web API, Nancy or even “plain old” WCF or ASMX!

Using GoogleAnalyticsTracker

Using GoogleAnalyticsTracker in your projects is simple. Simply Install-Package GoogleAnalyticsTracker and be an API tracking bad-ass! There are two things required: a Google Analytics tracking ID (something in the form of UA-XXXXXXX-X) and the domain you wish to track, preferably the same domain as the one registered with Google Analytics.

After installing GoogleAnalyticsTracker into your project, you currently have two options to track your API calls: use the Tracker class or use the included ASP.NET MVC Action Filter.

Here’s a quick demo of using the Tracker class:

Tracker tracker = new Tracker("UA-XXXXXX-XX", "www.example.org");
tracker.TrackPageView("My API - Create", "api/create");

Unfortunately, this class has no notion of a web request. This means that if you want to track user agents and user languages, you’ll have to add some more code:

Tracker tracker = new Tracker("UA-XXXXXX-XX", "www.example.org");

var request = HttpContext.Request;
tracker.Hostname = request.UserHostName;
tracker.UserAgent = request.UserAgent;
tracker.Language = request.UserLanguages != null ? string.Join(";", request.UserLanguages) : "";

tracker.TrackPageView("My API - Create", "api/create");

Whaah! No worries though: there’s an extension method which does just that:

Tracker tracker = new Tracker("UA-XXXXXX-XX", "www.example.org");
tracker.TrackPageView(HttpContext, "My API - Create", "api/create");

The sad part is: this code quickly clutters all your action methods. No worries! There’s an ActionFilter for that!

[ActionTracking("UA-XXXXXX-XX", "www.example.org")]
public class ApiController
    : Controller
{
    public JsonResult Create()
    {
        return Json(true);
    }
}

And what’s better: you can register it globally and optionally filter it to only track specific controllers and actions!

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(new ActionTrackingAttribute(
            "UA-XXXXXX-XX", "www.example.org",
            action => action.ControllerDescriptor.ControllerName == "Api")
        );
    }
}

And here’s what it could look like (we’re only tracking for the second day now…):

WCF Web API analytics google

We even have stats about the versions of the NuGet Command Line used to access our API!

NuGet API tracking Google

Enjoy! And fork this thing and add out-of-the-box support for WCF Web API, Nancy or even “plain old” WCF or ASMX!

Using SignalR to broadcast a slide deck

Last week, I discussed Techniques for real-time client-server communication on the web (SignalR to the rescue). We’ve seen that when building web applications, you often face the fact that HTTP, the foundation of the web, is a request/response protocol. A client issues a request, a server handles this request and sends back a response. All the time, with no relation between the first request and subsequent requests. Also, since it’s request-based, there is no way to send messages from the server to the client without having the client create a request first.

We’ve had a look at how to tackle this problem: using Ajax polling, long polling and WebSockets. The conclusion was that each of these solutions has its pros and cons. SignalR, an open source project led by some Microsoft developers, is an ASP.NET library which leverages the three techniques described before to create a seamless experience between client and server.

The main idea of SignalR is that the boundary between client and server should become easier to tackle. In this blog post, I will go deeper into how you can use SignalR to achieve real-time communication between client and server over HTTP.

Meet DeckCast

DeckCast is a small sample which I’ve created for this blog post series on SignalR. You can download the source code here: DeckCast.zip (291.58 kb)

The general idea of DeckCast is that a presenter navigates to http://example.org/Deck/Present/SomeDeck and moves through the slide deck with the arrow keys on his keyboard. One or more clients can then navigate to http://example.org/Deck/View/SomeDeck and view the “presentation”. Note that the idea is not to create something you can use to present slide decks over HTTP; instead, I’m just showing how to use SignalR to communicate between client and server.

The presenter and viewers will navigate to their URLs:

SignalR presentation JavaScript Slide

The presenter will then navigate to a different slide using the arrow keys on his keyboard. All viewers will automatically navigate to the exact same slide at the same time. How much more real-time can it get?

SignalR presentation JavaScript Slide

And just for fun… Wouldn’t it be great if one of these clients was a console application?

SignalR.Client console application

Try doing this today on the web. Chances are you’ll get stuck in a maze of HTML, JavaScript and insanity. We’ll use SignalR to achieve this in a non-complex way, as well as Deck.JS to create nice HTML 5 slides without me having to code too much. Fair deal.

Connections and Hubs

SignalR is built around Connections and Hubs. A Connection is a persistent connection between a client and the server. It represents a communication channel between client and server (and vice-versa, of course). Hubs on the other hand offer an additional abstraction layer and can be used to provide “multicast” connections where multiple clients communicate with the Hub and the Hub can distribute data back to just one specific client or to a group of clients. Or to all, for that matter.

DeckCast would be a perfect example to create a SignalR Hub: clients will connect to the Hub, select the slide deck they want to view and receive slide contents and navigational actions for just that slide deck. The Hub will know which clients are viewing which slide deck and communicate changes and movements to only the intended group of clients.

One additional note: Connections and Hubs are “transport agnostic”. This means that the SignalR framework will decide which transport is best for each client and server combination. Ajax polling? Long polling? WebSockets? A hidden iframe? You don’t have to care about that. The raw connection details are abstracted by SignalR and you get to work with a nice Connection or Hub class. Which we’ll do: let’s create the server side using a Hub.

Creating the server side

First of all: make sure you have NuGet installed and install the SignalR package. Install-Package SignalR will bring down two additional packages: SignalR.Server, the server implementation, and SignalR.Js, the JavaScript libraries for communicating with the server. If your server will not host the JavaScript client as well, installing just SignalR.Server will do.

We will make use of a SignalR Hub to distribute data between clients and server. A Hub has a type (the class that you create for it), a name (which is, by default, the same as the type) and inherits SignalR’s Hub class. The wireframe for our presentation Hub would look like this:

[HubName("presentation")]
public class PresentationHub
    : Hub
{
}

SignalR’s Hub class will do the heavy lifting for us. It will expose all public methods to any client connecting to the Hub, whether it’s a JavaScript, console, Silverlight or even Windows Phone client. I see four methods for the PresentationHub class: Join, GotoSlide, GotoCurrentSlide and GetDeckContents. The first one serves as the starting point to identify that a client is watching a specific slide deck. GotoSlide will be used by the presenter to navigate to slide 0, 1, 2 and so on. GotoCurrentSlide is used by a viewer to go to the current slide selected by the presenter. GetDeckContents returns the presentation structure to the client: the title, all slides and their bullets. Let’s translate that to some C#:

[HubName("presentation")]
public class PresentationHub
    : Hub
{
    static ConcurrentDictionary<string, int> CurrentSlide { get; set; }

    public void Join(string deckId)
    {
    }

    public void GotoSlide(int slideId)
    {
    }

    public void GotoCurrentSlide()
    {
    }

    public Deck GetDeckContents(string id)
    {
    }
}

The concurrent dictionary will be used to store which slide is currently being viewed for a specific presentation. All the rest are just standard C# methods. Let’s implement them.

[HubName("presentation")]
public class PresentationHub
    : Hub
{
    static ConcurrentDictionary<string, int> DeckLocation { get; set; }

    static PresentationHub()
    {
        DeckLocation = new ConcurrentDictionary<string, int>();
    }

    public void Join(string deckId)
    {
        Caller.DeckId = deckId;

        AddToGroup(deckId);
    }

    public void GotoSlide(int slideId)
    {
        string deckId = Caller.DeckId;
        DeckLocation.AddOrUpdate(deckId, (k) => slideId, (k, v) => slideId);

        Clients[deckId].showSlide(slideId);
    }

    public void GotoCurrentSlide()
    {
        int slideId = 0;
        DeckLocation.TryGetValue(Caller.DeckId, out slideId);
        Caller.showSlide(slideId);
    }

    public Deck GetDeckContents(string id)
    {
        var deckRepository = new DeckRepository();
        return deckRepository.GetDeck(id);
    }
}

The code should be pretty straightforward, although there are some things I would like to mention about SignalR Hubs:

  • The Join method does two things: it sets a property on the client calling into this method. That’s right: the server tells the client to set a specific property at the client side. It also adds the client to a group identified by the slide deck id. Reason for this is that we want to group clients based on the slide deck id so that we can broadcast to a group of clients instead of having to broadcast messages to all.
  • The GotoSlide method will be called by the presenter to advance the slide deck. It calls into Clients[deckId]. Remember the grouping we just did? Well, this method returns the group of clients viewing slide deck deckId. We’re calling the method showSlide on this group. showSlide? That’s a method we will define on the client side! Again, the server is calling the client(s) here.
  • GotoCurrentSlide calls into one client’s showSlide method.
  • GetDeckContents just fetches the Deck from a database and returns the complete object tree to the client.

Let’s continue with the web client side!

Creating the web client side

Assuming you have installed SignalR.JS and that you have already referenced jQuery, add the following two script references to your view:

<script type="text/javascript" src="@Url.Content("~/Scripts/jquery.signalR.min.js")"></script>
<script type="text/javascript" src="/signalr/hubs"></script>

That first reference is just the SignalR client library. The second reference is a reference to SignalR’s metadata endpoint: it contains information about available Hubs on the server side. The client library will use that metadata to connect to a Hub and maintain the persistent connection between client and server.

The viewer of a presentation will then have to connect to the Hub on the server side. This is probably the easiest piece of code you’ve ever seen (next to Hello World):

<script type="text/javascript">
    $(function () {
        // SignalR hub initialization
        var presentation = $.connection.presentation;
        $.connection.hub.start();
    });
</script>

We’ve just established a connection with the PresentationHub on the server side. We also start the hub connection (one call, even if you are connecting to multiple hubs at once). Of course, the code above will not do a lot. Let’s add some more body.

<script type="text/javascript">
    $(function () {
        // SignalR hub initialization
        var presentation = $.connection.presentation;
        presentation.showSlide = function (slideId) {
            $.deck('go', slideId);
        };
    });
</script>

Remember the showSlide method we were calling from the server? This is the one. We’re allowing SignalR to call into the JavaScript method showSlide from the server side. This will call into Deck.JS and advance the presentation. The only thing left to do is tell the server which slide deck we are interested in. We do this immediately after the connection to the hub has been established using a callback function:

<script type="text/javascript">
    $(function () {
        // SignalR hub initialization
        var presentation = $.connection.presentation;
        presentation.showSlide = function (slideId) {
            $.deck('go', slideId);
        };
        $.connection.hub.start(function () {
            // Deck initialization
            $.extend(true, $.deck.defaults, {
                keys: {
                    next: 0,
                    previous: 0,
                    goto: 0
                }
            });
            $.deck('.slide');

            // Join presentation
            presentation.join('@Model.DeckId', function () {
                presentation.gotoCurrentSlide();
            });
        });
    });
</script>

Cool, no? The presenter side of things is very similar, except that it also calls into the server’s GotoSlide method.

Let’s see if we can convert this JavaScript code into some C# as well. I promised to add a console application to the mix to show you SignalR is not just about the web. It’s also about desktop apps, Silverlight apps and Windows Phone apps. And since it’s open source, I also expect someone to contribute client libraries for Android or iPhone. David, if you are reading this: you have work to do ;-)

Creating the console client side

Install the NuGet package SignalR.Client. This will add the required client library for the console application. If you’re creating a Windows Phone client, SignalR.WP71 will do (Mango).

The first thing to do would be connecting to SignalR’s Hub metadata and creating a proxy for the presentation Hub. Note that I am using the root URL this time (which is enough) and the full type name for the Hub (important!).

var connection = new HubConnection("http://localhost:56285/");
var presentationHub = connection.CreateProxy("DeckCast.Hubs.PresentationHub");
dynamic presentation = presentationHub;

Note that I’m also using a dynamic representation of this proxy. It may facilitate the rest of my code later.

Next up: connecting our client to the server. SignalR makes extensive use of the Task Parallel Library (TPL) and there’s no escaping it. Start the connection and, if something fails, show the user an error:

connection.Start()
    .ContinueWith(task =>
    {
        if (task.IsFaulted)
        {
            System.Console.ForegroundColor = ConsoleColor.Red;
            System.Console.WriteLine("There was an error connecting to the DeckCast presentation hub.");
        }
    });

Just like with the JavaScript client, we have to join the presentation of our choice. Again, the TPL is used here, but I’m explicitly telling it to Wait for the result before continuing my code.

presentationHub.Invoke("Join", deckId).Wait();

Because the console does not have any notion about the Deck class and its Slide objects, let’s fetch the slide contents into a dynamic object again. Here’s how:

var getDeckContents = presentationHub.Invoke<dynamic>("GetDeckContents", deckId);
getDeckContents.Wait();

var deck = getDeckContents.Result;

We also want to respond to the showSlide method. Since there’s no means of defining that method on the client in C# in the same fashion as we did on the JavaScript side, let’s simply use the On method exposed by the hub proxy. It subscribes to a server-side event (such as showSlide) and takes action whenever that occurs. Here’s the code:

presentationHub.On<int>("showSlide", slideId =>
{
    System.Console.Clear();
    System.Console.ForegroundColor = ConsoleColor.White;
    System.Console.WriteLine("");
    System.Console.WriteLine(deck.Slides[slideId].Title);
    System.Console.WriteLine("");

    if (deck.Slides[slideId].Bullets != null)
    {
        foreach (var bullet in deck.Slides[slideId].Bullets)
        {
            System.Console.WriteLine(" * {0}", bullet);
            System.Console.WriteLine("");
        }
    }

    if (deck.Slides[slideId].Quote != null)
    {
        System.Console.WriteLine(" \"{0}\"", deck.Slides[slideId].Quote);
    }
});

We also want to move to the current slide:

presentationHub.Invoke("GotoCurrentSlide").Wait();

There we go. The presenter can now switch between slides, and all clients, both web and console, will be informed and updated accordingly.

SignalR Console Client

Conclusion

SignalR offers a relatively easy-to-use abstraction over the various bidirectional connection paradigms introduced on the web. The fact that it’s open source and features clients for JavaScript, .NET, Silverlight and Windows Phone in my opinion makes it a viable alternative for applications where you would typically use polling or a bidirectional WCF communication channel. Even WCF RIA Services should be a little bit afraid of SignalR, as it’s lean and mean!

[edit] There's the Objective-C client: https://github.com/DyKnow/SignalR-ObjC

The sample code for this blog post can be downloaded here: DeckCast.zip (291.58 kb)

Publishing symbol packages for a MyGet feed

MyGet host your NuGet feed server

Ever since NuGet 1.2, there is a great way for NuGet package authors to let their users debug into the package’s binaries. With almost no additional effort, package authors can publish their symbols and sources, and package consumers can debug into them from Visual Studio, simply by pushing a symbols package in addition to the standard NuGet package.

SymbolSource

Today, we’re proud to announce MyGet has partnered with SymbolSource.org to offer an easy workflow to publish symbol packages for a private MyGet feed. This means from now on you can publish symbol packages for your private feeds as well!

On a side note: we're sharing API keys between both services. If you also want to share the same password with both services, simply go to your MyGet profile page and re-enter your password. We'll keep it in sync after that.

Publishing a symbols package for use with MyGet

As I will assume you are used to publishing packages to NuGet and SymbolSource, here’s what changes. First of all, you will require the URLs to which to publish. Log in to MyGet and browse to your feed details. The Feed Details tab will give you all the information you need, as you can see in the following screenshot:

image

In short, your feed URL remains the same. If you want to consume your private feed in Visual Studio or using the NuGet Package Manager Console, simply add http://www.myget.org/F/yourfeedname as the source. The thing that changed is the publish URL: if you want to publish your packages to MyGet, use the URL http://www.myget.org/F/yourfeedname/api/v1 as the publish URL. For symbol packages, your URL will be in the form of http://nuget.gw.symbolsource.org/MyGet/yourfeedname.

The publish workflow to publish the SamplePackage.1.0.0.nupkg to a MyGet feed, including symbols, would be issuing the following two commands from the console:

nuget push SamplePackage.1.0.0.nupkg 00000000-0000-0000-0000-00000000000 -Source http://www.myget.org/F/somefeed/api/v1

nuget push SamplePackage.1.0.0.Symbols.nupkg 00000000-0000-0000-0000-00000000000 -Source http://nuget.gw.symbolsource.org/MyGet/somefeed

An example of these commands can also be found on the Feed Details tab for your MyGet feed.

Consuming symbol packages in Visual Studio

When logging in to MyGet, you can find the symbols URL compatible with Visual Studio under the Feed Details tab for your MyGet feed. This URL will be the same for all feeds you are allowed to consume, so no need to configure 10+ symbol servers in Visual Studio. Here’s how to configure it.

First of all, Visual Studio typically will only debug your own source code, the source code of the project or projects that are currently opened in Visual Studio. To disable this behavior and to instruct Visual Studio to also try to debug code other than the projects that are currently opened, open the Options dialog (under the menu Tools > Options). Find the Debugging node on the left and click the General node underneath. Turn off the option Enable Just My Code. Also turn on the option Enable source server support. This usually triggers a warning message but it is safe to just click Yes and continue with the settings specified.

MyGet symbol server in Visual Studio

Keep the Options dialog open and find the Symbols node under the Debugging node on the left. There, add the symbol server URL for your MyGet feed: http://srv.symbolsource.org/pdb/MyGet/username/11111111-1111-1111-1111-11111111111. After that, click OK to confirm the configuration changes and consume symbols for NuGet packages.

Enjoy!

Setting up a NuGet repository in seconds: MyGet public feeds

A few months ago, my colleague Xavier Decoster and I introduced MyGet as a tool where you can create your own, private NuGet feeds. A couple of weeks later we introduced some options to delegate feed privileges to other MyGet users allowing you to make another MyGet user “co-admin” or “contributor” to a feed. Since then we’ve expanded our view on the NuGet ecosystem and moved MyGet from a solution to create your private feeds to a service that allows you to set up a NuGet feed, whether private or public.

Supporting public feeds allows you to set up a structure similar to www.nuget.org: you can give any user privileges to publish a package to your feed while the user can never manage other packages on your feed. This is great in several scenarios:

  • You run an open source project and want people to contribute modules or plugins to your feed
  • You are a business and you want people to contribute internal packages to your feed whilst prohibiting them from updating or deleting other packages

Setting up a public feed

Setting up a public feed on MyGet is similar to setting up a private feed. In fact, both are identical except for the default privileges assigned to users. Navigate to www.myget.org and sign in using an identity provider of choice. Next, create a feed, for example:

Create a MyGet NuGet feed and host your own NuGet packages

This new feed may be named “public”, but so far it is only private by obscurity: if someone knows the URL to the feed, he/she can consume packages from it. Let’s change that. Go to the “Feed Security” tab and have a look at the privileges assigned to Everyone. By default, these are set to “Can consume this feed”, meaning that everyone can add the feed URL to Visual Studio and consume packages. Other options are “No access” (requires authentication prior to being able to consume the feed) and “Can contribute own packages to this feed”. This last one is what we want:

Setting up a NuGet feed

Assigning the “Can contribute own packages to this feed” privilege to a specific user or to everyone means that the user (or everyone) will be able to contribute packages to the feed, as long as the package id used is not already on the feed or was originally submitted by this user. Exactly the same model as www.nuget.org, that is.

For reference, all available privileges are:

  • Has no access to this feed (speaks for itself)
  • Can consume this feed (allows the user to use the feed in Visual Studio / NuGet)
  • Can contribute own packages to this feed (allows the user to contribute packages, but only update and remove his own packages and not those of others)
  • Can manage all packages for this feed (allows the user to add packages to the feed via the website and via the NuGet push API)
  • Can manage users and all packages for this feed (extends the above with feed privilege management capabilities)

Contributing to a public feed

Of course, if you have a public feed you may want to have people contributing to it. This is very easy: provide them with a link to your feed editing page (for example, http://www.myget.org/Feed/Edit/public). Users can publish their packages via the MyGet user interface in no time.

If you want to have users push packages using nuget.exe or NuGet Package Explorer, provide them with a link to the feed endpoint (for example, http://www.myget.org/F/public/). Using their API key (which can be found in the MyGet profile for the user) they can push packages to the public feed from any API consumer.
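For instance, a push from the command line could look like the following; the package name is hypothetical and the API key is a placeholder, while the publish URL follows the same api/v1 convention shown on the Feed Details tab:

nuget push MyPlugin.1.0.0.nupkg 00000000-0000-0000-0000-00000000000 -Source http://www.myget.org/F/public/api/v1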

Enjoy!


PS: We’re working on lots more, but will probably provide that in a MyGet Premium version. Make sure to subscribe to our newsletter on www.myget.org if this is of interest.