
Tracking API usage with Google Analytics

So you have an API. Congratulations! You should have one. But how do you track who uses it, what client software they use and so on? You may be logging API calls yourself. You may be relying on services like Apigee.com, who make you pay (for a great service, though!). Being cheap, we thought about another approach for MyGet. We’re already using Google Analytics to track pageviews and so on, so why not use Google Analytics for tracking API calls as well?

Meet GoogleAnalyticsTracker. It is a three-class assembly which allows you to track requests from within C# to Google Analytics.

Go and fork this thing and add out-of-the-box support for WCF Web API, Nancy or even “plain old” WCF or ASMX!

Using GoogleAnalyticsTracker

Using GoogleAnalyticsTracker in your projects is easy: simply Install-Package GoogleAnalyticsTracker and be an API tracking bad-ass! There are two things required: a Google Analytics tracking ID (something of the form UA-XXXXXXX-X) and the domain you wish to track, preferably the same domain as the one registered with Google Analytics.

After installing GoogleAnalyticsTracker into your project, you currently have two options to track your API calls: use the Tracker class or use the included ASP.NET MVC Action Filter.

Here’s a quick demo of using the Tracker class:

Tracker tracker = new Tracker("UA-XXXXXX-XX", "www.example.org");
tracker.TrackPageView("My API - Create", "api/create");

Unfortunately, this class has no notion of a web request. This means that if you want to track user agents and user languages, you’ll have to add some more code:

Tracker tracker = new Tracker("UA-XXXXXX-XX", "www.example.org");

var request = HttpContext.Request;
tracker.Hostname = request.UserHostName;
tracker.UserAgent = request.UserAgent;
tracker.Language = request.UserLanguages != null ? string.Join(";", request.UserLanguages) : "";

tracker.TrackPageView("My API - Create", "api/create");

Whaah! No worries though: there’s an extension method which does just that:

Tracker tracker = new Tracker("UA-XXXXXX-XX", "www.example.org");
tracker.TrackPageView(HttpContext, "My API - Create", "api/create");
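For the curious: that extension method does roughly what we wrote by hand in the previous snippet. A minimal sketch of how such an extension could be implemented (the real one ships with the package):

public static class TrackerExtensions
{
    public static void TrackPageView(this Tracker tracker, HttpContextBase httpContext, string pageTitle, string pageUrl)
    {
        // Copy request details onto the tracker before tracking the pageview
        var request = httpContext.Request;
        tracker.Hostname = request.UserHostName;
        tracker.UserAgent = request.UserAgent;
        tracker.Language = request.UserLanguages != null ? string.Join(";", request.UserLanguages) : "";

        tracker.TrackPageView(pageTitle, pageUrl);
    }
}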

The sad part is: this code quickly clutters all your action methods. No worries! There’s an ActionFilter for that!

[ActionTracking("UA-XXXXXX-XX", "www.example.org")]
public class ApiController
    : Controller
{
    public JsonResult Create()
    {
        return Json(true);
    }
}

And what’s better: you can register it globally and optionally filter it to only track specific controllers and actions!

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(new ActionTrackingAttribute(
            "UA-XXXXXX-XX", "www.example.org",
            action => action.ControllerDescriptor.ControllerName == "Api")
        );
    }
}
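The filter predicate receives the MVC ActionDescriptor, so nothing stops you from going more granular. A hypothetical variation that only tracks the Create action on the Api controller:

filters.Add(new ActionTrackingAttribute(
    "UA-XXXXXX-XX", "www.example.org",
    action => action.ControllerDescriptor.ControllerName == "Api"
              && action.ActionName == "Create")
);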

And here’s what it could look like (we’re only tracking for the second day now…):

[Image: WCF Web API analytics in Google Analytics]

We even have stats about the versions of the NuGet Command Line used to access our API!

[Image: NuGet API tracking in Google Analytics]

Enjoy! And fork this thing and add out-of-the-box support for WCF Web API, Nancy or even “plain old” WCF or ASMX!

Using SignalR to broadcast a slide deck

Last week, I discussed Techniques for real-time client-server communication on the web (SignalR to the rescue). We’ve seen that when building web applications, you often face the fact that HTTP, the foundation of the web, is a request/response protocol. A client issues a request, a server handles this request and sends back a response. All the time, with no relation between the first request and subsequent requests. Also, since it’s request-based, there is no way to send messages from the server to the client without having the client create a request first.

We’ve had a look at how to tackle this problem: using Ajax polling, long polling and WebSockets. The conclusion was that each of these solutions has its pros and cons. SignalR, an open source project led by some Microsoft developers, is an ASP.NET library which leverages the three techniques described before to create a seamless experience between client and server.

The main idea of SignalR is that the boundary between client and server should become easier to tackle. In this blog post, I will go deeper into how you can use SignalR to achieve real-time communication between client and server over HTTP.

Meet DeckCast

DeckCast is a small sample which I’ve created for this blog post series on SignalR. You can download the source code here: DeckCast.zip (291.58 kb)

The general idea of DeckCast is that a presenter can navigate to http://example.org/Deck/Present/SomeDeck and navigate through the slide deck with the arrow keys on his keyboard. One or more clients can then navigate to http://example.org/Deck/View/SomeDeck and view the “presentation”. Note that the idea is not to create something you can use to present slide decks over HTTP; instead, I’m just showing how to use SignalR to communicate between client and server.

The presenter and viewers will navigate to their URLs:

[Image: SignalR presentation slide deck]

The presenter will then navigate to a different slide using the arrow keys on his keyboard. All viewers will automatically navigate to the exact same slide at the same time. How much more real-time can it get?

[Image: SignalR presentation slide deck]

And just for fun… Wouldn’t it be great if one of these clients was a console application?

[Image: SignalR.Client console application]

Try doing this today on the web. Chances are you’ll get stuck in a maze of HTML, JavaScript and insanity. We’ll use SignalR to achieve this in a non-complex way, as well as Deck.JS to create nice HTML5 slides without me having to code too much. Fair deal.

Connections and Hubs

SignalR is built around Connections and Hubs. A Connection is a persistent connection between a client and the server. It represents a communication channel between client and server (and vice-versa, of course). Hubs on the other hand offer an additional abstraction layer and can be used to provide “multicast” connections where multiple clients communicate with the Hub and the Hub can distribute data back to just one specific client or to a group of clients. Or to all, for that matter.

DeckCast would be a perfect example to create a SignalR Hub: clients will connect to the Hub, select the slide deck they want to view and receive slide contents and navigational actions for just that slide deck. The Hub will know which clients are viewing which slide deck and communicate changes and movements to only the intended group of clients.

One additional note: a Connection or a Hub is “transport agnostic”. This means that the SignalR framework will decide which transport is best for each client and server combination. Ajax polling? Long polling? WebSockets? A hidden iframe? You don’t have to care about that. The raw connection details are abstracted by SignalR and you get to work with a nice Connection or Hub class. Which we’ll do: let’s create the server side using a Hub.

Creating the server side

First of all: make sure you have NuGet installed and install the SignalR package. Install-Package SignalR will bring down two additional packages: SignalR.Server, the server implementation, and SignalR.Js, the JavaScript libraries for communicating with the server. If your server implementation will not host the JavaScript client as well, installing just SignalR.Server will do.

We will make use of a SignalR Hub to distribute data between clients and server. A Hub has a type (the class that you create for it), a name (which is, by default, the same as the type) and inherits SignalR’s Hub class. The wireframe for our presentation Hub would look like this:

[HubName("presentation")]
public class PresentationHub
    : Hub
{
}

SignalR’s Hub class will do the heavy lifting for us. It will expose all public methods to any client connecting to the Hub, whether it’s a JavaScript, Console, Silverlight or even Windows Phone client. I see four methods for the PresentationHub class: Join, GotoSlide, GotoCurrentSlide and GetDeckContents. The first one serves as the starting point to identify that a client is watching a specific slide deck. GotoSlide will be used by the presenter to navigate to slide 0, 1, 2 and so on. GotoCurrentSlide is used by a viewer to go to the current slide selected by the presenter. GetDeckContents returns the presentation structure to the client: the title, all slides and their bullets. Let’s translate that to some C#:

[HubName("presentation")]
public class PresentationHub
    : Hub
{
    static ConcurrentDictionary<string, int> CurrentSlide { get; set; }

    public void Join(string deckId)
    {
    }

    public void GotoSlide(int slideId)
    {
    }

    public void GotoCurrentSlide()
    {
    }

    public Deck GetDeckContents(string id)
    {
    }
}

The concurrent dictionary will be used to store which slide is currently being viewed for a specific presentation. All the rest are just standard C# methods. Let’s implement them.

[HubName("presentation")]
public class PresentationHub
    : Hub
{
    static ConcurrentDictionary<string, int> DeckLocation { get; set; }

    static PresentationHub()
    {
        DeckLocation = new ConcurrentDictionary<string, int>();
    }

    public void Join(string deckId)
    {
        Caller.DeckId = deckId;

        AddToGroup(deckId);
    }

    public void GotoSlide(int slideId)
    {
        string deckId = Caller.DeckId;
        DeckLocation.AddOrUpdate(deckId, (k) => slideId, (k, v) => slideId);

        Clients[deckId].showSlide(slideId);
    }

    public void GotoCurrentSlide()
    {
        int slideId = 0;
        DeckLocation.TryGetValue(Caller.DeckId, out slideId);
        Caller.showSlide(slideId);
    }

    public Deck GetDeckContents(string id)
    {
        var deckRepository = new DeckRepository();
        return deckRepository.GetDeck(id);
    }
}

The code should be pretty straightforward, although there are some things I would like to mention about SignalR Hubs:

  • The Join method does two things: it sets a property on the client calling into this method. That’s right: the server tells the client to set a specific property at the client side. It also adds the client to a group identified by the slide deck id. Reason for this is that we want to group clients based on the slide deck id so that we can broadcast to a group of clients instead of having to broadcast messages to all.
  • The GotoSlide method will be called by the presenter to advance the slide deck. It calls into Clients[deckId]. Remember the grouping we just did? Well, this method returns the group of clients viewing slide deck deckId. We’re calling the method showSlide on this group. showSlide? That’s a method we will define on the client side! Again, the server is calling the client(s) here.
  • GotoCurrentSlide calls into one client’s showSlide method.
  • GetDeckContents just fetches the Deck from a database and returns the complete object tree to the client (a sketch of that object tree follows below).
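For reference: the sample’s Deck object tree is not shown in this post, but its shape can be inferred from how the clients consume it further down. A minimal sketch, not the actual DeckCast model:

public class Deck
{
    public string Title { get; set; }
    public IList<Slide> Slides { get; set; }
}

public class Slide
{
    public string Title { get; set; }
    public IList<string> Bullets { get; set; }
    public string Quote { get; set; }
}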

Let’s continue with the web client side!

Creating the web client side

Assuming you have installed SignalR.JS and that you have already referenced jQuery, add the following two script references to your view:

<script type="text/javascript" src="@Url.Content("~/Scripts/jquery.signalR.min.js")"></script>
<script type="text/javascript" src="/signalr/hubs"></script>

That first reference is just the SignalR client library. The second reference is a reference to SignalR’s metadata endpoint: it contains information about available Hubs on the server side. The client library will use that metadata to connect to a Hub and maintain the persistent connection between client and server.

The viewer of a presentation will then have to connect to the Hub on the server side. This is probably the easiest piece of code you’ve ever seen (next to Hello World):

<script type="text/javascript">
    $(function () {
        // SignalR hub initialization
        var presentation = $.connection.presentation;
        $.connection.hub.start();
    });
</script>

We’ve just established a connection with the PresentationHub on the server side. We also start the hub connection (one call, even if you are connecting to multiple hubs at once). Of course, the code above will not do a lot. Let’s add some more body.

<script type="text/javascript">
    $(function () {
        // SignalR hub initialization
        var presentation = $.connection.presentation;
        presentation.showSlide = function (slideId) {
            $.deck('go', slideId);
        };
    });
</script>

Remember the showSlide method we were calling from the server? This is the one. We’re allowing SignalR to call into the JavaScript method showSlide from the server side. This will call into Deck.JS and advance the presentation. The only thing left to do is tell the server which slide deck we are interested in. We do this immediately after the connection to the hub has been established using a callback function:

<script type="text/javascript">
    $(function () {
        // SignalR hub initialization
        var presentation = $.connection.presentation;
        presentation.showSlide = function (slideId) {
            $.deck('go', slideId);
        };
        $.connection.hub.start(function () {
            // Deck initialization
            $.extend(true, $.deck.defaults, {
                keys: {
                    next: 0,
                    previous: 0,
                    goto: 0
                }
            });
            $.deck('.slide');

            // Join presentation
            presentation.join('@Model.DeckId', function () {
                presentation.gotoCurrentSlide();
            });
        });
    });
</script>

Cool, no? The presenter side of things is very similar, except that it also calls into the server’s GotoSlide method.

Let’s see if we can convert this JavaScript code into some C# as well. I promised to add a Console application to the mix to show you SignalR is not just about web. It’s also about desktop apps, Silverlight apps and Windows Phone apps. And since it’s open source, I also expect someone to contribute client libraries for Android or iPhone. David, if you are reading this: you have work to do ;-)

Creating the console client side

Install the NuGet package SignalR.Client. This will add the required client library for the console application. If you’re creating a Windows Phone client, SignalR.WP71 will do (Mango).

The first thing to do would be connecting to SignalR’s Hub metadata and creating a proxy for the presentation Hub. Note that I am using the root URL this time (which is enough) and the full type name for the Hub (important!).

var connection = new HubConnection("http://localhost:56285/");
var presentationHub = connection.CreateProxy("DeckCast.Hubs.PresentationHub");
dynamic presentation = presentationHub;

Note that I’m also using a dynamic representation of this proxy. It may facilitate the rest of my code later.

Next up: connecting our client to the server. SignalR makes extensive use of the Task Parallel Library (TPL) and there’s no escaping it for you. Start the connection and, if something fails, let the user know:

connection.Start()
    .ContinueWith(task =>
    {
        if (task.IsFaulted)
        {
            System.Console.ForegroundColor = ConsoleColor.Red;
            System.Console.WriteLine("There was an error connecting to the DeckCast presentation hub.");
        }
    });

Just like with the JavaScript client, we have to join the presentation of our choice. Again, the TPL is used here but I’m explicitly telling it to Wait for the result before continuing my code.

presentationHub.Invoke("Join", deckId).Wait();

Because the console client has no notion of the Deck class and its Slide objects, let’s fetch the slide contents into a dynamic object again. Here’s how:

var getDeckContents = presentationHub.Invoke<dynamic>("GetDeckContents", deckId);
getDeckContents.Wait();

var deck = getDeckContents.Result;

We also want to respond to the showSlide method. Since there’s no means of defining that method on the client in C# in the same fashion as we did on the JavaScript side, let’s simply use the On method exposed by the hub proxy. It subscribes to a server-side event (such as showSlide) and takes action whenever that occurs. Here’s the code:

presentationHub.On<int>("showSlide", slideId =>
{
    System.Console.Clear();
    System.Console.ForegroundColor = ConsoleColor.White;
    System.Console.WriteLine("");
    System.Console.WriteLine(deck.Slides[slideId].Title);
    System.Console.WriteLine("");

    if (deck.Slides[slideId].Bullets != null)
    {
        foreach (var bullet in deck.Slides[slideId].Bullets)
        {
            System.Console.WriteLine(" * {0}", bullet);
            System.Console.WriteLine("");
        }
    }

    if (deck.Slides[slideId].Quote != null)
    {
        System.Console.WriteLine(" \"{0}\"", deck.Slides[slideId].Quote);
    }
});

We also want to move to the current slide:

presentationHub.Invoke("GotoCurrentSlide").Wait();

There we go. The presenter can now switch between slides, and all clients, both web and console, will be informed and updated accordingly.

[Image: SignalR console client]

Conclusion

SignalR offers a relatively easy to use abstraction over the various bidirectional connection paradigms introduced on the web. The fact that it’s open source and features clients for JavaScript, .NET, Silverlight and Windows Phone makes it, in my opinion, a viable alternative for applications where you would typically use polling or a bidirectional WCF communication channel. Even WCF RIA Services should be a little bit afraid of SignalR, as it’s lean and mean!

[edit] There's the Objective-C client: https://github.com/DyKnow/SignalR-ObjC

The sample code for this blog post can be downloaded here: DeckCast.zip (291.58 kb)

Publishing symbol packages for a MyGet feed

Ever since NuGet 1.2, there has been a great way for NuGet package authors to let their users debug into the package’s binaries. With almost no additional effort, package authors can publish their symbols and sources, and package consumers can debug into them from Visual Studio, simply by pushing a symbols package in addition to the standard NuGet package.

Today, we’re proud to announce that MyGet has partnered with SymbolSource.org to offer an easy workflow to publish symbol packages for a private MyGet feed. This means from now on you can publish symbol packages for your private feeds as well!

On a sidenote: we're sharing API keys between both services. If you also want to share the same password with both services, simply go to your MyGet profile page and re-enter your password. We'll keep it in sync after that.

Publishing a symbols package for use with MyGet

As I will assume you are used to publishing packages to NuGet and SymbolSource, here’s what changes. First of all, you will require the URLs to which to publish. Log in to MyGet and browse to your feed details. The Feed Details tab will give you all the information you need, as you can see in the following screenshot:

[Image: MyGet feed details]

In short, your feed URL remains the same. If you want to consume your private feed in Visual Studio or using the NuGet Package Manager Console, simply add http://www.myget.org/F/yourfeedname as the source. The thing that changed is the publish URL: if you want to publish your packages to MyGet, use the URL http://www.myget.org/F/yourfeedname/api/v1 as the publish URL. For symbol packages, your URL will be in the form of http://nuget.gw.symbolsource.org/MyGet/yourfeedname.

The publish workflow to publish SamplePackage.1.0.0.nupkg to a MyGet feed, including symbols, would be to issue the following two commands from the console:

nuget push SamplePackage.1.0.0.nupkg 00000000-0000-0000-0000-00000000000 -Source http://www.myget.org/F/somefeed/api/v1

nuget push SamplePackage.1.0.0.Symbols.nupkg 00000000-0000-0000-0000-00000000000 -Source http://nuget.gw.symbolsource.org/MyGet/somefeed

An example of these commands can also be found on the Feed Details tab for your MyGet feed.
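If you have not created a symbols package yet: the -Symbols flag on nuget pack produces both the regular package and the symbols package in one go (hypothetical package name):

nuget pack SamplePackage.nuspec -Symbols

This yields SamplePackage.1.0.0.nupkg and SamplePackage.1.0.0.Symbols.nupkg, ready to be pushed using the two commands above.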

Consuming symbol packages in Visual Studio

When logging in to MyGet, you can find the symbols URL compatible with Visual Studio under the Feed Details tab for your MyGet feed. This URL will be the same for all feeds you are allowed to consume, so no need to configure 10+ symbol servers in Visual Studio. Here’s how to configure it.

First of all, Visual Studio will typically only debug your own source code: the code of the project or projects that are currently opened in Visual Studio. To instruct Visual Studio to also try to debug code other than the projects that are currently opened, open the Options dialog (under the menu Tools > Options). Find the Debugging node on the left and click the General node underneath. Turn off the option Enable Just My Code and turn on the option Enable source server support. The latter usually triggers a warning message, but it is safe to just click Yes and continue with the settings specified.

[Image: MyGet symbol server in Visual Studio]

Keep the Options dialog open and find the Symbols node under the Debugging node on the left. In that dialog, add the symbol server URL for your MyGet feed: http://srv.symbolsource.org/pdb/MyGet/username/11111111-1111-1111-1111-11111111111. After that, click OK to confirm the configuration changes and consume symbols for NuGet packages.

Enjoy!

Setting up a NuGet repository in seconds: MyGet public feeds

A few months ago, my colleague Xavier Decoster and I introduced MyGet as a tool where you can create your own, private NuGet feeds. A couple of weeks later we introduced some options to delegate feed privileges to other MyGet users allowing you to make another MyGet user “co-admin” or “contributor” to a feed. Since then we’ve expanded our view on the NuGet ecosystem and moved MyGet from a solution to create your private feeds to a service that allows you to set up a NuGet feed, whether private or public.

Supporting public feeds allows you to set up a structure similar to www.nuget.org: you can give any user privileges to publish a package to your feed while the user can never manage other packages on your feed. This is great in several scenarios:

  • You run an open source project and want people to contribute modules or plugins to your feed
  • You are a business and you want people to contribute internal packages to your feed whilst prohibiting them from updating or deleting other packages

Setting up a public feed

Setting up a public feed on MyGet is similar to setting up a private feed. In fact, both are identical except for the default privileges assigned to users. Navigate to www.myget.org and sign in using an identity provider of choice. Next, create a feed, for example:

[Image: Create a MyGet NuGet feed and host your own NuGet packages]

This new feed may be named “public”, however it is only private by obscurity: anyone who knows the URL to the feed can consume packages from it. Let’s change that. Go to the “Feed Security” tab and have a look at the assigned privileges for Everyone. By default, these are set to “Can consume this feed”, meaning that everyone can add the feed URL to Visual Studio and consume packages. Other options are “No access” (requires authentication prior to being able to consume the feed) and “Can contribute own packages to this feed”. This last one is what we want:

[Image: Setting up a NuGet feed]

Assigning the “Can contribute own packages to this feed” privilege to a specific user or to everyone means that the user (or everyone) will be able to contribute packages to the feed, as long as the package id used is not already on the feed or, if it is, as long as that package id was originally submitted by the same user. Exactly the same model as www.nuget.org, that is.

For reference, all available privileges are:

  • Has no access to this feed (speaks for itself)
  • Can consume this feed (allows the user to use the feed in Visual Studio / NuGet)
  • Can contribute own packages to this feed (allows the user to contribute packages, but he can only update and remove his own packages and not those of others)
  • Can manage all packages for this feed (allows the user to add packages to the feed via the website and via the NuGet push API)
  • Can manage users and all packages for this feed (extends the above with feed privilege management capabilities)

Contributing to a public feed

Of course, if you have a public feed you may want to have people contributing to it. This is very easy: provide them with a link to your feed editing page (for example, http://www.myget.org/Feed/Edit/public). Users can publish their packages via the MyGet user interface in no time.

If you want to have users push packages using nuget.exe or NuGet Package Explorer, provide them a link to the feed endpoint (for example, http://www.myget.org/F/public/). Using their API key (which can be found in the MyGet profile for the user) they can push packages to the public feed from any API consumer.
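As an example, pushing a package to the public feed above from the command line could look like this (package name and API key are made up; the publish URL follows the scheme described earlier):

nuget push MyAwesomePackage.1.0.0.nupkg 00000000-0000-0000-0000-000000000000 -Source http://www.myget.org/F/public/api/v1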

Enjoy!

 

PS: We’re working on lots more, but will probably provide that in a MyGet Premium version. Make sure to subscribe to our newsletter on www.myget.org if this is of interest.

NuGet push... to Windows Azure

When looking at how people like to deploy their applications to a cloud environment, a large faction seems to prefer being able to use their source control system as a source for their production deployment. While interesting, I see a lot of problems there: your source code may not run immediately and probably has to be compiled. You don’t want to maintain compiled assemblies in source control, right? Also, maybe some QA process is in place where a deployment can only occur after approval. Why not use source control for what it’s there for: source control? And how about using a NuGet repository as the source for our deployment? Meet the Windows Azure NuGetRole.

Disclaimer/Warning: this is demo material and should probably not be used for real-life deployments without making it bullet proof!

Download the sample code: NuGetRole.zip (262.22 kb)

How to use it

If you compile the source code (download), you have three steps left in getting your NuGetRole running on Windows Azure:

  • Specify the package source to use
  • Add some packages to the package source feed (which you can easily host on MyGet)
  • Deploy to Windows Azure

When all these steps have been taken care of, the NuGetRole will download all latest package versions from the package source specified in ServiceConfiguration.cscfg:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="NuGetRole.Azure"
                      xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
                      osFamily="1"
                      osVersion="*">
  <Role name="NuGetRole.Web">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="PackageSource" value="http://www.myget.org/F/nugetrole/" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Packages you publish should only contain a content and/or lib folder. Other package contents will currently be ignored by the NuGetRole. If you want to add some web content like a default page to your role, simply publish the following package:

[Image: NuGet Package Explorer showing the package]
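For reference, a package like this could be produced from a .nuspec along these lines (id and file names are made up for this example):

<?xml version="1.0"?>
<package>
  <metadata>
    <id>NuGetRole.DefaultPage</id>
    <version>1.0.0</version>
    <authors>Maarten Balliauw</authors>
    <description>A default page for the NuGetRole demo.</description>
  </metadata>
  <files>
    <!-- content\ files end up in the web root, lib\ files in bin\ -->
    <file src="index.html" target="content" />
  </files>
</package>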

Just push, and watch your Windows Azure web role farm update its contents. Or have your build server push a NuGet package containing your application and have your server farm update itself. Whatever pleases you.

How it works

What I did was create a fairly empty Windows Azure project (download). In this project, one web role exists. This web role consists of nothing but a Web.config file and a WebRole.cs class which looks like the following:

public class WebRole : RoleEntryPoint
{
    private bool _isSynchronizing;
    private PackageSynchronizer _packageSynchronizer = null;

    public override bool OnStart()
    {
        var localPath = Path.Combine(Environment.GetEnvironmentVariable("RdRoleRoot") + "\\approot");

        _packageSynchronizer = new PackageSynchronizer(
            new Uri(RoleEnvironment.GetConfigurationSettingValue("PackageSource")), localPath);

        _packageSynchronizer.SynchronizationStarted += sender => _isSynchronizing = true;
        _packageSynchronizer.SynchronizationCompleted += sender => _isSynchronizing = false;

        RoleEnvironment.StatusCheck += (sender, args) =>
        {
            if (_isSynchronizing)
            {
                args.SetBusy();
            }
        };

        return base.OnStart();
    }

    public override void Run()
    {
        _packageSynchronizer.SynchronizeForever(TimeSpan.FromSeconds(30));

        base.Run();
    }
}

The above code essentially wires some configuration values, like the local web root and the NuGet package source to use, to a second class in this project: the PackageSynchronizer. This class checks the specified NuGet package source on a fixed interval (every 30 seconds in this sample), checks for the latest package versions and, if required, updates content and bin files. Each synchronization run does the following:

public void SynchronizeOnce()
{
    var packages = _packageRepository.GetPackages()
        .Where(p => p.IsLatestVersion == true).ToList();

    var touchedFiles = new List<string>();

    // Deploy new content
    foreach (var package in packages)
    {
        var packageHash = package.GetHash();
        var packageFiles = package.GetFiles();
        foreach (var packageFile in packageFiles)
        {
            // Keep filename, mapping content\ to the web root and lib\ to bin\
            var packageFileName = packageFile.Path.Replace("content\\", "").Replace("lib\\", "bin\\");

            // Mark file as touched
            touchedFiles.Add(packageFileName);

            // Do not overwrite content that has not been updated
            if (!_packageFileHash.ContainsKey(packageFileName) || _packageFileHash[packageFileName] != packageHash)
            {
                _packageFileHash[packageFileName] = packageHash;

                Deploy(packageFile.GetStream(), packageFileName);
            }
        }
    }

    // Remove obsolete content (after all packages have been processed,
    // so files belonging to other packages are not removed by accident)
    var obsoleteFiles = _packageFileHash.Keys.Except(touchedFiles).ToList();
    foreach (var obsoletePath in obsoleteFiles)
    {
        _packageFileHash.Remove(obsoletePath);
        Undeploy(obsoletePath);
    }
}

Or in human language:

  • The specified NuGet package source is checked for packages
  • Every package marked “IsLatest” is downloaded and deployed onto the machine
  • Files that have not been used in the current synchronization step are deleted
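One piece not shown above is the SynchronizeForever method called from Run(). Conceptually it is just a loop around SynchronizeOnce; a minimal sketch (event raising and proper error logging omitted):

public void SynchronizeForever(TimeSpan interval)
{
    while (true)
    {
        try
        {
            SynchronizeOnce();
        }
        catch
        {
            // Swallow errors and retry on the next run;
            // a real implementation should log these.
        }

        Thread.Sleep(interval);
    }
}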

This is probably not a bullet-proof solution, but I wanted to show you how easy it is to use NuGet not only as a package manager inside Visual Studio, but also from your code: NuGet is not just a package manager but in essence a package management protocol. Which you can easily extend.
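To illustrate that point: this is about all it takes to query any NuGet-compatible feed from plain C#. A sketch assuming the NuGet.Core package is referenced (the same IPackageRepository abstraction the PackageSynchronizer uses above):

// List the latest version of every package on a feed.
var repository = PackageRepositoryFactory.Default.CreateRepository(
    "http://www.myget.org/F/nugetrole/");

foreach (var package in repository.GetPackages().Where(p => p.IsLatestVersion))
{
    Console.WriteLine("{0} {1}", package.Id, package.Version);
}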

One thing to note: I also made the Windows Azure load balancer ignore the role that’s updating itself. This means a role instance that is synchronizing its contents will never be available in the load balancing pool, so no traffic is sent to the role instance during an update.

A client side Glimpse to your PHP application

A few months ago, the .NET world was surprised with a magnificent tool called “Glimpse”. Today I’m pleased to release a first draft of a PHP version of Glimpse! Now what is this Glimpse thing… Well: "what Firebug is for the client, Glimpse does for the server... in other words, a client side Glimpse into what's going on in your server."

For a quick demonstration of what this means, check the video at http://getglimpse.com/. Yes, it’s a .NET based video but the idea behind Glimpse for PHP is the same. And if you do need a PHP-based one, check http://screenr.com/27ds (warning: unedited :-))

Fundamentally Glimpse is made up of 3 different parts, all of which are extensible and customizable for any platform:

  • Glimpse Server Module
  • Glimpse Client Side Viewer
  • Glimpse Protocol

This means any server technology that provides support for the Glimpse protocol can provide the Glimpse Client Side Viewer with information. And that’s what I’ve done.

What can I do with Glimpse?

A lot of things. The most basic usage of Glimpse would be enabling it and inspecting your requests by hand. Here’s a small view on the information provided:

[Image: Glimpse phpinfo()]

By default, Glimpse offers you a glimpse into the current Ajax requests being made, your PHP Configuration, environment info, request variables, server variables, session variables and a trace viewer. And then there’s the remote tab, Glimpse’s killer feature.

When configuring Glimpse through www.yoursite.com/?glimpseFile=Config, you can specify a Glimpse session name. If you do that on a separate device, for example a customer’s browser or a mobile device you are working with, you can distinguish remote sessions in the remote tab. This allows debugging requests that are being made live on other devices! A full description is over at http://getglimpse.com/Help/Plugin/Remote.

[Image: PHP debugging in a mobile browser]

Adding Glimpse to your PHP project

Installing Glimpse in a PHP application is very straightforward. Glimpse supports PHP 5.2 and higher.

  • For PHP 5.2, copy the source folder of the repository to your server and add <?php include '/path/to/glimpse/index.php'; ?> as early as possible in your PHP script.
  • For PHP 5.3, copy the glimpse.phar file from the build folder of the repository to your server and add <?php include 'phar://path/to/glimpse.phar'; ?> as early as possible in your PHP script.

Here’s an example of the Hello World page shown above:

<?php
require_once 'phar://../build/Glimpse.phar';
?>
<html>
  <head>
    <title>Hello world!</title>
  </head>

  <?php Glimpse_Trace::info('Rendering body...'); ?>
  <body>
    <h1>Hello world!</h1>
    <p>This is just a test.</p>
  </body>
  <?php Glimpse_Trace::info('Rendered body.'); ?>
</html>

Enabling Glimpse

Once Glimpse is installed into your web application, navigate to your web application and append the ?glimpseFile=Config query string to enable or disable Glimpse. Optionally, a client name can also be specified to distinguish remote requests.

[Image: Configuring Glimpse for PHP]

After enabling Glimpse, a small “eye” icon will appear in the bottom-right corner of your browser. Click it and behold the magic!

Now of course: anyone can potentially enable Glimpse. If you don’t want that, ensure you have some conditional mechanism around the <?php require_once 'phar://../build/Glimpse.phar'; ?> statement.

Creating a first Glimpse plugin

Not enough information on your screen? Working with Zend Framework and want to have a look at route values? Working with WordPress and want to view some hidden details about a post through Glimpse? The sky is the limit. All there is to it is creating a Glimpse plugin and registering it. Implementing Glimpse_Plugin_Interface is enough:

<?php
class MyGlimpsePlugin
    implements Glimpse_Plugin_Interface
{
    public function getData(Glimpse $glimpse) {
        $data = array(
            array('Included file path')
        );

        foreach (get_included_files() as $includedFile) {
            $data[] = array($includedFile);
        }

        return array(
            "MyGlimpsePlugin" => count($data) > 0 ? $data : null
        );
    }

    public function getHelpUrl() {
        return null; // or the URL to a help page
    }
}
?>

To register the plugin, add a call to $glimpse->registerPlugin():

<?php
$glimpse->registerPlugin(new MyGlimpsePlugin());
?>

And Bob’s your uncle:

[Image: Creating a Glimpse plugin in PHP]

Now what?

Well, it’s up to you. First of all: all feedback is welcome. Second of all: this is on GitHub (https://github.com/Glimpse/Glimpse.PHP). Feel free to fork and extend! Feel free to contribute plugins, core features, whatever you like! Have a lot of CakePHP projects? Why not contribute a plugin that provides a Glimpse at CakePHP diagnostics?

‘Till next time!

Version 4 of the Windows Azure SDK for PHP released

Only a few months after the Windows Azure SDK for PHP 3.0.0, Microsoft and RealDolmen are proud to present the next version of the most complete SDK for Windows Azure out there (yes, that is a rant against the .NET SDK!): the Windows Azure SDK for PHP. We’ve been working very hard with an expanding, globally distributed team on getting this version out.

The Windows Azure SDK 4 contains some significant feature enhancements. For example, it now incorporates a PHP library for accessing Windows Azure storage, a logging component, a session sharing component and clients for both the Windows Azure and SQL Azure Management APIs. On top of that, all of these APIs are now also available from the command line, both on Windows and on Linux. This means you can batch-script a complete datacenter setup including servers, storage, SQL Azure, firewalls, … If that’s not cool, move to the North Pole.

Here’s the official change log:

  • New feature: Service Management API support for SQL Azure
  • New feature: Service Management APIs exposed as command-line tools
  • New feature: Microsoft_WindowsAzure_RoleEnvironment for retrieving environment details
  • New feature: Package scaffolders
  • Integration of the Windows Azure command-line packaging tool
  • Expansion of the autoloader class increasing performance
  • Several minor bugfixes and performance tweaks

Some interesting links on some of the new features:

Also keep an eye on www.sdn.nl where I’ll be posting an article on scripting a complete application deployment to Windows Azure, including SQL Azure, storage and firewalls.

And finally: keep an eye on http://azurephp.interoperabilitybridges.com and http://blogs.technet.com/b/port25/. I have a feeling some cool stuff may be coming following this release...

Copy packages from one NuGet feed to another

[Image: Copy packages from one NuGet feed to another - MyGet NuGet Server]

Yesterday, a funny discussion was going on at the NuGet Discussion Forum on CodePlex. Funny, you say? Well yes. Funny because it was about a feature we envisioned as a must-have for the NuGet ecosystem: copying packages from one NuGet feed to another. And funny because we already have that feature present in MyGet. You may wonder why anyone would want to do that. Allow me to explain.

Scenarios where copying packages makes sense

The first scenario is feed stability. Imagine you are building a project and expect to always reference a NuGet package from the official feed. That’s OK as long as you have that package present in the NuGet feed, but what happens if someone removes it or updates it without respecting proper versioning? This should not happen, but it can be an unpleasant surprise if it happens. Copying the package to another feed provides stability: the specific package version is available on that other feed and will never change unless you update or remove it. It puts you in control, not the package owner.

A second scenario: enhanced speed! It’s still much faster to pull packages from a local feed or a feed that’s geographically distributed, like the one MyGet offers (US and Europe at the moment). This is not to bash any carriers or network providers, it’s just physics: electrons don’t travel that fast and it’s better to have them coming from a closer location.

But… how to do it? Client side

There are some solutions to this problem/feature. The first one is a hard one: write a script that just pulls packages from the official feed. You’ll find a suggestion on how to do that here. This thing however does not pull along dependencies and forces you to do ugly, user-unfriendly things. Let’s go for beauty :-)

Rob Reynolds (aka @ferventcoder) added some extension sauce to NuGet.exe:

NuGet.exe Install /ExcludeVersion /OutputDir %LocalAppData%\NuGet\Commands AddConsoleExtension
NuGet.exe addextension nuget.copy.extension
NuGet.exe copy castle.windsor -destination http://myget.org/F/somefeed

Sweet! And Rob also shared how he created this extension (warning: interesting read!)

But… how to do it? Server side

The easiest solution is to just use MyGet! We have a nifty feature in there named “Mirror packages”. It copies the selected package to your private feed, distributes it across our CDN nodes for a fast download and it pulls along all dependencies.

[Image: Mirror a NuGet package - Copy a NuGet package]

Enjoy making NuGet a component of your enterprise workflow! And MyGet of course as well!

Delegate feed privileges to other users on MyGet

One of the first features we had envisioned for MyGet, and one which seemed increasingly popular, was the ability to provide other users a means of managing packages on another user’s feed.

As of today, we’re proud to announce the following new features:

  • Delegating feed privileges to other users – This allows you to make another MyGet user “co-admin” or “contributor” to a feed. This eases management of a private feed as that work can be spread across multiple people.
  • Making private feeds private by requiring authentication – It’s now possible to configure a feed so that nobody can consult its list of packages unless a valid login is provided. This feature is not yet available for use with NuGet 1.4.
  • Global deployment – We’ve updated our deployment so managing feeds can now be done on a server that’s closer to you.

Now when is Microsoft going to buy us out :-)

Delegating feed privileges to other users

MyGet now allows you to make another MyGet user “co-admin” or “contributor” to a feed. This eases management of a private feed, as that work can be spread across multiple people. If combined with the “private feeds” option, it’s also possible to give some users read access to the feed while unauthenticated users cannot access the feed at all.

To delegate privileges to a user, navigate to the feed details and click the Feed security tab. This tab allows you to change feed privileges for different users. Adding feed privileges can be done by clicking the Add feed privileges… button (duh!).

[Image: Add MyGet feed privileges]

Available privileges are:

  • Has no access to this feed (speaks for itself)
  • Can consume this feed (allows the user to use the feed in Visual Studio / NuGet)
  • Can manage packages for this feed (allows the user to add packages to the feed via the website and via the NuGet push API)
  • Can manage users and packages for this feed (extends the above with feed privilege management capabilities)

After selecting the privileges, the user receives an e-mail in which he/she can claim the acquired privileges:

[Image: Claim MyGet feed privileges]

Privileges are not granted immediately: after assigning privileges, the user has to claim them by clicking a link in an automated e-mail that has been sent.

Making private feeds private by requiring authentication

It’s now possible to configure a feed so that nobody can consult its list of packages unless a valid login is provided. Combined with the feed privilege delegation feature, one can granularly control who can and who cannot consume a feed from MyGet. Note that this feature is not yet available for use with NuGet 1.4; we hope to see support for this ship with NuGet 1.5.

In order to enable this feature, on the Feed security tab change feed privileges for Everyone to Has no access to this feed.

[Image: NuGet feed authentication]

This will instruct MyGet to request basic authentication when someone accesses a MyGet feed. For example, try our sample feed: http://www.myget.org/F/mygetsample/

Global deployment

We’ve updated our deployment so managing feeds can now be done on a server that’s closer to you. Currently we have a deployment running in a European datacenter and one in the US. We hope to expand this further as well as leverage a content delivery network for high-speed distribution of packages.


We need your opinion!

As features keep popping into our heads, the spare time we have to work on MyGet is no longer enough. To support some extra development, we are thinking along the lines of introducing a premium version which you can host in your own datacenter or on a dedicated cloud environment. We would love some feedback on the following survey:

Enabling conditional Basic HTTP authentication on a WCF OData service

Yes, a long title, but also something I was not able to find too easily using Google. Here’s the situation: for MyGet, we are implementing basic authentication on the OData feed serving available NuGet packages. If you recall my post Using dynamic WCF service routes, you may have deduced that MyGet uses that technique to have one WCF OData service serving the feeds of all our users. It’s just convenient! Unless you want basic HTTP authentication for some feeds and not for others…

After doing some research, I figured the easiest way to resolve this was to use WCF message inspection. Convenient, but how would you go about this? And moreover: how do you make it extensible so we can use this for other WCF OData (or WCF Web API) services in the future?

The solution to this was to create a message inspector (IDispatchMessageInspector). Here’s the implementation we created for MyGet (disclaimer: this will only work for OData services and WCF Web API!):


public class ConditionalBasicAuthenticationMessageInspector : IDispatchMessageInspector
{
    protected IBasicAuthenticationCondition Condition { get; private set; }
    protected IBasicAuthenticationProvider Provider { get; private set; }

    public ConditionalBasicAuthenticationMessageInspector(
        IBasicAuthenticationCondition condition, IBasicAuthenticationProvider provider)
    {
        Condition = condition;
        Provider = provider;
    }

    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        // Determine HttpContextBase
        if (HttpContext.Current == null)
        {
            return null;
        }
        HttpContextBase httpContext = new HttpContextWrapper(HttpContext.Current);

        // Is basic authentication required?
        if (Condition.Evaluate(httpContext))
        {
            // Extract credentials
            string[] credentials = ExtractCredentials(request);

            // Are credentials present? If so, is the user authenticated?
            if (credentials.Length > 0 && Provider.Authenticate(httpContext, credentials[0], credentials[1]))
            {
                httpContext.User = new GenericPrincipal(
                    new GenericIdentity(credentials[0]), new string[] { });
                return null;
            }

            // Require authentication
            HttpContext.Current.Response.StatusCode = 401;
            HttpContext.Current.Response.StatusDescription = "Unauthorized";
            HttpContext.Current.Response.Headers.Add("WWW-Authenticate", string.Format("Basic realm=\"{0}\"", Provider.Realm));
            HttpContext.Current.Response.End();
        }

        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Noop
    }

    private string[] ExtractCredentials(Message requestMessage)
    {
        HttpRequestMessageProperty request = (HttpRequestMessageProperty)requestMessage.Properties[HttpRequestMessageProperty.Name];

        string authHeader = request.Headers["Authorization"];

        if (authHeader != null && authHeader.StartsWith("Basic"))
        {
            string encodedUserPass = authHeader.Substring(6).Trim();

            Encoding encoding = Encoding.GetEncoding("iso-8859-1");
            string userPass = encoding.GetString(Convert.FromBase64String(encodedUserPass));
            int separator = userPass.IndexOf(':');

            string[] credentials = new string[2];
            credentials[0] = userPass.Substring(0, separator);
            credentials[1] = userPass.Substring(separator + 1);

            return credentials;
        }

        return new string[] { };
    }
}

Our ConditionalBasicAuthenticationMessageInspector implements a WCF message inspector that, once a request has been received, checks the HTTP authentication headers for a basic username/password. One extra there: since we wanted conditional authentication, we have also introduced an IBasicAuthenticationCondition interface, which decides whether to invoke authentication for the current request. The authentication itself is done by calling into our IBasicAuthenticationProvider. Implementations of these can be found on our CodePlex site.
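For completeness: the shape of those two interfaces can be deduced from how the inspector uses them. Something along these lines (the actual definitions are in the linked source):

public interface IBasicAuthenticationCondition
{
    bool Evaluate(HttpContextBase context);
}

public interface IBasicAuthenticationProvider
{
    string Realm { get; }
    bool Authenticate(HttpContextBase context, string username, string password);
}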

If you are getting optimistic: great! But how do you apply this message inspector to a WCF service? No worries: you can create a behavior for that. All you have to do is create a new Attribute and implement IServiceBehavior. In this implementation, you can register the ConditionalBasicAuthenticationMessageInspector on the service endpoint. Here’s the implementation:

[AttributeUsage(AttributeTargets.Class)]
public class ConditionalBasicAuthenticationInspectionBehaviorAttribute
    : Attribute, IServiceBehavior
{
    protected IBasicAuthenticationCondition Condition { get; private set; }
    protected IBasicAuthenticationProvider Provider { get; private set; }

    public ConditionalBasicAuthenticationInspectionBehaviorAttribute(
        IBasicAuthenticationCondition condition, IBasicAuthenticationProvider provider)
    {
        Condition = condition;
        Provider = provider;
    }

    public ConditionalBasicAuthenticationInspectionBehaviorAttribute(
        Type condition, Type provider)
    {
        Condition = Activator.CreateInstance(condition) as IBasicAuthenticationCondition;
        Provider = Activator.CreateInstance(provider) as IBasicAuthenticationProvider;
    }

    public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
    {
        // Noop
    }

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
        {
            foreach (EndpointDispatcher endpointDispatcher in channelDispatcher.Endpoints)
            {
                endpointDispatcher.DispatchRuntime.MessageInspectors.Add(
                    new ConditionalBasicAuthenticationMessageInspector(Condition, Provider));
            }
        }
    }

    public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        // Noop
    }
}

One step left: apply this service behavior to our OData service. Easy! Just add an attribute to the service class and you’re done! Note that we specify the IBasicAuthenticationCondition and IBasicAuthenticationProvider implementation types on the attribute.

[ConditionalBasicAuthenticationInspectionBehavior(
    typeof(MyGetBasicAuthenticationCondition),
    typeof(MyGetBasicAuthenticationProvider))]
public class PackageFeedHandler
    : DataService<PackageEntities>,
      IDataServiceStreamProvider,
      IServiceProvider
{
}

Enjoy!