Maarten Balliauw {blog}

An autoscaling build farm using TeamCity and Windows Azure

NOTE: While the content in this blog post will still work, JetBrains now has a plugin that is the recommended way of working with TeamCity and build agents on Azure. Please check this blog post to learn more about it.

Cloud computing is often referred to as a cost saver due to its billing models. If we can move seasonal workloads to the cloud, cost reduction will follow. Whether it’s truly seasonal (e.g. a temporary high workload around the holidays) or “daily seasonal”, where the workload differs depending on the time of day, these workloads have cloud written all over them.

A workload that may be seasonal is the work done by build servers. Take TeamCity for example. A TeamCity server coordinates a pool of build agents that are either idle or compiling source code into binaries. Depending on how your team is structured and when people work, there is a big chance that the pool of build agents is doing nothing for several hours every day, except incurring cost. What if we could move the build agents to a platform like Windows Azure and have them autoscale, depending on the actual load on the build farm?

Creating a build agent virtual machine

The first step in setting this brilliant scheme in motion is to set up a build agent virtual machine. We can select any virtual machine image we want for our build agent, and even upload our own VHDs if needed. I'm selecting a Windows Server 2012 image here, but if you need a different OS for your build agent you can select that instead.

During the creation of this build agent, there is nothing special we should do. We can select a small/medium/large/extra large instance, give ourselves an administrator password and so on. The only important step here is that we set up an endpoint for the TeamCity build agent, listening on TCP port 9090.

Once the machine is started, we will have to install all required prerequisites for our build agent. We can connect using remote desktop (or SSH if it’s a Linux machine). On the machine I have here, I installed all .NET framework versions to ensure I'm able to build .NET projects. On a build machine for Java, we would install the correct runtimes and JDKs for our projects. Anything, really, that is needed for the sources we’ll be building. On Windows, I typically use Web Platform Installer and Chocolatey to automate this as much as possible.

Installing TeamCity build agent

In order for our build agent to communicate with the TeamCity server, we have to install the build agent. We can do this by navigating to our TeamCity server from within the virtual machine and use the Install Build Agents link from the Agents page.

On a Windows server, we can use the Windows Installer but we can also use Java Web Start or even simply extract a ZIP file. This last option can be useful on a Linux machine, for example.

Installing the build agent is pretty much a next, next, finish operation. The only important thing is that we run the agent as a Windows service (or have it automatically start at boot time on other operating systems). We also want to specify the URL to our TeamCity server, as well as the port on which the build agent will listen for incoming data from TeamCity. Note that this port should be the one opened as an endpoint earlier; in the case of this machine, port 9090.

Before starting the build agent, make sure the local firewall allows incoming connections. Through the Windows firewall, allow incoming connections for port 9090 (and while we’re at it, for a range of ports so we can easily clone this machine and not care about the firewall anymore).
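If you would rather script this step, a single firewall rule along these lines should do the trick from an elevated command prompt (the port range is just a suggestion, adjust it to your liking):

netsh advfirewall firewall add rule name="TeamCity build agent" dir=in action=allow protocol=TCP localport=9090-9099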

If we now start the build agent service, it should connect to our TeamCity server. Under the agents tab, we should be seeing a new unauthorized agent popping up. If that works, we’re good to go with our build agent farm.

Don’t shut down the machine just yet, we still need to prepare it for creating a build agent image.

Creating a build agent image

While still connected through remote desktop, open a command prompt and run the sysprep /generalize command from the c:\windows\system32\sysprep folder. On Linux, there’s a similar option in the Windows Azure agent. Sysprep ensures the machine can be cloned into a new machine, with each clone getting its own settings like a hostname and IP address. A machine that hasn’t been sysprepped can therefore not be used as a template for clones.

Once finished, our RDP connection should be gone and our machine can be shut down in the Windows Azure Management Portal. In fact, it must be shut down. Once that is done, we can use the Capture button and transform our virtual machine into a template we can create new virtual machines from.

The capturing process takes a couple of minutes and, once it is done, there is no more build agent virtual machine to be found in the Windows Azure Management Portal. Is that bad? No: we can now start cloning the machine and create multiple copies, all having the exact same configuration and components installed.

Setting up multiple build agent machines

The next thing we want to have is multiple build agent machines. From the Windows Azure Management Portal, we can create them, not based on a platform image but using the image captured during the previous step.

The virtual machine configuration can be whatever we want. Do we want extra small instances or extra large? It’s all up to us and our credit card. On the next page, we have to specify some more important details. First, we have to select or create a cloud service. This will be the DNS host name under which all of our build agents are going to live. We also have to specify the availability set, in essence a setting telling Windows Azure to never unplug power or networking for all machines in this set at the same time.

We will be creating a couple of machines, so it’s important to get the next page right. Since all our machines will share the same DNS name and public IP address to the outside world, our build agents have to listen on different TCP ports. Make sure that the first agent maps external port 9090 to internal port 9090, the second one 9091 to 9091, and so on. Not doing this will mess with your mind afterwards when troubleshooting.

Finish the process, let Windows Azure start the machine, and repeat for every additional build agent. Important: same cloud service, same availability set and correct endpoint mappings!

Configuring the build agents

Once we have several machines running, we have to connect to them using remote desktop again. This can be done through the portal. Once in, locate the build agent configuration file (c:\BuildAgent\conf\buildAgent.properties in a default installation) and set the port number on which it listens to the one that was mapped as an external endpoint. Again, agent one will listen on port 9090, agent number two on 9091 and so on. We can also set a better name for the build agent; in my case I’ve chosen to go with “agent2”. Very inspirational and all.
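For reference, the relevant entries in buildAgent.properties for that second agent would look something like this (the server URL is a placeholder for your own TeamCity instance):

serverUrl=http://<your-teamcity-server>/
name=agent2
ownPort=9091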

Save and restart the build agent (or the machine). The TeamCity server should now start listing all build agents.

Make sure to authorize them all, as we want to be sure they can connect to the TeamCity server later on. Once that has been done and all build agents are listed here, we can shut them all down except for one. We want to have something running, right?

Configuring autoscaling

You may have been wondering why we had to move all these machines under the same cloud service. The reason is simple: we want to autoscale our farm, and this can only be done within one cloud service. From the cloud service, click the Scale tab and start configuring.

For this post, I’ve chosen the following values:

• Autoscale based on CPU
• Have a minimum of one instance, and a maximum of, well… all of them.
• The target CPU range is 0 to 10. For a production environment this will typically be between 60 and 80 or 40 and 80, depending on the chosen machine size for the build agents. Windows Azure will trigger an autoscaling operation when we go outside this range; a small, low range means a scale operation is triggered much faster, while bigger numbers mean it is slower to respond.
• The scale up by and scale down by values, as well as the number of minutes to wait after the previous operation, are up to you. If you want a build agent to remain online for 30 minutes after it has been started, even if CPU usage drops, set the wait to 30 minutes. If 2 machines should be started at once, increase that number as well.

Scaling will happen based on the average CPU percentage of all running machines. If our builds run at 100% CPU all the time on our agent we can set the thresholds a bit higher. If builds are only taking 20% we might want to run multiple agents on one machine or decrease the scaling thresholds a bit. Want to measure CPU utilization for a given build? Better read up on the TeamCity Performance Monitor then.

Putting it to the test

Putting it to the test shouldn’t be that hard. Start some builds and make sure the agent gets loaded with builds. Once we hit the CPU threshold, Windows Azure will launch a virtual machine that was previously turned off.

Once it has booted, we will also see it surface on the TeamCity server.

Once the load goes down again, Windows Azure will shut down machines that are below the thresholds and make sure they don’t incur costs any longer, which is pretty impressive!

If a development team triggers a massive amount of builds during the day, Windows Azure will pretty soon scale out to a higher number of virtual build agents. And at night, when only some builds are being triggered, it will scale back to fewer instances. If, for example, we manage to run machines only for 12 hours instead of 24 hours a day, our build farm’s cost is cut in half.

TeamCity’s architecture, as well as the way Windows Azure works, makes this cost reduction possible. It’s also fun to set up, and it gives us a wide range of options (how about a Windows Server 2012 farm, a Linux farm, and so on?).

Enjoy!

Just released: MvcSiteMapProvider 4.0

After a beta version about a month ago, we are proud to release MvcSiteMapProvider 4.0 stable! (get it from NuGet, it’s fresh!) It took 6 months to complete this major version but I think our GitHub contributors have done a great job. Thank you all and especially Shad for taking the lead on this release!

MvcSiteMapProvider is a tool targeted at ASP.NET MVC that provides menus, site maps, site map path functionality, and more. It provides the ability to configure a hierarchical navigation structure using a pluggable architecture that can be XML, database, or code driven. We have moved beyond a mere ASP.NET SiteMapProvider implementation to provide support for multi-tenant applications, flexible caching, dependency injection, and several interface-based extensibility points where virtually any part of the provider can be replaced with a custom implementation.

Because sitemap nodes are based on areas, controller and action method names rather than hardcoded URL references, they are completely dynamic and follow the routing engine used in an application. Search Engine Optimization support is also provided in the form of dynamic sitemaps XML, canonical URL tags, and meta robots tags to ensure you send the search engines consistent - rather than conflicting - information about your URLs.
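As an illustration, a node in the XML configuration is declared against a controller and an action rather than a URL. A minimal, purely illustrative fragment might look like this (the root element's schema attributes are omitted here):

<mvcSiteMap>
  <mvcSiteMapNode title="Home" controller="Home" action="Index">
    <mvcSiteMapNode title="About" controller="Home" action="About" />
    <mvcSiteMapNode title="Products" controller="Products" action="Index" />
  </mvcSiteMapNode>
</mvcSiteMap>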

What has changed?

What I originally intended to do in v2 (but decided against by popular request) has now been done. The biggest change in this release is that we have stepped away from being an ASP.NET SiteMapProvider implementation. This meant a lot of code had to be rewritten, making v4 a pretty clean release. We’re not completely there yet, as we want to have unit tests for everything (and some more changes will be required for that).

Next to stepping away from the ASP.NET provider model, we’ve improved support for dependency injection. If you don’t need it, no worries. If you do need it: every component of MvcSiteMapProvider is now pluggable. A simple IoC container is used inside MvcSiteMapProvider, but you can easily use your preferred one. We’ve created several NuGet packages for popular containers: Ninject, StructureMap, Unity, Autofac and Windsor. Note that we also have packages containing just the DI modules, so you can keep using your own container setup. Read more in the documentation.

The sitemap building pipeline has changed as well. A collection of sitemap builders is used to build the sitemap hierarchy from one or more sources. The default configuration of sitemap builders includes an XML parser builder, a reflection-based builder, and a builder that implements the visitor pattern, which is used to resolve the URLs before they are cached. Both the builders and visitors can be replaced with one or more custom implementations, opening up the door to alternate data sources and alternate visitor actions. In other words, you can build the tree any way you see fit. The only limitation is that only one of the builders must decide which node is the root node of the tree (although subsequent builders may change that decision, if needed).

The Menu() helper has been rewritten to become a more performant and reliable helper (thanks for the contribution, midishero!)

A great bunch of performance enhancements and stability fixes are in as well.

Since MvcSiteMapProvider has had some significant updates going from v3 to v4, it is best to read the upgrade guide. The first part of the upgrade from v3 to v4 will be updating the NuGet package. Before, MvcSiteMapProvider only had one NuGet package. Today, it has been split into multiple packages, of which the following ones are good to know at this time:

• MvcSiteMapProvider.Web containing all views and web.config changes
• MvcSiteMapProvider.MVC<version>.Core containing the library itself

Upgrading from v3 to v4 consists of installing the correct packages for your ASP.NET MVC version:

• For MVC 2, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC2
• For MVC 3, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC3
• For MVC 4, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC4
• Note that for MVC 4 we have made it possible to upgrade MvcSiteMapProvider instead, which will pull in all required dependencies. Do know that this is not the recommended scenario and it is preferred to install MvcSiteMapProvider.MVC4 instead.

The MvcSiteMapProvider.Web update will add views and all required runtime dependencies to your project. This package is a dependency of each of the above options and generally will not need to be installed explicitly.

In .NET versions prior to .NET 4.0, one line of code should be added to the Application_Start() event of Global.asax:

MvcSiteMapProvider.DI.Composer.Compose();

Note that this code is automatically executed if using .NET 4.0 or higher by the use of WebActivator, so in most cases you will not need to call it manually.
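To make that concrete, here is a minimal sketch of what that could look like in Global.asax.cs (the class name and route registration are the usual MVC project template defaults and may differ in your project):

using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RouteConfig.RegisterRoutes(RouteTable.Routes);

        // Composes MvcSiteMapProvider's internal DI container. On .NET 4.0 and higher,
        // WebActivator takes care of this automatically, so the explicit call is only
        // needed on earlier .NET versions.
        MvcSiteMapProvider.DI.Composer.Compose();
    }
}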

What’s next?

NuGet all the things! Install the new MvcSiteMapProvider.MVCx package (replace x with your ASP.NET MVC version) and try it out! Leave your comments, ideas and pull requests on our GitHub page.

Enjoy!

And there it is - MvcSiteMapProvider v4 (beta)

It has been a while since a new major update has been done to the MvcSiteMapProvider project, but today is the day! MvcSiteMapProvider is a tool that provides flexible menus, breadcrumb trails, and SEO features for the ASP.NET MVC framework, similar to the ASP.NET SiteMapProvider model.

To be honest, I have not done a lot of the work. Thanks to the power of open source (and Shad, who did a massive job on refactoring the whole thing, thanks!), MvcSiteMapProvider v4 is around the corner.

A lot of things have changed. And by a lot, I mean A LOT! The most important change is that we’ve stepped away from the ASP.NET SiteMapProvider dependency. This has been a massive pain in the behind and source of a lot of issues. Whereas I initially planned on ditching this dependency with v3, it happened now anyway.

Other improvements have been done around dependency injection: every component in the MvcSiteMapProvider can now be replaced with custom implementations. A simple IoC container is used inside MvcSiteMapProvider but you can easily use your preferred one. We’ve created several NuGet packages for popular containers: Ninject, StructureMap, Unity, Autofac and Windsor. Note that we also have packages with the modules only so you can keep using your own container setup.

The sitemap building pipeline has changed as well. A collection of sitemap builders is used to build the sitemap hierarchy from one or more sources. The default configuration of sitemap builders includes an XML parser builder, a reflection-based builder, and a builder that implements the visitor pattern, which is used to resolve the URLs before they are cached. Both the builders and visitors can be replaced with one or more custom implementations, opening up the door to alternate data sources and alternate visitor actions. In other words, you can build the tree any way you see fit. The only limitation is that only one of the builders must decide which node is the root node of the tree (although subsequent builders may change that decision, if needed).

Next to that, a series of new helpers have been added, bugs have been fixed, the security model has been made more performant and lots more. Consider v4 as almost a rewrite for the entire project!

We’ve tried to make the upgrade path as smooth as possible but there may be some breaking changes in the provider. If you currently have the ASP.NET MVC SiteMapProvider installed in your project, feel free to give the new version a try using the NuGet package of your choice (only one is needed for your ASP.NET MVC version).

Install-Package MvcSiteMapProvider.MVC2 -Pre
Install-Package MvcSiteMapProvider.MVC3 -Pre
Install-Package MvcSiteMapProvider.MVC4 -Pre

Speaking of NuGet packages: by popular demand, the core of MvcSiteMapProvider has been extracted into a separate package (MvcSiteMapProvider.MVC<version>.Core) so that you don’t have to include views and so on in your library projects.

Please give the beta a try and let us know your thoughts on GitHub (or the comments below). Pull requests currently go in the v4 branch.

Create a list of favorite ReSharper plugins

With the latest version of the ReSharper 8 EAP, JetBrains shipped an extension manager for plugins, annotations and settings. Where it previously was a hassle and a suboptimal experience to install plugins into ReSharper, it’s really easy to do now. And what is really nice is that this extension manager is built on top of NuGet! Which means we can do all sorts of tricks…

The first thing that comes to mind is creating a personal NuGet feed containing just those plugins that are of interest to me. And where better to create such a feed than MyGet? Create a new feed, navigate to the Package Sources pane and add a new package source. There’s a preset available for using the ReSharper extension gallery!

After adding the ReSharper extension gallery as a package source, we can start adding our favorite plugins, annotations and extensions to our own feed.

Of course there are some other things we can do as well:

• “Proxy” the plugins from the ReSharper extension gallery and post your project/team/organization specific plugins, annotations and settings to your private feed. Check this post for more information.
• Push prerelease versions of your own plugins, annotations and settings to a MyGet feed. Once stable, push them “upstream” to the ReSharper extension gallery.

Enjoy!

Using Amazon Login (and LinkedIn and …) with Windows Azure Access Control

One of the services provided by the Windows Azure cloud computing platform is the Windows Azure Access Control Service (ACS). It is a service that provides federated authentication and rules-driven, claims-based authorization. It has some social providers like Microsoft Account, Google Account, Yahoo! and Facebook. But what about the other social identity providers out there? For example the newly introduced Login with Amazon, or LinkedIn? As they are OAuth2 implementations they don’t really fit into ACS.

Meet SocialSTS.com. It’s a service I created which does a protocol conversion and allows integrating ACS with other social identities. Currently it has support for integrating ACS with Twitter, GitHub, LinkedIn, BitBucket, StackExchange and Amazon. Let’s see how this works. There are 2 steps we have to take:

• Link SocialSTS with the social identity provider
• Link our ACS namespace with SocialSTS

Link SocialSTS with the social identity provider

Once an account has been created through www.socialsts.com, we are presented with a dashboard in which we can configure the social identities. Most of them require that you register your application with them and in turn, you will receive some identifiers which will allow integration.

As you can see, instructions for registering with the social identity provider are listed on the configuration page. For Amazon, we have to register an application with Amazon and configure it with the values shown on that page.

If we do this, Amazon will give us a client ID and client secret in return, which we can enter in the SocialSTS dashboard.

That’s basically all configuration there is to it. We can now add our Amazon, LinkedIn, Twitter or GitHub login page to Windows Azure Access Control Service!

Link our ACS namespace with SocialSTS

In the Windows Azure Access Control Service management dashboard, we can register SocialSTS as an identity provider. SocialSTS will provide us with a FederationMetadata.xml URL which we can copy into ACS.

We can now save this new identity provider, add some claims transformation rules through the rule groups (important!) and then start using it in our application.

Enjoy! And let me know your thoughts on this service.

Throttling ASP.NET Web API calls

Many APIs out there, such as GitHub’s API, have a concept called “rate limiting” or “throttling” in place. Rate limiting is used to prevent clients from issuing too many requests over a short amount of time to your API. For example, we can limit anonymous API clients to a maximum of 60 requests per hour, whereas we can allow more requests to authenticated clients. But how can we implement this?

Intercepting API calls to enforce throttling

Just like ASP.NET MVC, ASP.NET Web API allows us to write action filters. An action filter is an attribute that you can apply to a controller action, an entire controller and even to all controllers in a project. The attribute modifies the way in which the action is executed by intercepting calls to it. Sounds like a great approach, right?

Well… yes! Implementing throttling as an action filter would make sense, although in my opinion it has some disadvantages:

• We have to implement it as an IAuthorizationFilter to make sure it hooks into the pipeline before most other action filters. This feels kind of dirty but it would do the trick as throttling is some sort of “authorization” to make a number of requests to the API.
• It gets executed quite late in the overall ASP.NET Web API pipeline. While not a big problem, perhaps we want to skip executing certain portions of code whenever throttling occurs.
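For reference, here is a rough sketch of what such a filter could look like (a hypothetical AuthorizationFilterAttribute; the actual request counting is left out, and it is not the approach used in the rest of this post):

using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class ThrottlingFilterAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        // IsOverQuota is a hypothetical helper that would count requests per caller,
        // for example against the ASP.NET cache as shown later in this post.
        if (IsOverQuota(actionContext.Request))
        {
            // Short-circuit the action by setting a response on the context.
            actionContext.Response = actionContext.Request.CreateResponse(
                HttpStatusCode.Conflict, "You are being throttled.");
            return;
        }

        base.OnAuthorization(actionContext);
    }

    private static bool IsOverQuota(HttpRequestMessage request)
    {
        // Request counting logic would go here.
        return false;
    }
}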

So while it makes sense to implement throttling as an action filter, I would prefer plugging it earlier in the pipeline. Luckily for us, ASP.NET Web API also provides the concept of message handlers. They accept an HTTP request and return an HTTP response and plug into the pipeline quite early. Here’s a sample throttling message handler:

using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web;
using System.Web.Caching;

public class ThrottlingHandler
    : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Identify the caller; here we simply use the client IP address (IIS-hosted scenario).
        var identifier = HttpContext.Current.Request.UserHostAddress;

        long currentRequests = 1;
        long maxRequestsPerHour = 60;

        if (HttpContext.Current.Cache[string.Format("throttling_{0}", identifier)] != null)
        {
            // Increment the request counter for this caller.
            currentRequests = (long)HttpContext.Current.Cache[string.Format("throttling_{0}", identifier)] + 1;
            HttpContext.Current.Cache[string.Format("throttling_{0}", identifier)] = currentRequests;
        }
        else
        {
            // First request: start counting, expiring the counter one hour after the last request (sliding expiration).
            HttpContext.Current.Cache.Add(string.Format("throttling_{0}", identifier), currentRequests,
                                          null, Cache.NoAbsoluteExpiration, TimeSpan.FromHours(1),
                                          CacheItemPriority.Low, null);
        }

        Task<HttpResponseMessage> response;
        if (currentRequests > maxRequestsPerHour)
        {
            response = CreateResponse(request, HttpStatusCode.Conflict, "You are being throttled.");
        }
        else
        {
            response = base.SendAsync(request, cancellationToken);
        }

        return response;
    }

    protected Task<HttpResponseMessage> CreateResponse(HttpRequestMessage request, HttpStatusCode statusCode, string message)
    {
        var tsc = new TaskCompletionSource<HttpResponseMessage>();
        var response = request.CreateResponse(statusCode);
        response.ReasonPhrase = message;
        response.Content = new StringContent(message);
        tsc.SetResult(response);
        return tsc.Task;
    }
}

We have to register it as well, which we can do when our application starts:

1 config.MessageHandlers.Add(new ThrottlingHandler());

The throttling handler above isn’t ideal. It’s not very extensible, nor does it allow scaling out on a web farm, and it’s bound to being hosted in ASP.NET on IIS. It’s bad! Since there’s already a great project called WebApiContrib, I decided to contribute a better throttling handler to it.

Using the WebApiContrib ThrottlingHandler

The easiest way of using the ThrottlingHandler is by registering it using simple parameters like the following, which throttles every user at 60 requests per hour:

1 config.MessageHandlers.Add(new ThrottlingHandler(
2     new InMemoryThrottleStore(),
3     id => 60,
4     TimeSpan.FromHours(1)));

The IThrottleStore interface stores an identifier together with its current number of requests. There’s only an in-memory store available, but you can easily implement your own that writes this to a distributed cache or a database.

What’s interesting is we can change how our ThrottlingHandler behaves quite easily. Let’s give a specific IP address a better rate limit:

 1 config.MessageHandlers.Add(new ThrottlingHandler(
2     new InMemoryThrottleStore(),
3     id =>
4         {
5             if (id == "10.0.0.1")
6             {
7                 return 5000;
8             }
9             return 60;
10         },
11     TimeSpan.FromHours(1)));

Wait… Are you telling me this is all IP based? Well yes, by default. But overriding the ThrottlingHandler allows you to do funky things! Here’s a wireframe:

1 public class MyThrottlingHandler : ThrottlingHandler
2 {
3     // ...
4
5     protected override string GetUserIdentifier(HttpRequestMessage request)
6     {
7         // your user id generation logic here
8     }
9 }

By implementing the GetUserIdentifier method, we can for example return an IP address for unauthenticated users and their username for authenticated users. We can then decide on the throttling quota based on username.
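Here is a rough sketch of what such an override could look like; it assumes web hosting on IIS for the IP address fallback and reuses the constructor arguments shown earlier (the quota values are purely illustrative):

using System;
using System.Net.Http;
using System.Threading;
using System.Web;
// plus the WebApiContrib namespace(s) containing ThrottlingHandler and InMemoryThrottleStore

public class MyThrottlingHandler : ThrottlingHandler
{
    public MyThrottlingHandler()
        : base(new InMemoryThrottleStore(), id => 60, TimeSpan.FromHours(1))
    {
    }

    protected override string GetUserIdentifier(HttpRequestMessage request)
    {
        // Authenticated callers are throttled per username...
        var principal = Thread.CurrentPrincipal;
        if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated)
        {
            return principal.Identity.Name;
        }

        // ...anonymous callers fall back to their IP address (web-hosted scenario).
        object context;
        if (request.Properties.TryGetValue("MS_HttpContext", out context))
        {
            return ((HttpContextBase)context).Request.UserHostAddress;
        }

        return "anonymous";
    }
}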

Once it is in use, the ThrottlingHandler will inject two HTTP headers into every response, informing the client about the rate limit.

Enjoy! And do check out WebApiContrib, it contains almost all the extensions to ASP.NET Web API you will ever need!

Running unit tests when deploying ASP.NET to Windows Azure Web Sites

One of the well-loved features of Windows Azure Web Sites is the fact that you can simply push your ASP.NET application’s source code to the platform using Git (or TFS, or Dropbox) and those sources are compiled and deployed to your Windows Azure Web Site. If you’ve checked the management portal earlier, you may have noticed that a number of deployment steps are executed: the deployment process searches for the project file to compile, compiles it, copies the build artifacts to the web root and has your website running. But did you know you can customize this process?

[update] MSTest seems to work now as well, using the console runner from VS2012.

Customizing the build process

To get an understanding of how to customize the build process, I want to explain how this works. In the root of your repository, you can add a .deployment file containing a simple directive: which command should be run upon deployment.

1 [config]
2 command = build.bat

This command can be a batch file, a PHP file, a bash file and so on, as long as we can tell Windows Azure Web Sites what to execute. Let’s go with a batch file.

1 @echo off
2 echo This is a custom deployment script, yay!

When pushing this to Windows Azure Web Sites, you’ll see the script’s output appear in the deployment log.

In this batch file, we can use some environment variables to further customize the script:

• DEPLOYMENT_SOURCE - The initial "working directory"
• DEPLOYMENT_TARGET - The wwwroot path (deployment destination)
• DEPLOYMENT_TEMP - Path to a temporary directory (removed after the deployment)
• MSBUILD_PATH - Path to msbuild

After compiling, you can simply xcopy your application to the %DEPLOYMENT_TARGET% path and have your website live.

Generating deployment scripts

Creating deployment scripts can be a tedious job; good thing the azure-cli tools are there! Once those are installed, simply invoke the following command and have both the .deployment file as well as a batch or bash file generated:

1 azure site deploymentscript --aspWAP "path\to\project.csproj"

For reference, here’s what is generated:

 1 @echo off
2
3 :: ----------------------
4 :: KUDU Deployment Script
5 :: ----------------------
6
7 :: Prerequisites
8 :: -------------
9
10 :: Verify node.js installed
11 where node 2>nul >nul
12 IF %ERRORLEVEL% NEQ 0 (
13   echo Missing node.js executable, please install node.js, if already installed make sure it can be reached from current environment.
14   goto error
15 )
16
17 :: Setup
18 :: -----
19
20 setlocal enabledelayedexpansion
21
22 SET ARTIFACTS=%~dp0%artifacts
23
24 IF NOT DEFINED DEPLOYMENT_SOURCE (
25   SET DEPLOYMENT_SOURCE=%~dp0%.
26 )
27
28 IF NOT DEFINED DEPLOYMENT_TARGET (
29   SET DEPLOYMENT_TARGET=%ARTIFACTS%\wwwroot
30 )
31
32 IF NOT DEFINED NEXT_MANIFEST_PATH (
33   SET NEXT_MANIFEST_PATH=%ARTIFACTS%\manifest
34
35   IF NOT DEFINED PREVIOUS_MANIFEST_PATH (
36     SET PREVIOUS_MANIFEST_PATH=%ARTIFACTS%\manifest
37   )
38 )
39
40 IF NOT DEFINED KUDU_SYNC_COMMAND (
41   :: Install kudu sync
42   echo Installing Kudu Sync
43   call npm install kudusync -g --silent
44   IF !ERRORLEVEL! NEQ 0 goto error
45
46   :: Locally just running "kuduSync" would also work
47   SET KUDU_SYNC_COMMAND=node "%appdata%\npm\node_modules\kuduSync\bin\kuduSync"
48 )
49 IF NOT DEFINED DEPLOYMENT_TEMP (
50   SET DEPLOYMENT_TEMP=%temp%\___deployTemp%random%
51   SET CLEAN_LOCAL_DEPLOYMENT_TEMP=true
52 )
53
54 IF DEFINED CLEAN_LOCAL_DEPLOYMENT_TEMP (
55   IF EXIST "%DEPLOYMENT_TEMP%" rd /s /q "%DEPLOYMENT_TEMP%"
56   mkdir "%DEPLOYMENT_TEMP%"
57 )
58
59 IF NOT DEFINED MSBUILD_PATH (
60   SET MSBUILD_PATH=%WINDIR%\Microsoft.NET\Framework\v4.0.30319\msbuild.exe
61 )
62
63 ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
64 :: Deployment
65 :: ----------
66
67 echo Handling .NET Web Application deployment.
68
69 :: 1. Build to the temporary path
70 %MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\path.csproj" /nologo /verbosity:m /t:pipelinePreDeployCopyAllFilesToOneFolder /p:_PackageTempDir="%DEPLOYMENT_TEMP%";AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release
71 IF !ERRORLEVEL! NEQ 0 goto error
72
73 :: 2. KuduSync
74 echo Kudu Sync from "%DEPLOYMENT_TEMP%" to "%DEPLOYMENT_TARGET%"
75 call %KUDU_SYNC_COMMAND% -q -f "%DEPLOYMENT_TEMP%" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.deployment;deploy.cmd" 2>nul
76 IF !ERRORLEVEL! NEQ 0 goto error
77
78 ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
79
80 goto end
81
82 :error
83 echo An error has occured during web site deployment.
84 exit /b 1
85
86 :end
87 echo Finished successfully.
88 

This script does a couple of things:

• Ensure node.js is installed on Windows Azure Web Sites (needed later on for synchronizing files)
• Set up a bunch of environment variables
• Run msbuild on the project file we specified
• Use kudusync (a node.js based tool, hence node.js) to synchronize modified files to the wwwroot of our site

Try it: after pushing this to Windows Azure Web Sites, you’ll see the custom script being used. Not much added value so far, but that’s what you have to provide.

Unit testing before deploying

Unit tests would be nice! All you need is a couple of unit tests and a test runner. You can add it to your repository and store it there, or simply download it during the deployment. In my example, I’m using the Gallio test runner because it runs almost all test frameworks, but feel free to use the test runner for NUnit or xUnit instead.

Somewhere before the line that invokes msbuild and ideally in the “setup” region of the deployment script, add the following:

IF NOT DEFINED GALLIO_COMMAND (
  IF NOT EXIST "%appdata%\Gallio\bin\Gallio.Echo.exe" (
    :: Downloading unzip
    echo Downloading unzip
    curl -O http://stahlforce.com/dev/unzip.exe
    IF !ERRORLEVEL! NEQ 0 goto error

    :: Downloading Gallio (point the URL below at wherever GallioBundle-3.4.14.0.zip is hosted)
    echo Downloading Gallio
    curl -O http://example.com/GallioBundle-3.4.14.0.zip
    IF !ERRORLEVEL! NEQ 0 goto error

    :: Extracting Gallio
    echo Extracting Gallio
    unzip -q -n GallioBundle-3.4.14.0.zip -d %appdata%\Gallio
    IF !ERRORLEVEL! NEQ 0 goto error
  )

  :: Set Gallio runner path
  SET GALLIO_COMMAND=%appdata%\Gallio\bin\Gallio.Echo.exe
)

See what happens there? We check whether the system on which your files are stored in Windows Azure Web Sites already has a copy of the Gallio.Echo.exe test runner. If not, we download a tool which allows us to unzip, then download and extract the entire Gallio test runner. As a final step, the %GALLIO_COMMAND% variable is populated with the full path to the test runner executable.

Right before the line that calls “kudusync”, add the following:

1 echo Running unit tests
2 "%GALLIO_COMMAND%" "%DEPLOYMENT_SOURCE%\SampleApp.Tests\bin\Release\SampleApp.Tests.dll"
3 IF !ERRORLEVEL! NEQ 0 goto error

Yes, the name of your test assembly will be different; you should obviously change that. What happens here? Well, we’re invoking the test runner on our unit tests. If it fails, we abort the deployment. Push it to Windows Azure and see for yourself. Here’s what is displayed on success:

All green! And on failure, we get:

In the portal, you can clearly see that deployment was aborted.

That’s it. Enjoy!

NuGet Package Source Discovery

It’s already been 2 years since NuGet was introduced. This .NET package manager features the concept of feeds, or “package sources”, on which packages containing .NET libraries and tools can be hosted. In fact, support for feeds inspired us to build www.myget.org. While not all people are aware of this, Microsoft started out with two feeds as well: one for www.nuget.org, the other one for the Orchard CMS.

More and more feeds are being created daily, both by Microsoft as well as by others; Microsoft alone has quite a few feeds that I know of (and there are probably more).

Wouldn’t it be nice if we could add them all to our Visual Studio package sources without having to know these URLs? Meet the NuGet Package Source Discovery specification, or in short: PSD, a specification Xavier, Scott, Phil, Jeff, Howard and myself have been working on (thanks guys!).

Package Source Discovery

Because PowerShell says more than words, try the following. Open Visual Studio and open any solution. Then issue the following in the Package Manager Console:

1 Install-Package DiscoverPackageSources
2 Discover-PackageSources -Url "http://blog.maartenballiauw.be"

While we’re at it, perhaps the Glimpse project has something to discover as well.

1 Discover-PackageSources -Url "http://getglimpse.com"

Close and re-open Visual Studio and check your package sources. Notice anything new? My blog has provided you with 2 feeds. And you’ve also been subscribed to Glimpse’s nightly builds feed.

But there’s more. If you had been authenticated when connecting to my blog, it would yield API keys as well. This allows the PSD client to set up everything that is needed for me to work with my personal feeds, both consuming and producing, by just remembering the URL of my blog.

Package Source Discovery boils down to trust. Since you apparently trust me, you can discover feeds from my blog. If you trust Microsoft, discover feeds from www.microsoft.com. Do you trust Windows Azure? Get their packages by discovering feeds at www.windowsazure.com. Need your company feeds? Discover them at http://nuget. A lot of options and possibilities there!

Recycling standards

If you are a blogger and are using Windows Live Writer, you’ve already used this before. We’ve written the NuGet Package Source Discovery specification based on what happens with blogs: when a simple <link /> element is added to your HTML, you are compatible with feed discovery. Here are the two elements that are listed in the source code for my blog:

1 <link rel="nuget" type="application/atom+xml" title="Maarten Balliauw NuGet feed" href="http://www.myget.org/F/maartenballiauw" />
2 <link rel="nuget" type="application/rsd+xml" href="http://www.myget.org/Discovery/Feed/googleanalyticstracker" />

The first one points directly to a feed. Using the URL and the title attribute, we can add this one to our NuGet package sources with ease. The second one points to an RSD file, known for ages as the Really Simple Discovery format described on https://github.com/danielberlinger/rsd. We’ve recycled it to allow a lot of things at the client side. Since not all required metadata can be obtained from the RSD format, the Dublin Core schema is present in the PSD response as well.

Here’s an example:

 1 <?xml version="1.0" encoding="utf-8"?>
2 <rsd version="1.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
3   <service>
4     <engineName>MyGet</engineName>
6
8     <dc:creator>maartenba</dc:creator>
9     <dc:owner>maartenba</dc:owner>
13
14     <apis>
17         <settings>
18           <setting name="apiKey">abcdefghijkl</setting>
19         </settings>
20       </api>
22     </apis>
23   </service>
24 </rsd>
25 

As you can see, using RSD we can embed a lot more information about a feed in this document. If we wanted to add a link to someone’s GitHub repository and have a client that wants to use it, we could simply add another <api /> element in here.
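For instance, a purely hypothetical entry pointing at a project’s GitHub repository could look like this (all attribute values are made up):

<api name="github" preferred="false" apiLink="https://github.com/your-account/your-repository" blogID="" />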

Who is using this?

I am (http://blog.maartenballiauw.be), Xavier is (http://www.xavierdecoster.com), Glimpse is (http://getglimpse.com), NancyFX is (http://www.nancyfx.org) and MyGet has implemented several endpoints as well. Why don't you join the wonderful world of package source discovery?

Feedback needed!

This is not part of NuGet out of the box yet. We need your feedback, comments, implementations and so on. Head over to our GitHub repository, read through the spec and all examples and provide us with your thoughts. Try the two clients we’ve crafted (more on Xavier's blog) and make your NuGet repositories discoverable. Feel free to post a link to your blog below.

Enjoy and let the commenting begin!

How I push GoogleAnalyticsTracker to NuGet

If you check my blog post Tracking API usage with Google Analytics, you’ll see that a small open-source component evolved from MyGet. This component, GoogleAnalyticsTracker, lives on GitHub and NuGet and has since evolved into something that supports Windows Phone and Windows RT as well. But let’s not focus on the open-source aspect.

It’s funny how things evolve. GoogleAnalyticsTracker started as a small component inside MyGet, and since a couple of weeks it uses MyGet to publish itself to NuGet. Say what? In this blog post, I’ll elaborate a bit on the development tools used on this tiny component.

Source code

Source code for GoogleAnalyticsTracker can be found on GitHub. This is the main entry point to all activity around this “project”. If you have a nice addition, feel free to fork it and send me a pull request.

Staging NuGet packages

Whenever I update the source code, I want to automatically build it and publish NuGet packages for it. Not directly to NuGet: I want to keep the regular version and the WinRT and WP versions more or less in sync regarding version numbers. Also, I sometimes miss something which I then fix again 5 minutes later. In the meantime, I like to have the generated package on some sort of “staging” feed, at MyGet. It’s even public; check http://www.myget.org/F/githubmaarten if you want to use my development artifacts.

When I decide it’s time for these packages to move to the “official NuGet package repository” at NuGet.org, I simply click the “push” button in my MyGet feed. Yes, that’s a manual step but I wanted to have some “gate” in the middle where I should explicitly do something. Here’s what happens after clicking “push”:

That’s right! You can use MyGet as a staging feed and from there push your packages onwards to any other feed out there. MyGet takes care of the uploading.

Building the package

There’s one thing which I forgot… How do I build these packages? Well… I don’t. I let MyGet Build Services do the heavy lifting. On your feed, you can simply click the “Add GitHub project” button and a list of all your GitHub repos will be shown.

Tick a box and you’re ready to roll. And if you look carefully, you’ll see a “Build hook URL” being shown as well.

Back on GitHub, there’s this concept of “service hooks”: small web hooks that GitHub invokes whenever a new commit occurs on your repository. Wouldn’t it be awesome to trigger package creation on MyGet whenever I check in code to GitHub? Guess what…

That’s right! And MyGet even runs unit tests. Some sort of a continuous integration where I have the choice to promote packages to NuGet whenever I think they are stable.

MyGet Build Services - Join the private beta!

Good news! Over the past 4 weeks we’ve been sending out tweets about our secret MyGet project, “wonka”. Today is the day Wonka shows his great stuff to the world… In short: MyGet Build Services enables you to add packages to your feed by just giving us your GitHub repo. We build it, we package it, we publish it.

Our build server searches for a file called MyGet.sln and builds that. No problem if it's not there: we'll try and build other projects then. We'll run unit tests (NUnit, xUnit, MSTest and some more) and fail when those fail. We'll search for packages generated by your solution and if none are generated, we take a wild guess and create them for you.

To make it more visual, here are some screenshots. First, you have to add a build source, for example a GitHub repository (in fact, GitHub is all we currently support):

After that, you simply click “Build”. A couple of seconds or minutes later, your fresh package is available on your feed:

If you want to see what happened, the build log is available for review as well.

Enroll now!

Starting today, you can enroll for our private beta. You’ll get on a waiting list and as we improve build capacity, you will be granted access to the beta. If you’re in, tell us how it behaves. What works, what doesn’t, what would you like to see improved. Enroll for this private beta now via http://www.myget.org/buildservices. Limited seats!

Do note it’s still a beta, and as Willy Wonka would say… “Little surprises around every corner, but nothing dangerous.”

Happy packaging!