Maarten Balliauw {blog}

ASP.NET, ASP.NET MVC, Windows Azure, PHP, ...


Visual Studio Online for Windows Azure Web Sites

Today’s official Visual Studio 2013 launch brings some interesting novelties, especially for Windows Azure Web Sites. We can now choose which pipeline to run in (classic or integrated), define separate applications in subfolders of our web site, and debug a web site right from within Visual Studio. But the most impressive one is this: how about… an in-browser editor for your application?

Editing Node.JS in browser

Let’s take a quick tour of it. After creating a web site, we can go to the web site’s configuration and enable the Visual Studio Online preview.

Edit in Visual Studio Online

Once enabled, simply navigate to https://<yoursitename>.scm.azurewebsites.net/dev or click the link from the dashboard, provide your site credentials and be greeted with Visual Studio Online.

On the left-hand menu, we can select the feature to work with. Explore does as it says: it lets you explore the files in your site, open them, save them, delete them and so on. We can also enable Git integration, search for files and classes and more. When working in the editor we get features like autocompletion, Find References and Peek Definition. These don’t work for all languages yet: currently JavaScript and Node.js seem to work, while C# and PHP come with syntax highlighting but nothing more than that.

Peek definition

Most actions in the editor come with keyboard shortcuts; for example, Ctrl+, opens file navigation within our application.

Navigation

The console comes with things like npm and autocompletion on most commands as well.

Console in Visual Studio Online

I can see myself using this for some scenarios like on-the-road editing from a Git repository (yes, you can clone any repo you want in this tool) or making live modifications to some simple sites I have running. What would you use this for?

Developing Windows Azure Mobile Services server-side

Word of warning: this is a partial cross-post from the JetBrains WebStorm blog. The post you are currently reading adds some more information around Windows Azure Mobile Services, builds on a full example, and is a bit more in-depth.

With Microsoft’s Windows Azure Mobile Services, we can build a back-end for iOS, Android, HTML, Windows Phone and Windows 8 apps that supports storing data, authentication, push notifications across all platforms and more. There are client libraries available for all these platforms which can be used when developing in an IDE of choice, e.g. AppCode, Google Android Studio or Visual Studio. In this post, let’s focus on what these different platforms have in common: the server-side code.

This post was sparked by my buddy Kristof Rennen’s session for our user group. During his session he mentioned a couple of times how he dislikes Node.js and the trial-and-error manner of building the server-side due to lack of good tooling. Working for a tooling vendor and intrigued by the quest of finding a better way, I decided to post the short article you are currently reading.

Do note that I will focus more on how to get your development environment set-up and less on the Windows Azure Mobile Services feature set. Yes, you will learn some of the very basics but there are way better resources available for getting in-depth knowledge on the topic.

Here’s what we will see in this post:

  • Setting up a Windows Azure Mobile Service
  • Creating a table and storing data
  • A simple HTML/JS client
  • Adding logic to our API
  • Working on server-side logic with WebStorm
  • Sending e-mail using a Node.js module
  • Putting our API to the test with the REST client
  • Unit testing our logic

The scenario

Doing some exploration is always more fun when we can do it based on a simple scenario. Whenever JetBrains goes to a conference and we have a booth, we like to do a raffle for licenses. The idea is simple: come to our booth for a chat, fill out a simple form and we will pick random names after the conference and send a free license.

For this post, I’ve created a very simple form in HTML and JavaScript, collecting visitor name and e-mail address.


Once someone participates in the raffle, the name and e-mail address are stored in a database and we send out an e-mail thanking that person for visiting the booth together with a link to download a product trial.

Setting up a Windows Azure Mobile Service

First things first: we will require a Windows Azure account to start developing. Next, we can create a new Mobile Service through the Windows Azure Management Portal.


Next, we can give our service a name and pick the datacenter location for it. We also have to provide the type of database we want to use: a free, 20 MB database, or a full-fledged SQL Database. While Windows Azure Mobile Services is always coupled to a database, we can build a custom API with it as well.


Once completed, we get several tabs to work with. There’s the initial welcome screen, displaying links to documentation and client libraries. The other tabs give access to monitoring, scaling, how we want to authenticate users, push notification settings and logs. Since we want to store data of booth visitors, let’s enter the Data tab.

Creating a table and storing data

From the Data tab, we can create a new table. Let’s call it Visitor. When creating a new table, we have to specify access rules for the API that will be available on top of it.


We can tell who can read (API GET request), insert (API POST request), update (API PATCH request) and delete (API DELETE request). Since our application will only insert new data and we don’t want to force booth visitors to log in with their social profiles, we can specify that inserts are allowed as long as an API key is provided. All other operations will be blocked for outside users: with the above settings, reading and deleting will only be available through the Windows Azure Management Portal.

Do we have to create columns for storing booth visitor data? By default, Windows Azure Mobile Services has “dynamic schema” enabled, which means we can throw some JSON at our Mobile Service and it will store the data for us.

A simple HTML/JS client

As promised earlier in this post, let’s see how we can build a simple client for the service we have just created. We’ll go with an HTML and JavaScript based client as it’s fairly easy to demonstrate. Again, have a look at the other client SDKs for the platform you are developing for.

Our HTML page consists of nothing but two text boxes and a button, conveniently named name, email and send. There are two ways of sending data to our Mobile Service: calling the API directly or making use of the client library provided. Both are easy to do: the API lives at https://<servicename>.azure-mobile.net/tables/<tablename> and we can POST a JSON-serialized object to it, an approach we’ll take later in this blog post. There is also a JavaScript client library available from https://<servicename>.azure-mobile.net/client/MobileServices.Web-1.0.0.min.js, which our client is using.

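A minimal sketch of that client code, assuming the input elements are named as described above and using placeholders for the application URL and key:

var client = new WindowsAzure.MobileServiceClient(
    "https://<servicename>.azure-mobile.net/",
    "<your application key>");

var visitorTable = client.getTable("Visitor");

document.getElementById("send").addEventListener("click", function () {
    // insert a JSON-formatted object into the Visitor table
    visitorTable.insert({
        name: document.getElementById("name").value,
        email: document.getElementById("email").value
    }).done(function () {
        alert("Thank you for participating in the raffle!");
    }, function (error) {
        alert("Something went wrong: " + error.message);
    });
});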

As we can see, a new MobileServiceClient is created on which we can get a table reference (getTable) and insert a JSON-formatted object. Do note that we have to pass in an API key in the client constructor, which can be obtained from the Windows Azure Management Portal under the Manage Keys toolbar button.

From the portal, we can now see the data we’re submitting from our simple application.


Adding logic to our API

Let’s make it a bit more exciting! What if we wanted to store a timestamp with every record? We may want to have some insight into when our booth was busiest. We could send a timestamp from the client, but that would only add clutter to our client-side code. Also, if we wanted to port the HTML/JS client to other platforms, we would have to make sure every client sends this data to our mobile service. In short: this calls for some server-side logic.

For every table created, we can make use of the Script tab to add custom logic to read, insert, update and delete operations which we can write in JavaScript. By default, this is what a script for insert may look like:

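It boils down to something like this:

function insert(item, user, request) {
    request.execute();
}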

The insert function will be called with 3 parameters: the item to be stored (our JSON-serialized object), the current user and the full request. By default, the request.execute() function is called which will make use of the other two parameters internally. Let’s enrich our item with a timestamp.

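A sketch of the enriched script (the property name createdAt is simply a choice; dynamic schema will create the column for us):

function insert(item, user, request) {
    // add a server-side timestamp before the item is stored
    item.createdAt = new Date();
    request.execute();
}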

Hitting Save will deploy this script to our mobile service which from now on will store an inserted timestamp in our database as well.

This is a very trivial example. There are a lot of things that can be done server-side: enforcing validation, record filtering, storing data in other tables as well, sending e-mail or text messages, … Here’s a post with some common scenarios. Full reference to the server-side objects is also available.

Working on server-side logic with WebStorm

Unfortunately, the in-browser editor for server-side scripts is a bit limited. It features no autocompletion and all code has to go in one file. How would we create shared logic which can be re-used across different scripts? How would we unit test our code? This is where WebStorm comes in. We can access the complete server-side code through a Git repository and work on it in a full IDE!

The Git access to our mobile service is disabled by default. Through the portal’s right-hand side menu, we can enable it by clicking the Set up source control link. Next, we can find repository details from the Configure tab.


We can now use WebStorm’s VCS | Checkout From Version Control | Git menu to bring down the server-side code for our Windows Azure Mobile Service.


In our project, we can see several folders and files. The service/api folder can hold custom API’s (check the readme.md file for more info). service/scheduler can hold scripts that execute at a given time or interval, much like CRON jobs. service/shared can hold shared scripts that can be used inside table logic, custom API’s and scheduler scripts. In the service/table folder we can find the script we have created through the portal: visitor.insert.js. Also note the visitor.json file which contains the access rules we configured through the portal earlier.


From now on, we can work inside WebStorm and push to the remote Git repository if we want to deploy our new code.

Sending e-mail using a Node.js module

Let’s go back to our initial requirements: whenever someone enters their name and e-mail address in our application, we want to send out an e-mail thanking them for participating. We can do this by making use of an NPM module, for example SendGrid.

Windows Azure Mobile Services comes with some NPM modules preinstalled, like SendGrid and Twilio. However, we want to make sure we are always using the same version of the NPM package, so let’s install it into our project. WebStorm has a built-in package manager to do this, but Windows Azure Mobile Services requires us to install the module in a non-standard location (the service folder), hence we will use the Terminal tool window to install it.

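From the Terminal tool window, something along these lines does the trick (assuming the repository root is the current working directory):

cd service
npm install sendgrid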

Once finished, we can start working on our e-mail logic. Since we may want to re-use the e-mail logic (and we want to unit test it later), it’s best to create our logic in the shared folder.

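Here is a sketch of what such a shared module could look like. The file name (emails.js), the sender address and the credentials are assumptions, and the exact SendGrid calls may differ depending on the module version:

// service/shared/emails.js - shared e-mail logic
var SendGrid = require('sendgrid').SendGrid;

exports.sendThankYouMessage = function (name, email) {
    var sendgrid = new SendGrid('<sendgrid user>', '<sendgrid key>');

    sendgrid.send({
        to: email,
        from: 'noreply@example.com',
        subject: 'Thank you for visiting our booth!',
        text: 'Hello ' + name + ', thanks for stopping by our booth. Your trial download link is on its way.'
    }, function (success, message) {
        if (!success) {
            console.error('Could not send e-mail: ' + message);
        }
    });
};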

In our shared module, we can make use of the SendGrid module to create and send an e-mail. We can export our sendThankYouMessage function to consumers of our shared module. In the visitor.insert.js script we can require our shared module and make use of the functionality it exposes. And as an added bonus, WebStorm provides us with autocompletion, code analysis and so on.

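In visitor.insert.js that could look roughly like this (the relative require path follows the shared folder layout above; sending the e-mail from the success callback is one possible choice):

// service/table/visitor.insert.js
var emails = require('../shared/emails');

function insert(item, user, request) {
    item.createdAt = new Date();

    request.execute({
        success: function () {
            // respond to the client first, then send the thank-you e-mail
            request.respond();
            emails.sendThankYouMessage(item.name, item.email);
        }
    });
}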

Once we’ve updated our code, we can transfer our server-side code to Windows Azure Mobile Services. Ctrl+K (or Cmd+K on Mac OS X) allows us to commit and push from within the IDE.


Putting our API to the test with the REST client

Once our changes have been deployed, we can test our API. This can be done using one of the client libraries or by making use of WebStorm’s built-in REST client. From the Tools | Test RESTful Web Service menu we can craft our API calls manually.

We can specify the HTTP method to use (POST since we want to insert) and the URL to our Windows Azure Mobile Services endpoint. In the headers section, we can add a Content-Type header and set it to application/json. We also have to specify an API key in the X-ZUMO-APPLICATION header. This API key can be found in the Windows Azure Management Portal. On the right-hand side we can provide the text to post, in this case a JSON-serialized object with some properties.

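The raw request looks roughly like this (the property names simply match what our client sends; the key is a placeholder):

POST https://<servicename>.azure-mobile.net/tables/visitor HTTP/1.1
Content-Type: application/json
X-ZUMO-APPLICATION: <your application key>

{ "name": "John Doe", "email": "john.doe@example.com" }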

After running the request, we get back response headers and a response body.


No error message but an object is being returned? Great, that means our code works (and should also be sending out an e-mail). If something does go wrong, the Logs tab in the Windows Azure portal can be a tremendous help in finding out what went wrong.

Through the toolbar on the left, we can export/import requests, making it easy to create a number of predefined requests that can easily be run over and over for testing the REST API.

Unit testing our logic

With WebStorm we can easily test our JavaScript code and custom Node.js modules. Let’s first set up our IDE. Unit testing can be done using the Nodeunit testing framework, which we can install using the Node.js package manager.

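The equivalent from a terminal would simply be:

npm install nodeunit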

Next, we can create a new Run Configuration from the toolbar selecting Nodeunit as the configuration type and entering all required configuration details. In our case, let’s run all tests from the test directory.


Next, we can create a folder that will hold our tests and mark it as a Test Source Root (open the context menu and use Mark Directory As | Test Source Root). Tests for Nodeunit are always considered modules and should export their test functions. Here’s a very basic example which tells Nodeunit to wait for one assertion, assert that a boolean is true and marks the test case completed.

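A sketch of such a test module (the file name is an assumption):

// test/basic.tests.js
exports.testBoolean = function (test) {
    test.expect(1);                       // wait for exactly one assertion
    test.ok(true, 'this should succeed'); // assert that a boolean is true
    test.done();                          // mark the test case completed
};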

Of course we can also test our business logic. It’s best to create separate modules under the shared folder as they will be easier to unit test. However if you do have to test the actual table scripts (like insert functionality), there is a little trick that allows doing just that. The following snippet exports the insert function outside of the table-specific module:

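One possible shape for that trick is adding an exports line at the bottom of the table script so our tests can require it:

function insert(item, user, request) {
    // ... the actual insert logic from above ...
}

// expose the function so it can be required from a unit test
exports.insert = insert;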

We can now test the complete visitor.insert.js module and even provide mocks to work with. The following example loads all our modules and sets up test expectation. We’re also overriding specific functionalities such as the sendThankYouMessage function to just make sure it’s called by our table API logic.

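A sketch of what that test could look like (file paths and property names are assumptions based on the project layout above):

// test/visitor.insert.tests.js
var emails = require('../service/shared/emails');
var visitorInsert = require('../service/table/visitor.insert.js');

exports.testInsertSendsThankYouMessage = function (test) {
    test.expect(2);

    // override the shared e-mail logic so no real e-mail goes out
    emails.sendThankYouMessage = function (name, email) {
        test.equal(email, 'john.doe@example.com', 'e-mail address is passed through');
    };

    // minimal mock of the Mobile Services request object
    var request = {
        execute: function (options) {
            test.ok(options && options.success, 'a success callback is provided');
            if (options && options.success) {
                options.success();
            }
        },
        respond: function () { }
    };

    visitorInsert.insert({ name: 'John Doe', email: 'john.doe@example.com' }, null, request);
    test.done();
};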

The full source code for both the server-side and client-side application can be found on https://github.com/maartenba/JetBrainsBoothMobileService.

If you would like to learn more about Windows Azure Mobile Services and work with authentication, push notifications or custom APIs, check out the getting started documentation. And if you haven’t already, give WebStorm a try.

Enjoy!

Using the Windows Azure Content Delivery Network (CDN)

With the Windows Azure Content Delivery Network (CDN) released as a preview, I thought it was a good time to write up some details about how to work with it. The CDN can be used for offloading content to a globally distributed network of servers, ensuring faster throughput to your end users.

Note: this is a modified and updated version of my article at ACloudyPlace.com roughly two years ago. I have added information on how to work with ASP.NET MVC bundling and the Windows Azure CDN, updated screenshots and so on.

Reasons for using a CDN

There are a number of reasons to use a CDN. One of the obvious reasons lies in the nature of the CDN itself: a CDN is globally distributed and caches static content on edge nodes, closer to the end user. If a user accesses your web application and some of the files are cached on the CDN, the end user will download those files directly from the CDN, experiencing less latency in their request.

Windows Azure CDN graphically

Another reason for using the CDN is throughput. If you look at a typical webpage, about 20% of it is HTML that is dynamically rendered based on the user’s request. The other 80% goes to static files like images, CSS, JavaScript and so forth. Your server has to read those static files from disk and write them to the response stream, both of which take away some of the resources available on your virtual machine. By moving static content to the CDN, your virtual machine will have more capacity available for generating dynamic content.

Enabling the Windows Azure CDN

The Windows Azure CDN is built for two services that are available in your subscription: storage and cloud services. The easiest way to get started with the CDN is by using the Windows Azure Management Portal. From the New menu at the bottom, select App Services | CDN | Quick Create.

Enabling Windows Azure CDN

From the dropdown that is shown, select either a storage account or a cloud service which will serve as the source of our CDN edge data. After clicking Create, the CDN will be initialized. This may take up to 60 minutes because the settings you’ve just applied may take that long to propagate to all CDN edge locations globally (over 24 was the last number I read). Your CDN will be assigned a URL in the form of http://<id>.vo.msecnd.net.

Once the CDN endpoint is created, there are some options that can be managed. Currently they are somewhat limited but I’m pretty sure this will expand. For now, you can for example assign a custom domain name to the CDN by clicking the “Manage Domains” button in the toolbar.

Manage the Windows Azure CDN - Add custom domain

Note that the CDN works using HTTP by default, but HTTPS is supported as well and can be enabled through the management portal. Unfortunately, SSL is using a certificate that Microsoft provides and there’s currently no option to use your own, making it hard to use a custom domain name and HTTPS.

Serving blob storage content through the CDN

Let’s start and offload our static content (CSS, images, JavaScript) to the Windows Azure CDN using a storage account as the source for CDN content. In an ASP.NET MVC project, edit the _Layout.cshtml view. Instead of using the bundles for CSS and scripts, let’s include them manually from a URL hosted on your newly created CDN:

<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    <link href="http://az172665.vo.msecnd.net/static/Content/Site.css" rel="stylesheet" type="text/css" />
    <script src="http://az172665.vo.msecnd.net/static/Scripts/jquery-1.8.2.min.js" type="text/javascript"></script>
</head>
<!-- more HTML -->
</html>

Note that the CDN URL includes a reference to a folder named “static”.

If you now run this application, you’ll find no CSS or JavaScript applied. The reason for this is obvious: we have specified the URL to our CDN but haven’t uploaded any files to our storage account backing the CDN.

Where are our styles?

Uploading files to the CDN is easy. All you need is a public blob container and some blobs hosted in there. You can use tools like Cerebrata’s Cloud Storage Studio or upload the files from code. For example, I’ve created an action method taking care of uploading static content for me:

[HttpPost, ActionName("Synchronize")]
public ActionResult Synchronize_Post()
{
    var account = CloudStorageAccount.Parse(
        ConfigurationManager.AppSettings["StorageConnectionString"]);
    var client = account.CreateCloudBlobClient();

    var container = client.GetContainerReference("static");
    container.CreateIfNotExist();
    container.SetPermissions(
        new BlobContainerPermissions {
            PublicAccess = BlobContainerPublicAccessType.Blob });

    var approot = HostingEnvironment.MapPath("~/");
    var files = new List<string>();
    files.AddRange(Directory.EnumerateFiles(
        HostingEnvironment.MapPath("~/Content"), "*", SearchOption.AllDirectories));
    files.AddRange(Directory.EnumerateFiles(
        HostingEnvironment.MapPath("~/Scripts"), "*", SearchOption.AllDirectories));

    foreach (var file in files)
    {
        var contentType = "application/octet-stream";

        // Path.GetExtension returns the extension including the leading dot
        switch (Path.GetExtension(file))
        {
            case ".png": contentType = "image/png"; break;
            case ".css": contentType = "text/css"; break;
            case ".js": contentType = "text/javascript"; break;
        }

        var blob = container.GetBlobReference(file.Replace(approot, ""));
        blob.Properties.ContentType = contentType;
        blob.Properties.CacheControl = "public, max-age=3600";
        blob.UploadFile(file);
        blob.SetProperties();
    }

    ViewBag.Message = "Contents have been synchronized with the CDN.";

    return View();
}

There are two very important lines of code in there. The first one, container.SetPermissions, ensures that the blob storage container we’re uploading to allows public access. The Windows Azure CDN can only cache blobs stored in public containers.

The second important line of code, blob.Properties.CacheControl, is more interesting. How does the Windows Azure CDN know how long a blob should be cached on each edge node? By default, each blob will be cached for roughly 72 hours. This has some important consequences. First, you cannot invalidate the cache and have to wait for content expiration to occur. Second, the CDN will possibly refresh your blob every 72 hours.

As a general best practice, make sure that you specify the Cache-Control HTTP header for every blob you want to have cached on the CDN. If you want to have the possibility to update content every hour, make sure you specify a low TTL of, say, 3600 seconds. If you want less traffic to occur between the CDN and your storage account, specify a longer TTL of a few days or even a few weeks.

Another best practice is to address CDN URLs using a version number. Since the CDN can create a separate cache of a blob based on the query string, appending a version number to the URL may make it easier to refresh contents in the CDN based on the version of your application. For example, main.css?v1 and main.css?v2 may return different versions of main.css cached on the CDN edge node. Do note that the query string support is opt-in and should be enabled through the management portal. Here’s a quick code snippet which appends the AssemblyVersion to the CDN URLs to version content based on the deployed application version:

@{
    var version = System.Reflection.Assembly.GetAssembly(
        typeof(WindowsAzureCdn.Web.Controllers.HomeController))
        .GetName().Version.ToString();
}
<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    <link href="http://az172729.vo.msecnd.net/static/Content/Site.css?@version" rel="stylesheet" type="text/css" />
    <script src="http://az172729.vo.msecnd.net/static/Scripts/jquery-1.8.2.min.js?@version" type="text/javascript"></script>
</head>
<!-- more HTML -->
</html>

Using cloud services with the CDN

So far we’ve seen how you can offload static content to the Windows Azure CDN. We can upload blobs to a storage account and have them cached on different edge nodes around the globe. Did you know you can also use your cloud service as a source for files cached on the CDN? The only thing to do is, again, go to the Windows Azure Management Portal and ensure the CDN is enabled for the cloud service you want to use.

Serving static content through the CDN

The main difference with using a storage account as the source for the CDN is that the CDN will look into the /cdn/* folder on your cloud service to retrieve its contents. There are two options for doing this: either moving static content to the /cdn folder, or using IIS URL rewriting to “fake” a /cdn folder.

When using ASP.NET MVC’s bundling features, we’ll have to modify the bundle configuration in BundleConfig.cs. First, we’ll have to set bundles.UseCdn to true. Next, we’ll have to provide the URL to the CDN version of our bundles. Here’s a snippet which does just that for the Content/css bundle. We’re still working with a version number to make sure we can update the CDN contents for every deployment of our application.

var version = System.Reflection.Assembly.GetAssembly(typeof(BundleConfig)).GetName().Version.ToString();
var cdnUrl = "http://az170459.vo.msecnd.net/{0}?" + version;

bundles.UseCdn = true;
bundles.Add(new StyleBundle("~/Content/css", string.Format(cdnUrl, "Content/css")).Include("~/Content/site.css"));

Note that this time, the CDN URL does not include any reference to a blob container.

Whether you are using bundling or not, the trick will be to request URLs straight from the CDN instead of from your server to be able to make use of the CDN.

Exposing static content to the CDN with IIS URL rewriting

The Windows Azure CDN only looks at the /cdn folder as a source of files to cache. This means that if you simply copy your static content into the /cdn folder, you’re finished. Your web application and the CDN will play happily together. But this means the static content really has to be static. In the previous example of using ASP.NET MVC bundling, our static “bundles” aren’t really static…

An alternative to copying static content to a /cdn folder explicitly is to use IIS URL rewriting. IIS URL rewriting is enabled on Windows Azure by default and can be configured to translate a /cdn URL to a / URL. For example, if the CDN requests the /cdn/Content/css bundle, IIS URL rewriting will simply serve the /Content/css bundle leaving you with no additional work.

To configure IIS URL rewriting, add a <rewrite> section under the <system.webServer> section in Web.config:

<system.webServer>
  <!-- More settings -->

  <rewrite>
    <rules>
      <rule name="RewriteIncomingCdnRequest" stopProcessing="true">
        <match url="^cdn/(.*)$" />
        <action type="Rewrite" url="{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>

As a side note, you can also configure an outbound rule in IIS URL rewriting to automatically modify your HTML into using the Windows Azure CDN. Do know that this option is only supported when not using dynamic content compression and adds additional workload to your web server due to having to parse and modify your outgoing HTML.

Serving dynamic content through the CDN

Some dynamic content is static in a sense. For example, generating an image on the server or generating a PDF report based on the same inputs. Why would you generate those files over and over again? This kind of content is a perfect candidate to cache on the CDN as well!

Imagine you have an ASP.NET MVC action method which generates an image based on a given string. For every different string the output would be different, however if someone uses the same input string the image being generated would be exactly the same.

As an example, we’ll be using this action method in a view to display the page title as an image. Here’s the view’s Razor code:

@{
    ViewBag.Title = "Home Page";
}

<h2><img src="/Home/GenerateImage/@ViewBag.Message" alt="@ViewBag.Message" /></h2>
<p>
    To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
</p>

In the previous section, we’ve seen how an IIS rewrite rule can map all incoming requests from the CDN. The same rule can be applied here: if the CDN requests /cdn/Home/GenerateImage/Welcome, IIS will rewrite this to /Home/GenerateImage/Welcome and render the image once and cache it on the CDN from then on.

As mentioned earlier, a best practice is to specify the Cache-Control HTTP header. This can be done in our action method by using the [OutputCache] attribute, specifying the time-to-live in seconds:

[OutputCache(VaryByParam = "*", Duration = 3600, Location = OutputCacheLocation.Downstream)]
public ActionResult GenerateImage(string id)
{
    // ... generate image ...

    return File(image, "image/png");
}

We would now only have to generate this image once for every different string requested. The Windows Azure CDN will take care of all intermediate caching.

Conclusion

The Windows Azure CDN is one of the building blocks to create fault-tolerant, reliable and fast applications running on Windows Azure. By caching static content on the CDN, the web server has more resources available to process other requests. Next to that, users will experience faster loading of your applications because content is delivered from a server closer to their location.

Enjoy!

An autoscaling build farm using TeamCity and Windows Azure

Autoscaling... myself!

Cloud computing is often referred to as a cost saver due to its billing models. If we can move seasonal workloads to the cloud, cost reduction will follow. Whether a workload is truly seasonal (e.g. a temporary high workload around the holidays) or “daily seasonal”, with load differing depending on the time of day, these workloads have cloud written all over them.

A workload that may be seasonal is the workload done by build servers. Take TeamCity for example. A TeamCity server instruments a pool of build agents that are either idle or compiling source code into binaries. Depending on how your team is structured and when people work, there is a big chance that pool of build agents is doing nothing for several hours every day, except incurring cost. What if we could move the build agents to a platform like Windows Azure and have them autoscale, depending on the actual load on the build farm?

Creating a build agent virtual machine

The first step in setting this brilliant scheme in motion is to set up a build agent virtual machine. We can select any virtual machine image we want for our build agent, and even upload our own VHDs if needed. I'm selecting a Windows Server 2012 image here, but if you need a different OS for your build agent you can select that instead.

During the creation of this build agent, there is nothing special we should do. We can select a small/medium/large/extra large instance, give ourselves an administrator password and so on. The only important step here is that we set up an endpoint for the TeamCity build agent, listening on TCP port 9090.

Open load balancer endpoint

Once the machine is started, we will have to install all required prerequisites for our build agent. We can connect using remote desktop (or SSH if it’s a Linux machine). On the machine I have here, I installed all .NET framework versions to ensure I'm able to build .NET projects. On a build machine for Java, we would install the correct runtimes and JDK's for our projects. Anything, really, if it is needed for the sources we’ll be building. On Windows, I typically use Web Platform Installer and Chocolatey to get this done as automated as possible.

Web Platform Installer in action

Installing TeamCity build agent

In order for our build agent to communicate with the TeamCity server, we have to install the build agent. We can do this by navigating to our TeamCity server from within the virtual machine and use the Install Build Agents link from the Agents page.

Installing build agent

On a Windows server, we can use the Windows Installer but we can also use Java Web Start or even simply extract a ZIP file. This last option can be useful on a Linux machine, for example.

Installing the build agent is pretty much a next, next, finish operation. The only important thing is that we run the agent as a Windows service (or have it automatically start at boot time on other operating systems). We also want to specify the URL to our TeamCity server as well as the port on which the build agent will listen for incoming data from TeamCity. Note that this port should be the one opened in the load balancer earlier, in the case of this machine port 9090.

Specify port and server

Before starting the build agent, make sure the local firewall allows incoming connections. Through the Windows firewall, allow incoming connections for port 9090 (and while we’re at it, for a range of ports so we can easily clone this machine and not care about the firewall anymore).

Windows Firewall configuration

If we now start the build agent service, it should connect to our TeamCity server. Under the agents tab, we should be seeing a new unauthorized agent popping up. If that works, we’re good to go with our build agent farm.

A new unauthorized build agent shows up

Don’t shut down the machine just yet, we still need to prepare it for creating a build agent image.

Creating a build agent image

While still connected through remote desktop, open a command prompt and run the sysprep /generalize command from the c:\windows\system32\sysprep folder. On Linux, there’s a similar option in the Windows Azure agent. Sysprep ensures the machine can be cloned into a new machine, getting its own settings like a hostname and IP address. A non-sysprepped machine can thus never be cloned.

Sysprep our machine

Once finished, our RDP connection should be gone and our machine can be shut down in the Windows Azure Management Portal. In fact, it must be shut down. Once that is done, we can use the Capture button and transform our virtual machine into a template we can create new virtual machines from.

Capturing a virtual machine image

The capturing process will take a couple of minutes and results in having no more build server virtual machine to be found in the Windows Azure Management Portal. Is that bad? No, we can now start cloning the machine and create multiple, all having the exact same configuration and components installed.

Setting up multiple build agent machines

The next thing we want to have is multiple build agent machines. From the Windows Azure Management portal, we can create them. Not based on a platform image but using the image created during the previous step.

Using a virtual machine image as the base

The virtual machine configuration can be whatever we want. Do we want extra small instances or extra large? It’s all up to us and our credit card. On the next page, we have to specify some more important details. First, we have to select or create a cloud service. This will be the DNS host name under which all of our build agents are going to live. We also have to specify the availability set, in essence a setting telling Windows Azure to never unplug power or networking for all machines in this set at the same time.

Selecting cloud service and availability set

We will be creating a couple of machines, so it’s important to get the next page right. Since all our machines will share the same hostname and IP address to the outside world, our build agents have to listen on different TCP ports. Make sure that the first agent maps port 9090 to port 9090, the second one 9091 to 9091 and so on. Not doing this will mess with your mind afterwards when troubleshooting.

Endpoint configuration

Finish the process, let Windows Azure start the machine and create a new one. Important: same cloud service, same availability set and correct endpoint mappings!

Configuring the build agents

Once we have several machines running, we have to connect to them using remote desktop again. This can be done through the portal. Once in, locate the build agent configuration file (c:\BuildAgent\conf\buildAgent.properties in a default installation) and set the port number on which it listens to the one that was mapped as an external endpoint. Again, agent one will listen on port 9090, agent number two on 9091 and so on. We can also set a better name for the build agent; in my case I’ve chosen to go with “agent2”. Very inspirational and all.

Setting build agent name and port
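In buildAgent.properties, the relevant entries are roughly these (values shown for the second agent; property names as in a default TeamCity agent installation):

serverUrl=http://<your-teamcity-server>/
name=agent2
ownPort=9091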

Save and restart the build agent (or the machine). The TeamCity server should now start listing all build agents.

Windows Azure build agents for TeamCity

Make sure to authorize them all, as we want to be sure they can connect to the TeamCity server later on. Once that has been done and all build agents are listed here, we can shut them all down except for one. We want to have something running, right?

Configuring autoscaling

You may have wondered: why did we have to put all these machines under the same cloud service? The reason is simple: we want to autoscale our farm, and this can only be done within one cloud service. From the cloud service, click the Scale tab and start configuring.

Autoscaling configuration

For this post, I’ve chosen the following values:

  • Autoscale based on CPU
  • Have a minimum of one instance, and a maximum of, well… all of them.
  • The target CPU range is 0 to 10. For a production environment this will typically be between 60 and 80 or 40 and 80, depending on the chosen machine size for the build agents. Windows Azure will trigger an autoscaling operation if we go outside this range; a small, low range means it will trigger a scale operation much faster, while bigger numbers mean it is slower to respond.
  • How much to scale up or down by, as well as the number of minutes to wait after the previous operation, is up to you. If you want a build agent to remain online for 30 minutes after it has been started, even if CPU usage drops, set it to 30 minutes. If 2 machines should be started at once, increase that number as well.

Scaling will happen based on the average CPU percentage of all running machines. If our builds run at 100% CPU all the time on our agent we can set the thresholds a bit higher. If builds are only taking 20% we might want to run multiple agents on one machine or decrease the scaling thresholds a bit. Want to measure CPU utilization for a given build? Better read up on the TeamCity Performance Monitor then.

Putting it to the test

Putting it to the test shouldn’t be that hard. Start some builds and make sure the agent gets loaded with builds. Once we hit the CPU threshold, Windows Azure will launch a virtual machine that was previously turned off.

Windows Azure autoscaling in action

Once it has booted, we will also see it surface on the TeamCity server.

Build agents on TeamCity

Once the load goes down again, Windows Azure will shut down machines that are below the thresholds and make sure they don’t incur costs any longer. Which is pretty impressive!

If a development team triggers a massive amount of builds during the day, Windows Azure will pretty soon scale out to a higher number of virtual build agents. And at night when there are only some builds being triggered, it will scale back to lesser instances. If, for example, we manage to run machines only for 12 hours instead of 24 hours a day, that means our build farm’s price goes down by half.

TeamCity’s architecture as well as the way Windows Azure works makes this cost reduction possible. It’s also fun to set up, and it gives us a wide range of options (how about a Windows Server 2012 farm, a Linux farm and so on).

Enjoy!

Just released: MvcSiteMapProvider 4.0

After a beta version about a month ago, we are proud to release MvcSiteMapProvider 4.0 stable! (get it from NuGet, it’s fresh!) It took 6 months to complete this major version but I think our GitHub contributors have done a great job. Thank you all and especially Shad for taking the lead on this release!

MvcSiteMapProvider is a tool targeted at ASP.NET MVC that provides menus, site maps, site map path functionality, and more. It provides the ability to configure a hierarchical navigation structure using a pluggable architecture that can be XML, database, or code driven. We have moved beyond a mere ASP.NET SiteMapProvider implementation to provide support for multi-tenant applications, flexible caching, dependency injection, and several interface-based extensibility points where virtually any part of the provider can be replaced with a custom implementation.

Based on areas, controller and action method names rather than hardcoded URL references, sitemap nodes are completely dynamic based on the routing engine used in an application. Search Engine Optimization support is also provided in the form of dynamic sitemaps XML, canonical URL tags, and meta robots tags to ensure you send the search engines consistent - rather than conflicting - information about your URLs.

What has changed?

What I originally intended to do in v2 (but decided against based on popular request) is something that has now been done. The biggest change in this release is that we have stepped away from being an ASP.NET SiteMapProvider implementation. This means a lot of code had to be rewritten, making v4 a pretty clean release. We’re not completely there yet, as we want to have unit tests for everything (and some more changes will be required for that).

Next to stepping away from the ASP.NET provider model, we’ve improved support for dependency injection. If you don’t need it, no worries. If you do need it: every component of the MvcSiteMapProvider is now pluggable. A simple IoC container is used inside MvcSiteMapProvider but you can easily use your preferred one. We’ve created several NuGet packages for popular containers: Ninject, StructureMap, Unity, Autofac and Windsor. Note that we also have packages with the modules only so you can keep using your own container setup. Read more in the documentation.

The sitemap building pipeline has changed as well. A collection of sitemap builders is used to build the sitemap hierarchy from one or more sources. The default configuration of sitemap builders include an XML parser builder, a reflection-based builder, and a builder that implements the visitor pattern which is used to resolve the URLs before they are cached. Both the builders and visitors can be replaced with 1 or more custom implementations, opening up the door to alternate data sources and alternate visitor actions. In other words, you can build the tree any way you see fit. The only limitation is that only one of the builders must decide which node is the root node of the tree (although subsequent builders may change that decision, if needed).

The Menu() helper has been rewritten to become a more performant and reliable helper (thanks for the contribution, midishero!)

A great bunch of performance enhancements and stability fixes are in as well.

How do I upgrade?

Since MvcSiteMapProvider has had some significant updates going from v3 to v4, it is best to read the upgrade guide. The first part of the upgrade from v3 to v4 will be updating the NuGet package. Before, MvcSiteMapProvider only had one NuGet package. Today, it has been split in multiple, of which the following ones are good to know at this time:

  • MvcSiteMapProvider.Web containing all views and web.config changes
  • MvcSiteMapProvider.MVC<version>.Core containing the library itself

Upgrading from v3 to v4 consists of installing the correct packages for your ASP.NET MVC version (a Package Manager Console example follows this list):

  • For MVC 2, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC2
  • For MVC 3, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC3
  • For MVC 4, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC4
  • Note that for MVC 4 we have made it possible to upgrade MvcSiteMapProvider instead, which will pull in all required dependencies. Do know that this is not the recommended scenario and it is preferred to install MvcSiteMapProvider.MVC4 instead.
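For example, upgrading an ASP.NET MVC 4 project from the NuGet Package Manager Console would look like this (package names as listed above):

Uninstall-Package MvcSiteMapProvider
Install-Package MvcSiteMapProvider.MVC4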

The MvcSiteMapProvider.Web update will add views and all required runtime dependencies to your project. This package is a dependency of each of the above options and generally will not need to be installed explicitly.

In .NET versions prior to .NET 4.0, one line of code should be added to the Application_Start() event of Global.asax:

MvcSiteMapProvider.DI.Composer.Compose();

Note that this code is automatically executed if using .NET 4.0 or higher by the use of WebActivator, so in most cases you will not need to call it manually.

More? Please read the upgrade guide.

What’s next?

NuGet all the things! Install the new MvcSiteMapProvider.MVCx package (replace x with your ASP.NET MVC version) and try it out! Leave your comments, ideas and pull requests on our GitHub page.

Enjoy!

Windows Azure Traffic Manager Explained

With yesterday’s announcement on Windows Azure Traffic Manager surfacing in the management portal (as a preview), I thought it was a good moment to recap this more than 2 year old service. Windows Azure Traffic Manager allows you to control the distribution of network traffic to your Cloud Services and VMs hosted within Windows Azure.

 

 

What is Traffic Manager?

The Windows Azure Traffic Manager provides several methods of distributing internet traffic among two or more cloud services or VMs, all accessible with the same URL, in one or more Windows Azure datacenters. At its core, it is basically a distributed DNS service that knows which Windows Azure services are sitting behind the traffic manager URL and distributes requests based on three possible profiles:

  • Failover: all traffic is mapped to one Windows Azure service, unless it fails. It then directs all traffic to the failover Windows Azure service.
  • Performance: all traffic is mapped to the Windows Azure service “closest” (in routing terms) to the client requesting it. This will direct users from the US to one of the US datacenters, European users will probably end up in one of the European datacenters and Asian users, well, somewhere in the Asian datacenters.
  • Round-robin: requests are simply distributed between the various Windows Azure services defined in the Traffic Manager policy.

Now I’ve started this post with the slightly bitchy tone that “this service has been around for over two years”. And that’s true! It has been in the old management portal for ages and hasn’t since left the preview stage. However, don’t think nothing happened with this service: next to using Traffic Manager for cloud services and distributing their traffic across datacenters, we can now do the same for VMs as well. What about a SharePoint farm deployed in multiple datacenters, using Traffic Manager to distribute traffic geographically?

Why should I care?

We’ve seen it before: clouds being down. Amazon EC2, Google, Windows Azure, … They all have had their glitches. With any cloud going down, whether completely or partially, it seems a lot of websites “in the cloud” are down at that time. Most comments you read on Twitter at those times are along the lines of “outrageous!” and “don’t go cloud!”. While I understand these comments, I think they are wrong. These “clouds” can fail. They are even designed to fail, and often provide components and services that allow you to cope with these failures. You just have to expect failure at some point in time and build it into your application.

Yes, I just told you to expect failure when going to the cloud. But don’t consider a failing cloud a bad cloud or a cloud that is down. For your application, a “failing” cloud or server or database should be nothing more than a scaling operation. The only thing is: it’s scaling down to zero. If you design your application so that it can scale out, you should also plan for scaling “in”, eventually to zero. Use different availability zones on Amazon, and if you’re a Windows Azure user you are protected by fault domains within the datacenter, and Traffic Manager can save your behind cross-datacenter. Use it!

 

My thoughts on Traffic Manager

Let’s come back to that “2 year old service”. Don’t let that, or the fact that it “is still a preview”, hold you back from using Traffic Manager. Our MyGet web application has been making use of it since it was first introduced. While in the beginning we used it for performance reasons (routing US traffic to a US datacenter and EU traffic to an EU datacenter), we’ve changed the strategy and are now using it as a failover to the North Europe datacenter, in which nothing is deployed. The screenshot below highlights a degradation (because there indeed is no deployment in North Europe currently).

MyGet Windows Azure Traffic Manager

But why failover to a datacenter in which no deployments are done? Well, because if West Europe datacenter would fail, we can simply spin up a new deployment in North Europe. Yes, there will be some downtime, but the last thing we want to have in such situation is downtime from DNS propagation taking too long. Now we simply map www.myget.org to our Traffic Manager domain and whenever we need to switch, Traffic Manager takes care of the DNS part.

In general, Traffic Manager has probably been the most stable service in the Windows Azure platform. I haven’t experienced any issues so far with Traffic Manager over more than two years, preview mode or not.

Enjoy!

Update: Alexandre Brisebois, a colleague MVP, has some additional insights to share.

Autoscaling Windows Azure Cloud Services (and web sites)

At the Build conference, Microsoft today announced that Windows Azure Cloud Services now support autoscaling. And they do! From the Windows Azure Management Portal, we can use the newly introduced SCALE tab to configure autoscaling. That’s right: some configuration and we can select the range of instances we want to have. Windows Azure does the rest. And this is true for both Cloud Services and Standard Web Sites (formerly known as Reserved instances).

Automatic scaling Windows Azure

We can add various rules in the autoscaler:

  • The trigger for scaling: do we want to base scaling decisions on CPU usage or on the length of a given queue?
  • The scale up and scale down rules: do we scale by one instance or add / remove 5 at a time?
  • The interval: how long do we want to not touch the number of instances running after the previous scale operation?
  • The range: what’s the minimum and maximum required instances we want to have running?

Automatically increase instances under load

A long awaited feature is there! I'll enable this for some services and see how it goes...

Enabling PHP 5.5 on Windows Azure Web Sites using a remote shell and KuduExec

While this post will probably be outdated in the coming days, at the time of writing Windows Azure Web Sites has no PHP 5.5 support yet. In this post, we’ll explore how to enable PHP 5.5 on Windows Azure Web Sites ourselves. Last year my friend Cory wrote a post on enabling PHP 5.4 in Windows Azure Web Sites which applies to PHP 5.5 as well. However, I want to discuss a different approach. And do read on if PHP 5.5 is already officially available on WAWS: there are some tips and tricks in here.

Enabling PHP 5.5 on Windows Azure Web Sites is pretty simple. All we need is an extracted version of php-cgi.exe and all extensions on our web site, plus a handler mapping in IIS. Now… how to get that PHP executable there? Cory took the approach of using FTP to upload the executable. But why settle for FTP if we have shell access to our Windows Azure Web Site?

Shell access to Windows Azure Web Sites

Let’s make a little sidestep first. How do we connect to the Windows Azure Web Sites shell? It depends a bit on whether you are a Node-head or a .NET-head. If you’re the first, simply run the following command:

npm install kuduexec -g

In the other situation, download and compile KuduExec.Net.

KuduExec (or KuduExec.Net) are simple wrappers around the Windows Azure Web Sites API and can be used to get access to a shell on top of our web site. Both approaches use the same command name so let’s connect:

kuduexec https://<yourusername>@<yoursite>.scm.azurewebsites.net/

We will be connecting to the API endpoint of our web site, which is simply https://<yoursite>.scm.azurewebsites.net. Once we enter our password, we have shell access to our web site:

Shell access to Windows Azure Web Site

Now let’s get the PHP executable there.

Downloading PHP through shell

We want to download the correct PHP 5.5 build onto our Windows Azure Web Site. From PHP’s download page, we will need the URL of the VC11 x86 Non Thread Safe zip file. Next, we can use curl to download it into our web site’s file system. But where?

Windows Azure Web Sites has an interesting file system. Some folders are local to the host your site is running on, others are located on a central file system shared by all instances of the current web site. Remember: everything that’s under the VirtualDirectory0 folder is synchronized with other machines your web site runs on. So let’s create a bin folder there in which we’ll download PHP.

mkdir bin 
cd bin
curl -O http://windows.php.net/downloads/releases/php-5.5.0-nts-Win32-VC11-x86.zip

This will download the PHP ZIP to the file system.

Download PHP using curl

We also will need to unzip our PHP 5.5 installation. Luckily, the WAWS shell has a tool called unzip which we can invoke:

mkdir php-5.5.0
unzip php-5.5.0-nts-Win32-VC11-x86.zip -d php-5.5.0

If needed, we can change directories and run PHP from the shell. Do remember that when PHP requires input (which will be the case if no parameters are passed in), the shell will block.

Enabling our custom PHP version in Windows Azure Web Sites

The next thing we have to do is enable this version of PHP in our web site. This must be done through the management portal. From the CONFIGURE tab, we can add a handler mapping. A handler mapping is a method of instructing IIS, the web server, to run a given executable when a request for a specific file extension comes in. Let’s map *.php to our PHP executable. We can use the VirtualDirectory0 path we had before, or use its shorter form: D:\home. Our PHP installation lives in D:\home\bin\php-5.5.0\php-cgi.exe.

PHP handler mapping

Once saved, our web site should now be running PHP 5.5:

Running PHP 5.5 on Windows Azure

Enjoy!

And there it is - MvcSiteMapProvider v4 (beta)

It has been a while since a new major update has been done to the MvcSiteMapProvider project, but today is the day! MvcSiteMapProvider is a tool that provides flexible menus, breadcrumb trails, and SEO features for the ASP.NET MVC framework, similar to the ASP.NET SiteMapProvider model.

To be honest, I have not done a lot of work. Thanks to the power of open source (and Shad who did a massive job on refactoring the whole, thanks!), MvcSiteMapProvider v4 is around the corner.

A lot of things have changed. And by a lot, I mean A LOT! The most important change is that we’ve stepped away from the ASP.NET SiteMapProvider dependency. This has been a massive pain in the behind and source of a lot of issues. Whereas I initially planned on ditching this dependency with v3, it happened now anyway.

Other improvements have been done around dependency injection: every component in the MvcSiteMapProvider can now be replaced with custom implementations. A simple IoC container is used inside MvcSiteMapProvider but you can easily use your preferred one. We’ve created several NuGet packages for popular containers: Ninject, StructureMap, Unity, Autofac and Windsor. Note that we also have packages with the modules only so you can keep using your own container setup.

The sitemap building pipeline has changed as well. A collection of sitemap builders is used to build the sitemap hierarchy from one or more sources. The default configuration of sitemap builders include an XML parser builder, a reflection-based builder, and a builder that implements the visitor pattern which is used to resolve the URLs before they are cached. Both the builders and visitors can be replaced with 1 or more custom implementations, opening up the door to alternate data sources and alternate visitor actions. In other words, you can build the tree any way you see fit. The only limitation is that only one of the builders must decide which node is the root node of the tree (although subsequent builders may change that decision, if needed).

Next to that, a series of new helpers have been added, bugs have been fixed, the security model has been made more performant and lots more. Consider v4 as almost a rewrite for the entire project!

We’ve tried to make the upgrade path as smooth as possible but there may be some breaking changes in the provider. If you currently have the ASP.NET MVC SiteMapProvider installed in your project, feel free to give the new version a try using the NuGet package of your choice (only one is needed for your ASP.NET MVC version).

Install-Package MvcSiteMapProvider.MVC2 -Pre
Install-Package MvcSiteMapProvider.MVC3 -Pre
Install-Package MvcSiteMapProvider.MVC4 -Pre

Speaking of NuGet packages: by popular demand, the core of MvcSiteMapProvider has been extracted into a separate package (MvcSiteMapProvider.MVC<version>.Core) so that you don’t have to include views and so on in your library projects.

Please give the beta a try and let us know your thoughts on GitHub (or the comments below). Pull requests currently go in the v4 branch.

Create a list of favorite ReSharper plugins

With the latest version of the ReSharper 8 EAP, JetBrains shipped an extension manager for plugins, annotations and settings. Where it previously was a hassle and a suboptimal experience to install plugins into ReSharper, it’s really easy to do now. And what is really nice is that this extension manager is built on top of NuGet! Which means we can do all sorts of tricks…

The first thing that comes to mind is creating a personal NuGet feed containing just those plugins that are of interest to me. And where better to create such feed than MyGet? Create a new feed, navigate to the Package Sources pane and add a new package source. There’s a preset available for using the ReSharper extension gallery!

Add package source on MyGet - R# plugins

After adding the ReSharper extension gallery as a package source, we can start adding our favorite plugins, annotations and extensions to our own feed.

Add ReSharper plugins to MyGet

Of course there are some other things we can do as well:

  • “Proxy” the plugins from the ReSharper extension gallery and post your project/team/organization specific plugins, annotations and settings to your private feed. Check this post for more information.
  • Push prerelease versions of your own plugins, annotations and settings to a MyGet feed. Once stable, push them “upstream” to the ReSharper extension gallery.

Enjoy!