Maarten Balliauw {blog}

ASP.NET MVC, Microsoft Azure, PHP, web development ...


Source Control considered harmful

TL;DR: Using source control is a really bad idea. Or is it? Skip to Conclusion for the meat of this post.

One of the first things I do with a new project in Visual Studio is not add it to source control. There are many reasons, but it all boils down to this: Source Control introduces more problems than it solves.

Before I dive into this, I'll share the solution with you. Put your sources on a USB drive. Yes, it's that simple.

Implications

If you're like most other people, you don't like that solution, because it feels inefficient:

  • USB drives can get lost
  • USB drives can end up in the dishwasher
  • I have to buy a USB drive for every developer on the team
  • Sharing sources with distributed teams is more difficult: USB drives have to be shipped by snail mail

All of that is true, but then again...

  • You can always make a copy of a USB drive to safeguard against loss
  • Sharing USB drives is really easy: plug and play! Ease of use!
  • You can have lots of coffee waiting for a USB drive to arrive with that contribution to your OSS project

Still, many people go for source control: a central repository solves all the drawbacks of using a USB drive, so why not use source control?

Fragility

Have you ever let a junior developer loose on a git repository? I can promise you, it's not pretty.

  • Merges will go wrong
  • They will find out about rebasing and mess up the entire system
  • Pull requests on GitHub? One click to merge, no need to test or review!
  • Developers will forget to check in specific files

Again, all of this is easy with a USB drive: one location to store the project. Yes, merging is slightly difficult too, but then again, replaying history in source control is much worse.

And I haven't even talked about having to have a network share or a GitHub account in which you can have private repositories. That's all extra cost and extra risk. What if the Internet connection goes down? What if a dev's laptop breaks? You might even say a USB drive is too advanced and a typewriter is an even better way to write code!

Cost

Did I mention the cost of USB drives? At most conferences and shops you will get them for free. Even if you buy them, they cost around $0.10 per GB. USB drives are very inexpensive.

Compare that with source control: we need an Internet connection, a GitHub repository, and most importantly: devs will have to read documentation on using git or be coached by someone on the team. That's really inefficient and costs a lot of time!

Conclusion

You may have noted that this is a slightly strange post. You are correct, it is. I’m responding to some of the outrage regarding yesterday’s NuGet.org outage. Tweets and blog posts advised not using NuGet at all, or using NuGet but definitely not its package restore feature. That’s perfectly fine, but I don’t think the reasons for not using it are well founded, hence the above sarcasm. If it wasn’t clear: you should be using source control.

Should you use NuGet package restore? I think it depends mostly on your preference. It should not depend on NuGet.org outages, nor on the microwave killing your WiFi signal and failing builds that use package restore. Should you add packages to your repository or use package restore? It depends on what you want to achieve and how you want to work. I prefer not to add them: these dependencies are already versioned (package version and packages.config), so why version them again? We don’t add the issues from our issue tracker to source control either, right?

We put issues in a specialized system for managing issues. In my opinion, the same should be true for software and component dependencies. But then again: if you want to add packages to source control, fine by me. As some tweets said, you don’t have to do it for the minimal disk space optimizations. All that matters is if it makes sense to your process. 

Just like with source control, issue trackers and other things in your build process (like package restore), you should read up on them, play with them and know the risks. Do we know that our Internet connection can break during solar storms? Well, yes. It’s a minor risk, but if it’s important to your shop, do mitigate that risk. Do laptops break? Yes. If it’s important that you can keep working even if a laptop crashes, buy some more and keep them up-to-date with your main development machine. If you rely on GitHub and want to get work done when they have issues, make sure you have an up-to-date fork somewhere on a file share. Make that two file shares!

And if you rely on NuGet package restore… you get the point, right? For NuGet, there are private repositories available that can host your in-house packages and the ones you are using from upstream sources like NuGet.org. Use them, if they matter for your development process. Know about NuGet 2.8’s automatic fallback to the local cache you have on disk and if something goes wrong, use that cache until the package source is back up.

The development process and the tools are part of your system. Know your tools. Even if it requires you to read crazy books like how to work with git. Or Pro NuGet 2.

Pro NuGet second edition is out

Pro NuGet will teach you all there is to know about NuGet

Pfew! Around February 2013, Xavier and I started planning work on an update of our book. Eight months later, we’re proud to present you with Pro NuGet (second edition). It’s been a tough couple of months writing this: Xavier has become a father for the second time (congratulations!), we’ve had two massive updates to NuGet to work into our book, … But here it is!

What’s new?

  • A number of NuGet workflows have changed, and new ones have been added. Expect coverage of all of these, including NuGet’s old and new package restore functionality.
  • Want to work with NuGet and Windows Azure Websites, TeamCity, Visual Studio Online, OctopusDeploy, NuGet Gallery, ProGet or MyGet? We have a bunch of recipes for you!
  • Pitfalls of package versioning
  • Building a plugin system based on NuGet

Beyond that, there is a lot more meat in there!

  • Understand how NuGet fits into the big picture of your software development process to save you time and money.
  • Keep your team working when your project depends on an external resource (such as a web service or cloud) which suddenly becomes unavailable.
  • Decide whether or not to auto-update NuGet packages within a continuous integration process for maximum reliability and speed.
  • Combine NuGet with PowerShell to create your own Cmdlets and extend the base toolset in an extremely powerful manner.
  • Evaluate the pros and cons of hosting your own NuGet repository.
  • Incorporate NuGet seamlessly within your continuous integration process.
  • Much, much more!

We would love to get your feedback! E-mail us or write a review on your blog or Amazon. Enjoy the read!

PS: Thanks to our excellent reviewers (the NuGet team) and everyone at Apress! There are a lot of people involved in getting a quality book out there. Thanks!

Developing Windows Azure Mobile Services server-side

Word of warning: this is a partial cross-post from the JetBrains WebStorm blog. The post you are currently reading adds some more information around Windows Azure Mobile Services, builds on a full example and is a bit more in-depth.

With Microsoft’s Windows Azure Mobile Services, we can build a back-end for iOS, Android, HTML, Windows Phone and Windows 8 apps that supports storing data, authentication, push notifications across all platforms and more. There are client libraries available for all these platforms which can be used when developing in an IDE of choice, e.g. AppCode, Google Android Studio or Visual Studio. In this post, let’s focus on what these different platforms have in common: the server-side code.

This post was sparked by my buddy Kristof Rennen’s session for our user group. During his session he mentioned a couple of times how he dislikes Node.js and the trial-and-error manner of building the server-side due to lack of good tooling. Working for a tooling vendor and intrigued by the quest of finding a better way, I decided to post the short article you are currently reading.

Do note that I will focus more on how to get your development environment set-up and less on the Windows Azure Mobile Services feature set. Yes, you will learn some of the very basics but there are way better resources available for getting in-depth knowledge on the topic.

Here’s what we will see in this post:

  • Setting up a Windows Azure Mobile Service
  • Creating a table and storing data
  • A simple HTML/JS client
  • Adding logic to our API
  • Working on server-side logic with WebStorm
  • Sending e-mail using a Node.js module
  • Putting our API to the test with the REST client
  • Unit testing our logic

The scenario

Doing some exploration is always more fun when we can do it based on a simple scenario. Whenever JetBrains goes to a conference and we have a booth, we like to do a raffle for licenses. The idea is simple: come to our booth for a chat, fill out a simple form and we will pick random names after the conference and send a free license.

For this post, I’ve created a very simple form in HTML and JavaScript, collecting visitor name and e-mail address.


Once someone participates in the raffle, the name and e-mail address are stored in a database and we send out an e-mail thanking that person for visiting the booth together with a link to download a product trial.

Setting up a Windows Azure Mobile Service

First things first: we will require a Windows Azure account to start developing. Next, we can create a new Mobile Service through the Windows Azure Management Portal.


Next, we can give our service a name and pick the datacenter location for it. We also have to provide the type of database we want to use: a free, 20 MB database, or a full-fledged SQL Database. While Windows Azure Mobile Services is always coupled to a database, we can build a custom API with it as well.


Once completed, we get several tabs to work with. There’s the initial welcome screen, displaying links to documentation and client libraries. The other tabs give access to monitoring, scaling, how we want to authenticate users, push notification settings and logs. Since we want to store data of booth visitors, let’s enter the Data tab.

Creating a table and storing data

From the Data tab, we can create a new table. Let’s call it Visitor. When creating a new table, we have to specify access rules for the API that will be available on top of it.


We can tell who can read (API GET request), insert (API POST request), update (API PATCH request) and delete (API DELETE request). Since our application will only insert new data and we don’t want to force booth visitors to log in with their social profiles, we can specify inserts can be done if an API key is provided. All other operations will be blocked for outside users: reading and deleting will only be available through the Windows Azure Management Portal with the above settings.

Do we have to create columns for storing booth visitor data? By default, Windows Azure Mobile Services has “dynamic schema” enabled, which means we can throw some JSON at our Mobile Service and it will store the data for us.
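For example, posting a record like the following (a hypothetical booth visitor) is enough for matching columns to appear on the table:

    { "name": "John Doe", "email": "john@example.com" }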

A simple HTML/JS client

As promised earlier in this post, let’s see how we can build a simple client for the service we have just created. We’ll go with an HTML and JavaScript based client as it’s fairly easy to demonstrate. Again, have a look at the other client SDKs for the platform you are developing for.

Our HTML page consists of nothing but two text boxes and a button, conveniently named name, email and send. There are two ways of sending data to our Mobile Service: calling the API directly or making use of the client library provided. Both are easy to do: the API lives at https://<servicename>.azure-mobile.net/tables/<tablename> and we can POST a JSON-serialized object to it, an approach we’ll take later in this blog post. There is also a JavaScript client library available from https://<servicename>.azure-mobile.net/client/MobileServices.Web-1.0.0.min.js which our client is using.

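Roughly, the client code comes down to this (a sketch: the service URL and API key are placeholders, and the element IDs match the form described above):

    <script src="https://<servicename>.azure-mobile.net/client/MobileServices.Web-1.0.0.min.js"></script>
    <script>
        // Sketch of the client; replace the URL and API key with your own.
        var client = new WindowsAzure.MobileServiceClient(
            'https://<servicename>.azure-mobile.net/', '<your-api-key>');

        document.getElementById('send').onclick = function () {
            var visitor = {
                name: document.getElementById('name').value,
                email: document.getElementById('email').value
            };

            // Insert the visitor record into the Visitor table.
            client.getTable('Visitor').insert(visitor).done(
                function () { alert('Thanks for participating!'); },
                function (error) { alert('Something went wrong: ' + error.message); });
        };
    </script>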

As we can see, a new MobileServiceClient is created on which we can get a table reference (getTable) and insert a JSON-formatted object. Do note that we have to pass in an API key in the client constructor, which can be obtained from the Windows Azure Management Portal under the Manage Keys toolbar button.

From the portal, we can now see the data we’re submitting from our simple application.


Adding logic to our API

Let’s make it a bit more exciting! What if we wanted to store a timestamp with every record? We may want to have some insight into when our booth was busiest. We could send a timestamp from the client, but that would only add clutter to our client-side code. Also, if we wanted to port the HTML/JS client to other platforms, we would have to make sure every client sends this data to our mobile service. In short: this calls for some server-side logic.

For every table created, we can make use of the Script tab to add custom logic to read, insert, update and delete operations which we can write in JavaScript. By default, this is what a script for insert may look like:

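    // The stock insert script: simply execute the request as-is.
    function insert(item, user, request) {
        request.execute();
    }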

The insert function will be called with three parameters: the item to be stored (our JSON-serialized object), the current user and the full request. By default, the request.execute() function is called, which makes use of the other two parameters internally. Let’s enrich our item with a timestamp.

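    // Enrich the item with a timestamp before executing the insert.
    // (The property name "createdAt" is illustrative; thanks to the
    // dynamic schema, any property name will grow a matching column.)
    function insert(item, user, request) {
        item.createdAt = new Date();
        request.execute();
    }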

Hitting Save will deploy this script to our mobile service which from now on will store an inserted timestamp in our database as well.

This is a very trivial example. There are a lot of things that can be done server-side: enforcing validation, record filtering, storing data in other tables as well, sending e-mail or text messages, … Here’s a post with some common scenarios. Full reference to the server-side objects is also available.

Working on server-side logic with WebStorm

Unfortunately, the in-browser editor for server-side scripts is a bit limited. It features no autocompletion and all code has to go in one file. How would we create shared logic which can be re-used across different scripts? How would we unit test our code? This is where WebStorm comes in. We can access the complete server-side code through a Git repository and work on it in a full IDE!

The Git access to our mobile service is disabled by default. Through the portal’s right-hand side menu, we can enable it by clicking the Set up source control link. Next, we can find repository details from the Configure tab.


We can now use WebStorm’s VCS | Checkout From Version Control | Git menu to bring down the server-side code for our Windows Azure Mobile Service.


In our project, we can see several folders and files. The service/api folder can hold custom APIs (check the readme.md file for more info). service/scheduler can hold scripts that execute at a given time or interval, much like CRON jobs. service/shared can hold shared scripts that can be used inside table logic, custom APIs and scheduler scripts. In the service/table folder we can find the script we have created through the portal: visitor.insert.js. Also note the visitor.json file, which contains the access rules we configured through the portal earlier.


From now on, we can work inside WebStorm and push to the remote Git repository if we want to deploy our new code.

Sending e-mail using a Node.js module

Let’s go back to our initial requirements: whenever someone enters their name and e-mail address in our application, we want to send out an e-mail thanking them for participating. We can do this by making use of an NPM module, for example SendGrid.

Windows Azure Mobile Services comes with some NPM modules preinstalled, like SendGrid and Twilio. However, we want to make sure we are always using the same version of the NPM package, so let’s install it into our project. WebStorm has a built-in package manager to do this, but Windows Azure Mobile Services requires us to install the module in a non-standard location (the service folder), so we will use the Terminal tool window to install it.

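From the terminal, installing the module into the service folder boils down to something like this (exact paths depend on where the repository was checked out):

    cd service
    npm install sendgrid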

Once finished, we can start working on our e-mail logic. Since we may want to re-use the e-mail logic (and we want to unit test it later), it’s best to create our logic in the shared folder.

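A sketch of what such a shared module could look like (the file name, SendGrid credentials and message text are illustrative):

    // service/shared/emails.js -- sketch; credentials are placeholders.
    var SendGrid = require('sendgrid').SendGrid;

    exports.sendThankYouMessage = function (name, email) {
        var sendgrid = new SendGrid('<sendgrid-user>', '<sendgrid-key>');
        sendgrid.send({
            to: email,
            from: 'info@example.com',
            subject: 'Thanks for visiting our booth!',
            text: 'Hi ' + name + ', thank you for participating in our raffle.'
        }, function (success, message) {
            if (!success) {
                console.error('Could not send e-mail: ' + message);
            }
        });
    };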

In our shared module, we can make use of the SendGrid module to create and send an e-mail. We can export our sendThankYouMessage function to consumers of our shared module. In the visitor.insert.js script we can require our shared module and make use of the functionality it exposes. And as an added bonus, WebStorm provides us with autocompletion, code analysis and so on.

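Again as a sketch, the updated insert script could look like this:

    // service/table/visitor.insert.js -- sketch of the updated insert script.
    var emails = require('../shared/emails');

    function insert(item, user, request) {
        item.createdAt = new Date();
        request.execute({
            success: function () {
                // Respond to the client first, then send the thank-you e-mail.
                request.respond();
                emails.sendThankYouMessage(item.name, item.email);
            }
        });
    }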

Once we’ve updated our code, we can transfer our server-side code to Windows Azure Mobile Services. Ctrl+K (or Cmd+K on Mac OS X) allows us to commit and push from within the IDE.


Putting our API to the test with the REST client

Once our changes have been deployed, we can test our API. This can be done using one of the client libraries or by making use of WebStorm’s built-in REST client. From the Tools | Test RESTful Web Service menu we can craft our API calls manually.

We can specify the HTTP method to use (POST since we want to insert) and the URL to our Windows Azure Mobile Services endpoint. In the headers section, we can add a Content-Type header and set it to application/json. We also have to specify an API key in the X-ZUMO-APPLICATION header. This API key can be found in the Windows Azure Management Portal. On the right-hand side we can provide the text to post, in this case a JSON-serialized object with some properties.

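The raw request then looks something like this (the service name, API key and payload are placeholders):

    POST https://<servicename>.azure-mobile.net/tables/visitor HTTP/1.1
    Content-Type: application/json
    X-ZUMO-APPLICATION: <your-api-key>

    { "name": "John Doe", "email": "john@example.com" }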

After running the request, we get back the response headers and a response body.


No error message but an object is being returned? Great, that means our code works (and should also be sending out an e-mail). If something does go wrong, the Logs tab in the Windows Azure portal can be a tremendous help in finding out what went wrong.

Through the toolbar on the left, we can export/import requests, making it easy to create a number of predefined requests that can easily be run over and over for testing the REST API.

Unit testing our logic

With WebStorm we can easily test our JavaScript code and custom Node.js modules. Let’s first set up our IDE. Unit testing can be done using the Nodeunit testing framework, which we can install using the Node.js package manager.


Next, we can create a new Run Configuration from the toolbar selecting Nodeunit as the configuration type and entering all required configuration details. In our case, let’s run all tests from the test directory.


Next, we can create a folder that will hold our tests and mark it as a Test Source Root (open the context menu and use Mark Directory As | Test Source Root). Tests for Nodeunit are always considered modules and should export their test functions. Here’s a very basic example which tells Nodeunit to wait for one assertion, assert that a boolean is true and marks the test case completed.

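    // test/basic.js -- a minimal Nodeunit test module.
    exports.testBoolean = function (test) {
        test.expect(1);                        // wait for one assertion
        test.ok(true, 'this should be true');  // assert that a boolean is true
        test.done();                           // mark the test case completed
    };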

Of course we can also test our business logic. It’s best to create separate modules under the shared folder as they will be easier to unit test. However if you do have to test the actual table scripts (like insert functionality), there is a little trick that allows doing just that. The following snippet exports the insert function outside of the table-specific module:

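    // service/table/visitor.insert.js -- sketch of the trick: next to the
    // regular insert function, export it so tests can require() this module.
    function insert(item, user, request) {
        // ... table logic from before ...
        request.execute();
    }

    exports.insert = insert;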

We can now test the complete visitor.insert.js module and even provide mocks to work with. The following sketch (consistent with the snippets above) loads our modules and sets up test expectations; we also override specific functionality, such as the sendThankYouMessage function, just to make sure it’s called by our table API logic:

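    // test/visitor.insert.test.js -- sketch; assumes the insert script above
    // exports its insert function and requires shared/emails.
    var visitorTable = require('../service/table/visitor.insert');
    var emails = require('../service/shared/emails');

    exports.testInsertSendsThankYouMessage = function (test) {
        test.expect(2);

        // Override the shared e-mail logic: record the call instead of sending.
        emails.sendThankYouMessage = function (name, email) {
            test.equal(email, 'john@example.com');
            test.done();
        };

        // Mock the Mobile Services request object.
        var request = {
            execute: function (options) { options.success(); },
            respond: function () { test.ok(true, 'a response was sent'); }
        };

        visitorTable.insert({ name: 'John Doe', email: 'john@example.com' }, null, request);
    };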

The full source code for both the server-side and client-side application can be found on https://github.com/maartenba/JetBrainsBoothMobileService.

If you would like to learn more about Windows Azure Mobile Services and work with authentication, push notifications or custom APIs, check out the getting started documentation. And if you haven’t already, give WebStorm a try.

Enjoy!

An autoscaling build farm using TeamCity and Windows Azure

Autoscaling... myself!

NOTE: While the content in this blog post will still work, JetBrains now has a plugin that is the recommended way of working with TeamCity and build agents on Azure. Please check this blog post to learn more about it.


Cloud computing is often referred to as a cost saver due to its billing models. If we can move seasonal workloads to the cloud, cost reduction will follow. No matter if it’s really “seasonal seasonal” (e.g. a temporary high workload around the holidays) or “daily seasonal”, where workloads differ depending on the time of day, these workloads have “cloud” written all over them.

A workload that may be seasonal is that of build servers. Take TeamCity, for example. A TeamCity server orchestrates a pool of build agents that are either idle or compiling source code into binaries. Depending on how your team is structured and when people work, there is a big chance that the pool of build agents is doing nothing for several hours every day, except incurring cost. What if we could move the build agents to a platform like Windows Azure and have them autoscale, depending on the actual load on the build farm?

Creating a build agent virtual machine

The first step in setting this brilliant scheme in motion is to set up a build agent virtual machine. We can select any virtual machine image we want for our build agent, and even upload our own VHDs if needed. I'm selecting a Windows Server 2012 image here, but if you need a different OS for your build agent you can select that instead.

During the creation of this build agent, there is nothing special we should do. We can select a small/medium/large/extra large instance, give ourselves an administrator password and so on. The only important step here is that we set up an endpoint for the TeamCity build agent, listening on TCP port 9090.

Open load balancer endpoint

Once the machine is started, we will have to install all required prerequisites for our build agent. We can connect using remote desktop (or SSH if it’s a Linux machine). On the machine I have here, I installed all .NET framework versions to ensure I'm able to build .NET projects. On a build machine for Java, we would install the correct runtimes and JDK's for our projects. Anything, really, if it is needed for the sources we’ll be building. On Windows, I typically use Web Platform Installer and Chocolatey to get this done as automated as possible.

Web Platform Installer in action

Installing TeamCity build agent

In order for our build agent to communicate with the TeamCity server, we have to install the build agent. We can do this by navigating to our TeamCity server from within the virtual machine and use the Install Build Agents link from the Agents page.

Installing build agent

On a Windows server, we can use the Windows Installer but we can also use Java Web Start or even simply extract a ZIP file. This last option can be useful on a Linux machine, for example.

Installing the build agent is pretty much a next, next, finish operation. The only important thing is that we run the agent as a Windows service (or have it automatically start at boot time on other operating systems). We also want to specify the URL to our TeamCity server as well as the port on which the build agent will listen for incoming data from TeamCity. Note that this port should be the one opened in the load balancer earlier, in the case of this machine port 9090.

Specify port and server

Before starting the build agent, make sure the local firewall allows incoming connections. Through the Windows firewall, allow incoming connections for port 9090 (and while we’re at it, for a range of ports so we can easily clone this machine and not care about the firewall anymore).

Windows Firewall configuration

If we now start the build agent service, it should connect to our TeamCity server. Under the agents tab, we should be seeing a new unauthorized agent popping up. If that works, we’re good to go with our build agent farm.

A new unauthorized build agent shows up

Don’t shut down the machine just yet, we still need to prepare it for creating a build agent image.

Creating a build agent image

While still connected through remote desktop, open a command prompt and run the sysprep /generalize command from the c:\windows\system32\sysprep folder. On Linux, there’s a similar option in the Windows Azure agent. Sysprep ensures the machine can be cloned into a new machine, with each clone getting its own settings like a hostname and IP address. A non-sysprepped machine can thus never be cloned.

Sysprep our machine

Once finished, our RDP connection will be gone and our machine can be shut down in the Windows Azure Management Portal. In fact, it must be shut down. Once that is done, we can use the Capture button and transform our virtual machine into a template we can create new virtual machines from.

Capturing a virtual machine image

The capturing process takes a couple of minutes and results in there no longer being a build server virtual machine in the Windows Azure Management Portal. Is that bad? No: we can now start cloning the machine and create multiple copies, all having the exact same configuration and components installed.

Setting up multiple build agent machines

The next thing we want to have is multiple build agent machines. From the Windows Azure Management portal, we can create them. Not based on a platform image but using the image created during the previous step.

Using a virtual machine image as the base

The virtual machine configuration can be whatever we want. Do we want extra small instances or extra large? It’s all up to us and our credit card. On the next page, we have to specify some more important details. First, we have to select or create a cloud service. This will be the DNS host name under which all of our build agents are going to live. We also have to specify the availability set, in essence a setting telling Windows Azure to never unplug power or networking for all machines in this set at the same time.

Selecting cloud service and availability set

We will be creating a couple of machines, so it’s important to get the next page right. Since all our machines will share the same hostname and IP address to the outside world, our build agents have to listen on different TCP ports. Make sure that the first agent maps port 9090 to port 9090, the second one 9091 to 9091 and so on. Not doing this will mess with your mind afterwards when troubleshooting.

Endpoint configuration

Finish the process, let Windows Azure start the machine and create a new one. Important: same cloud service, same availability set and correct endpoint mappings!

Configuring the build agents

Once we have several machines running, we have to connect to them using remote desktop again. This can be done through the portal. Once in, locate the build agent configuration file (c:\BuildAgent\conf\buildAgent.properties in a default installation) and set the port number on which it listens to the one that was mapped as an external endpoint. Again, agent one will listen on port 9090, agent number two on 9091 and so on. We can also set a better name for the build agent; in my case I’ve chosen to go with “agent2”. Very inspirational and all.

Setting build agent name and port
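For reference, the relevant settings in buildAgent.properties look something like this (the server URL is a placeholder; these values are for the second agent):

    # c:\BuildAgent\conf\buildAgent.properties (excerpt)
    serverUrl=http://<your-teamcity-server>/
    name=agent2
    ownPort=9091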

Save and restart the build agent (or the machine). The TeamCity server should now start listing all build agents.

Windows Azure build agents for TeamCity

Make sure to authorize them all, as we want to be sure they can connect to the TeamCity server later on. Once that has been done and all build agents are listed here, we can shut them all down except for one. We want to have something running, right?

Configuring autoscaling

You may have been wondering: why did we have to put all these machines under the same cloud service? The reason is simple: we want to autoscale our farm, and this can only be done within one cloud service. From the cloud service, click the Scale tab and start configuring.

Autoscaling configuration

For this post, I’ve chosen the following values:

  • Autoscale based on CPU
  • Have a minimum of one instance, and a maximum of, well… all of them.
  • The target CPU range is 0 to 10. For a production environment this will typically be between 60 and 80 or 40 and 80, depending on the chosen machine size for the build agents. Windows Azure will trigger an autoscaling operation if we go outside this range; a small, low range means it will trigger a scale operation much faster, while bigger numbers mean a slower response.
  • Scale up by and scale down by, as well as the number of minutes to wait after the previous operation, are up to you. If you want a build agent to remain online for 30 minutes after it has been started, even if CPU usage drops, set it to 30 minutes. If two machines should be started at once, increase that number as well.

Scaling will happen based on the average CPU percentage of all running machines. If our builds run at 100% CPU all the time on our agent, we can set the thresholds a bit higher. If builds only take 20%, we might want to run multiple agents on one machine or decrease the scaling thresholds a bit. Want to measure CPU utilization for a given build? Better read up on the TeamCity Performance Monitor then.

Putting it to the test

Putting it to the test shouldn’t be that hard. Start some builds and make sure the agent gets loaded with builds. Once we hit the CPU threshold, Windows Azure will launch a virtual machine that was previously turned off.

Windows Azure autoscaling in action

Once it has booted, we will also see it surface on the TeamCity server.

Build agents on TeamCity

Once the load goes down again, Windows Azure will shut down machines that are below the thresholds and make sure they don’t incur costs any longer. Which is pretty impressive!

If a development team triggers a massive amount of builds during the day, Windows Azure will pretty soon scale out to a higher number of virtual build agents. And at night, when only some builds are being triggered, it will scale back to fewer instances. If, for example, we manage to run machines for only 12 hours a day instead of 24, our build farm’s cost goes down by half.

TeamCity’s architecture, as well as the way Windows Azure works, makes this cost reduction possible. It’s also fun to set up, and it gives us a wide range of options (how about a Windows Server 2012 farm, a Linux farm, and so on).

Enjoy!

Running unit tests when deploying ASP.NET to Windows Azure Web Sites

Deployment failed

One of the well-loved features of Windows Azure Web Sites is the fact that you can simply push your ASP.NET application’s source code to the platform using Git (or TFS or Dropbox) and have those sources compiled and deployed on your Windows Azure Web Site. If you’ve checked the management portal earlier, you may have noticed that a number of deployment steps are executed: the deployment process searches for the project file to compile, compiles it, copies the build artifacts to the web root and has your website running. But did you know you can customize this process?

[update] MSTest seems to work now as well, using the console runner from VS2012.

Customizing the build process

Before customizing the build process, let me explain how it works. In the root of your repository, you can add a .deployment file containing a simple directive: which command should be run upon deployment.

    [config]
    command = build.bat

This command can be a batch file, a PHP file, a bash file and so on. As long as we can tell Windows Azure Web Sites what to execute. Let’s go with a batch file.

    @echo off
    echo This is a custom deployment script, yay!

When pushing this to Windows Azure Web Sites, here’s what you’ll see:

Windows Azure Web Sites custom build

In this batch file, we can use some environment variables to further customize the script:

  • DEPLOYMENT_SOURCE - The initial "working directory"
  • DEPLOYMENT_TARGET - The wwwroot path (deployment destination)
  • DEPLOYMENT_TEMP - Path to a temporary directory (removed after the deployment)
  • MSBUILD_PATH - Path to msbuild

After compiling, you can simply xcopy your application to the %DEPLOYMENT_TARGET% path and have your website live.

Generating deployment scripts

Creating deployment scripts can be a tedious job; good thing the azure-cli tools are there! Once those are installed, simply invoke the following command and have both the .deployment file as well as a batch or bash file generated:

    azure site deploymentscript --aspWAP "path\to\project.csproj"

For reference, here’s what is generated:

    @echo off

    :: ----------------------
    :: KUDU Deployment Script
    :: ----------------------

    :: Prerequisites
    :: -------------

    :: Verify node.js installed
    where node 2>nul >nul
    IF %ERRORLEVEL% NEQ 0 (
      echo Missing node.js executable, please install node.js, if already installed make sure it can be reached from current environment.
      goto error
    )

    :: Setup
    :: -----

    setlocal enabledelayedexpansion

    SET ARTIFACTS=%~dp0%artifacts

    IF NOT DEFINED DEPLOYMENT_SOURCE (
      SET DEPLOYMENT_SOURCE=%~dp0%.
    )

    IF NOT DEFINED DEPLOYMENT_TARGET (
      SET DEPLOYMENT_TARGET=%ARTIFACTS%\wwwroot
    )

    IF NOT DEFINED NEXT_MANIFEST_PATH (
      SET NEXT_MANIFEST_PATH=%ARTIFACTS%\manifest

      IF NOT DEFINED PREVIOUS_MANIFEST_PATH (
        SET PREVIOUS_MANIFEST_PATH=%ARTIFACTS%\manifest
      )
    )

    IF NOT DEFINED KUDU_SYNC_COMMAND (
      :: Install kudu sync
      echo Installing Kudu Sync
      call npm install kudusync -g --silent
      IF !ERRORLEVEL! NEQ 0 goto error

      :: Locally just running "kuduSync" would also work
      SET KUDU_SYNC_COMMAND=node "%appdata%\npm\node_modules\kuduSync\bin\kuduSync"
    )

    IF NOT DEFINED DEPLOYMENT_TEMP (
      SET DEPLOYMENT_TEMP=%temp%\___deployTemp%random%
      SET CLEAN_LOCAL_DEPLOYMENT_TEMP=true
    )

    IF DEFINED CLEAN_LOCAL_DEPLOYMENT_TEMP (
      IF EXIST "%DEPLOYMENT_TEMP%" rd /s /q "%DEPLOYMENT_TEMP%"
      mkdir "%DEPLOYMENT_TEMP%"
    )

    IF NOT DEFINED MSBUILD_PATH (
      SET MSBUILD_PATH=%WINDIR%\Microsoft.NET\Framework\v4.0.30319\msbuild.exe
    )

    ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    :: Deployment
    :: ----------

    echo Handling .NET Web Application deployment.

    :: 1. Build to the temporary path
    %MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\path.csproj" /nologo /verbosity:m /t:pipelinePreDeployCopyAllFilesToOneFolder /p:_PackageTempDir="%DEPLOYMENT_TEMP%";AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release
    IF !ERRORLEVEL! NEQ 0 goto error

    :: 2. KuduSync
    echo Kudu Sync from "%DEPLOYMENT_TEMP%" to "%DEPLOYMENT_TARGET%"
    call %KUDU_SYNC_COMMAND% -q -f "%DEPLOYMENT_TEMP%" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.deployment;deploy.cmd" 2>nul
    IF !ERRORLEVEL! NEQ 0 goto error

    ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

    goto end

    :error
    echo An error has occured during web site deployment.
    exit /b 1

    :end
    echo Finished successfully.

This script does a couple of things:

  • Ensures node.js is installed on Windows Azure Web Sites (needed later on for synchronizing files)
  • Sets up a bunch of environment variables
  • Runs msbuild on the project file we specified
  • Uses kudusync (a node.js based tool, hence node.js) to synchronize modified files to the wwwroot of our site

Try it: after pushing this to Windows Azure Web Sites, you’ll see the custom script being used. Not much added value so far, but it’s the base we have to build on.

Unit testing before deploying

Unit tests would be nice! All you need is a couple of unit tests and a test runner. You can add them to your repository and store them there, or simply download them during the deployment. In my example, I’m using the Gallio test runner because it runs almost all test frameworks, but feel free to use the test runner for NUnit or xUnit instead.

Somewhere before the line that invokes msbuild and ideally in the “setup” region of the deployment script, add the following:

    IF NOT DEFINED GALLIO_COMMAND (
      IF NOT EXIST "%appdata%\Gallio\bin\Gallio.Echo.exe" (
        :: Downloading unzip
        echo Downloading unzip
        curl -O http://stahlforce.com/dev/unzip.exe
        IF !ERRORLEVEL! NEQ 0 goto error

        :: Downloading Gallio
        echo Downloading Gallio
        curl -O http://mb-unit.googlecode.com/files/GallioBundle-3.4.14.0.zip
        IF !ERRORLEVEL! NEQ 0 goto error

        :: Extracting Gallio
        echo Extracting Gallio
        unzip -q -n GallioBundle-3.4.14.0.zip -d %appdata%\Gallio
        IF !ERRORLEVEL! NEQ 0 goto error
      )

      :: Set Gallio runner path
      SET GALLIO_COMMAND=%appdata%\Gallio\bin\Gallio.Echo.exe
    )

See what happens there? We check if the local system on which your files are stored in Windows Azure Web Sites already has a copy of the Gallio.Echo.exe test runner. If not, we download a tool which allows us to unzip. Next, the entire Gallio test runner is downloaded and extracted. As a final step, the %GALLIO_COMMAND% variable is populated with the full path to the test runner executable.

Right before the line that calls “kudusync”, add the following:

    echo Running unit tests
    "%GALLIO_COMMAND%" "%DEPLOYMENT_SOURCE%\SampleApp.Tests\bin\Release\SampleApp.Tests.dll"
    IF !ERRORLEVEL! NEQ 0 goto error

Yes, the name of your test assembly will be different; you should obviously change that. What happens here? Well, we’re invoking the test runner on our unit tests. If it fails, we abort the deployment. Push it to Windows Azure and see for yourself. Here’s what is displayed on success:

Windows Azure Web Site unit tests

All green! And on failure, we get:

Gallio test runner Windows Azure

In the portal, you can clearly see that deployment was aborted:

Deployment fail when unit tests fail

That’s it. Enjoy!

How I push GoogleAnalyticsTracker to NuGet

If you check my blog post Tracking API usage with Google Analytics, you’ll see that a small open-source component evolved from MyGet. This component, GoogleAnalyticsTracker, lives on GitHub and NuGet and has since evolved into something that supports Windows Phone and Windows RT as well. But let’s not focus on the open-source aspect.

It’s funny how things evolve. GoogleAnalyticsTracker started as a small component inside MyGet, and for a couple of weeks now it has used MyGet to publish itself to NuGet. Say what? In this blog post, I’ll elaborate a bit on the development tools used on this tiny component.

Source code

Source code for GoogleAnalyticsTracker can be found on GitHub. This is the main entry point to all activity around this “project”. If you have a nice addition, feel free to fork it and send me a pull request.

Staging NuGet packages

Whenever I update the source code, I want to automatically build it and publish NuGet packages for it. Not directly to NuGet: I want to keep the regular version and the WinRT and WP versions more or less in sync regarding version numbers. Also, I sometimes miss something which I then fix five minutes later. In the meantime, I like to have the generated package on some sort of “staging” feed, at MyGet. It’s even public; check http://www.myget.org/F/githubmaarten if you want to use my development artifacts.

When I decide it’s time for these packages to move to the “official NuGet package repository” at NuGet.org, I simply click the “push” button in my MyGet feed. Yes, that’s a manual step but I wanted to have some “gate” in the middle where I should explicitly do something. Here’s what happens after clicking “push”:

Push to NuGet

That’s right! You can use MyGet as a staging feed and from there push your packages onwards to any other feed out there. MyGet takes care of the uploading.

Building the package

There’s one thing which I forgot… How do I build these packages? Well… I don’t. I let MyGet Build Services do the heavy lifting. On your feed, you can simply click the “Add GitHub project” button and a list of all your GitHub repos will be shown:

Build GitHub project

Tick a box and you’re ready to roll. And if you look carefully, you’ll see a “Build hook URL” being shown:

MyGet build hook

Back on GitHub, there’s this concept of “service hooks”, basically small utilities that you can fire whenever a new commit occurs on your repository. Wouldn’t it be awesome to trigger package creation on MyGet whenever I check in code to GitHub? Guess what…

GitHub build hook

That’s right! And MyGet even runs unit tests. Some sort of continuous integration, where I have the choice to promote packages to NuGet whenever I think they are stable.

Use NuGet Package Restore to avoid pushing assemblies to Windows Azure Websites

Windows Azure Websites allows you to publish a web site in ASP.NET, PHP, Node, … to Windows Azure by simply pushing your source code to a TFS or Git repository. But how does Windows Azure Websites manage dependencies? Do you have to check-in your assemblies and NuGet packages into source control? How about no…

NuGet 1.6 shipped with a great feature called NuGet Package Restore. This feature lets you use NuGet packages without adding them to your source code repository. When your solution is built by Visual Studio (or MSBuild, which is used in Windows Azure Websites), a build target calls nuget.exe to make sure any missing packages are automatically fetched and installed before the code is compiled. This helps you keep your source repo small by keeping large packages out of version control.

Enabling NuGet Package Restore

Enabling NuGet package restore can be done from within Visual Studio. Simply right-click your solution and click the “Enable NuGet Package Restore” menu item.

NuGet package restore Windows Azure Websites Antares

Visual Studio will now do the following with the projects in your solution:

  • Create a .nuget folder at the root of your solution, containing a NuGet.exe and a NuGet build target
  • Import this NuGet target into all your projects so that MSBuild can find, download and install NuGet packages on-the-fly when creating a build (see the snippet below)
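For reference, the import that ends up in each project file looks roughly like this (the exact path can differ per solution layout):

    <Import Project="$(SolutionDir)\.nuget\NuGet.targets" />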

Be sure to push the files in the .nuget folder to your source control system. The packages folder is not needed, except for the repositories.config file that sits in there.

But what about my non-public assembly references? What if I don't trust auto-updating from NuGet.org?

Good question. What about them? A simple answer would be to create NuGet packages for them. And if you already have NuGet packages for them, things get even easier. Make sure that you are hosting these packages in an online feed which is not the public NuGet repository at www.nuget.org, unless you want your custom assemblies out there in public. A good choice would be to check out www.myget.org and host your packages there.

But then a new question surfaces: how do I link a custom feed to my projects? The answer is pretty simple: in the .nuget folder, edit the NuGet.targets file. In the PackageSources element, you can supply a semicolon (;) separated list of feeds to check for packages:

    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <PropertyGroup>
        <!-- ... -->

        <!-- Package sources used to restore packages. By default will used the registered sources under %APPDATA%\NuGet\NuGet.Config -->
        <PackageSources>"http://www.myget.org/F/chucknorris;http://www.nuget.org/api/v2"</PackageSources>

        <!-- ... -->
      </PropertyGroup>

      <!-- ... -->
    </Project>

By doing this and pushing the targets file to your Windows Azure Websites Git or TFS repo, the build system backing Windows Azure Websites will go ahead and download your packages from an external location, not cluttering your sources. Which makes for one happy cloud.

Windows Azure Git Deploy

GitHub for Windows Azure Websites

Windows Azure Websites Git GitHub for Windows

With the new release of Windows Azure and Windows Azure Websites, a lot of new scenarios with Windows Azure just became possible. One I like a lot, especially since AppHarbor and Heroku have similar offers too, is the possibility to push source code (ASP.NET or PHP) to Windows Azure Websites instead of binaries.

Not everyone out there is a command-line hero though: if you want to use Git as a mechanism for pushing sources to Windows Azure Websites, chances are you may go crazy if you are unfamiliar with git’s command-line commands. Luckily, a couple of weeks ago, GitHub released GitHub for Windows. It features an easy-to-use GUI on top of GitHub repositories. And with a small trick, also on top of Windows Azure Websites.

Setting up a Windows Azure Website

Since you’re probably still unfamiliar with Windows Azure Websites, let me guide you through the setup. It’s a simple process. First of all, navigate to the new Windows Azure portal. It looks different than the one you’re used to but it’s way easier to use. In the toolbar at the bottom, click New, select Web site, Quick Create and enter a hostname of choice. I chose “websiteswithgit”:

Creating a Windows Azure Website

After a couple of seconds, you’ll be presented with the dashboard of your newly created Windows Azure Website. This dashboard features a lot of interesting metrics about your website such as data traffic, CPU usage, errors, … It also displays the available means for publishing a site to Windows Azure Websites: TFS deploy, Git deploy, Webdeploy and FTP publishing. That’s it: your website has been set up and if you navigate to the newly created URL, you’ll be greeted with the default Windows Azure Websites landing page.

Setting up Git publishing

Since we’ll be using Git, click the Set up Git Publishing option.

Windows Azure Websites Dashboard

If you haven’t noticed already: Windows Azure Websites makes Windows Azure a lot easier. After a couple of seconds, Git publishing is configured, and all it takes to deploy your website is committing your source code, whether ASP.NET, ASP.NET Web Pages or PHP, to the newly created Git repository. Windows Azure Websites will take care of the build process (cool!) and will deploy this to Windows Azure in just a couple of seconds. Whoever told you deploying to Windows Azure takes ages lied to you!

Connecting GitHub for Windows to Windows Azure Websites

After setting up Git publishing, you probably have noticed that there’s a Git repository URL being displayed. Copy this one to your clipboard as we’ll be needing it in a minute. Open GitHub for Windows, right-click the UI and choose to “open a shell here”. Make sure you’re in the folder of choice. Next, issue a “git clone <url>” command, where <url> of course is the Git repository URL you’ve just copied.

Windows Azure Git Repository Build

The (currently empty) Windows Azure Website Git repository will be cloned onto your system. Now close this command line (I promised we would use GitHub for Windows instead).

Git folder

Open the folder in which you cloned the Git repo and drag it onto GitHub for Windows. It will look kind of empty, still:

A Windows Azure Websites repository in GitHub for Windows

Next, add any file you want. A PHP file, a plain HTML file or a complete ASP.NET or ASP.NET MVC Web Application. GitHub for Windows will detect these changes and you can commit them to your local repository:

GitHub commit Windows Azure

All that’s left to do after a commit is clicking the Publish button. GitHub for Windows will now copy all changesets to the Windows Azure Websites GitHub repository which will in turn trigger an eventual build process for your web site. The result? A happy Windows Azure Websites dashboard and a site up and running. Rinse, repeat, commit. Happy deployments to Windows Azure Websites using GitHub for Windows!

Antares Windows Azure Websites Deployment History Build

Pro NuGet is finally there!

Short version: Install-Package ProNuget or http://amzn.to/pronuget

Pro NuGet - Continuous integration Package Restore

It’s been a while since I wrote my first book. After telling everyone that writing a book is horrendous (try writing a chapter per week after your office hours…) and that I would never write one again, my partner-in-crime Xavier Decoster and I had the same idea at the same time: what about a book on NuGet? So here it is: Pro NuGet is fresh off the presses (or on Kindle).

Special thanks go out to Scott Hanselman and Phil Haack for writing our foreword. Also big kudos to all who’ve helped us out now and then and did some small reviews. Yes Rob, Paul, David, Phil, Hadi: that’s you guys.

Why a book on NuGet?

Why not? At the time we decided to start writing a book (September 2011), NuGet had been out there for a while already. Yet most users then (and still today) were using NuGet only as a means of installing packages, with some creating packages. But NuGet is much more! And that’s what we wanted to write about. We did not want to create a reference guide on which NuGet commands were available. We wanted to focus on the best practices we had learned over the past few months of using NuGet.

Some scenarios covered in our book:

  • What’s the big picture on package management?
  • Flashback last week: NuGet.org was down. How do you keep your team working if you depend on that external resource?
  • Is it a good idea to auto-update NuGet packages in a continuous integration process?
  • Use the PowerShell console in VS2010/11. How do I write my own NuGet PowerShell Cmdlets? What can I do in there?
  • Why would you host your own NuGet repository?
  • Using NuGet for continuous delivery
  • More!

I feel we’ve managed to cover a lot of concepts that go beyond “how to use NuGet vX” and instead have given as much guidance as possible. Questions, suggestions, remarks, … are all welcome. And a click on “Add to cart” is also a good idea ;-)

Introducing MyGet package source proxy (beta)

My blog already has quite the number of blog posts around MyGet, our NuGet-as-a-Service solution which my colleague Xavier and I are running. There are a lot of reasons to host your own personal NuGet feed (such as protecting your intellectual property or only adding approved packages to the feed), but there are many more, as you can <plug>read in our book</plug>. We’ve added support for another scenario: MyGet now supports proxying remote feeds.

Up until now, MyGet required you to upload your own NuGet packages and to include packages from the NuGet feed. The problem with this is that you either require your team to register multiple NuGet feeds in Visual Studio (which is still a good option) or register just your MyGet feed and add all the packages your team uses to it. Which, again, is also a good option.

With our package source proxy in place, we now provide a third option: MyGet can proxy upstream NuGet feeds. Let’s start with a quick diagram and afterwards walk through a scenario elaborating on this:

MyGet Feed Proxy Aggregate Feed Connector

You are seeing this correctly: you can now register just your MyGet feed in Visual Studio and we’ll add upstream packages to your feed automatically, optionally filtered as well.

Enabling MyGet package source proxy

Enabling the MyGet package source proxy is very straightforward. Navigate to your feed of choice (or create a new one) and click the Package Sources item. This will present you with a screen similar to this:

MyGet hosted package source

From there, you can add external (or MyGet) feeds to your personal feed and add packages directly from them using the Add package dialog. More on that in Xavier’s blog post. What’s more: with the tick of a checkbox, these external feeds can also be aggregated with your feed in Visual Studio’s search results. Here’s the magical add dialog and the proxy checkbox:

Add package source proxy

As you may see, we also offer the option to filter upstream packages. For example, the filter string substringof('wp7', Tags) eq true that we used will filter all upstream packages where the tags contain “wp7”.

What will Visual Studio show us? Well, just the Windows Phone 7 packages from NuGet, served through our single-endpoint MyGet feed.

Conclusion

Instead of working with a number of NuGet feeds, your development team will just work with one feed that is aggregating packages from both MyGet and other package sources out there (NuGet, Orchard Gallery, Chocolatey, …). This centralizes managing external packages and makes it easier for your team members to find the packages they can use in your projects.

Do let us know what you think of this feature! Our UserVoice is there for you, and in fact, that’s where we got the idea for this feature from in the first place. Your voice is heard!