Maarten Balliauw {blog}

ASP.NET, ASP.NET MVC, Windows Azure, PHP, ...


SymbolSource support for NuGet Package Source Discovery

A couple of weeks ago, I told you about NuGet Package Source Discovery. In short, it allows you to add some meta information to your website and use your website as a discovery document for NuGet feeds. And thanks to a contribution to the spec by Marcin from SymbolSource.org, Package Source Discovery (PSD) now supports configuring Visual Studio for consuming symbols as well. Nifty!

An example

Let’s go with an example. If we discover packages from my blog, some feeds will be added to NuGet in Visual Studio.

Install-Package DiscoverPackageSources
Discover-PackageSources -Url "http://blog.maartenballiauw.be"

Because my blog links to my feeds on MyGet, I can provide my MyGet credentials with it:

Install-Package DiscoverPackageSources
Discover-PackageSources -Url "http://blog.maartenballiauw.be" -Username maarten -Password s3cr3t

Note I’ve stripped out some of the secrets in the examples but I’m sure you get the idea.

What’s interesting is that because I provided credentials, MyGet also returned the SymbolSource URL for my feeds and it registered them automatically in Visual Studio.

Symbol server

Now that’s what I call being lazy in a professional manner!

On a side note… NuGet Feed Discovery

While not completely related to SymbolSource support, it’s worth mentioning that Package Source Discovery also got support for that other NuGet discovery protocol by the guys at Inedo, NuGet Feed Discovery (NFD). NFD and PSD differ in intent:

  • NFD is a convention-based API endpoint for listing feeds on a server
  • PSD is a means of discovering feeds from any URL given

The fun thing is: if you add an NFD URL to your website’s metadata, it will also be added into Visual Studio when using NuGet Package Source Discovery. For reference, here’s an example where I add my local NuGet feeds to my blog for discovery:

<link rel="nuget"
      type="application/atom+xml"
      title="Local feeds"
      href="http://localhost:8888/nugetext/discover-feeds" />

Enjoy!

Running unit tests when deploying ASP.NET to Windows Azure Web Sites

One of the well-loved features of Windows Azure Web Sites is that you can simply push your ASP.NET application’s source code to the platform using Git (or TFS, or Dropbox) and have those sources compiled and deployed on your Windows Azure Web Site. If you’ve checked the management portal earlier, you may have noticed that a number of deployment steps are executed: the deployment process searches for the project file to compile, compiles it, copies the build artifacts to the web root and has your website running. But did you know you can customize this process?

[update] MSTest seems to work now as well, using the console runner from VS2012.

Customizing the build process

To understand how to customize the build process, let me first explain how it works. In the root of your repository, you can add a .deployment file containing a simple directive: which command should be run upon deployment.

[config]
command = build.bat

This command can be a batch file, a PHP file, a bash file and so on, as long as we can tell Windows Azure Web Sites what to execute. Let’s go with a batch file.

@echo off
echo This is a custom deployment script, yay!

When pushing this to Windows Azure Web Sites, here’s what you’ll see:

Windows Azure Web Sites custom build

In this batch file, we can use some environment variables to further customize the script:

  • DEPLOYMENT_SOURCE - The initial "working directory"
  • DEPLOYMENT_TARGET - The wwwroot path (deployment destination)
  • DEPLOYMENT_TEMP - Path to a temporary directory (removed after the deployment)
  • MSBUILD_PATH - Path to msbuild

After compiling, you can simply xcopy your application to the path in the %DEPLOYMENT_TARGET% variable and have your website live.
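
Here’s a minimal sketch of what such a build.bat could look like (the project name, paths and output folder are hypothetical):

@echo off

:: Build the web application project using the MSBuild executable provided by the platform
"%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\MyApp\MyApp.csproj" /nologo /verbosity:m /p:Configuration=Release /p:OutDir="%DEPLOYMENT_TEMP%\"
IF %ERRORLEVEL% NEQ 0 exit /b 1

:: Copy the build output to the web root served by Windows Azure Web Sites
xcopy "%DEPLOYMENT_TEMP%\_PublishedWebsites\MyApp" "%DEPLOYMENT_TARGET%" /E /Y /I
IF %ERRORLEVEL% NEQ 0 exit /b 1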

Generating deployment scripts

Creating deployment scripts can be a tedious job; good thing the azure-cli tools are there! Once those are installed, simply invoke the following command and have both the .deployment file and a batch or bash file generated:

azure site deploymentscript --aspWAP "path\to\project.csproj"

For reference, here’s what is generated:

@echo off

:: ----------------------
:: KUDU Deployment Script
:: ----------------------

:: Prerequisites
:: -------------

:: Verify node.js installed
where node 2>nul >nul
IF %ERRORLEVEL% NEQ 0 (
  echo Missing node.js executable, please install node.js, if already installed make sure it can be reached from current environment.
  goto error
)

:: Setup
:: -----

setlocal enabledelayedexpansion

SET ARTIFACTS=%~dp0%artifacts

IF NOT DEFINED DEPLOYMENT_SOURCE (
  SET DEPLOYMENT_SOURCE=%~dp0%.
)

IF NOT DEFINED DEPLOYMENT_TARGET (
  SET DEPLOYMENT_TARGET=%ARTIFACTS%\wwwroot
)

IF NOT DEFINED NEXT_MANIFEST_PATH (
  SET NEXT_MANIFEST_PATH=%ARTIFACTS%\manifest

  IF NOT DEFINED PREVIOUS_MANIFEST_PATH (
    SET PREVIOUS_MANIFEST_PATH=%ARTIFACTS%\manifest
  )
)

IF NOT DEFINED KUDU_SYNC_COMMAND (
  :: Install kudu sync
  echo Installing Kudu Sync
  call npm install kudusync -g --silent
  IF !ERRORLEVEL! NEQ 0 goto error

  :: Locally just running "kuduSync" would also work
  SET KUDU_SYNC_COMMAND=node "%appdata%\npm\node_modules\kuduSync\bin\kuduSync"
)
IF NOT DEFINED DEPLOYMENT_TEMP (
  SET DEPLOYMENT_TEMP=%temp%\___deployTemp%random%
  SET CLEAN_LOCAL_DEPLOYMENT_TEMP=true
)

IF DEFINED CLEAN_LOCAL_DEPLOYMENT_TEMP (
  IF EXIST "%DEPLOYMENT_TEMP%" rd /s /q "%DEPLOYMENT_TEMP%"
  mkdir "%DEPLOYMENT_TEMP%"
)

IF NOT DEFINED MSBUILD_PATH (
  SET MSBUILD_PATH=%WINDIR%\Microsoft.NET\Framework\v4.0.30319\msbuild.exe
)

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: Deployment
:: ----------

echo Handling .NET Web Application deployment.

:: 1. Build to the temporary path
%MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\path.csproj" /nologo /verbosity:m /t:pipelinePreDeployCopyAllFilesToOneFolder /p:_PackageTempDir="%DEPLOYMENT_TEMP%";AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release
IF !ERRORLEVEL! NEQ 0 goto error

:: 2. KuduSync
echo Kudu Sync from "%DEPLOYMENT_TEMP%" to "%DEPLOYMENT_TARGET%"
call %KUDU_SYNC_COMMAND% -q -f "%DEPLOYMENT_TEMP%" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.deployment;deploy.cmd" 2>nul
IF !ERRORLEVEL! NEQ 0 goto error

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

goto end

:error
echo An error has occured during web site deployment.
exit /b 1

:end
echo Finished successfully.

This script does a couple of things:

  • Ensures node.js is installed on Windows Azure Web Sites (needed later on for synchronizing files)
  • Sets up a bunch of environment variables
  • Runs msbuild on the project file we specified
  • Uses kudusync (a node.js based tool, hence node.js) to synchronize modified files to the wwwroot of our site

Try it: after pushing this to Windows Azure Web Sites, you’ll see the custom script being used. Not much added value so far, but that’s what you have to provide.

Unit testing before deploying

Unit tests would be nice! All you need is a couple of unit tests and a test runner. You can add the test runner to your repository, or simply download it during deployment. In my example, I’m using the Gallio test runner because it runs almost all test frameworks, but feel free to use the test runner for NUnit or xUnit instead.

Somewhere before the line that invokes msbuild and ideally in the “setup” region of the deployment script, add the following:

IF NOT DEFINED GALLIO_COMMAND (
  IF NOT EXIST "%appdata%\Gallio\bin\Gallio.Echo.exe" (
    :: Downloading unzip
    echo Downloading unzip
    curl -O http://stahlforce.com/dev/unzip.exe
    IF !ERRORLEVEL! NEQ 0 goto error

    :: Downloading Gallio
    echo Downloading Gallio
    curl -O http://mb-unit.googlecode.com/files/GallioBundle-3.4.14.0.zip
    IF !ERRORLEVEL! NEQ 0 goto error

    :: Extracting Gallio
    echo Extracting Gallio
    unzip -q -n GallioBundle-3.4.14.0.zip -d %appdata%\Gallio
    IF !ERRORLEVEL! NEQ 0 goto error
  )

  :: Set Gallio runner path
  SET GALLIO_COMMAND=%appdata%\Gallio\bin\Gallio.Echo.exe
)

See what happens there? We check if the local system on which our files are stored in Windows Azure Web Sites already has a copy of the Gallio.Echo.exe test runner. If not, we download a tool which allows us to unzip. Next, the entire Gallio test runner is downloaded and extracted. As a final step, the %GALLIO_COMMAND% variable is populated with the full path to the test runner executable.

Right before the line that calls “kudusync”, add the following:

echo Running unit tests
"%GALLIO_COMMAND%" "%DEPLOYMENT_SOURCE%\SampleApp.Tests\bin\Release\SampleApp.Tests.dll"
IF !ERRORLEVEL! NEQ 0 goto error

Yes, the name of your test assembly will be different; you should obviously change that. What happens here? Well, we’re invoking the test runner on our unit tests. If it fails, we abort the deployment. Push it to Windows Azure and see for yourself. Here’s what is displayed on success:

Windows Azure Web Site unit tests

All green! And on failure, we get:

Gallio test runner Windows Azure

In the portal, you can clearly see that deployment was aborted:

Deployment fail when unit tests fail

That’s it. Enjoy!

NuGet Package Source Discovery

It’s already been 2 years since NuGet was introduced. This .NET package manager features the concept of feeds, or “package sources”, on which packages containing .NET libraries and tools can be hosted. In fact, support for feeds inspired us to build www.myget.org. While not all people are aware of this, Microsoft started out with two feeds as well: one for www.nuget.org, the other one for the Orchard CMS.

More and more feeds are being created daily, both by Microsoft as well as others. Here’s a list of feeds Microsoft has that I know of (there are probably more):

Wouldn’t it be nice if we could add them all to our Visual Studio package sources without having to know these URLs? Meet the NuGet Package Source Discovery specification, or in short: PSD, a specification Xavier, Scott, Phil, Jeff, Howard and myself have been working on (thanks guys!).

Package Source Discovery

Because PowerShell says more than words, try the following. Open Visual Studio and open any solution. Then issue the following in the Package Manager Console:

Install-Package DiscoverPackageSources
Discover-PackageSources -Url "http://blog.maartenballiauw.be"

While we’re at it, perhaps the Glimpse project has something to discover as well.

Discover-PackageSources -Url "http://getglimpse.com"

Close and re-open Visual Studio and check your package sources. Notice anything new? My blog has provided you with 2 feeds. And you’ve also been subscribed to Glimpse’s nightly builds feed.

But there’s more. If you had been authenticated when connecting to my blog, it would have yielded API keys as well. This allows the PSD client to set up everything that is needed for me to work with my personal feeds, both consuming and producing, just by remembering the URL of my blog.

Package Source Discovery boils down to trust. Since you apparently trust me, you can discover feeds from my blog. If you trust Microsoft, discover feeds from www.microsoft.com. Do you trust Windows Azure? Get their packages by discovering feeds at www.windowsazure.com. Need your company feeds? Discover them at http://nuget. A lot of options and possibilities there!

Recycling standards

If you are a blogger and are using Windows Live Writer, you’ve already used this before. We’ve written the NuGet Package Source Discovery specification based on what happens with blogs: when a simple <link /> element is added to your HTML, you are compatible with feed discovery. Here are the two elements that are listed in the source code for my blog:

<link rel="nuget" type="application/atom+xml" title="Maarten Balliauw NuGet feed" href="http://www.myget.org/F/maartenballiauw" />
<link rel="nuget" type="application/rsd+xml" href="http://www.myget.org/Discovery/Feed/googleanalyticstracker" />

The first one points directly to a feed. Using the URL and the title attribute, we can add this one to our NuGet package sources with ease. The second one points to an RSD file, known for ages as the Really Simple Discovery format described on https://github.com/danielberlinger/rsd. We’ve recycled it to allow a lot of things at the client side. Since not all required metadata can be obtained from the RSD format, the Dublin Core schema is present in the PSD response as well.

Here’s an example:

<?xml version="1.0" encoding="utf-8"?>
<rsd version="1.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <service>
    <engineName>MyGet</engineName>
    <engineLink>http://www.myget.org</engineLink>

    <dc:identifier>http://www.myget.org/F/googleanalyticstracker</dc:identifier>
    <dc:creator>maartenba</dc:creator>
    <dc:owner>maartenba</dc:owner>
    <dc:title>Staging feed for GoogleAnalyticsTracker</dc:title>
    <dc:description>Staging feed for GoogleAnalyticsTracker</dc:description>
    <homePageLink>http://www.myget.org/gallery/googleanalyticstracker</homePageLink>

    <apis>
      <api name="nuget-v2-packages" preferred="true" apiLink="http://www.myget.org/F/googleanalyticstracker/api/v2" blogID="" />
      <api name="nuget-v2-push" preferred="true" apiLink="http://www.myget.org/F/googleanalyticstracker/api/v2/package" blogID="">
        <settings>
          <setting name="apiKey">abcdefghijkl</setting>
        </settings>
      </api>
      <api name="nuget-v1-packages" preferred="false" apiLink="http://www.myget.org/F/googleanalyticstracker/api/v1" blogID="" />
    </apis>
  </service>
</rsd>

As you can see, using RSD we can embed a lot more information about a feed in this document. If we wanted to add a link to someone’s GitHub and have a client that wants to use this, we can add another <api /> element in here.

Who is using this?

I am (http://blog.maartenballiauw.be), Xavier is (http://www.xavierdecoster.com), Glimpse is (http://getglimpse.com), NancyFX is (http://www.nancyfx.org) and MyGet has implemented several endpoints as well. Why don't you join the wonderful world of package source discovery?

Feedback needed!

This is not part of NuGet out of the box yet. We need your feedback, comments, implementations and so on. Head over to our GitHub repository, read through the spec and all examples and provide us with your thoughts. Try the two clients we’ve crafted (more on Xavier's blog) and make your NuGet repositories discoverable. Feel free to post a link to your blog below.

Enjoy and let the commenting begin!

Remote profiling Windows Azure Cloud Services with dotTrace

Here’s another cross-post from our JetBrains .NET blog. It’s focused around dotTrace but there are a lot of tips and tricks around Windows Azure Cloud Services in it as well, especially around working with the load balancer. Enjoy the read!

With dotTrace Performance, we can profile applications running on our local computer as well as on remote machines. The latter can be very useful when some performance problems only occur on the staging server (or even worse: only in production). And what if that remote server is a Windows Azure Cloud Service?

Note: in this post we’ll be exploring how to set up a Windows Azure Cloud Service, the “platform-as-a-service” side of Windows Azure, for remote profiling using dotTrace. If you are working with regular virtual machines (“infrastructure-as-a-service”), the only thing you have to do is open up any port in the load balancer, redirect it to the machine’s port 9000 (dotTrace’s default) and follow the regular remote profiling workflow.

Preparing your Windows Azure Cloud Service for remote profiling

Since we don’t have system administrators at hand when working with cloud services, we have to do some of their work ourselves. The most important piece of work is making sure the load balancer in Windows Azure lets dotTrace’s traffic through to the server instance we want to profile.

We can do this by adding an InstanceInput endpoint type in the web- or worker role’s configuration:

Windows Azure InstanceInput endpoint

By default, the Windows Azure load balancer uses a round-robin approach in routing traffic to role instances. In essence every request gets routed to a random instance. When profiling later on, we want to target a specific machine. And that’s what the InstanceInput endpoint allows us to do: it opens up a range of ports on the load balancer and forwards traffic to a local port. In the example above, we’re opening ports 9000-9019 in the load balancer and forward them to port 9000 on the server. If we want to connect to a specific instance, we can use a port number from this range. Port 9000 will connect to port 9000 on server instance 0. Port 9001 will connect to port 9000 on role instance 1 and so on.
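
For reference, here’s roughly what such an endpoint looks like in the role’s ServiceDefinition.csdef (the role and endpoint names are hypothetical):

<WebRole name="MyWebRole" vmsize="Small">
  <Endpoints>
    <!-- Opens ports 9000-9019 on the load balancer; each public port forwards to local port 9000 on one specific instance -->
    <InstanceInputEndpoint name="DotTraceRemoteAgent" protocol="tcp" localPort="9000">
      <AllocatePublicPortFrom>
        <FixedPortRange min="9000" max="9019" />
      </AllocatePublicPortFrom>
    </InstanceInputEndpoint>
  </Endpoints>
</WebRole>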

When deploying, make sure to enable remote desktop for the role as well. This will allow us to connect to a specific machine and start dotTrace’s remote agent there.

Windows Azure Remote Desktop RDP

That’s it. Whenever we want to start remote profiling on a specific role instance, we can now connect to the machine directly.

Starting a remote profiling session with a specific instance

And then that moment is there: we need to profile production!

First of all, we want to open a remote desktop connection to one of our role instances. In the Windows Azure management portal, we can connect to a specific instance by selecting it and clicking the Connect button. Save the file that’s being downloaded somewhere on your system: we need to change it before connecting.

Windows Azure connect to specific role instance

The reason for saving and not immediately opening the .rdp file is that we have to copy the dotTrace Remote Agent to the machine. In order to do that we want to enable access to our local drives. Right-click the downloaded .rdp file and select Edit from the context menu. Under the Local Resources tab, check the Drives option to allow access to our local filesystem.

Windows Azure access local filesystem

Save the changes and connect to the remote machine. We can now copy the dotTrace Remote Agent to the role instance by copying all files from our local dotTrace installation. The Remote Agent can be found in C:\Program Files (x86)\JetBrains\dotTrace\v5.3\Bin\Remote, but since the machine in Windows Azure has no clue about that path we have to specify \\tsclient\C\Program Files (x86)\JetBrains\dotTrace\v5.3\Bin\Remote instead.

From the copied folder, launch the RemoteAgent.exe. A console window similar to the one below will appear:

image

Not there yet: we did open the load balancer in Windows Azure to allow traffic to flow to our machine, but the machine’s own firewall will be blocking our incoming connection. To solve this, configure Windows Firewall to allow access on port 9000. A one-liner which can be run in a command prompt would be the following:

netsh advfirewall firewall add rule name="Profiler" dir=in action=allow protocol=TCP localport=9000

 

Since we’ve opened ports 9000 through 9019 in the Windows Azure load balancer and every role instance gets its own port number from that range, we can now connect to the machine using dotTrace. We’ve connected to instance 1, which means we have to connect to port 9001 in dotTrace’s Attach to Process window. The Remote Agent URL will look like http://<yourservice>.cloudapp.net:PORT/RemoteAgent/AgentService.asmx.

Attach to process

Next, we can select the process we want to do performance tracing on. I’ve deployed a web application so I’ll be connecting to IIS’s w3wp.exe.

Profile application dotTrace

We can now use our application and try reproducing performance issues. Once we feel we have enough data, the Get Snapshot button will download all required data from the server for local inspection.

dotTrace get performance snapshot

We can now perform our performance analysis tasks and hunt for performance issues. We can analyze the snapshot data just as if we had recorded the snapshot locally. After determining the root cause and deploying a fix, we can repeat the process to collect another snapshot and verify that we have resolved the performance problem. Note that all steps in this post should be executed again in the next profiling session: Windows Azure’s Cloud Service machines are stateless and will probably discard everything we’ve done with them so far.

Analyze snapshot data

Bonus tip: get the instance being profiled out of the load balancer

Since we are profiling a production application, we may get in the way of our users by collecting profiling data. Another issue we have is that our own test data and our live users’ data will show up in the performance snapshot. And if we’re running a lot of instances, not every action we do in the application will be performed by the role instance we’ve connected to, because of Windows Azure’s round-robin load balancing.

Ideally we want to temporarily remove the role instance we’re profiling from the load balancer to overcome these issues. The good news is: we can do this! The only thing we have to do is add a small piece of code in our WebRole.cs or WorkerRole.cs class.

1 public class WebRole : RoleEntryPoint 2 { 3 public override bool OnStart() 4 { 5 // For information on handling configuration changes 6 // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357. 7 8 RoleEnvironment.StatusCheck += (sender, args) => 9 { 10 if (File.Exists("C:\\Config\\profiling.txt")) 11 { 12 args.SetBusy(); 13 } 14 }; 15 16 return base.OnStart(); 17 } 18 }

Essentially what we’re doing here is capturing the load balancer’s probes to see if our node is still healthy. We can choose to respond to the load balancer that our current instance is busy and should not receive any new requests. In the example code above we’re checking if the file C:\Config\profiling.txt exists. If it does, we respond to the load balancer with a busy status.

When we start profiling, we can now create the C:\Config\profiling.txt file to take the instance we’re profiling out of the server pool. After about a minute, the management portal will report the instance is “Busy”.
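
From the remote desktop session, creating that file is as simple as the following (assuming the C:\Config folder does not exist yet):

mkdir C:\Config
echo profiling > C:\Config\profiling.txt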

Role instance marked Busy

The best thing is we can still attach to the instance-specific endpoint and attach dotTrace to this instance. Just keep in mind that using the application should now happen in the remote desktop session we opened earlier, since we no longer have the current machine available from the Internet.

image

Once finished, we can simply remove the C:\Config\profiling.txt file and Windows Azure will add the machine back to the server pool. Don't forget this as otherwise you'll be paying for the machine without being able to serve the application from it. Reimaging the machine will also add it to the pool again.

Enjoy!

Custom media types for ASP.NET Web API versioning

There is a raging discussion on the interwebs on whether to version APIs by using their URL or by using a custom media type. Some argue that doing it in the URL breaks REST (since a different URL is a different resource, while versions don’t necessarily mean a new resource is available). While I still feel good about both approaches, I guess it depends on the domain you are working with.

But that is not the topic of this post. I recently found a sample on CodePlex providing support for routing versioned URLs to different namespaces. In short, it maps /api/v1/values to MyApp.V1.Controllers and /api/v2/values to MyApp.V2.Controllers. Great! But that only supports the URL-versioning side of the discussion. Let’s implement this sample and build ASP.NET Web API support for versioning an API using custom media types…

Custom Media Types

If you have no clue about what I am talking about, no worries. I’ll give you a quick primer on this using the GitHub API as an example. Since version 3 of their API, endpoints (or “resource addresses”) no longer change with every version of the API. Instead, they parse the Accept HTTP header to determine the incoming message version and the expected response version.

Getting a list of repositories from the API? The URL will always be /users/repos. However different incoming and outgoing responses are possible, varying based on their media types. Want to use the V3 message format in JSON? Use application/vnd.github.v3+json. Prefer the V3 message format in XML? Use application/vnd.github.v3+xml. Whenever they update their messages, they can add a new media type such as application/vnd.github.v4 without changing any URL. Nifty trick, aye? Let’s do this for our own API.
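
As an illustration, such a request would look roughly like this:

GET /users/repos HTTP/1.1
Host: api.github.com
Accept: application/vnd.github.v3+json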

IHttpControllerSelector

The IHttpControllerSelector interface allows you to interfere in selecting the right controller for the current request. This is an ideal location for grabbing all contextual information and providing ASP.NET Web API with a controller based on that context.

public class AcceptHeaderControllerSelector : IHttpControllerSelector
{
    private const string ControllerKey = "controller";

    private readonly HttpConfiguration _configuration;
    private readonly Func<MediaTypeHeaderValue, string> _namespaceResolver;
    private readonly Lazy<Dictionary<string, HttpControllerDescriptor>> _controllers;
    private readonly HashSet<string> _duplicates;

    public AcceptHeaderControllerSelector(HttpConfiguration config, Func<MediaTypeHeaderValue, string> namespaceResolver)
    {
        _configuration = config;
        _namespaceResolver = namespaceResolver;
        _duplicates = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        _controllers = new Lazy<Dictionary<string, HttpControllerDescriptor>>(InitializeControllerDictionary);
    }

    private Dictionary<string, HttpControllerDescriptor> InitializeControllerDictionary()
    {
        var dictionary = new Dictionary<string, HttpControllerDescriptor>(StringComparer.OrdinalIgnoreCase);

        // Create a lookup table where key is "namespace.controller". The value of "namespace" is the last
        // segment of the full namespace. For example:
        // MyApplication.Controllers.V1.ProductsController => "V1.Products"
        IAssembliesResolver assembliesResolver = _configuration.Services.GetAssembliesResolver();
        IHttpControllerTypeResolver controllersResolver = _configuration.Services.GetHttpControllerTypeResolver();

        ICollection<Type> controllerTypes = controllersResolver.GetControllerTypes(assembliesResolver);

        foreach (Type t in controllerTypes)
        {
            var segments = t.Namespace.Split(Type.Delimiter);

            // For the dictionary key, strip "Controller" from the end of the type name.
            // This matches the behavior of DefaultHttpControllerSelector.
            var controllerName = t.Name.Remove(t.Name.Length - DefaultHttpControllerSelector.ControllerSuffix.Length);

            var key = String.Format(CultureInfo.InvariantCulture, "{0}.{1}", segments[segments.Length - 1], controllerName);

            // Check for duplicate keys.
            if (dictionary.Keys.Contains(key))
            {
                _duplicates.Add(key);
            }
            else
            {
                dictionary[key] = new HttpControllerDescriptor(_configuration, t.Name, t);
            }
        }

        // Remove any duplicates from the dictionary, because these create ambiguous matches.
        // For example, "Foo.V1.ProductsController" and "Bar.V1.ProductsController" both map to "v1.products".
        foreach (string s in _duplicates)
        {
            dictionary.Remove(s);
        }
        return dictionary;
    }

    // Get a value from the route data, if present.
    private static T GetRouteVariable<T>(IHttpRouteData routeData, string name)
    {
        object result = null;
        if (routeData.Values.TryGetValue(name, out result))
        {
            return (T)result;
        }
        return default(T);
    }

    public HttpControllerDescriptor SelectController(HttpRequestMessage request)
    {
        IHttpRouteData routeData = request.GetRouteData();
        if (routeData == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }

        // Get the namespace from the Accept header and the controller variable from the route data.
        string namespaceName = null;
        foreach (var accepts in request.Headers.Accept)
        {
            namespaceName = _namespaceResolver(accepts);
            if (namespaceName != null)
            {
                break;
            }
        }
        if (namespaceName == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }

        string controllerName = GetRouteVariable<string>(routeData, ControllerKey);
        if (controllerName == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }

        // Find a matching controller.
        string key = String.Format(CultureInfo.InvariantCulture, "{0}.{1}", namespaceName, controllerName);

        HttpControllerDescriptor controllerDescriptor;
        if (_controllers.Value.TryGetValue(key, out controllerDescriptor))
        {
            return controllerDescriptor;
        }
        else if (_duplicates.Contains(key))
        {
            throw new HttpResponseException(
                request.CreateErrorResponse(HttpStatusCode.InternalServerError,
                    "Multiple controllers were found that match this request."));
        }
        else
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
    }

    public IDictionary<string, HttpControllerDescriptor> GetControllerMapping()
    {
        return _controllers.Value;
    }
}

To be honest, I did not write much code in this. I grabbed the IHttpControllerSelector implementation from the sample on CodePlex and added just these lines to check the Accept header instead.

// Get the namespace from the Accept header.
string namespaceName = null;
foreach (var accepts in request.Headers.Accept)
{
    namespaceName = _namespaceResolver(accepts);
    if (namespaceName != null)
    {
        break;
    }
}
if (namespaceName == null)
{
    throw new HttpResponseException(HttpStatusCode.NotFound);
}

The real logic for determining which version is being called is delegated to the user of this IHttpControllerSelector. Let’s wire it up!

Wiring it up

ASP.NET Web API has a lot of “plugs”, among which there’s one where we can plug in our custom IHttpControllerSelector. Let’s override the default one and add our own:

config.Services.Replace(typeof(IHttpControllerSelector),
    new AcceptHeaderControllerSelector(config, accept =>
    {
        foreach (var parameter in accept.Parameters)
        {
            if (parameter.Name.Equals("version", StringComparison.InvariantCultureIgnoreCase))
            {
                switch (parameter.Value)
                {
                    case "1.0": return "v1";
                    case "2.0": return "v2";
                }
            }
        }

        return "v2"; // default namespace, return null to throw 404 when namespace not given
    }));

As you can see, we can pass in a lambda which gets called with the contents of the Accept header and must return the namespace obtained from the header. The above example will work when using the version parameter of a media type, e.g. application/json;version=1.0 and application/json;version=2.0. The last statement returns “v2” as the default version when no specific media header is given. Return null if you want this to result in a 404 Page Not Found.

Using this header scheme is recommended but of course other options are possible. It’s your lambda!

Another approach would be going "GitHub style" and using media types like application/vnd.api.v1+json:

config.Services.Replace(typeof(IHttpControllerSelector),
    new AcceptHeaderControllerSelector(config, accept =>
    {
        var matches = Regex.Match(accept.MediaType, @"application\/vnd.api.(.*)\+.*");
        if (matches.Groups.Count >= 2)
        {
            return matches.Groups[1].Value;
        }
        return "v2"; // default namespace, return null to throw 404 when namespace not given
    }));

Note that when using the GitHub-style media type, it’s best to also configure the default media type formatters to recognize these new types. That way you can even use different media type formats for each API version.

// Add custom media types as supported to their default formatters
config.Formatters.JsonFormatter.SupportedMediaTypes.Add(new MediaTypeWithQualityHeaderValue("application/vnd.api.v1+json"));
config.Formatters.JsonFormatter.SupportedMediaTypes.Add(new MediaTypeWithQualityHeaderValue("application/vnd.api.v2+json"));

config.Formatters.XmlFormatter.SupportedMediaTypes.Add(new MediaTypeWithQualityHeaderValue("application/vnd.api.v1+xml"));
config.Formatters.XmlFormatter.SupportedMediaTypes.Add(new MediaTypeWithQualityHeaderValue("application/vnd.api.v2+xml"));

That’s basically it. We can now implement our controllers in different namespaces, like so:

namespace TestSelector.Controllers.V1
{
    public class ValuesController : ApiController
    {
        public string Get()
        {
            return "This is a V1 response.";
        }
    }
}

namespace TestSelector.Controllers.V2
{
    public class ValuesController : ApiController
    {
        public string Get()
        {
            return "This is a V2 response.";
        }
    }
}

When providing different Accept headers, we now get routed to the correct namespace depending on our custom media type. REST maturity level up!
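
As a rough sketch, with the default /api/{controller} route in place, two requests that differ only in their Accept header would hit the two different namespaces:

GET /api/values HTTP/1.1
Accept: application/vnd.api.v1+json
--> "This is a V1 response."

GET /api/values HTTP/1.1
Accept: application/vnd.api.v2+json
--> "This is a V2 response."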

I’ve issued a pull request on the official samples page, in the meanwhile here’s the download: AcceptHeaderControllerSelector.zip (238.43 kb)

Enjoy!

[edit] there's a project on GitHub containing other implementations as well, check http://github.com/Sebazzz/SDammann.WebApi.Versioning

Taking over the @msdnbelux Twitter account

Just a quick post to let you know I’ll be taking over the @msdnbelux Twitter account for the next two weeks. This is the official Twitter account for MSDN BeLux. It’s not hacked, I did not steal the password: they gave it to me!

image

The best thing about this takeover is that there are no constraints: I can tweet whatever I want to tweet! So far it's been fun to do, I've seen a lot of reactions on my tweets as well. Let me know how I do! Who knows, I might just change the password and keep this account for myself after these two weeks :-)

Follow @msdnbelux and I’ll provide you with great ASP.NET MVC, ASP.NET Web API, JavaScript and Windows Azure related content.

Enjoy!

Working with Windows Azure SQL Database in PhpStorm

Disclaimer: My job at JetBrains involves a lot of “exploration of tools”. From time to time I discover things I personally find really cool and blog about those on the JetBrains blogs. If it relates to Windows Azure, I typically cross-post on my personal blog.

PhpStorm provides us with the possibility to connect to Windows Azure SQL Database right from within the IDE. In this post, we’ll explore several options that are available for working with Windows Azure SQL Database (or database systems like SQL Server, MySQL, PostgreSQL or Oracle, for that matter):

  • Setting up a database connection
  • Creating a table
  • Inserting and updating data
  • Using the database console
  • Generating a database diagram
  • Database refactoring

If you are working with Windows Azure SQL Database, make sure to configure the database firewall correctly so you can connect to it from your current machine.

Setting up a database connection

Database support can be found on the right-hand side of the IDE or by using the Ctrl+Alt+A (Cmd+Alt+A on Mac) shortcut and searching for “Database”.

clip_image004

Opening the database pane, we can create a new connection or Data Source. We’ll have to specify the JDBC database driver to be used to connect to our database. Since Windows Azure SQL Database is just “SQL Server” in essence, we can use the SQL Server driver available in the list of drivers. PhpStorm doesn’t ship these drivers but a simple click (on “Click here”) fetches the correct JDBC driver from the Internet.

clip_image006

Next, we’ll have to enter our connection details. As the JDBC driver class, select the com.microsoft.sqlserver.jdbc driver. The Database URL should be a connection string to our SQL Database and typically comes in the following form:

jdbc:sqlserver://<servername>.database.windows.net;database=<databasename>

The username to use comes in a different form. Due to a protocol change that was required for Windows Azure SQL Database, we have to suffix the username with the server name.
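
For example (with a hypothetical login and server name), the username would look like this:

maarten@myservername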

clip_image007

After filling out the necessary information, we can use the Test Connection button to test the database connection.

clip_image009

Congratulations! Our database connection works, and we can store it by closing the Data Source dialog using the Ok button.

Creating a table

If we right-click a schema discovered in our Data Source, we can use the New | Table menu item to create a table.

clip_image011

We can use the Create New Table dialog to define columns on our to-be-created table. PhpStorm provides us with a user interface which allows us to graphically specify columns and generates the DDL for us.

clip_image013

Clicking Ok will close the dialog and create the table for us. We can now right-click our table and modify existing columns or add additional columns and generate DDL which alters the table.
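
As an illustration, the DDL generated for a simple table could look something like this (table and column names are hypothetical):

CREATE TABLE Customer (
    Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL,
    CreatedAt DATETIME NOT NULL
);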

Inserting and updating data

After creating a table, we can insert data (or update data from an existing table). Upon connecting to the database, PhpStorm will display a list of all tables and their columns. We can select a table and press F4 (or right-click and use the Table Editor context menu).

clip_image015

We can add new rows and/or edit existing rows by using the + and - buttons in the toolbar. By default, auto-commit is enabled and changes are committed automatically to the database. We can disable this option and manually commit and rollback any changes that have been made in the table editor.

Using the database console

Sometimes there is no better tool than a database console. We can bring up the Console by right-clicking a table and selecting the Console menu item or simply by pressing Ctrl+Shift+F10 (Cmd+Shift+F10 on Mac).

clip_image017

We can enter any SQL statement in the console and run it against our database. As you can see from the screenshot above, we even get autocompletion on table names and column names!

Generating a database diagram

If we have multiple tables with foreign keys between them, we can easily generate a database diagram by selecting the tables to be included in the diagram and selecting Diagrams | Show Visualization... from the context menu, or by using the Ctrl+Alt+Shift+U (Cmd+Alt+Shift+U on Mac) shortcut. PhpStorm will then generate a database diagram for these tables, displaying how they relate to each other.

clip_image019

Database refactoring

Renaming a table or column is often tedious. PhpStorm includes a Rename refactoring (Shift+F6) which generates the required SQL code for renaming tables or columns.
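
For SQL Server-based databases like Windows Azure SQL Database, the generated statement will typically be along these lines (names are hypothetical):

EXEC sp_rename 'Customer.Name', 'FullName', 'COLUMN';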

clip_image021

As we’ve seen in this post, working with Windows Azure SQL Database is pretty simple from within PhpStorm using the built-in database support.

Running unit tests when deploying to Windows Azure Web Sites

When deploying an application to Windows Azure Web Sites, a number of deployment steps are executed. For .NET projects, msbuild is triggered. For node.js applications, a list of dependencies is restored. For PHP applications, files are copied from source control to the actual web root which is served publicly. Wouldn’t it be cool if Windows Azure Web Sites refused to deploy fresh source code whenever unit tests fail? In this post, I’ll show you how.

Disclaimer: I’m using PHP and PHPUnit here but the same approach can be used for node.js. .NET is a bit harder since most test runners out there are not supported by the Windows Azure Web Sites sandbox. I’m confident however that in the near future this issue will be resolved and the same technique can be used for .NET applications.

Our sample application

First of all, let’s create a simple application. Here’s a very simple one using the Silex framework which is similar to frameworks like Sinatra and Nancy.

<?php
require_once(__DIR__ . '/../vendor/autoload.php');

$app = new \Silex\Application();

$app->get('/', function (\Silex\Application $app) {
    return 'Hello, world!';
});

$app->run();

Next, we can create some unit tests for this application. Since our app itself isn’t that massive to test, let’s create some dummy tests instead:

<?php
namespace Jb\Tests;

class SampleTest
    extends \PHPUnit_Framework_TestCase {

    public function testFoo() {
        $this->assertTrue(true);
    }

    public function testBar() {
        $this->assertTrue(true);
    }

    public function testBar2() {
        $this->assertTrue(true);
    }
}

As we can see from our IDE, the three unit tests run perfectly fine.

Running PHPUnit in PhpStorm

Now let’s see if we can hook them up to Windows Azure Web Sites…

Creating a Windows Azure Web Sites deployment script

Windows Azure Web Sites allows us to customize deployment. Using the azure-cli tools we can issue the following command:

azure site deploymentscript

As you can see from the following screenshot, this command allows us to specify some additional options, such as specifying the project type (ASP.NET, PHP, node.js, …) or the script type (batch or bash).

image

Running this command does two things: it creates a .deployment file which tells Windows Azure Web Sites which command should be run during the deployment process and a deploy.cmd (or deploy.sh if you’ve opted for a bash script) which contains the entire deployment process. Let’s first look at the .deployment file:

[config]
command = bash deploy.sh

This is a very simple file which tells Windows Azure Web Sites to invoke the deploy.sh script using bash as the shell. The default deploy.sh will look like this:

#!/bin/bash

# ----------------------
# KUDU Deployment Script
# ----------------------

# Helpers
# -------

exitWithMessageOnError () {
  if [ ! $? -eq 0 ]; then
    echo "An error has occured during web site deployment."
    echo $1
    exit 1
  fi
}

# Prerequisites
# -------------

# Verify node.js installed
where node &> /dev/null
exitWithMessageOnError "Missing node.js executable, please install node.js, if already installed make sure it can be reached from current environment."

# Setup
# -----

SCRIPT_DIR="$( cd -P "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
ARTIFACTS=$SCRIPT_DIR/artifacts

if [[ ! -n "$DEPLOYMENT_SOURCE" ]]; then
  DEPLOYMENT_SOURCE=$SCRIPT_DIR
fi

if [[ ! -n "$NEXT_MANIFEST_PATH" ]]; then
  NEXT_MANIFEST_PATH=$ARTIFACTS/manifest

  if [[ ! -n "$PREVIOUS_MANIFEST_PATH" ]]; then
    PREVIOUS_MANIFEST_PATH=$NEXT_MANIFEST_PATH
  fi
fi

if [[ ! -n "$KUDU_SYNC_COMMAND" ]]; then
  # Install kudu sync
  echo Installing Kudu Sync
  npm install kudusync -g --silent
  exitWithMessageOnError "npm failed"

  KUDU_SYNC_COMMAND="kuduSync"
fi

if [[ ! -n "$DEPLOYMENT_TARGET" ]]; then
  DEPLOYMENT_TARGET=$ARTIFACTS/wwwroot
else
  # In case we are running on kudu service this is the correct location of kuduSync
  KUDU_SYNC_COMMAND="$APPDATA\\npm\\node_modules\\kuduSync\\bin\\kuduSync"
fi

##################################################################################################################################
# Deployment
# ----------

echo Handling Basic Web Site deployment.

# 1. KuduSync
echo Kudu Sync from "$DEPLOYMENT_SOURCE" to "$DEPLOYMENT_TARGET"
$KUDU_SYNC_COMMAND -q -f "$DEPLOYMENT_SOURCE" -t "$DEPLOYMENT_TARGET" -n "$NEXT_MANIFEST_PATH" -p "$PREVIOUS_MANIFEST_PATH" -i ".git;.deployment;deploy.sh"
exitWithMessageOnError "Kudu Sync failed"

##################################################################################################################################

echo "Finished successfully."

This script does two things: it sets up a bunch of environment variables so our script has all the paths to the source code repository, the target web site root and some well-known commands, and it then runs the KuduSync executable, a helper which copies files from the source code repository to the web site root using an optimized algorithm which only copies files that have been modified. For .NET, there would be a third action: running msbuild to compile sources into binaries.

Right before the part that reads # Deployment, we can add some additional steps for running unit tests. We can invoke the php.exe executable (located on the D:\ drive in Windows Azure Web Sites) and run phpunit.php passing in the path to the test configuration file:

##################################################################################################################################
# Testing
# -------

echo Running PHPUnit tests.

# 1. PHPUnit
"D:\Program Files (x86)\PHP\v5.4\php.exe" -d auto_prepend_file="$DEPLOYMENT_SOURCE\\vendor\\autoload.php" "$DEPLOYMENT_SOURCE\\vendor\\phpunit\\phpunit\\phpunit.php" --configuration "$DEPLOYMENT_SOURCE\\app\\phpunit.xml"
exitWithMessageOnError "PHPUnit tests failed"
echo

On a side note, we can also run other commands like issuing a composer update, similar to NuGet package restore in the .NET world:

echo Download composer.
curl -O https://getcomposer.org/composer.phar > /dev/null

echo Run composer update.
cd "$DEPLOYMENT_SOURCE"
"D:\Program Files (x86)\PHP\v5.4\php.exe" composer.phar update --optimize-autoloader

Putting our deployment script to the test

All that’s left to do now is commit and push our changes to Windows Azure Web Sites. If everything goes right, the output for the git push command should contain details of running our unit tests:

image

Here’s what happens when a test fails:

image

And even better, the Windows Azure Web Sites portal shows us that the latest sources were committed to the git repository but not deployed because tests failed:

image

As you can see, using deployment scripts we can customize deployment on Windows Azure Web Sites to fit our needs. We can run unit tests, fetch source code from a different location and so on. Enjoy!

Hosting a YouTrack instance on Windows Azure

Note: this is a cross-post from the JetBrains YouTrack blog. Since it is centered around Windows Azure, I thought it is appropriate to post a copy on my own blog as well.


YouTrack, JetBrains’ agile issue tracker, can be installed on different platforms. There is a stand-alone version which can be downloaded and installed on your own server. If you prefer a cloud-hosted solution there’s YouTrack InCloud available for you. There is always a third way as well: why not host YouTrack stand-alone on a virtual machine hosted in Windows Azure?

In this post we’ll walk you through getting a Windows Azure subscription, creating a virtual machine, installing YouTrack and configuring firewalls so we can use our cloud-hosted YouTrack instance from any browser on any location.

Getting a Windows Azure subscription

In order to be able to work with Windows Azure, we’ll need a subscription. Microsoft has several options there but as a first-time user, there is a 90-day free trial which comes with a limited amount of free resources, enough for hosting YouTrack. If you are an MSDN subscriber or BizSpark member, there are some additional benefits that are worth exploring.

On www.windowsazure.com, click the Try it free button to start the subscription wizard. You will be asked for a Windows Live ID and for credit card details, depending on the country you live in. No worries: you will not be charged in this trial unless you explicitly remove the spending cap.

clip_image002

The 90-day trial comes with 750 small compute hours monthly, which means we can host a single core machine with 1.5 GB of memory without being charged. There is 35 GB of storage included, enough to host the standard virtual machines available in the platform. Inbound traffic is free, 25 GB of outbound traffic is included as well. Seems reasonable to give YouTrack on Windows Azure a spin!

Enabling Windows Azure preview features

Before continuing, it is important to know that some features of the Windows Azure platform are still in preview, such as the “infrastructure-as-a-service” virtual machines (VM) we’re going to use in this blog post. After creating a Windows Azure account, make sure to enable these preview features from the administration page.

clip_image004

Once that’s done, we can direct our browser to http://manage.windowsazure.com and start our YouTrack deployment.

Creating a virtual machine

The Windows Azure Management Portal gives us access to all services activated in our subscription. Under Virtual Machines we can manage existing virtual machines or create our own.

When clicking the + New button, we can create a new virtual machine, either by using the Quick create option or by using the From gallery option. We’ll choose the latter as it provides us with some preinstalled virtual machines running a variety of operating systems, both Windows and Linux.

clip_image006

Depending on your preferences, feel free to go with one of the templates available. YouTrack is supported on both Windows and Linux. Let’s go with the latest version of Windows Server 2012 for this blog post.

Following the wizard, we can name our virtual machine and provide the administrator password. The name we’re giving in this screen is the actual hostname, not the DNS name we will be using to access the machine remotely. Note the machine size can also be selected. If you are using the free trial, make sure to use the Small machine size or charges will be incurred. There is also an Extra Small instance, but this has few resources available.

clip_image008

In the next step of the wizard, we have to provide the DNS name for our machine. Pick anything you would like to use; do note it will always end in .cloudapp.net. No worries if you would like to link a custom domain name later, since that is supported as well.

We can also select the region where our virtual machine will be located. Microsoft has 8 Windows Azure datacenters globally: 4 in the US, 2 in Europe and 2 in Asia. Pick one that’s close to you since that will reduce network latency.

clip_image010

The last step of the wizard provides us with the option of creating an availability set. Since we’ll be starting off with just one virtual machine this doesn’t really matter. However, when hosting multiple virtual machines, make sure to add them to the same availability set. Microsoft uses these to plan maintenance and make sure only part of your virtual machines is subject to maintenance at any given time.

After clicking the Complete button, we can relax a bit. Depending on the virtual machine size selected it may take up to 15 minutes before our machine is started. Status of the machine can be inspected through the management portal, as well as some performance indicators like CPU and memory usage.

clip_image012

Every machine has only one open firewall port by default: remote desktop for Windows VMs (on TCP port 3389) or SSH for Linux VMs (on TCP port 22), which is enough to start our YouTrack installation. Using the Connect button or by opening a remote desktop or SSH session to the URL we created in the VM creation wizard, we can connect to our fresh machine as an administrator.

Installing YouTrack

After logging in to the virtual machine using remote desktop, we have a complete server available. There is a browser available on the Windows Server 2012 start screen which can be accessed by moving our mouse to the lower left-hand corner.

clip_image014

From our browser we can navigate to the JetBrains website and download the YouTrack installer. Note that by default, Internet Explorer on Windows Server is being paranoid about any website and will display a security warning. Use the Add button to allow it to access the JetBrains website. If you want to disable this entirely it’s also possible to disable Internet Explorer Enhanced Security.

clip_image016

We can now download the YouTrack installer directly from the JetBrains website. Internet Explorer will probably give us another security warning but we know the drill.

clip_image018

If you wish to save the installer to disk, you may notice that there is both a C:\ and a D:\ drive available in a Windows Azure VM. It’s important to know that only the C:\ drive is persistent. The D:\ drive holds the Windows pagefile and can be used as temporary storage. It may get wiped during automated maintenance in the datacenter.

We can install YouTrack like we would do it on any other server: complete the wizard and make sure YouTrack gets installed to the C:\ drive.

clip_image020

The final step of the YouTrack installation wizard requires us to provide the port number on which YouTrack will be available. This can be any port number you want but since we’re only going to use this server to host YouTrack let’s go with the default HTTP port 80.

clip_image022

Once the wizard completes, a browser window is opened and the initial YouTrack configuration page is loaded. Note that the first start may take a couple of minutes. An important setting to specify, next to the root password, is the system base URL. By default, this will read http://localhost. Since we want to be able to use this YouTrack instance through any browser and have correctly generated URLs in the e-mails being sent out, we have to specify the full DNS name of our Windows Azure VM.

clip_image024

Once saved we can start creating a project, add issues, configure the agile board, do time tracking and so on.

clip_image026

Let’s see if we can make our YouTrack instance accessible from the outside world.

Configuring the firewall

By default, every VM can only be accessed remotely through either remote desktop or SSH. To open up access to HTTP port 80 on which YouTrack is running, we have to explicitly open some firewall ports.

Before diving in, it’s important to know that every virtual machine on Windows Azure is sitting behind a load balancer in the datacenter’s network topology. This means we will have to configure the load balancer to send traffic on HTTP port 80 to our virtual machine. Next to that, our virtual machine may have a firewall enabled as well, depending on the selected operating system. Windows Server 2012 blocks all traffic on HTTP port 80 by default which means we have to configure both our machine and the load balancer.

Allowing HTTP traffic on the VM

If you are a command-line person, open up a command console in the remote desktop session and issue the following command:

netsh advfirewall firewall add rule name="YouTrack" dir=in action=allow protocol=TCP localport=80

If not, here’s a crash-course in configuring Windows Firewall. From the remote desktop session to our machine we can bring up Windows Firewall configuration by using the Server Manager (starts with Windows) and clicking Configure this local server and then Windows Firewall.

clip_image028

Next, open Advanced settings.

clip_image030

Next, add a new inbound rule by right-clicking the Inbound Rules node and using the New Rule… menu item. In the wizard that opens, add a Port rule, specify TCP port 80, allow the connection and apply it to all firewall modes. Finally, we can give the rule a descriptive name like “Allow YouTrack”.

clip_image032

Once that’s done, we can configure the Windows Azure load balancer.

Configuring the Windows Azure load balancer

From the Windows Azure management portal, we can navigate to our newly created VM and open the Endpoints tab. Next, click Add Endpoint and open up public TCP port 80 and forward it to private port 80 (or another one if you’ve configured YouTrack differently).

clip_image034

After clicking Complete, the load balancer rules will be updated. This operation typically takes a couple of seconds. Progress will be reported on the Endpoints tab.

clip_image036

Once completed we can use any browser on any Internet-connected machine to use our YouTrack instance. Using the login created earlier, we can create projects and invite users to register with our cloud-hosted YouTrack instance.

clip_image038

Enjoy!

Tales from the trenches: resizing a Windows Azure virtual disk the smooth way

We’ve all been there. Running a virtual machine on Windows Azure and all of a sudden you notice that a virtual disk is running full. Having no access to the hypervisor nor to its storage (directly), there’s no easy way out…

Big disclaimer: use the provided code at your own risk! I’m not responsible if something breaks! The provided code is as-is without warranty! I have tested this on a couple of data disks without any problems. I've tested this on OS disks and this sometimes works, sometimes fails. Be warned.

Download/contribute: on GitHub

When searching for a solution to this issue, the typical advice you’ll find is the following:

  • Delete the VM
  • Download the .vhd
  • Resize the downloaded .vhd
  • Delete the original .vhd from blob storage
  • Upload the resized .vhd
  • Recreate the VM
  • Use diskpart to resize the partition

That’s a lot of work. Deleting and re-creating the VM isn’t that bad; it can be done pretty quickly. But downloading a 30 GB disk, resizing it and re-uploading it is a serious PITA! Even if you do this on a temporary VM that sits in the same datacenter as your storage account.

Last Saturday, I was in this situation… A decision had to be made: spend an estimated 3 hours doing the entire download/resize/upload process, or read up on the VHD file format and find an easier way, with the possibility of having to fall back to the full process anyway…

Now what!

Being a bit geeked out, I decided to read up on the VHD file format and download the specs.

Before we dive in: why would I even read up on the VHD file format? Well, since Windows Azure storage is used as the underlying store for Windows Azure Virtual Machine VHDs, and Windows Azure storage supports byte-level operations without having to download an entire file, it occurred to me that combining both would result in a less-than-one-second VHD resize. Or would it?

Note that if you’re just interested in the bits to “get it done”, check the last section of this post.

Researching the VHD file format specs

The specs for the VHD file format are publicly available, which means it shouldn't be too hard to learn how VHD files, the underlying format for virtual disks on Windows Azure Virtual Machines, are structured. Fearing an extremely complex file structure, I started reading and found that a VHD isn’t actually that complicated.

Apparently, VHD files created with Virtual PC 2004 are a bit different from newer VHD files. But hey, Microsoft will probably not use that old beast in their datacenters, right? Using that assumption and the assumption that VHD files for Windows Azure Virtual Machines are always fixed in size, I learnt the following over-generalized lesson:

A fixed-size VHD for Windows Azure Virtual Machines is a bunch of bytes representing the actual disk contents, followed by a 512-byte file footer that holds some metadata.
Maarten Balliauw – last Saturday

A-ha! So in short, if the size of the VHD file is known, the offset to the footer can be calculated and the entire footer can be read. And this footer is just a simple byte array. From the specs:

VHD footer specification
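For reference, here’s a quick sketch of the footer fields that matter for resizing, based on my reading of the published specification (offsets in bytes, all multi-byte values stored big-endian). Treat it as illustrative, not as a substitute for the spec itself:

// Sketch of the fixed-disk footer layout, as I read it from the VHD spec.
// Only the fields relevant for resizing are listed here.
static class VhdFooterLayout
{
    public const int Size = 512;                 // the footer is always 512 bytes
    public const int OriginalSizeOffset = 40;    // 8 bytes, disk size at creation time
    public const int CurrentSizeOffset = 48;     // 8 bytes, current disk size
    public const int GeometryOffset = 56;        // 4 bytes, cylinders / heads / sectors per track
    public const int DiskTypeOffset = 60;        // 4 bytes, 2 = fixed disk
    public const int ChecksumOffset = 64;        // 4 bytes, one's complement checksum
}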

Let’s see what’s needed to do some dynamic VHD resizing…

Resizing a VHD file - take 1

My first approach to “fixing” this issue was simple:

  • Read the footer bytes
  • Write null values over it and resize the disk to (desired size + 512 bytes)
  • Write the footer in those last 512 bytes

Guess what? I tried mounting the updated VHD file in Windows, without success. Time for some more reading… resulting in the big Eureka! scream: the “current size” field in the footer must be updated!

So I did that… and got failure again. But Eureka! again: the checksum must be updated so that the VHD driver can verify the footer is valid!
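For reference, the spec defines that checksum as the one’s complement of the sum of all footer bytes, with the 4-byte checksum field itself treated as zero. A minimal sketch:

// Sketch of the footer checksum as defined in the VHD specification.
static class VhdChecksum
{
    public static uint Compute(byte[] footer)
    {
        uint sum = 0;
        for (int i = 0; i < footer.Length; i++)
        {
            if (i >= 64 && i < 68) continue; // skip the checksum field itself (offset 64, 4 bytes)
            sum += footer[i];
        }
        return ~sum; // one's complement of the byte sum
    }
}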

So I did that… and found more failure.

*sigh* – the fallback scenario of download/resize/upload came to mind again…

Resizing a VHD file - take 2

Being a persistent developer, I decided to do some more searching. For most problems, at least a partial solution is available out there! And there was: CodePlex hosts a library called .NET DiscUtils, which supports reading from and writing to a giant load of container formats such as ISO, VHD, UDF and VDI, various file systems, and much more!

Going through the sources and doing some research, I found the one missing piece from my first attempt: “geometry”. An old class on basic computer principles came to mind, in which the professor taught us that disks have geometry: cylinder-head-sector (CHS) information that the disk driver can use to determine physical data blocks on the disk.
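For the curious: the VHD spec contains an appendix describing how to derive CHS values from a disk’s capacity. The sketch below follows my reading of that appendix; DiscUtils’ Geometry.FromCapacity does essentially the same thing, so treat this as illustrative only.

// Sketch of the CHS calculation from the VHD specification's appendix.
static class VhdGeometry
{
    public static void FromCapacity(long capacityBytes, out int cylinders, out int heads, out int sectorsPerTrack)
    {
        long totalSectors = capacityBytes / 512;
        long cylinderTimesHeads;

        // Cap at the maximum addressable CHS size
        if (totalSectors > 65535L * 16 * 255)
        {
            totalSectors = 65535L * 16 * 255;
        }

        if (totalSectors >= 65535L * 16 * 63)
        {
            sectorsPerTrack = 255;
            heads = 16;
            cylinderTimesHeads = totalSectors / sectorsPerTrack;
        }
        else
        {
            sectorsPerTrack = 17;
            cylinderTimesHeads = totalSectors / sectorsPerTrack;
            heads = (int)((cylinderTimesHeads + 1023) / 1024);
            if (heads < 4) heads = 4;

            if (cylinderTimesHeads >= heads * 1024L || heads > 16)
            {
                sectorsPerTrack = 31;
                heads = 16;
                cylinderTimesHeads = totalSectors / sectorsPerTrack;
            }
            if (cylinderTimesHeads >= heads * 1024L)
            {
                sectorsPerTrack = 63;
                heads = 16;
                cylinderTimesHeads = totalSectors / sectorsPerTrack;
            }
        }

        cylinders = (int)(cylinderTimesHeads / heads);
    }
}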

Being lazy, I decided to copy and adapt the Footer class from this library. Why reinvent the wheel? Why risk going sub-zero on the Wife Acceptance Factor on a Saturday?

So I decided to generate a fresh VHD file in Windows and try to resize that one using this Footer class. Let’s start simple: specify the file to open and the desired new size, then open a read/write stream to it.

string file = @"c:\temp\path\to\some.vhd";
long newSize = 20971520; // resize to 20 MB

using (Stream stream = new FileStream(file, FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
    // code goes here
}

Since we know the size of the file we’ve just opened, the footer starts at length - 512. The Footer class takes these bytes and creates a .NET object from them:

stream.Seek(-512, SeekOrigin.End);
var currentFooterPosition = stream.Position;

// Read current footer
var footer = new byte[512];
stream.Read(footer, 0, 512);

var footerInstance = Footer.FromBytes(footer, 0);

Of course, we want to make sure we’re working on a fixed-size disk and that it’s smaller than the requested new size.

if (footerInstance.DiskType != FileType.Fixed
    || footerInstance.CurrentSize >= newSize)
{
    throw new Exception("You are one serious nutcase!");
}

If all is well, we can start resizing the disk. Simply writing a series of zeroes in the least optimal way will do:

// Write 0 values
stream.Seek(currentFooterPosition, SeekOrigin.Begin);
while (stream.Length < newSize)
{
    stream.WriteByte(0);
}
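If you don’t feel like waiting for that loop, a quicker alternative is to let the filesystem grow the file in one go. This is a sketch that assumes the filesystem zero-fills the newly allocated region when it is read back, which NTFS does:

// Quicker alternative (sketch): extend the file in a single call instead of
// writing zeroes byte by byte. Assumes reads of the newly allocated region
// return zeroes, which is the case on NTFS.
stream.SetLength(newSize);
stream.Seek(0, SeekOrigin.End); // position ourselves where the 512-byte footer will go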

Now that we have a VHD file that holds the desired new size capacity, there’s one thing left: updating the VHD file footer. Again, the Footer class can help us here by updating the current size, original size, geometry and checksum fields:

// Change footer size values
footerInstance.CurrentSize = newSize;
footerInstance.OriginalSize = newSize;
footerInstance.Geometry = Geometry.FromCapacity(newSize);

footerInstance.UpdateChecksum();

One thing left: writing the footer to our VHD file:

footer = new byte[512];
footerInstance.ToBytes(footer, 0);

// Write new footer
stream.Write(footer, 0, footer.Length);

That’s it. And my big surprise after running this? Great success! A VHD that doubled in size.

Resize VHD Windows Azure disk

So we can now resize VHD files in under a second. That’s much faster than any VHD resizer tool you’ll find out there! But still: what about the download/upload?

Resizing a VHD file stored in blob storage

Now that we have the code for resizing a local VHD, porting this to blob storage, and more specifically to the features provided for manipulating page blobs, is pretty straightforward. The Windows Azure Storage SDK gives us access to every single 512-byte page of a page blob, meaning we can work with files that span gigabytes of data while only downloading and uploading a couple of bytes…

Let’s give it a try. First of all, our file is now a URL to a blob:

var blob = new CloudPageBlob(
    new Uri("http://account.blob.core.windows.net/vhds/some.vhd"),
    new StorageCredentials("accountname", "accountkey"));

Next, we can fetch the last page of this blob to read our VHD’s footer:

blob.FetchAttributes();
var originalLength = blob.Properties.Length;

var footer = new byte[512];
using (Stream stream = new MemoryStream())
{
    blob.DownloadRangeToStream(stream, originalLength - 512, 512);
    stream.Position = 0;
    stream.Read(footer, 0, 512);
    stream.Close();
}

var footerInstance = Footer.FromBytes(footer, 0);

After doing the check on disk type again (fixed and smaller than the desired new size), we can resize the VHD. This time not by writing zeroes to it, but by calling one simple method on the storage SDK.

blob.Resize(newSize + 512);

In theory, it’s not required to overwrite the current footer with zeroes, but let’s play it clean:

blob.ClearPages(originalLength - 512, 512);

Next, we can change our footer values again:

footerInstance.CurrentSize = newSize;
footerInstance.OriginalSize = newSize;
footerInstance.Geometry = Geometry.FromCapacity(newSize);

footerInstance.UpdateChecksum();

footer = new byte[512];
footerInstance.ToBytes(footer, 0);

And write them to the last page of our page blob:

using (Stream stream = new MemoryStream(footer))
{
    blob.WritePages(stream, newSize);
}

And that’s all, folks! Using this code you’ll be able to resize a VHD file stored on blob storage in less than a second without having to download and upload several gigabytes of data.

Meet WindowsAzureDiskResizer

Since resizing Windows Azure VHD files is a well-known missing feature, I decided to wrap all my code in a console application and share it on GitHub. Feel free to fork, contribute and so on. WindowsAzureDiskResizer takes at least two parameters: the desired new size (in bytes) and a blob URL to the VHD. This can be a URL containing a Shared Access Signature.

Resize windows azure VM disk

Now let’s resize a disk. Here are the steps to take:

  • Shutdown the VM
  • Delete the VM -or- detach the disk if it’s not the OS disk
  • In the Windows Azure portal, delete the disk (retain the data!) so that the lease Windows Azure has on it is removed
  • Run WindowsAzureDiskResizer
  • In the Windows Azure portal, recreate the disk based on the existing blob
  • Recreate the VM  -or- reattach the disk if it’s not the OS disk
  • Start the VM
  • Use diskpart / disk management to resize the partition

Here’s how fast the resizing happens:

VhdResizer

Woah! Enjoy!

We’re good for now, at least until Microsoft decides to switch to the newer VHDX file format…

Download/contribute: on GitHub or binaries: WindowsAzureDiskResizer-1.0.0.0.zip (831.69 kb)