Maarten Balliauw {blog}

Web development, NuGet, Microsoft Azure, PHP, ...


Replaying IIS request logs using Apache JMeter

I don't always test code...

How would you validate that a new API is compatible with an old API? While upgrading frameworks in a web application we're building, that was exactly the question we were asking ourselves. Sure, we could write synthetic tests for each endpoint, but would those be representative? Users typically find far better (and more insane) ways to exercise an API, so why not replay actual requests against the new API?

In this post, we’ll see how we can do exactly this using IIS and Apache JMeter. I’ve been using JMeter quite often in the past years doing web development, as it’s one of the most customizable load test and functional test tools for web applications. The interface is quite spartan, but don’t let that discourage you from using JMeter. After all, this is Sparta!

Collecting IIS logs

Of course, one of the first things to do before being able to replay actual request logs is collecting those logs. Depending on the server configuration, these can be stored in many locations, such as C:\inetpub\logs\LogFiles or C:\Windows\system32\LogFiles\W3SVC1. I will leave it up to you where to find them.

In our case, we're using Azure Cloud Services to host our web applications. IIS logs are stored in a location similar to C:\Resources\Directory\<deploymentid>.<rolename>.DiagnosticStore\LogFiles\Web\W3SVC<numbers>. If you're on Azure Web Sites, logging must be enabled first before the logs can be downloaded.

Converting IIS logs

By default, IIS logs are in the W3C-IIS format, which is great, as many tools are able to parse that format. Except for JMeter, which works with the NCSA Common Log Format. Not to worry though! Fetch the RConvLog tool and invoke it on the IIS log file that should be replayed later on.

Running RConvLog

We’re running RConvLog on our log, which will be converted to the NCSA log format. We’re also providing RConvLog with an idea about the time zone the logs were generated in. Since Azure runs in UTC, we can just tell it it’s all UTC by passing in +0000.

Setting up a JMeter test plan

Time for action! After launching JMeter, we can start setting up our test plan. Using the context Add | Threads | Thread Group menu, we can add a thread group. A thread group in JMeter is what simulates users. We can decide how many users are active at the same time (number of threads), how dispersed requests are (ramp-up period) and how many requests will be made in total (loop count * number of threads). The following configuration will simulate 10.000 requests with at most 100 concurrent users. Note that when replaying logs, having 10.000 requests means only the first 10.000 requests from the log will be replayed (if in the next step, the OrderPreservingLogParser is selected). If the log file holds 40.000 and we want to replay them all, we’ll have to do the math and ensure we actually will do that number of requests.
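
For example, 100 threads with a loop count of 100 gives 100 × 100 = 10.000 requests in total. To replay a 40.000-entry log in full, we could keep the 100 threads and raise the loop count to 400, or pick any other combination where threads × loop count equals 40.000.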

JMeter test plan - users and threads

Yes, spartan. I already mentioned that. Next up, we can use the Add | Sampler | Access Log Sampler context menu. We can now specify where our log file that is to be replayed lives. Since NCSA log files don’t hold the hostname or server port, we can configure them here. For example if we want to replay logs against localhost, that’s what we’d enter under Server. We’ll also have to enter the log file location so that JMeter knows which log file to read from.

Important thing to change here: the log parser class. JMeter comes with several of them and can be extended with custom classes as well. Three come in the box: the TCLogParser, which processes the access log independently for each thread, and the SharedTCLogParser and OrderPreservingLogParser, which share access to the file so that each thread gets the next entry in the log. Let's pick the OrderPreservingLogParser so that the access log is read and replayed line by line.

JMeter Access Logs Sampler

All that’s left is using the Add | Listener | Aggregate Report context menu so that we can have a look at the results. That’s pretty much it. We should now save our test plan so we can run JMeter.

Replaying the logs with JMeter

Clicking the green Run button launches our virtual users and processes logs. The Aggregate Report will list every request made, show its timings and the error rate.

Aggregate Report when replaying IIS request logs

That’s about it. But there are some considerations to make…

Considerations

What have we tested so far? Not a lot, to be honest. We've replayed IIS request logs but have not validated that they return the expected results. So… how to do that? Using the Add | Assertions context menu, we can add assertions on status code and response contents. That's great for functional tests, but replaying 100.000 entries is a bit harder to validate… For the test case we opened this blog post with, we created an Excel file containing the HTTP status codes and response sizes (both are in the logs) and compared them with the results we see in JMeter.
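
As a rough illustration of that comparison workflow (not from the original post, and assuming the sc-status and sc-bytes fields are enabled in the IIS logging configuration), a small console app can dump those two fields so they can be pasted into Excel next to the JMeter results:

using System;
using System.IO;

class LogToCsv
{
    static void Main(string[] args)
    {
        string[] fields = null;

        // W3C logs are self-describing: the "#Fields:" header tells us which column is which.
        foreach (var line in File.ReadLines(args[0]))
        {
            if (line.StartsWith("#Fields:"))
            {
                fields = line.Substring("#Fields:".Length).Trim().Split(' ');
                continue;
            }
            if (line.StartsWith("#") || fields == null) continue;

            var values = line.Split(' ');
            var uri = values[Array.IndexOf(fields, "cs-uri-stem")];
            var status = values[Array.IndexOf(fields, "sc-status")];
            var bytes = values[Array.IndexOf(fields, "sc-bytes")];

            Console.WriteLine("{0};{1};{2}", uri, status, bytes);
        }
    }
}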

Maybe we're not interested in the actual result, but in what is going on in our application? Turns out replaying IIS request logs can help us there, too. For the application we're converting, we're using Azure Application Insights to collect real-time telemetry from our application. Using a profiler like JetBrains dotMemory or dotTrace, we can also subject our running application to close inspection while JMeter simulates a real load against it.

And of course, we're testing only anonymous GET requests here. While JMeter supports sending cookies and request types other than GET, there are other tools to test those scenarios as well, like Fiddler or Runscope.

Enjoy!

Domain Routing and resolving current tenant with ASP.NET MVC 6 / ASP.NET 5

So you’re building a multi-tenant application. And just like many multi-tenant applications out there, the application will use a single (sub)domain per tenant and the application will use that to select the correct database connection, render the correct stylesheet and so on. Great! But how to do this with ASP.NET MVC 6?

A few years back, I wrote about ASP.NET MVC Domain Routing. It seems that post was more popular than I thought, as people have been asking me how to do this with the new ASP.NET MVC 6. In this blog post, I’ll do exactly that, as well as provide an alternative way of resolving the current tenant based on the current request URL.

Disclaimer: ASP.NET MVC 6 is still evolving, and there's a big chance this blog post is outdated by the time you read it. I've used the following dependencies to develop this against:

You’re on your own if you are using other dependencies.

Domain routing – what do we want to do?

The premise for domain routing is simple. Ideally, we want to be able to register a route like this:
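
To make that concrete, here's a rough sketch of the kind of registration we're after. MapDomainRoute is a hypothetical name here (the real extension methods are in the gist linked further down, and the exact route builder API differs between betas):

app.UseMvc(routes =>
{
    // "{tenant}" is captured from the host name and exposed as a route value;
    // the path template is handled as usual.
    routes.MapDomainRoute(
        name: "default",
        domainTemplate: "{tenant}.localtest.me",
        template: "{controller}/{action}/{id?}");
});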

The route would match any request that uses a hostname similar to *.localtest.me, where "*" is recognized as the current tenant and provided to controllers as a route value. And of course we can define the path route template as well, so we can recognize which controller and action to route into.

Domain routing – let’s do it!

Just like in my old post on ASP.NET MVC Domain Routing, we will be using the existing magic of ASP.NET routing and extend it a bit with what we need. In ASP.NET MVC 6, this means we’ll be creating a new IRouter implementation that encapsulates a TemplateRoute. Let’s call ours DomainTemplateRoute.

The DomainTemplateRoute has a similar constructor to MVC's TemplateRoute, with one exception: we also want a domainTemplate parameter in which we can define the template that the host name should match. We will create a new TemplateRoute that's held in a private field, so we can easily match the request path against it if the domain matches. This means we only need some logic to match the incoming domain, something we need a TemplateMatcher for. This guy will parse {tenant}.localtest.me into a dictionary that contains the actual value of the tenant placeholder. Not ideal, as the TemplateMatcher usually does its thing on a path, but since it treats a dot (.) as a separator we should be good there.

Having that infrastructure in place, we will need to build out the Task RouteAsync(RouteContext context) method that handles the routing. Simplified, it would look like this:
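
The gist linked below holds the real implementation; as a heavily simplified sketch (the field names are mine, and the beta-era RouteContext/TemplateMatcher shapes have shifted between releases, so treat this as illustrative rather than copy-paste ready):

public async Task RouteAsync(RouteContext context)
{
    // Match the incoming host name against the domain template, e.g. "{tenant}.localtest.me".
    // TemplateMatcher treats "." as a separator, so it is happy to chew on host names too.
    var host = context.HttpContext.Request.Host.Value;
    var matchedValues = _domainTemplateMatcher.Match(host);

    if (matchedValues == null)
    {
        // The host does not match the domain template: this route does not apply.
        return;
    }

    // Merge the values captured from the host (e.g. "tenant") into the route data
    // and let the wrapped TemplateRoute handle path matching, constraints and so on.
    foreach (var value in matchedValues)
    {
        context.RouteData.Values[value.Key] = value.Value;
    }

    await _innerRoute.RouteAsync(context);
}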

We match the hostname against our domain template. If it does not match, the route does not match. If it does match, we call the inner TemplateRoute's RouteAsync method and let that one handle the path template matching, constraints processing and so on. Lazy, but convenient!

We’re not there yet. We also want to be able to build URLs using the various HtmlHelpers that are around. If we pass it route data that is only needed for the domain part of the route, we want to strip it off the virtual path context so we don’t end up with URLs like /Home/About?tenant=tenant1 but instead with a normal /Home/About. Here’s a gist:
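
The gist has the full version; conceptually it boils down to something like this (again a sketch with made-up field names, written against the beta-era IRouter signature that returned a string):

public string GetVirtualPath(VirtualPathContext context)
{
    // Values that only exist to satisfy the domain template (e.g. "tenant") should not
    // leak into the generated path, otherwise we end up with /Home/About?tenant=tenant1.
    foreach (var parameterName in _domainTemplateParameterNames)
    {
        context.Values.Remove(parameterName);
    }

    return _innerRoute.GetVirtualPath(context);
}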

Fitting it all together, here’s the full DomainTemplateRoute class: https://gist.github.com/maartenba/77ca6f9cfef50efa96ec#file-domaintemplateroute-cs – The helpers for registering these routes are at https://gist.github.com/maartenba/77ca6f9cfef50efa96ec#file-domaintemplateroutebuilderextensions-cs

But there’s another approach!

One might say the only reason we would want domain routing is to know the current tenant in a multi-tenant application. This will often be the case, and there’s probably a more convenient method of doing this: by building a middleware. Ideally in our application startup, we want to add app.UseTenantResolver(); and that should ensure we always know the desired tenant for the current request. Let’s do this!

OWIN taught us that we can simply create our own request pipeline and decide which steps the current request is routed through. So if we create such a step, a middleware, that sets the current tenant on the current request context, we're good. And that's exactly what this middleware does:
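
As a minimal sketch, assuming the beta-era SetFeature/GetFeature API mentioned below and hypothetical Tenant, TenantFeature and ITenantFeature types (the full source is in the gist linked at the end of this section), such a middleware could derive the tenant from the host name like this:

public class TenantResolverMiddleware
{
    private readonly RequestDelegate _next;

    public TenantResolverMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // Derive the tenant from the host name, e.g. "tenant1.localtest.me" -> "tenant1".
        var tenantName = context.Request.Host.Value.Split('.')[0];

        // Expose the tenant to the rest of the pipeline (and to controllers) as a feature.
        context.SetFeature<ITenantFeature>(new TenantFeature(new Tenant { Name = tenantName }));

        await _next(context);
    }
}

The app.UseTenantResolver() helper could then be a one-line extension method wrapping app.UseMiddleware<TenantResolverMiddleware>().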

We check the current request and, based on the request or one of its properties, create a Tenant instance and a TenantFeature instance and set it as a feature on the current HttpContext. In our controllers we can now get the tenant by using that feature: Context.GetFeature<ITenantFeature>().

There you go, two ways of detecting tenants based on the incoming URL. The full source for both solutions is at https://gist.github.com/maartenba/77ca6f9cfef50efa96ec (requires some assembling).

Building future .NET projects is quite pleasant

You may remember my ranty post from a couple of months back. If not, read about how building .NET projects is a world of pain and here’s how we should solve it. With Project K ASP.NET vNext ASP.NET 5 around the corner, I thought I had to look into it again and see if things will actually get better… So here goes!

Setting up a build agent is no longer a world of pain

There, the title says it all. For all .NET development we currently do, this world of pain will still be there. No way around it: you will want to commit random murders if you have to do builds targeting .NET 2.0 – .NET 4.5. A billion SDKs, all packaged in MSIs that come with weird silent installs so you can not really script their setup, will still be there. The reason is that the dependencies we have are all informal: we build against some SDK and assume it will be there. Our application does not define what it needs, so we have to provide the whole world on our build machines…

But if we forget all that and focus just on ASP.NET 5 and the new runtime, this new world is bliss. What do we need on the build agent? A few things, still.

  • An operating system (Windows, Linux and even Mac OS X will do the job)
  • PowerShell, or any other shell like Bash
  • Some form of .NET installed, for example mono

Sounds pretty standard out-of-the-box to me. So that’s all good! What else do we need installed permanently on the machine? Nothing! That’s right: NOTHING! Builds for ASP.NET 5 are self-contained and will make sure they can run anytime, anywhere. Every project specifies its dependencies, that will all be downloaded when needed so they are available to the build. Let’s see how builds now work…

How ASP.NET 5 projects work…

As an example for this post, I will use the Entity Framework repository on GitHub, which is built against ASP.NET 5. When building a project in Visual Studio 2015, there will be .sln files that represent the solution, as well as new .kproj files that represent our project. For Visual Studio. That’s right: you can ignore these files, they are just so Visual Studio can figure out how it all fits together. “But that .kproj file is like a project file, it’s msbuild-like and I can add custom tasks in there!” – Crack! That was the sound of a whip on your fingers. Yes, you can, and the new project system actually adds some things in there to make building the project in Visual Studio work, but stay away from the .kproj files. Don’t touch them.

The real project files are these: global.json and project.json. The first one, global.json, may look like this:

{ "sources": [ "src" ] }

It defines the structure of our project, where we say that source code is in the folder named src. Multiple folders could be there, for example src and test so we can distinguish where which type of project is stored. For every project we want to make, we can create a folder under the sources folder and in there, add a project.json file. It could look like this:

{ "version": "7.0.0-*", "description": "Entity Framework is Microsoft's recommended data access technology for new applications.", "compilationOptions": { "warningsAsErrors": true }, "dependencies": { "Ix-Async": "1.2.3-beta", "Microsoft.Framework.Logging": "1.0.0-*", "Microsoft.Framework.OptionsModel": "1.0.0-*", "Remotion.Linq": "1.15.15", "System.Collections.Immutable": "1.1.32-beta" }, "code": [ "**\\*.cs", "..\\Shared\\*.cs" ], "frameworks": { "net45": { "frameworkAssemblies": { "System.Collections": { "version": "", "type": "build" }, "System.Diagnostics.Debug": { "version": "", "type": "build" }, "System.Diagnostics.Tools": { "version": "", "type": "build" }, "System.Globalization": { "version": "", "type": "build" }, "System.Linq": { "version": "", "type": "build" }, "System.Linq.Expressions": { "version": "", "type": "build" }, "System.Linq.Queryable": { "version": "", "type": "build" }, "System.ObjectModel": { "version": "", "type": "build" }, "System.Reflection": { "version": "", "type": "build" }, "System.Reflection.Extensions": { "version": "", "type": "build" }, "System.Resources.ResourceManager": { "version": "", "type": "build" }, "System.Runtime": { "version": "", "type": "build" }, "System.Runtime.Extensions": { "version": "", "type": "build" }, "System.Runtime.InteropServices": { "version": "", "type": "build" }, "System.Threading": { "version": "", "type": "build" } } }, "aspnet50": { "frameworkAssemblies": { "System.Collections": "", "System.Diagnostics.Debug": "", "System.Diagnostics.Tools": "", "System.Globalization": "", "System.Linq": "", "System.Linq.Expressions": "", "System.Linq.Queryable": "", "System.ObjectModel": "", "System.Reflection": "", "System.Reflection.Extensions": "", "System.Resources.ResourceManager": "", "System.Runtime": "", "System.Runtime.Extensions": "", "System.Runtime.InteropServices": "", "System.Threading": "" } }, "aspnetcore50": { "dependencies": { "System.Diagnostics.Contracts": "4.0.0-beta-*", "System.Linq.Queryable": "4.0.0-beta-*", "System.ObjectModel": "4.0.10-beta-*", "System.Reflection.Extensions": "4.0.0-beta-*" } } } }

Whoa! My eyes! Well, it’s not so bad. A couple of things are in here:

  • The version of our project (yes, we have to version properly, woohoo!)
  • A description (as I have been preaching a long time: every project is now a package!)
  • Where is our source code stored? In this case, all .cs files in all folders and some in a shared folder one level up.
  • Dependencies of our project. These are identifiers of other packages, that will either be searched for on NuGet, or on the filesystem. Since every project is a package, there is no difference between a project or a NuGet package. During development, you can depend on a project. When released, you can depend on a package. Convenient!
  • The frameworks supported and the framework components we require.

That's the project system. These are not all the supported elements, there are more. But generally speaking: our project now defines what it needs. One thing I like is the option to run scripts at various stages of the project's lifecycle and build lifecycle, such as restoring npm or bower packages. A slight thorn in my eye there is that the examples out there all assume npm and bower are on the build machine. Yes, that's a hidden dependency right there…

The good things?

  • Everything is a package
  • Everything specifies its dependencies explicitly (well, almost everything)
  • It’s human readable and machine readable

So let’s see what we would have to do if we want to automate a build of, say, the Entity Framework repository on GitHub.

Automated building of ASP.NET 5 projects

This is going to be so disappointing when you read it: to build Entity Framework, you run build.cmd (or build.sh on a non-Windows OS). That's it. It will compile everything into assemblies in NuGet packages, run tests and that's it. But what does this build.cmd do, exactly? Let's dissect it! Here's the source code that's in there at the time of writing this blog post:

@echo off
cd %~dp0

SETLOCAL
SET CACHED_NUGET=%LocalAppData%\NuGet\NuGet.exe

IF EXIST %CACHED_NUGET% goto copynuget
echo Downloading latest version of NuGet.exe...
IF NOT EXIST %LocalAppData%\NuGet md %LocalAppData%\NuGet
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "$ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest 'https://www.nuget.org/nuget.exe' -OutFile '%CACHED_NUGET%'"

:copynuget
IF EXIST .nuget\nuget.exe goto restore
md .nuget
copy %CACHED_NUGET% .nuget\nuget.exe > nul

:restore
IF EXIST packages\KoreBuild goto run
.nuget\NuGet.exe install KoreBuild -ExcludeVersion -o packages -nocache -pre
.nuget\NuGet.exe install Sake -version 0.2 -o packages -ExcludeVersion

IF "%SKIP_KRE_INSTALL%"=="1" goto run
CALL packages\KoreBuild\build\kvm upgrade -runtime CLR -x86
CALL packages\KoreBuild\build\kvm install default -runtime CoreCLR -x86

:run
CALL packages\KoreBuild\build\kvm use default -runtime CLR -x86
packages\Sake\tools\Sake.exe -I packages\KoreBuild\build -f makefile.shade %*

Did I ever mention my dream was to have fully self-contained builds? This is one. Here’s what happens:

  • A NuGet.exe is required; if one is found it's reused, if not, it's downloaded on the fly.
  • Using NuGet, 2 packages are installed (currently from the alpha feed the ASP.NET team has on MyGet, but I assume these will end up on NuGet.org someday)
    • KoreBuild
    • Sake
  • The KoreBuild package contains a few things (go on, use NuGet Package Explorer and see, I’ll wait)
    • A kvm.ps1, which is the bootstrapper for the ASP.NET 5 runtime that installs a specific runtime version and kpm, the package manager.
    • A bunch of .shade files
  • Using that kvm.ps1, the latest CoreCLR runtime is installed and activated
  • Sake.exe is run from the Sake package

Disappointment, I can feel it! This file does bootstrap the CoreCLR runtime on the build machine, but how is the actual build performed? The answer lies in the .shade files from that KoreBuild package. A lot of information is in there, but distilling it all, here's how a build is done using Sake:

  • All bin folders underneath the current directory are removed. Consider this the old-fashioned “clean” target in msbuild.
  • The kpm restore command is run from the folder where the global.json file is. This will ensure that all dependencies for all project files are downloaded and made available on the machine the build is running on.
  • In every folder containing a project.json file, the kpm build command is run, which compiles it all and generates a NuGet package for every project.
  • In every folder containing a project.json file where a command element named test is found, the k test command is run to execute unit tests.

This is a simplified version, as it also cleans and restores npm and bower, but you get the idea. A build is pretty easy now. KoreBuild and Sake do this, but we could also just run all steps in the same order to achieve a fully working build. So that’s what I did…

Automated building of ASP.NET 5 projects with TeamCity

To see if it all was true, I decided to try and automate things using TeamCity. Entity Framework would be too easy, as that's just calling build.cmd. Which is awesome!

I crafted a little project on GitHub that has a website, a library project and a test project. The goal I set out was automating a build of all this using TeamCity, and then making sure tests are run and reported. On a clean build agent with no .NET SDK’s installed at all. I also decided to not use the Sake approach, to see if my theory about the build process was right.

So… Installing the runtime, running a clean, build and test, right? Here goes:

@echo off
cd %teamcity.build.workingDir%
SETLOCAL

IF EXIST packages\KoreBuild goto run
%teamcity.tool.NuGet.CommandLine.DEFAULT.nupkg%\tools\nuget.exe install KoreBuild -ExcludeVersion -o packages -nocache -pre -Source https://www.myget.org/F/aspnetvnext/api/v2

:run
CALL packages\KoreBuild\build\kvm upgrade -runtime CLR -x86
CALL packages\KoreBuild\build\kvm install default -runtime CoreCLR -x86
CALL packages\KoreBuild\build\kvm use default -runtime CLR -x86

:clean
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% "bin" -Directory -rec -erroraction 'silentlycontinue' | Remove-Item -Recurse; exit $Lastexitcode"

:restore
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% global.json -rec -erroraction 'silentlycontinue' | Select-Object -Expand DirectoryName | Foreach { cmd /C cd $_ `&`& CALL kpm restore }; exit $Lastexitcode"

:buildall
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% project.json -rec -erroraction 'silentlycontinue' | Foreach { kpm build $_.FullName --configuration %mr.Configuration% }; exit $Lastexitcode"
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% *.nupkg -rec -erroraction 'silentlycontinue' | Where-Object {$_.FullName -match 'bin'} | Select-Object -Expand FullName | Foreach { Write-Host `#`#teamcity`[publishArtifacts `'$_`'`] }; exit $Lastexitcode"

:testall
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% project.json -rec -erroraction 'silentlycontinue' | Where-Object { $_.FullName -like '*test*' } | Select-Object -Expand DirectoryName | Foreach { cmd /C cd $_ `&`& k test -teamcity }; exit $Lastexitcode"

(note: this may not be optimal, it’s as experimental as it gets, but it does the job – feel free to rewrite this in Ant or Maven to make it cross platform on TeamCity agents, too)

TeamCity will now run the build and provide us with the artifacts generated during build (all the NuGet packages), and expose them in the UI after each build:

Project K ASP.NET vNext TeamCity

Even better: since TeamCity has a built-in NuGet server, these packages now show up on that feed as well, allowing me to consume these in other projects:

NuGet feed in TeamCity for ASP.NET vNext

Running tests was unexpected: it seems the ASP.NET 5 xUnit runner still uses TeamCity service messages and exposes results back to the server:

Test results from xUnit vNext in TeamCity

But how to set the build number, you ask? Well, turns out that this comes from the project.json. The build number in there is leading, but we can add a suffix by creating a K_VERSION_NUMBER environment variable. On TeamCity, we could use our build counter for it. Or run GitVersion and use that as the version suffix.

TeamCity set ASP.NET 5 version number

Going a step further, running kpm pack even allows us to build our web applications and add the entire generated artifact to our build, ready for deployment:

ASP.NET 5 application build on TeamCity

Very, very nice! I’m liking where ASP.NET 5 is going, and forgetting everything that came before gives me high hopes for this incarnation.

Conclusion

This is really nice, and the way I dreamt it would all work. Everything is a package, and builds are self-contained. It's all still in beta state, but it gives a great view of what we'll soon all be doing. I hope a lot of projects will use builds like the Entity Framework one, having one or two build.cmd files in there that do the entire thing. But even if not and you have a boilerplate VS2015 project, using the steps outlined in this blog post gets the job done. In fact, I created some TeamCity meta runners for you to enjoy (contributions welcome). How about adding one build step to your ASP.NET 5 builds in TeamCity…

TeamCity ASP.NET build by convention

Go grab these meta runners now! I have created quite a few:

  • Install KRE
  • Convention-based build
  • Clean sources
  • Restore packages
  • Build one project
  • Build all projects
  • Test one project
  • Test all projects
  • Package application

PS: Thanks Mike for helping me out with some PowerShell goodness!

Automatically strong name signing NuGet packages

Some developers prefer to strong name sign their assemblies. Signing them also means that the dependencies that are consumed must be signed. Not all third-party dependencies are signed, though, for example when consuming packages from NuGet. Some are signed, some are unsigned, and the only way to know is at compile time, when we see this:

Referenced assembly does not have a strong name

That’s right: a signed assembly cannot consume an unsigned one. Now what if we really need that dependency but don’t want to lose the fact that we can easily update it using NuGet… Turns out there is a NuGet package for that!

The Assembly Strong Naming Toolkit can be installed into our project, after which we can use the NuGet Package Manager Console to sign referenced assemblies. There is also the .NET Assembly Strong-Name Signer by Werner van Deventer, which provides us with a nice UI as well.

The problem is that the above tools only sign the assemblies once we already consumed the NuGet package. With package restore enabled, that’s pretty annoying as the assemblies will be restored when we don’t have them on our system, thus restoring unsigned assemblies…

NuGet Signature

Based on the .NET Assembly Strong-Name Signer, I decided to create a small utility that can sign all assemblies in a NuGet package and creates a new package out of those. This “signed package” can then be used instead of the original, making sure we can simply consume the package in Visual Studio and be done with it. Here’s some sample code which signs the package “MyPackage” and creates “MyPackage.Signed”:

var signer = new PackageSigner();
signer.SignPackage("MyPackage.1.0.0.nupkg", "MyPackage.Signed.1.0.0.nupkg", "SampleKey.pfx", "password", "MyPackage.Signed");

This is pretty neat, if I may say so, but still requires manual intervention for the packages we consume. Unless the NuGet feed we’re consuming could sign the assemblies in the packages for us?

NuGet Signature meets MyGet Webhooks

Earlier this week, MyGet announced their webhooks feature. After enabling the feature on our feed, we could pipe events, such as “package added”, into software of our own and perform an action based on this event. Such as, signing a package.

MyGet automatically sign NuGet package

Sweet! From the GitHub repository here, download the Web API project and deploy it to Microsoft Azure Websites. I wrote the Web API project with some configuration options, which we can either specify before deploying or through the management dashboard. The application needs these:

  • Signature:KeyFile - path to the PFX file to use when signing (defaults to a sample key file)
  • Signature:KeyFilePassword - private key/password for using the PFX file
  • Signature:PackageIdSuffix - suffix for signed package id's. Can be empty or something like ".Signed"
  • Signature:NuGetFeedUrl - NuGet feed to push signed packages to
  • Signature:NuGetFeedApiKey - API key for pushing packages to the above feed

The configuration in the Microsoft Azure Management Dashboard could look like this:

Azure Websites

Once that runs, we can configure the web hook on the MyGet side. Be sure to add an HTTP POST hook that posts to <url to your deployed app>/api/sign, and only with the package added event.

MyGet Webhook configuration

From now on, all packages that are added to our feed will be signed when the webhook is triggered. Here's an example where I pushed several packages to the feed and the NuGet Signature application signed them automatically.

NuGet list signed packages

The nice thing is in Visual Studio we can now consume the “.Signed” packages and no longer have to worry about strong name signing.

Thanks to Werner for the .NET Assembly Strong-Name Signer I was able to base this on.

Enjoy!

Speeding up ASP.NET vNext package restore

TL;DR: If you have multiple NuGet feeds configured on your machine, it may be worth doing some tweaking in the NuGet.config file shipping with your project.

Last week, the ASP.NET team released a preview of "ASP.NET vNext", a first step in the right direction towards solving the pain that building .NET projects is, but more than that a great step towards having an open and cross-platform ASP.NET that is super developer friendly. If you haven't checked it out yet, do so now.

One of the things ASP.NET vNext leans on heavily is NuGet. In fact, every application comes with a project.json file that describes the application's dependencies. Only when running kpm restore are these dependencies downloaded so the application can be run. Running this package restore (it's NuGet after all) is usually pretty fast, but if you, like me, are a heavy NuGet user, chances are the restore is not happening in the most optimal way. Have a look at the output of my kpm restore command right after I installed ASP.NET vNext on my system:

Project K package restore

It’s not easy to capture a screenshot that proves the point I'm about to make, but if you do this yourself and you have multiple NuGet feeds configured on your system, you’ll see that ASP.NET vNext is trying to restore packages from all configured feeds. In my case, I’m using a personal feed on MyGet, a feed hosted on my TeamCity server, a feed on my local filesystem (testing purposes) and then the ASP.NET vNext MyGet feed as well as NuGet.org. That’s 5 feeds being checked over and over again for the dependencies listed in my project.json… Let’s see if we can reduce this a bit.

If we look at the samples shipped in ASP.NET vNext, we can find a NuGet.config file in there. And as we know, NuGet has this thing called configuration file inheritance. This means that the feeds defined in here will be enriched with the feeds configured at the machine level, in my case 5 of them. But that also means we can easily fix this: adding a <clear /> element under the <packageSources> element will do the trick of removing all previously defined feeds and using just the ones defined for the project I’m working on:

<?xml version="1.0" encoding="utf-8"?> <configuration> <packageSources> <clear /> <add key="AspNetVNext" value="https://www.myget.org/F/aspnetvnext/" /> <add key="NuGet.org" value="https://nuget.org/api/v2/" /> </packageSources> <!-- ... --> </configuration>

Use this trick for your own ASP.NET vNext projects as well: specify the feeds you want to use explicitly and everything will be faster for you and other developers working with your code. It ensures that kpm or NuGet for that matter only check the feeds that are relevant to your project and not every feed that is configured on your system.

Source Control considered harmful

TL;DR: Using source control is a really bad idea. Or is it? Skip to Conclusion for the meat of this post.

One of the first things I do with a new project in Visual Studio is not add it to source control. There are many reasons, but it all boils down to this: Source Control introduces more problems than it solves.

Before I dive into this, I'll share the solution with you. Put your sources on a USB drive. Yes, it's that simple.

Implications

If you're like most other people, you don't like that solution, because it feels inefficient:

  • USB drives can get lost
  • USB drives can end up in the dishwasher
  • I have to buy a USB drive for every developer on the team
  • Sharing sources with distributed teams is more difficult: USB drives have to be shipped by snail mail

All of that is true, but then again...

  • You can always make a copy of a USB drive to safeguard against loss
  • Sharing USB drives is really easy: plug and play! Ease of use!
  • You can have lots of coffee waiting for a USB drive to arrive with that contribution to your OSS project

Still, many people go for source control: Source Control and a central repository solve all implications of using a USB drive, so why not use source control?

Fragility

Have you ever let a junior developer loose on a git repository? I can promise you, it's not pretty.

  • Merges will go wrong
  • They will find out about rebasing and mess up the entire system
  • Pull requests on GitHub? One click to merge, no need to test or review!
  • Developers will forget to check in specific files

Again: all of this is easy with a USB drive: one location to store the project. Yes, merging is slightly difficult too but then again replaying history in source control is much worse.

And I haven't even talked about having to have a network share or a GitHub account in which you can have private repositories. That's all extra costs and extra risks. What if the Internet connection goes down? What if a dev's laptop breaks? You might even say a USB drive is too advanced and a typewriter is an even better way to write code!

Cost

Did I mention the cost of USB drives? At most conferences and shops you will get them for free. Even if you buy them, they are probably around $0.10 per GB. USB drives are very inexpensive.

Compare that with source control: we need an Internet connection, a GitHub repository, and most importantly: devs will have to read documentation on using git or be coached by someone on the team. That's really inefficient and costs a lot of time!

Conclusion

You may have noted that this is a slightly strange post. You are correct, it is. I'm responding to some of the outrage regarding yesterday's NuGet.org outage. Tweets and blogs mention to not use NuGet, or to use NuGet but definitely not use package restore. That's perfectly fine, but I don't think the reasons for not using it are well founded, hence the above sarcasm. If it wasn't clear: you should be using source control.

Should you use NuGet package restore? I think it depends on your preference, mostly. It should not depend on NuGet.org outages, nor on the microwave destroying your WiFi signal and failing your builds utilizing package restore. Should you add packages to your repository or use package restore? It depends on what you want to achieve and how you want to work. I prefer not to do this because they are dependencies that are versioned (package version and packages.config) so why version them again? We don’t add the issues from our issue tracker to source control either, right?

We put issues in a specialized system for managing issues. In my opinion, the same should be true for software and component dependencies. But then again: if you want to add packages to source control, fine by me. As some tweets said, you don’t have to do it for the minimal disk space optimizations. All that matters is if it makes sense to your process. 

Just like with source control, issue trackers and other things (like package restore) in your build process, you should read up on them, play with them and know the risks. Do we know that our Internet connection can break during solar storms? Well yes. It’s a minor risk but if it’s important to your shop do mitigate that risk. Do laptops break? Yes. If it’s important that you can keep working even if a laptop crashes, buy some more and keep them up-to-date with your main development machine. If you rely on GitHub and want to get work done if they have issues, make sure you have an up to date fork somewhere on a file share. Make that two file shares!

And if you rely on NuGet package restore… you get the point, right? For NuGet, there are private repositories available that can host your in-house packages and the ones you are using from upstream sources like NuGet.org. Use them, if they matter for your development process. Know about NuGet 2.8’s automatic fallback to the local cache you have on disk and if something goes wrong, use that cache until the package source is back up.

The development process and the tools are part of your system. Know your tools. Even if it requires you to read crazy books like how to work with git. Or Pro NuGet 2.

A new year's present: introducing Glimpse plugins for Windows Azure

Glimpse plugin for Windows Azure

Have you tried Glimpse before? It shows you server-side information like execution times, server configuration, request data and such in your browser. At the February MVP Summit this year, Anthony, Nik and I had a chat about what would be useful information to be displayed in Glimpse when working on Windows Azure. Some beers and a bit of coding later, we had a proof-of-concept showing Windows Azure runtime configuration data in a Glimpse tab.

Today, we are happy to announce a first public preview of two Windows Azure tabs in Glimpse: the Glimpse.WindowsAzure package displaying runtime information, and Glimpse.WindowsAzure.Storage collecting information about traffic from and to storage.

Want to give it a try? You can install these two NuGet packages from NuGet.org (prerelease packages for now). Sources can be found on GitHub. And all comments, remarks and suggestions can go in the comments to this blog post.

Now let’s have a look at what these packages have to offer!

Glimpse.WindowsAzure

The Glimpse.WindowsAzure package adds a new tab to Glimpse, displaying environment information when the web application is hosted on Windows Azure. It does this for Cloud Services as well as for Windows Azure Web Sites.

Installation is easy: simply add the Glimpse.WindowsAzure package to your project and you’re done. If you are running on .NET 4.5, you will have to add the following setting to your Web.config:

<appSettings>
  <add key="Glimpse:DisableAsyncSupport" value="true"/>
</appSettings>

When hosting in a Windows Azure Cloud Service (or the full emulator available in the Windows Azure SDK), the Azure Environment tab will provide information gathered from the RoleEnvironment class. You can see the deployment ID, current role instance information, a list of configured endpoints, which fault and update domain our application is running in and so on.

Windows Azure Role Environment

When the web application is hosted on Windows Azure Web Sites, we get information like Compute Mode (Shared or Reserved) as well as Site Mode (Limited in the screenshot below means the application is running on a Free web site).

Glimpse Windows Azure Web Sites

The Azure Environment tab will also provide a link to the Kudu Remote Console, a feature in Windows Azure Web Sites where you can run commands on the box hosting the web site.

Kudu Console

Pretty handy if you ask me!

Glimpse.WindowsAzure.Storage

The Glimpse.WindowsAzure.Storage package adds an "Azure Storage" tab to Glimpse, displaying all sorts of information about traffic from and to Windows Azure storage. It will also estimate the cost for loading the current page depending on the number of transactions and traffic to blobs, tables and/or queues. Note that this package can also be used in ASP.NET web sites that are not hosted on Windows Azure but do make use of Windows Azure Storage.

Once the package is installed into your project, you can almost start inspecting all this information. Almost? Well, see the caveat further down…

 

Number of transactions and a cost estimate

The first type of data displayed in the Azure Storage tab is the total number of transactions, traffic consumed and a cost estimate for 10.000 pageviews. This information can be used for several scenarios:

  • Know how many calls are made to storage. Maybe you can reduce the number of calls to reduce the total number of transactions, one of the billing metrics for Windows Azure.
  • Another billing metric is the amount of traffic consumed. When running in the same datacenter as the storage account, it’s less important for cost but still, reducing the traffic can reduce the page load time.

Windows Azure Storage Transactions and bandwidth consumed

Now where do we get the price per 10.000 pageviews? Well, this is a very rough estimate, based on the pay-per-use pricing in Windows Azure. It is very likely that the actual price will be lower if you are running on an MSDN subscription, a pre-paid plan or an Enterprise Agreement.

Warnings and analysis of requests

One feature we’re particularly proud of is this one: warnings and analysis of requests to Windows Azure Storage. First of all, we’ll analyse the settings for communicating over the network. In the screenshot below, you can see several general hints to optimize throughput by disabling the Nagle algorithm or disabling HTTP 100 Continue.
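
To give an idea of what acting on those hints typically looks like (a common optimization for the .NET storage client, not necessarily Glimpse's exact wording), these are ServicePointManager settings you would normally apply once at application start-up:

// Typically done once, e.g. in Application_Start, before any storage calls are made.
ServicePointManager.UseNagleAlgorithm = false;  // avoid Nagle-induced latency on small requests
ServicePointManager.Expect100Continue = false;  // skip the extra HTTP 100 Continue round trip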

Another analysis we’ll do is verifying the requests themselves. In the example below, Glimpse is giving a warning about the fact that I’m querying table storage on properties that are not indexed, potentially causing timeouts in my application.

There are several more inspections in there, if you have suggestions for others feel free to let us know!

Analysis of requests

List of requests and Timeline

When using Windows Azure Storage, Glimpse will show you all requests that have been made together with the status code and total duration of the request.


Since a plain list is often not that easy to analyze, the Timeline tab is extended with this information as well. It shows you a summary of when calls to Windows Azure Storage have been made, as well as full details of the requests:

Timeline tracing Windows Azure Storage

One caveat

Because of a current limitation in the Windows Azure Storage SDK, you will have to explicitly add one parameter to every call that is made to Windows Azure Storage.

The idea is that the OperationContext parameter for calls to storage has to be a special Glimpse OperationContext obtained by calling OperationContextFactory.Current.Create(). This Glimpse-specific implementation provides us with all the information required to display data in the Azure Storage tab. Here's an example of how to wire it in for a call that creates a blob storage container:

var account = CloudStorageAccount.DevelopmentStorageAccount;
var blobclient = account.CreateCloudBlobClient();
var container1 = blobclient.GetContainerReference("glimpse1");
container1.CreateIfNotExists(operationContext: OperationContextFactory.Current.Create());

We are talking with Microsoft about this and are pretty sure this shortcoming will be addressed in the future.

What’s next?

It would be great if you could give these two packages a try! NuGet packages are available from NuGet.org (prerelease packages for now). Sources can be found on GitHub. And all comments, remarks and suggestions can go in the comments to this blog post.

We’re still looking at load balanced environments. You can implement Glimpse’s IPersistenceStore but we would like to have a zero-configuration setup.

Once we’re confident Glimpse.WindowsAzure and Glimpse.WindowsAzure.Storage are working properly, we’ll have a look at Windows Azure Caching and Service Bus.

Enjoy!

Visual Studio Online for Windows Azure Web Sites

Today's official Visual Studio 2013 launch provides some interesting novelties, especially for Windows Azure Web Sites. We can now choose which pipeline to run in (classic or integrated), define separate applications in subfolders of our web site, and debug a web site right from within Visual Studio. But the most impressive one is this. How about… an in-browser editor for your application?

Editing Node.JS in browser

Let's take a quick tour of it. After creating a web site, we can go to the web site's configuration and enable the Visual Studio Online preview.

Edit in Visual Studio Online

Once enabled, simply navigate to https://<yoursitename>.scm.azurewebsites.net/dev or click the link from the dashboard, provide your site credentials and be greeted with Visual Studio Online.

On the left-hand menu, we can select the feature to work with. Explore does as it says: it gives you the possibility to explore the files in your site, open them, save them, delete them and so on. We can enable Git integration, search for files and classes and so on. When working in the editor we get features like autocompletion, Find References, Peek Definition and so on. Apparently these don't work for all languages yet; currently JavaScript and node.js seem to work, while C# and PHP come with syntax highlighting but nothing more than that.

Peek definition

Most actions in the editor come with keyboard shortcuts, for example Ctrl+, opens navigation towards files in our application.

Navigation

The console comes with things like npm and autocompletion on most commands as well.

Console in Visual Studio Online

I can see myself using this for some scenarios like on-the-road editing from a Git repository (yes, you can clone any repo you want in this tool) or make live modifications to some simple sites I have running. What would you use this for?

Using the Windows Azure Content Delivery Network (CDN)

With the Windows Azure Content Delivery Network (CDN) released as a preview, I thought it was a good time to write up some details about how to work with it. The CDN can be used for offloading content to a globally distributed network of servers, ensuring faster throughput to your end users.

Note: this is a modified and updated version of my article at ACloudyPlace.com roughly two years ago. I have added information on how to work with ASP.NET MVC bundling and the Windows Azure CDN, updated screenshots and so on.

Reasons for using a CDN

There are a number of reasons to use a CDN. One of the obvious reasons lies in the nature of the CDN itself: a CDN is globally distributed and caches static content on edge nodes, closer to the end user. If a user accesses your web application and some of the files are cached on the CDN, the end user will download those files directly from the CDN, experiencing less latency in their request.

Windows Azure CDN graphically

Another reason for using the CDN is throughput. If you look at a typical webpage, about 20% of it is HTML which was dynamically rendered based on the user’s request. The other 80% goes to static files like images, CSS, JavaScript and so forth. Your server has to read those static files from disk and write them on the response stream, both actions which take away some of the resources available on your virtual machine. By moving static content to the CDN, your virtual machine will have more capacity available for generating dynamic content.

Enabling the Windows Azure CDN

The Windows Azure CDN is built for two services that are available in your subscription: storage and cloud services. The easiest way to get started with the CDN is by using the Windows Azure Management Portal. From the New menu at the bottom, select App Services | CDN | Quick Create.

Enabling Windows Azure CDN

From the dropdown that is shown, select either a storage account or a cloud service which will serve as the source of our CDN edge data. After clicking Create, the CDN will be initialized. This may take up to 60 minutes, because the settings you've just applied may take that long to propagate to all CDN edge locations globally (over 24 locations was the last number I read). Your CDN will be assigned a URL in the form of http://<id>.vo.msecnd.net.

Once the CDN endpoint is created, there are some options that can be managed. Currently they are somewhat limited but I’m pretty sure this will expand. For now, you can for example assign a custom domain name to the CDN by clicking the “Manage Domains” button in the toolbar.

Manage the Windows Azure CDN - Add custom domain

Note that the CDN works using HTTP by default, but HTTPS is supported as well and can be enabled through the management portal. Unfortunately, SSL is using a certificate that Microsoft provides and there’s currently no option to use your own, making it hard to use a custom domain name and HTTPS.

Serving blob storage content through the CDN

Let’s start and offload our static content (CSS, images, JavaScript) to the Windows Azure CDN using a storage account as the source for CDN content. In an ASP.NET MVC project, edit the _Layout.cshtml view. Instead of using the bundles for CSS and scripts, let’s include them manually from a URL hosted on your newly created CDN:

<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    <link href="http://az172665.vo.msecnd.net/static/Content/Site.css" rel="stylesheet" type="text/css" />
    <script src="http://az172665.vo.msecnd.net/static/Scripts/jquery-1.8.2.min.js" type="text/javascript"></script>
</head>
<!-- more HTML -->
</html>

Note that the CDN URL includes a reference to a folder named “static”.

If you now run this application, you’ll find no CSS or JavaScript applied. The reason for this is obvious: we have specified the URL to our CDN but haven’t uploaded any files to our storage account backing the CDN.

Where are our styles?

Uploading files to the CDN is easy. All you need is a public blob container and some blobs hosted in there. You can use tools like Cerebrata’s Cloud Storage Studio or upload the files from code. For example, I’ve created an action method taking care of uploading static content for me:

[HttpPost, ActionName("Synchronize")]
public ActionResult Synchronize_Post()
{
    var account = CloudStorageAccount.Parse(
        ConfigurationManager.AppSettings["StorageConnectionString"]);
    var client = account.CreateCloudBlobClient();

    var container = client.GetContainerReference("static");
    container.CreateIfNotExist();
    container.SetPermissions(
        new BlobContainerPermissions {
            PublicAccess = BlobContainerPublicAccessType.Blob });

    var approot = HostingEnvironment.MapPath("~/");
    var files = new List<string>();
    files.AddRange(Directory.EnumerateFiles(
        HostingEnvironment.MapPath("~/Content"), "*", SearchOption.AllDirectories));
    files.AddRange(Directory.EnumerateFiles(
        HostingEnvironment.MapPath("~/Scripts"), "*", SearchOption.AllDirectories));

    foreach (var file in files)
    {
        var contentType = "application/octet-stream";
        switch (Path.GetExtension(file)) // note: GetExtension includes the leading dot
        {
            case ".png": contentType = "image/png"; break;
            case ".css": contentType = "text/css"; break;
            case ".js": contentType = "text/javascript"; break;
        }

        var blob = container.GetBlobReference(file.Replace(approot, ""));
        blob.Properties.ContentType = contentType;
        blob.Properties.CacheControl = "public, max-age=3600";
        blob.UploadFile(file);
        blob.SetProperties();
    }

    ViewBag.Message = "Contents have been synchronized with the CDN.";

    return View();
}

There are two very important lines of code in there. The first one, container.SetPermissions, ensures that the blob storage container we’re uploading to allows public access. The Windows Azure CDN can only cache blobs stored in public containers.

The second important line of code, blob.Properties.CacheControl, is more interesting. How does the Windows Azure CDN know how long a blob should be cached on each edge node? By default, each blob will be cached for roughly 72 hours. This has some important consequences. First, you cannot invalidate the cache and have to wait for content expiration to occur. Second, the CDN will possibly refresh your blob every 72 hours.

As a general best practice, make sure that you specify the Cache-Control HTTP header for every blob you want to have cached on the CDN. If you want to have the possibility to update content every hour, make sure you specify a low TTL of, say, 3600 seconds. If you want less traffic to occur between the CDN and your storage account, specify a longer TTL of a few days or even a few weeks.
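
Using the same blob property as in the synchronization code above, that difference is just a matter of the max-age value, for example:

// a short TTL when content should be refreshable every hour:
blob.Properties.CacheControl = "public, max-age=3600";

// or a long TTL (7 days) to reduce traffic between the CDN and the storage account:
blob.Properties.CacheControl = "public, max-age=604800";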

Another best practice is to address CDN URLs using a version number. Since the CDN can create a separate cache of a blob based on the query string, appending a version number to the URL may make it easier to refresh contents in the CDN based on the version of your application. For example, main.css?v1 and main.css?v2 may return different versions of main.css cached on the CDN edge node. Do note that the query string support is opt-in and should be enabled through the management portal. Here’s a quick code snippet which appends the AssemblyVersion to the CDN URLs to version content based on the deployed application version:

@{
    var version = System.Reflection.Assembly.GetAssembly(
        typeof(WindowsAzureCdn.Web.Controllers.HomeController))
        .GetName().Version.ToString();
}
<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    <link href="http://az172729.vo.msecnd.net/static/Content/Site.css?@version" rel="stylesheet" type="text/css" />
    <script src="http://az172729.vo.msecnd.net/static/Scripts/jquery-1.8.2.min.js?@version" type="text/javascript"></script>
</head>
<!-- more HTML -->
</html>

Using cloud services with the CDN

So far we’ve seen how you can offload static content to the Windows Azure CDN. We can upload blobs to a storage account and have them cached on different edge nodes around the globe. Did you know you can also use your cloud service as a source for files cached on the CDN? The only thing to do is, again, go to the Windows Azure Management Portal and ensure the CDN is enabled for the cloud service you want to use.

Serving static content through the CDN

The main difference with using a storage account as the source for the CDN is that the CDN will look into the /cdn/* folder on your cloud service to retrieve its contents. There are two options for doing this: either moving static content to the /cdn folder, or using IIS URL rewriting to “fake” a /cdn folder.

When using ASP.NET MVC's bundling features, we'll have to modify the bundle configuration in BundleConfig.cs. First, we'll have to set bundles.UseCdn to true. Next, we'll have to provide the URL to the CDN version of our bundles. Here's a snippet which does just that for the Content/css bundle. We're still working with a version number to make sure we can update the CDN contents for every deployment of our application.

var version = System.Reflection.Assembly.GetAssembly(typeof(BundleConfig)).GetName().Version.ToString();
var cdnUrl = "http://az170459.vo.msecnd.net/{0}?" + version;

bundles.UseCdn = true;
bundles.Add(new StyleBundle("~/Content/css", string.Format(cdnUrl, "Content/css")).Include("~/Content/site.css"));

Note that this time, the CDN URL does not include any reference to a blob container.

Whether you are using bundling or not, the trick will be to request URLs straight from the CDN instead of from your server to be able to make use of the CDN.

Exposing static content to the CDN with IIS URL rewriting

The Windows Azure CDN only looks at the /cdn folder as a source of files to cache. This means that if you simply copy your static content into the /cdn folder, you’re finished. Your web application and the CDN will play happily together. But this means the static content really has to be static. In the previous example of using ASP.NET MVC bundling, our static “bundles” aren’t really static…

An alternative to copying static content to a /cdn folder explicitly is to use IIS URL rewriting. IIS URL rewriting is enabled on Windows Azure by default and can be configured to translate a /cdn URL to a / URL. For example, if the CDN requests the /cdn/Content/css bundle, IIS URL rewriting will simply serve the /Content/css bundle leaving you with no additional work.

To configure IIS URL rewriting, add a <rewrite> section under the <system.webServer> section in Web.config:

<system.webServer>
  <!-- More settings -->

  <rewrite>
    <rules>
      <rule name="RewriteIncomingCdnRequest" stopProcessing="true">
        <match url="^cdn/(.*)$" />
        <action type="Rewrite" url="{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>

As a side note, you can also configure an outbound rule in IIS URL rewriting to automatically rewrite your HTML so that it references the Windows Azure CDN. Do know that outbound rules are not supported in combination with dynamic content compression, and that they add extra load on your web server since every outgoing HTML response has to be parsed and modified.
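To give an idea of what that could look like, here's a minimal sketch of such an outbound rule. The rule name, the tag filter and the paths are assumptions, and the az170459.vo.msecnd.net endpoint is simply the one from the bundling example above; adapt all of these to your own application.

<rewrite>
  <outboundRules>
    <!-- Rewrite src/href attributes for local static content to the CDN endpoint -->
    <rule name="RewriteStaticUrlsToCdn" preCondition="IsHtml">
      <match filterByTags="A, Img, Link, Script" pattern="^/(Content|Scripts)/(.*)$" />
      <action type="Rewrite" value="http://az170459.vo.msecnd.net/{R:1}/{R:2}" />
    </rule>
    <preConditions>
      <!-- Only touch HTML responses -->
      <preCondition name="IsHtml">
        <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
      </preCondition>
    </preConditions>
  </outboundRules>
</rewrite>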

Serving dynamic content through the CDN

Some dynamic content is, in a sense, static: think of an image generated on the server, or a PDF report built from the same inputs every time. Why would you generate those files over and over again? This kind of content is a perfect candidate for caching on the CDN as well!

Imagine you have an ASP.NET MVC action method which generates an image based on a given string. For every different string the output will be different; for the same input string, however, the generated image will always be exactly the same.
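How the image gets generated doesn't really matter here, but to make the example concrete, here's a minimal sketch of what such an action method could look like, simply drawing the string onto a bitmap with System.Drawing. The controller name matches the /Home/GenerateImage URL used below; the font, size and colors are arbitrary assumptions, and caching is added further down.

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Web.Mvc;

public class HomeController : Controller
{
    // Renders the given string onto a PNG image and returns it to the browser.
    public ActionResult GenerateImage(string id)
    {
        using (var bitmap = new Bitmap(600, 100))
        using (var graphics = Graphics.FromImage(bitmap))
        using (var font = new Font("Segoe UI", 32))
        {
            graphics.Clear(Color.White);
            graphics.DrawString(id ?? string.Empty, font, Brushes.Black, 10, 25);

            using (var stream = new MemoryStream())
            {
                bitmap.Save(stream, ImageFormat.Png);
                return File(stream.ToArray(), "image/png");
            }
        }
    }
}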

As an example, we’ll be using this action method in a view to display the page title as an image. Here’s the view’s Razor code:

@{
    ViewBag.Title = "Home Page";
}

<h2><img src="/Home/GenerateImage/@ViewBag.Message" alt="@ViewBag.Message" /></h2>
<p>
    To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
</p>

In the previous section, we've seen how an IIS rewrite rule can map all incoming requests from the CDN. The same rule applies here: when the CDN requests /cdn/Home/GenerateImage/Welcome, IIS will rewrite it to /Home/GenerateImage/Welcome. The image is rendered once and served from the CDN cache from then on.

As mentioned earlier, a best practice is to specify the Cache-Control HTTP header. This can be done in our action method by using the [OutputCache] attribute, specifying the time-to-live in seconds:

[OutputCache(VaryByParam = "*", Duration = 3600, Location = OutputCacheLocation.Downstream)]
public ActionResult GenerateImage(string id)
{
    // ... generate image ...

    return File(image, "image/png");
}

We would now only have to generate this image once for every different string requested. The Windows Azure CDN will take care of all intermediate caching.

Conclusion

The Windows Azure CDN is one of the building blocks for creating fault-tolerant, reliable and fast applications on Windows Azure. By caching static content on the CDN, the web server has more resources available to process other requests. On top of that, users will experience faster loading of your applications because content is delivered from a server closer to their location.

Enjoy!

Just released: MvcSiteMapProvider 4.0

After a beta version about a month ago, we are proud to release MvcSiteMapProvider 4.0 stable! (Get it from NuGet, it's fresh!) It took 6 months to complete this major version, but I think our GitHub contributors have done a great job. Thank you all, and especially Shad for taking the lead on this release!

MvcSiteMapProvider is a tool targeted at ASP.NET MVC that provides menus, site maps, site map path functionality, and more. It provides the ability to configure a hierarchical navigation structure using a pluggable architecture that can be XML, database, or code driven. We have moved beyond a mere ASP.NET SiteMapProvider implementation to provide support for multi-tenant applications, flexible caching, dependency injection, and several interface-based extensibility points where virtually any part of the provider can be replaced with a custom implementation.

Because nodes are based on areas, controller and action method names rather than hardcoded URL references, the sitemap is completely dynamic and follows the routing configuration used in your application. Search Engine Optimization support is also provided in the form of dynamic sitemaps XML, canonical URL tags and meta robots tags, to ensure you send the search engines consistent - rather than conflicting - information about your URLs.

What has changed?

What I originally intended to do in v2 (but decided against based on popular request) has now been done. The biggest change in this release is that we have stepped away from being an ASP.NET SiteMapProvider implementation. This means a lot of code had to be rewritten, making v4 a pretty clean release. We're not entirely there yet, as we want to have unit tests covering everything (and some more changes will be required for that).

Next to stepping away from the ASP.NET provider model, we've improved support for dependency injection. If you don't need it, no worries. If you do: every component of MvcSiteMapProvider is now pluggable. A simple IoC container is used inside MvcSiteMapProvider, but you can easily plug in your preferred one. We've created several NuGet packages for popular containers: Ninject, StructureMap, Unity, Autofac and Windsor. Note that we also have packages containing just the DI modules, so you can keep using your own container setup. Read more in the documentation.

The sitemap building pipeline has changed as well. A collection of sitemap builders is used to build the sitemap hierarchy from one or more sources. The default configuration of sitemap builders includes an XML parser builder, a reflection-based builder, and a builder that implements the visitor pattern, which is used to resolve the URLs before they are cached. Both the builders and the visitors can be replaced with one or more custom implementations, opening up the door to alternate data sources and alternate visitor actions. In other words, you can build the tree any way you see fit. The only limitation is that only one of the builders should decide which node is the root node of the tree (although subsequent builders may change that decision, if needed).

The Menu() helper has been rewritten to be more performant and reliable (thanks for the contribution, midishero!).

A great bunch of performance enhancements and stability fixes are in as well.

How do I upgrade?

Since MvcSiteMapProvider has had some significant updates going from v3 to v4, it is best to read the upgrade guide. The first part of the upgrade from v3 to v4 is updating the NuGet package. Before, MvcSiteMapProvider had only one NuGet package. Today, it has been split into multiple packages, of which the following are good to know at this time:

  • MvcSiteMapProvider.Web containing all views and web.config changes
  • MvcSiteMapProvider.MVC<version>.Core containing the library itself

Upgrading from v3 to v4 consists of installing the correct packages for your ASP.NET MVC version (a Package Manager Console example follows this list):

  • For MVC 2, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC2
  • For MVC 3, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC3
  • For MVC 4, uninstall MvcSiteMapProvider and install MvcSiteMapProvider.MVC4
  • Note that for MVC 4 we have made it possible to simply upgrade the existing MvcSiteMapProvider package instead, which will pull in all required dependencies. Do know that this is not the recommended scenario: installing MvcSiteMapProvider.MVC4 is preferred.
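As a concrete example, upgrading an ASP.NET MVC 4 project from the Package Manager Console would roughly look like this (for MVC 2 or 3, substitute the matching package name):

# Remove the old v3 package
Uninstall-Package MvcSiteMapProvider

# Install the v4 package matching the ASP.NET MVC version of the project
Install-Package MvcSiteMapProvider.MVC4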

The MvcSiteMapProvider.Web update will add views and all required runtime dependencies to your project. This package is a dependency of each of the above options and generally will not need to be installed explicitly.

In .NET versions prior to .NET 4.0, one line of code should be added to the Application_Start() event of Global.asax:

MvcSiteMapProvider.DI.Composer.Compose();
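For clarity, here's a minimal Global.asax.cs sketch showing where that call would go; the comment stands in for whatever registrations your application already performs in Application_Start:

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Existing registrations (areas, routes, bundles, ...) go here.

        // Composes the MvcSiteMapProvider dependency injection container.
        MvcSiteMapProvider.DI.Composer.Compose();
    }
}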

Note that when using .NET 4.0 or higher, this code is executed automatically through WebActivator, so in most cases you will not need to call it manually.

More? Please read the upgrade guide.

What’s next?

NuGet all the things! Install the new MvcSiteMapProvider.MVCx package (replace x with your ASP.NET MVC version) and try it out! Leave your comments, ideas and pull requests on our GitHub page.

Enjoy!