Maarten Balliauw {blog}

ASP.NET, ASP.NET MVC, Windows Azure, PHP, ...


Remote profiling Windows Azure Cloud Services with dotTrace

Here's another cross-post from our JetBrains .NET blog. It's focused on dotTrace, but it also contains a lot of tips and tricks about Windows Azure Cloud Services, especially on working with the load balancer. Enjoy the read!

With dotTrace Performance, we can profile applications running on our local computer as well as on remote machines. The latter can be very useful when some performance problems only occur on the staging server (or even worse: only in production). And what if that remote server is a Windows Azure Cloud Service?

Note: in this post we'll be exploring how to set up a Windows Azure Cloud Service (the "platform-as-a-service" side of Windows Azure) for remote profiling using dotTrace. If you are working with regular virtual machines ("infrastructure-as-a-service"), the only thing you have to do is open up any port in the load balancer, redirect it to the machine's port 9000 (dotTrace's default) and follow the regular remote profiling workflow.

Preparing your Windows Azure Cloud Service for remote profiling

Since we don’t have system administrators at hand when working with cloud services, we have to do some of their work ourselves. The most important piece of work is making sure the load balancer in Windows Azure lets dotTrace’s traffic through to the server instance we want to profile.

We can do this by adding an InstanceInput endpoint type in the web- or worker role’s configuration:

Windows Azure InstanceInput endpoint

By default, the Windows Azure load balancer uses a round-robin approach in routing traffic to role instances. In essence, every request may end up on a different instance. When profiling later on, we want to target a specific machine. And that's what the InstanceInput endpoint allows us to do: it opens up a range of ports on the load balancer and forwards traffic to a local port. In the example above, we're opening ports 9000-9019 in the load balancer and forwarding them to port 9000 on the server. If we want to connect to a specific instance, we can use a port number from this range: port 9000 will connect to port 9000 on role instance 0, port 9001 will connect to port 9000 on role instance 1, and so on.
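For reference, here's a sketch of what that endpoint definition could look like in the role's ServiceDefinition.csdef (the endpoint name is just an example, and this assumes an SDK version that supports the InstanceInputEndpoint element):

[code:xml]

<Endpoints>
  <InstanceInputEndpoint name="DotTraceRemoteAgent" protocol="tcp" localPort="9000">
    <AllocatePublicPortFrom>
      <FixedPortRange min="9000" max="9019" />
    </AllocatePublicPortFrom>
  </InstanceInputEndpoint>
</Endpoints>

[/code]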

When deploying, make sure to enable remote desktop for the role as well. This will allow us to connect to a specific machine and start dotTrace’s remote agent there.

Windows Azure Remote Desktop RDP

That’s it. Whenever we want to start remote profiling on a specific role instance, we can now connect to the machine directly.

Starting a remote profiling session with a specific instance

And then that moment is there: we need to profile production!

First of all, we want to open a remote desktop connection to one of our role instances. In the Windows Azure management portal, we can connect to a specific instance by selecting it and clicking the Connect button. Save the file that’s being downloaded somewhere on your system: we need to change it before connecting.

Windows Azure connect to specific role instance

The reason for saving and not immediately opening the .rdp file is that we have to copy the dotTrace Remote Agent to the machine. In order to do that we want to enable access to our local drives. Right-click the downloaded .rdp file and select Edit from the context menu. Under the Local Resources tab, check the Drives option to allow access to our local filesystem.

Windows Azure access local filesystem

Save the changes and connect to the remote machine. We can now copy the dotTrace Remote Agent to the role instance by copying all files from our local dotTrace installation. The Remote Agent can be found in C:\Program Files (x86)\JetBrains\dotTrace\v5.3\Bin\Remote, but since the machine in Windows Azure has no clue about that path, we have to specify \\tsclient\C\Program Files (x86)\JetBrains\dotTrace\v5.3\Bin\Remote instead.
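If you prefer the command line over clicking around in Explorer, a copy command along these lines should do the trick (the destination folder is arbitrary):

xcopy "\\tsclient\C\Program Files (x86)\JetBrains\dotTrace\v5.3\Bin\Remote" "C:\RemoteAgent" /E /I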

From the copied folder, launch RemoteAgent.exe. A console window similar to the one below will appear:

dotTrace Remote Agent console

Not there yet: we did open the load balancer in Windows Azure to allow traffic to flow to our machine, but the machine’s own firewall will be blocking our incoming connection. To solve this, configure Windows Firewall to allow access on port 9000. A one-liner which can be run in a command prompt would be the following:

netsh advfirewall firewall add rule name="Profiler" dir=in action=allow protocol=TCP localport=9000
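To double-check that the rule was added, you can list it afterwards:

netsh advfirewall firewall show rule name="Profiler"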


Since we've opened ports 9000 through 9019 in the Windows Azure load balancer and every role instance gets its own port number from that range, we can now connect to the machine using dotTrace. We've connected to instance 1, which means we have to connect to port 9001 in dotTrace's Attach to Process window. The Remote Agent URL will look like http://<yourservice>.cloudapp.net:PORT/RemoteAgent/AgentService.asmx.

Attach to process

Next, we can select the process we want to do performance tracing on. I’ve deployed a web application so I’ll be connecting to IIS’s w3wp.exe.

Profile application dotTrace

We can now use our application and try reproducing performance issues. Once we feel we have enough data, the Get Snapshot button will download all required data from the server for local inspection.

dotTrace get performance snapshot

We can now perform our performance analysis tasks and hunt for performance issues. We can analyze the snapshot data just as if we had recorded it locally. After determining the root cause and deploying a fix, we can repeat the process to collect another snapshot and verify that we have resolved the performance problem. Note that all steps in this post should be executed again in the next profiling session: Windows Azure's Cloud Service machines are stateless and will probably discard everything we've done with them so far.

Analyze snapshot data

Bonus tip: get the instance being profiled out of the load balancer

Since we are profiling a production application, we may get in the way of our users by collecting profiling data. Another issue is that our own test data and our live users' data will show up in the performance snapshot. And if we're running a lot of instances, not every action we perform in the application will be handled by the role instance we've connected to, because of Windows Azure's round-robin load balancing.

Ideally we want to temporarily remove the role instance we're profiling from the load balancer to overcome these issues. The good news is: we can do this! The only thing we have to do is add a small piece of code in our WebRole.cs or WorkerRole.cs class.

[code:c#]

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

        RoleEnvironment.StatusCheck += (sender, args) =>
        {
            if (File.Exists("C:\\Config\\profiling.txt"))
            {
                args.SetBusy();
            }
        };

        return base.OnStart();
    }
}

[/code]

Essentially what we're doing here is intercepting the load balancer's status probes, which check whether our node is still healthy. We can choose to respond to the load balancer that our current instance is busy and should not receive any new requests. In the example code above we're checking if the file C:\Config\profiling.txt exists. If it does, we respond to the load balancer with a busy status.

When we start profiling, we can now create the C:\Config\profiling.txt file to take the instance we’re profiling out of the server pool. After about a minute, the management portal will report the instance is “Busy”.
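Creating that file can be done from a command prompt in the remote desktop session; the path is simply the one used in the code above:

mkdir C:\Config
echo profiling > C:\Config\profiling.txt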

Role instance marked Busy

The best thing is we can still attach to the instance-specific endpoint and attach dotTrace to this instance. Just keep in mind that using the application should now happen in the remote desktop session we opened earlier, since we no longer have the current machine available from the Internet.


Once finished, we can simply remove the C:\Config\profiling.txt file and Windows Azure will add the machine back to the server pool. Don't forget this step; otherwise you'll be paying for the machine without being able to serve the application from it. Reimaging the machine will also add it back to the pool.

Enjoy!

Protecting Windows Azure Web and Worker roles from malware

Most IT administrators will install some sort of virus scanner on their precious servers. Since the cloud, from a technical perspective, is just a server, why not follow that security best practice on Windows Azure too? It has gone by almost unnoticed, but last week Microsoft released the Microsoft Endpoint Protection for Windows Azure Customer Technology Preview. For the sake of bandwidth, I'll be referring to it as EP.

EP offers real-time protection, scheduled scanning, malware remediation (a fancy word for quarantining), active protection and automatic signature updates. Sounds a lot like Microsoft Endpoint Protection or Microsoft Security Essentials? That's no coincidence: EP is a Windows Azurified version of it.

Enabling anti-malware on Windows Azure

After installing the Microsoft Endpoint Protection for Windows Azure Customer Technology Preview (sorry, EP), a new Windows Azure import will be available. As with remote desktop or diagnostics, EP can be enabled with a simple XML one-liner:

[code:xml]

<Import moduleName="Antimalware" />

[/code]

Here’s a sample web role ServiceDefinition.csdef file containing this new import:

[code:xml]

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="ChuckProject"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="ChuckNorris" vmsize="Small">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
    <Imports>
      <Import moduleName="Antimalware" />
      <Import moduleName="Diagnostics" />
    </Imports>
  </WebRole>
</ServiceDefinition>

[/code]

That’s it! When you now deploy your Windows Azure solution, Microsoft Endpoint Protection will be installed, enabled and configured on your Windows Azure virtual machines.

Now since I started this blog post with "IT administrators", chances are you want to fine-tune this plugin a little. No problem! The ServiceConfiguration.cscfg file has some options waiting to be, eh, touched. And since these are in the service configuration, you can also modify them through the management portal, the management API, or sysadmin-style using PowerShell. Anyway, the following options are available (a configuration sketch follows the list):

  • Microsoft.WindowsAzure.Plugins.Antimalware.ServiceLocation – Specify the datacenter region where your application is deployed, for example "West Europe" or "East Asia". This will speed up deployment time.
  • Microsoft.WindowsAzure.Plugins.Antimalware.EnableAntimalware – Should EP be enabled or not?
  • Microsoft.WindowsAzure.Plugins.Antimalware.EnableRealtimeProtection – Should real-time protection be enabled?
  • Microsoft.WindowsAzure.Plugins.Antimalware.EnableWeeklyScheduledScans – Should weekly scheduled scans be enabled?
  • Microsoft.WindowsAzure.Plugins.Antimalware.DayForWeeklyScheduledScans – Which day of the week (0–7, where 0 means daily)?
  • Microsoft.WindowsAzure.Plugins.Antimalware.TimeForWeeklyScheduledScans – What time should the scheduled scan run?
  • Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedExtensions – Specify file extensions to exclude from scanning (pipe-delimited)
  • Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedPaths – Specify paths to exclude from scanning (pipe-delimited)
  • Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedProcesses – Specify processes to exclude from scanning (pipe-delimited)
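As an illustration only (the values shown here are assumptions; check the EP documentation for the exact semantics of each setting), a ServiceConfiguration.cscfg fragment could look like this:

[code:xml]

<ConfigurationSettings>
  <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ServiceLocation" value="West Europe" />
  <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableAntimalware" value="true" />
  <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableRealtimeProtection" value="true" />
  <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableWeeklyScheduledScans" value="true" />
  <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.DayForWeeklyScheduledScans" value="1" />
  <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.TimeForWeeklyScheduledScans" value="120" />
  <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedExtensions" value="log|tmp" />
  <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedPaths" value="" />
  <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedProcesses" value="" />
</ConfigurationSettings>

[/code]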

Monitoring anti-malware on Windows Azure

How will you know if a threat has been detected? Well, luckily for us, Microsoft Endpoint Protection writes its logs to the System event log. This means that you can simply add a specific data source to your diagnostics monitor and you're done:

[code:c#]

var configuration = DiagnosticMonitor.GetDefaultInitialConfiguration();

// Note: if you need informational / verbose, also subscribe to levels 4 and 5
configuration.WindowsEventLog.DataSources.Add(
    "System!*[System[Provider[@Name='Microsoft Antimalware'] and (Level=1 or Level=2 or Level=3)]]");

configuration.WindowsEventLog.ScheduledTransferPeriod
    = System.TimeSpan.FromMinutes(1);

DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
    configuration);

[/code]

In addition, EP also logs its inner workings to its installation folders. You can also include these in your diagnostics configuration:

[code:c#]

var configuration = DiagnosticMonitor.GetDefaultInitialConfiguration();

// ...add the event logs like in the previous code sample...

var mep1 = new DirectoryConfiguration();
mep1.Container = "wad-endpointprotection-container";
mep1.DirectoryQuotaInMB = 5;
mep1.Path = @"%programdata%\Microsoft Endpoint Protection";

var mep2 = new DirectoryConfiguration();
mep2.Container = "wad-endpointprotection-container";
mep2.DirectoryQuotaInMB = 5;
mep2.Path = @"%programdata%\Microsoft\Microsoft Security Client";

configuration.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);
configuration.Directories.DataSources.Add(mep1);
configuration.Directories.DataSources.Add(mep2);

DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
    configuration);

[/code]

From this moment on, you can use a tool like Cerebrata's Diagnostics Monitor to check the event logs of all your Windows Azure instances that have anti-malware enabled.

A Glimpse at Windows Identity Foundation claims

For a current project, I’m using Glimpse to inspect what’s going on behind the ASP.NET covers. I really hope that you have heard about the greatest ASP.NET module of 2011: Glimpse. If not, shame on you! Install-Package Glimpse immediately! And if you don’t know what I mean by that, NuGet it now! (the greatest .NET addition since sliced bread).

This project is also using Windows Identity Foundation. It's really a PITA to get a look at the claims being passed around. Usually, I do this by putting a breakpoint somewhere and inspecting the current IPrincipal's internals. But with Glimpse, using a small plugin to just show me the claims and their values is a no-brainer. Check the bottom right of this (partial) screenshot:

Glimpse Windows Identity Foundation

Want to have this too? Simply copy the following class in your project and you’re done:

[code:c#]

[GlimpsePlugin()]
public class GlimpseClaimsInspectorPlugin : IGlimpsePlugin
{
    public object GetData(HttpApplication application)
    {
        // Return the data you want to display on your tab
        var data = new List<object[]> { new[] { "Identity", "Claim", "Value", "OriginalIssuer", "Issuer" } };

        // Add all claims found
        var claimsPrincipal = application.User as ClaimsPrincipal;
        if (claimsPrincipal != null)
        {
            foreach (var identity in claimsPrincipal.Identities)
            {
                foreach (var claim in identity.Claims)
                {
                    data.Add(new object[] { identity.Name, claim.ClaimType, claim.Value, claim.OriginalIssuer, claim.Issuer });
                }
            }
        }

        return data;
    }

    public void SetupInit(HttpApplication application)
    {
    }

    public string Name
    {
        get { return "WIF Claims"; }
    }
}

[/code]

Enjoy! And if you feel like NuGet-packaging this (or including it with Glimpse), feel free.

Taking Care of a Cloud Environment (slides)

It looks like I'm only doing sessions lately :-) Here's another slide deck, for a presentation I did at the Architect Forum last week in Belgium.

Abstract: “No, this session is not about greener IT. Learn about using the RoleEnvironment and diagnostics provided by Windows Azure. Communication between roles, logging and automatic upscaling of your application are just some of the possibilities of what you can do if you know about how the Windows Azure environment works.”

Thanks for attending!

Revised: ASP.NET MVC and the Managed Extensibility Framework (MEF)

A while ago, I did a blog post on combining ASP.NET MVC and MEF (Managed Extensibility Framework), making it possible to "plug" controllers and views into your application as a module. I received a lot of positive feedback, as well as a hard question from Dan Swatik, who was experiencing a Server Error with this approach… Here's a better approach to ASP.NET MVC and MEF.


The Exception

Server Error

The stack trace was quite verbose on this one:

InvalidOperationException

The view at '~/Plugins/Views/Demo/Index.aspx' must derive from ViewPage, ViewPage<TViewData>, ViewUserControl, or ViewUserControl<TViewData>.

at System.Web.Mvc.WebFormView.Render(ViewContext viewContext, TextWriter writer)
at System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context)
at System.Web.Mvc.ControllerActionInvoker.InvokeActionResult(ControllerContext controllerContext, ActionResult actionResult)
at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClass11.<InvokeActionResultWithFilters>b__e()
at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilter(IResultFilter filter, ResultExecutingContext preContext, Func`1 continuation)
at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClass11.<>c__DisplayClass13.<InvokeActionResultWithFilters>b__10()
at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultWithFilters(ControllerContext controllerContext, IList`1 filters, ActionResult actionResult)
at System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName)
at System.Web.Mvc.Controller.ExecuteCore()
at System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext)
at System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext)
at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContextBase httpContext)
at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContext httpContext)
at System.Web.Mvc.MvcHandler.System.Web.IHttpHandler.ProcessRequest(HttpContext httpContext)
at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

Our exception seemed to be thrown ONLY when the following conditions were met:

  • The View was NOT located in ~/Views but in ~/Plugins/Views (or another path)
  • The View created in our MEF plugin was strong-typed

Problem one… Forgot to register ViewTypeParserFilter…

All right, go ahead and call me stupid… Our ~/Plugins/Views folder did not contain the following Web.config file:

[code:c#]

<?xml version="1.0"?>
<configuration>
  <system.web>
    <httpHandlers>
      <add path="*" verb="*"
          type="System.Web.HttpNotFoundHandler"/>
    </httpHandlers>

    <!--
        Enabling request validation in view pages would cause validation to occur
        after the input has already been processed by the controller. By default
        MVC performs request validation before a controller processes the input.
        To change this behavior apply the ValidateInputAttribute to a
        controller or action.
    -->
    <pages
        validateRequest="false"
        pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
        pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
        userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
      <controls>
        <add assembly="System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" />
      </controls>
    </pages>
  </system.web>

  <system.webServer>
    <validation validateIntegratedModeConfiguration="false"/>
    <handlers>
      <remove name="BlockViewHandler"/>
      <add name="BlockViewHandler" path="*" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler"/>
    </handlers>
  </system.webServer>
</configuration>

[/code]

Now why would you need this one anyway? Well: first of all, you do not want your views to expose their source code. Therefore, we add the HttpNotFoundHandler for this folder. Next, we do not want request validation to happen again (because this is already done when invoking the controller). Finally, we want the ViewTypeParserFilter to be used to enable strongly-typed views (more on this by Phil Haack).

Problem two: MEF’s approach to plugins and ASP.NET’s approach to rendering views…

When compiling a view, ASP.NET dynamically compiles the markup into a temporary assembly, after which it is rendered. This compilation process knows only about the assemblies loaded in your web application's AppDomain. Unfortunately, assemblies loaded by MEF are not available to this compilation process… I went ahead and checked with Reflector if we could do something about this on the ASP.NET side: nope. The main classes we need for this are internal :-( The MEF side could easily be tweaked since its source code is available on CodePlex, but… it's still subject to change and will be included in .NET 4.0 as a framework component, which would limit my customizations a bit in the future.

Now let's describe this problem in one simple sentence: we need the MEF plugin assembly loaded in our current AppDomain and available to all other components in the web application.

The solution to this: I want a MEF DirectoryCatalog to monitor my plugins folder and load/unload the assemblies in there dynamically. Loading should be no problem, but unloading… The assemblies will always be locked by my web server’s process! So let’s go for another approach: monitor the plugins folder, copy the new/modified assemblies to the web application’s /bin folder and instruct MEF to load its exports from there. The solution: WebServerDirectoryCatalog. Here’s the code:

[code:c#]

public sealed class WebServerDirectoryCatalog : ComposablePartCatalog
{
    private FileSystemWatcher fileSystemWatcher;
    private DirectoryCatalog directoryCatalog;
    private string path;
    private string extension;

    public WebServerDirectoryCatalog(string path, string extension, string modulePattern)
    {
        Initialize(path, extension, modulePattern);
    }

    private void Initialize(string path, string extension, string modulePattern)
    {
        this.path = path;
        this.extension = extension;

        fileSystemWatcher = new FileSystemWatcher(path, modulePattern);
        fileSystemWatcher.Changed += new FileSystemEventHandler(fileSystemWatcher_Changed);
        fileSystemWatcher.Created += new FileSystemEventHandler(fileSystemWatcher_Created);
        fileSystemWatcher.Deleted += new FileSystemEventHandler(fileSystemWatcher_Deleted);
        fileSystemWatcher.Renamed += new RenamedEventHandler(fileSystemWatcher_Renamed);
        fileSystemWatcher.IncludeSubdirectories = false;
        fileSystemWatcher.EnableRaisingEvents = true;

        Refresh();
    }

    void fileSystemWatcher_Renamed(object sender, RenamedEventArgs e)
    {
        RemoveFromBin(e.OldName);
        Refresh();
    }

    void fileSystemWatcher_Deleted(object sender, FileSystemEventArgs e)
    {
        RemoveFromBin(e.Name);
        Refresh();
    }

    void fileSystemWatcher_Created(object sender, FileSystemEventArgs e)
    {
        Refresh();
    }

    void fileSystemWatcher_Changed(object sender, FileSystemEventArgs e)
    {
        Refresh();
    }

    private void Refresh()
    {
        // Determine /bin path
        string binPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin");

        // Copy files to /bin
        foreach (string file in Directory.GetFiles(path, extension, SearchOption.TopDirectoryOnly))
        {
            try
            {
                File.Copy(file, Path.Combine(binPath, Path.GetFileName(file)), true);
            }
            catch
            {
                // Not that big deal... Blog readers will probably kill me for this bit of code :-)
            }
        }

        // Create new directory catalog
        directoryCatalog = new DirectoryCatalog(binPath, extension);
    }

    public override IQueryable<ComposablePartDefinition> Parts
    {
        get { return directoryCatalog.Parts; }
    }

    private void RemoveFromBin(string name)
    {
        string binPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin");
        File.Delete(Path.Combine(binPath, name));
    }
}

[/code]
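For completeness, here's a minimal, hypothetical sketch of wiring this catalog into MEF (the plugins path and the container usage are assumptions and may differ per MEF build):

[code:c#]

// Hypothetical wiring, e.g. in Global.asax's Application_Start
string pluginsPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Plugins");

// Monitor *.dll files in ~/Plugins and mirror them into /bin
var catalog = new WebServerDirectoryCatalog(pluginsPath, "*.dll", "*.dll");
var container = new CompositionContainer(catalog);

// Resolve the plugged-in controllers and other exports from the container as needed

[/code]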

Download the example code

First of all: this was tricky, and the solution to it is also a bit tricky. Use at your own risk!

You can download the example code here: RevisedMvcMefDemo.zip (1.03 mb)


Verifying code and testing with Pex

Pex, Automated White box testing for .NET

Earlier this week, Katrien posted an update on the list of Belgian TechDays 2009 speakers. This post featured a summary of all sessions, one of which was titled "Pex – Automated White Box Testing for .NET". Here's the abstract:

“Pex is an automated white box testing tool for .NET. Pex systematically tries to cover every reachable branch in a program by monitoring execution traces, and using a constraint solver to produce new test cases with different behavior. Pex can be applied to any existing .NET assembly without any pre-existing test suite. Pex will try to find counterexamples for all assertion statements in the code. Pex can be guided by hand-written parameterized unit tests, which are API usage scenarios with assertions. The result of the analysis is a test suite which can be persisted as unit tests in source code. The generated unit tests integrate with Visual Studio Team Test as well as other test frameworks. By construction, Pex produces small unit test suites with high code and assertion coverage, and reported failures always come with a test case that reproduces the issue. At Microsoft, this technique has proven highly effective in testing even an extremely well-tested component.”

After reading the second sentence in this abstract, I was thinking: “SWEET! Let’s try!”. So here goes…

Getting started

First of all, download the academic release of Pex at http://research.microsoft.com/en-us/projects/Pex/. After installing this in Visual Studio 2008 (or 2010 if you are Mr. or Mrs. Cool), some context menus will be added. We will explore these later on in this post.

What we will do next is analyze a piece of code in a fictitious library of string extension methods. The following method is intended to mimic VB6's Left method.

[code:c#]

/// <summary>
/// Return leftmost characters from string for a certain length
/// </summary>
/// <param name="current">Current string</param>
/// <param name="length">Length to take</param>
/// <returns>Leftmost characters from string</returns>
public static string Left(this string current, int length)
{
    if (length < 0)
    {
        throw new ArgumentOutOfRangeException("length", "Length should be >= 0");
    }

    return current.Substring(0, length);
}

[/code]

Great coding! I even throw an ArgumentOutOfRangeException if I receive a faulty length parameter.

Pexify this!

Analyzing this with Pex can be done in two ways: by running Pex Explorations, which will open a new add-in in Visual Studio and show me some results, or by generating a unit test for this method. Since I know this is good code, unit tests are not needed. I'll pick the first option: right-click the above method and pick "Run Pex Explorations".

Run Pex Explorations

A new add-in window opens in Visual Studio, showing me the output of calling my method with 4 different parameter combinations:

Pex Exploration Results

Frustrated, I scream: "WHAT?!? I did write good code! Pex schmex!" According to Pex, I didn't. And actually, it is right. Pex explored all code execution paths in my Left method, and two of those paths do not return the correct results. For example, calling Substring(0, 2) on an empty string will throw an uncaught ArgumentOutOfRangeException. Luckily, Pex is also there to help.

When I right-click the first failing exploration, I can choose from some menu options. For example, I could assign this as a task to someone in Team Foundation Server.

Pex Exploration Options

In this case, I'll just pick "Add precondition". This will actually show me a window of code which might help avoid this uncaught exception.

Preview and Apply updates

Nice! It actually avoids the uncaught exception and provides the user of my code with a new ArgumentException thrown at the right location and with the right reason. After doing this for both failing explorations, my code looks like this:

[code:c#]

/// <summary>
/// Return leftmost characters from string for a certain length
/// </summary>
/// <param name="current">Current string</param>
/// <param name="length">Length to take</param>
/// <returns>Leftmost characters from string</returns>
public static string Left(this string current, int length)
{
    // <pex>
    if (current == (string)null)
        throw new ArgumentNullException("current");
    if (length < 0 || current.Length < length)
        throw new ArgumentException("length < 0 || current.Length < length");
    // </pex>

    return current.Substring(0, length);
}

[/code]

Great! This should work for any input now, returning a clear exception message when someone does provide faulty parameters.

Note that I could also run these explorations as a unit test. If someone introduces a new error, Pex will let me know.

More information

More information on Pex can be found on http://research.microsoft.com/en-us/projects/Pex/.


ASP.NET MVC - Testing issues Q and A

WTF? When playing around with the ASP.NET MVC framework and automated tests using Rhino Mocks, you will probably find yourself close to throwing your computer through the nearest window. Here are some common issues and answers:

Q: How to mock Request.Form?

A: When testing a controller action which expects Request.Form to be a NameValueCollection, a NullReferenceException is thrown... This is due to the fact that Request.Form is null.

Use Scott's helper classes for Rhino Mocks and add the following extension method:

[code:c#]

public static void SetupFormParameters(this HttpRequestBase request)
{
    SetupResult.For(request.Form).Return(new NameValueCollection());
}

[/code]

Q: I can't use ASP.NET Membership in my controller, every test seems to go bad...

A: To test a controller using ASP.NET Membership, you should use a little trick. First of all, add a new property to your controller class:

[code:c#]

private MembershipProvider membershipProvider;

public MembershipProvider MembershipProviderInstance {
    get {
        if (membershipProvider == null)
        {
            membershipProvider = Membership.Provider;
        }
        return membershipProvider;
    }
    set { membershipProvider = value; }
}

[/code]

By doing this, you will enable the use of a mocked membership provider. Make sure you use this property in your controller instead of the standard Membership class (i.e. MembershipProviderInstance.ValidateUser(userName, password) instead of Membership.ValidateUser(userName, password)).
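To make this concrete, here's a hypothetical Authenticate action using the property. The void action and RenderView call follow the preview-era MVC style used in the test below, so treat this as a sketch rather than the definitive implementation:

[code:c#]

public void Authenticate()
{
    string userName = Request.Form["Username"];
    string password = Request.Form["Password"];

    // Use the injectable provider instead of the static Membership class
    if (!MembershipProviderInstance.ValidateUser(userName, password))
    {
        ViewData["ErrorMessage"] = "Invalid username or password.";
    }

    RenderView("Index");
}

[/code]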

Let's say you are testing a LoginController which should set an error message in the ViewData instance when authentication fails. You do this by creating a mocked MembershipProvider which is assigned to the controller. This mock object will be instructed to always shout "false" on the ValidateUser method of the MembershipProvider. Here's how:

[code:c#]

LoginController controller = new LoginController();
var fakeViewEngine = new FakeViewEngine();
controller.ViewEngine = fakeViewEngine;

MockRepository mocks = new MockRepository();
using (mocks.Record())
{
    mocks.SetFakeControllerContext(controller);
    controller.HttpContext.Request.SetupFormParameters();

    System.Web.Security.MembershipProvider membershipProvider = mocks.DynamicMock<System.Web.Security.MembershipProvider>();
    SetupResult.For(membershipProvider.ValidateUser("", "")).IgnoreArguments().Return(false);

    controller.MembershipProviderInstance = membershipProvider;
}
using (mocks.Playback())
{
    controller.HttpContext.Request.Form.Add("Username", "");
    controller.HttpContext.Request.Form.Add("Password", "");

    controller.Authenticate();

    Assert.AreEqual("Index", fakeViewEngine.ViewContext.ViewName);
    Assert.IsNotNull(
        ((IDictionary<string, object>)fakeViewEngine.ViewContext.ViewData)["ErrorMessage"]
    );
}

[/code]

More questions? Feel free to ask! I'd be happy to answer them.


Data Driven Testing in Visual Studio 2008 - Part 2

This is the second post in my series on Data Driven Testing in Visual Studio 2008. The first post focuses on Data Driven Testing in regular Unit Tests. This part will focus on the same in web testing.

Web Testing

I assume you have read my previous post and saw the cool user interface I created. Let's first add some code to that, focusing on the TextBox_TextChanged event handler that is linked to TextBox1 and TextBox2.

[code:c#]

public partial class _Default : System.Web.UI.Page
{
    // ... other code ...

    protected void TextBox_TextChanged(object sender, EventArgs e)
    {
        if (!string.IsNullOrEmpty(TextBox1.Text.Trim()) && !string.IsNullOrEmpty(TextBox2.Text.Trim()))
        {
            int a;
            int b;
            int.TryParse(TextBox1.Text.Trim(), out a);
            int.TryParse(TextBox2.Text.Trim(), out b);

            Calculator calc = new Calculator();
            TextBox3.Text = calc.Add(a, b).ToString();
        }
        else
        {
            TextBox3.Text = "";
        }
    }
}

[/code]

It is now easy to run this in a browser and play with it. You'll notice 1 + 1 equals 2; otherwise you copy-pasted the wrong code. You can now create a web test for this. Right-click the test project, "Add", "Web Test...". If everything works well, your browser now starts with a giant toolbar named "Web Test Recorder" on the left. This toolbar will record a macro of what you are doing, so let's simply navigate to the web application we created, enter some numbers and watch the calculation engine do the rest:

Web Test Recorder

You'll notice an entry on the left for each request that is being fired. When the result is shown, click "Stop" and let Visual Studio determine what happened behind the curtains of your browser. An overview of this test recording session should now be available in Visual Studio.

Data Driven Web testing

There's our web test! But it's not data driven yet... The first thing to do is link the database we created in part 1 by clicking the "Add Data Source" button. Finish the wizard by selecting the database and the correct table. Afterwards, you can pick one of the Form Post Parameters and assign the value from our newly added data source. Do this for each step in our test: the first step should fill TextBox1, the second should fill TextBox1 and TextBox2.

Bind Form Post Parameters

In the last recorded step of our web test, add a validation rule. We want to check whether our sum is calculated correctly and is shown in TextBox3. Pick the following options in the "Add Validation Rule" screen. For the "Expected Value" property, enter the variable name which comes from our data source: {{DataSource1.CalculatorTestAdd.expected}}


If you now run the test, you should see success all over the place! But there is one last step to do... Visual Studio 2008 will only run this test for the first data row, not for all other rows! To overcome this problem, select "Run Test (Pause Before Starting)" instead of just "Run Test". You'll notice the following hyperlink in the IDE interface:

Edit Run Settings

Click "Edit Run Settings" and pick "One run per data source row". There you go! Multiple test runs are now validated and should result in an almost green-bulleted screen:

Test results


Data Driven Testing in Visual Studio 2008 - Part 1

Last week, I blogged about code performance analysis in Visual Studio 2008. Since that topic provoked lots of comments (thank you Bart for associating "hotpaths" with "hotpants"), I thought about doing another post on code quality in .NET.

This post will be the first of two on Data Driven Testing. This part will focus on Data Driven Testing in regular Unit Tests. The second part will focus on the same in web testing.

Data Driven Testing?

We all know unit testing. These small tests are always based on some values, which are passed through a routine you want to test and then validated against a known result. But what if you want to run that same test a couple of times, with different data and different expected values each time?

That's where Data Driven Testing comes in handy. Visual Studio 2008 offers the possibility to use a database with parameter values and expected values as the data source for a unit test. That way, you can run a unit test, for example, for all customers in a database and make sure each customer passes the unit test.

Sounds nice! Show me how!

You are here for the magic, I know. That's why I invented this nifty web application which looks like this:

Example application

This is a simple "Calculator" which provides a user interface that accepts 2 values, then passes these to a Calculator business object that calculates the sum of these two values. Here's the Calculator object:

[code:c#] 

public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

[/code]

Create Unit Tests...

Now right-click the Add method, and select "Create Unit Tests...". Visual Studio will pop up a wizard. You can simply click "OK" and have your unit test code generated:

[code:c#]

/// <summary>
///A test for Add
///</summary>
[TestMethod()]
public void AddTest()
{
    Calculator target = new Calculator(); // TODO: Initialize to an appropriate value
    int a = 0; // TODO: Initialize to an appropriate value
    int b = 0; // TODO: Initialize to an appropriate value
    int expected = 0; // TODO: Initialize to an appropriate value
    int actual;
    actual = target.Add(a, b);
    Assert.AreEqual(expected, actual);
    Assert.Inconclusive("Verify the correctness of this test method.");
}

[/code]

As you can see, in a normal situation we would now fix these TODO items and have a unit test ready in no time. For this data driven test, let's first add a database to our project. Create columns a, b and expected. These names do not have to match the ones used in the unit test, but it's always clearer when they do. Also, add some data.

Data to test

Test View

Great, but how will our unit test use these values while running? Simply click the test to be bound to data and add the data source and table name properties. Next, read your data from the TestContext.DataRow property. The unit test will now look like this:

[code:c#]

/// <summary>
///A test for Add
///</summary>
[DataSource("System.Data.SqlServerCe.3.5", "data source=|DataDirectory|\\Database1.sdf", "CalculatorTestAdd", DataAccessMethod.Sequential), DeploymentItem("TestProject1\\Database1.sdf"), TestMethod()]
public void AddTest()
{
    Calculator target = new Calculator();
    int a = (int)TestContext.DataRow["a"];
    int b = (int)TestContext.DataRow["b"];
    int expected = (int)TestContext.DataRow["expected"];
    int actual;
    actual = target.Add(a, b);
    Assert.AreEqual(expected, actual);
}

[/code]

Now run this newly created test. After the test run, you will see that the test was run a couple of times, one time for each data row in the database. You can also drill down further and check which values failed and which were successful. If you do not want Visual Studio to use each data row sequentially, you can also use the random accessor and really create a random data driven test.
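For example (a sketch based on the attribute shown above), switching to random access is just a matter of changing the DataAccessMethod:

[code:c#]

[DataSource("System.Data.SqlServerCe.3.5", "data source=|DataDirectory|\\Database1.sdf", "CalculatorTestAdd", DataAccessMethod.Random), DeploymentItem("TestProject1\\Database1.sdf"), TestMethod()]
public void AddTest()
{
    // ... same test body as above ...
}

[/code]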

Test results

Tomorrow, I'll try to do this with a web test and test our web interface. Stay tuned!


Code performance analysis in Visual Studio 2008

Visual Studio developer, did you know you have a great performance analysis (profiling) tool at your fingertips? In Visual Studio 2008 this profiling tool has been placed in a separate menu item to increase visibility and usage. Allow me to show what this tool can do for you in this walkthrough.

An application with a smell…

Before we can get started, we need a (simple) application with a "smell". Create a new Windows application, drag a TextBox onto the surface, and add the following code:

[code:c#]

private void Form1_Load(object sender, EventArgs e)
{
    string s = "";
    for (int i = 0; i < 1500; i++)
    {
        s = s + " test";
    }
    textBox1.Text = s;
}

[/code]

You should immediately see the smell in the above code. If you don't: we are calling string.Concat() 1,500 times! This means a new string is created 1,500 times, and the old, intermediate strings have to be cleaned up by the garbage collector again. Smells like a nice memory issue to investigate!

Profiling

Performance wizard

The profiling tool is hidden under the Analyze menu in Visual Studio. After launching the Performance Wizard, you will see two options are available: sampling and instrumentation. In a "real-life" situation, you'll first want to sample the entire application searching for performance spikes. Afterwards, you can investigate these spikes using instrumentation. Since we only have one simple application, let's instrument immediately.

Upon completing the wizard, the first thing we'll do is change some settings. Right-click the root node, and select Properties. Check the "Collect .NET object allocation information" and "Also collect .NET object lifetime information" options to make our profiling session as complete as possible:

Profiling property pages

Launch with profiling

You can now start the performance session from the tool pane. Note that you have two options to start: Launch with profiling and Launch with profiling paused. The first will immediately start profiling, the latter will first start your application and wait for your sign to start profiling. This can be useful if you do not want to profile your application startup but only a certain event that is started afterwards.

After the application has run, simply close it and wait for the summary report to appear:

Performance Report Summary 1

WOW! It seems like string.Concat() is taking up 97% of the application's memory! That's a smell indeed... But where is it coming from? In a larger application, it might not be clear which method is calling string.Concat() this many times. To discover where the problem is situated, there are two options…

Discovering the smell – option 1

Option 1 in discovering the smell is quite straightforward. Right-click the item in the summary and pick Show functions calling Concat:

Functions allocating most memory

You are now transferred to the “Caller / Callee” view, where all methods doing a string.Concat() call are shown including memory usage and allocations. In this particular case, it’s easy to see where the issue might be situated. You can now right-click the entry and pick View source to be transferred to this possible performance killer.

Possible performance killer

Discovering the smell – option 2

Visual Studio 2008 introduced a cool new way of discovering smells: hotpath tracking. When you move to the Call Tree view, you’ll notice a small flame icon in the toolbar. After clicking it, Visual Studio moves down the call tree following the high inclusive numbers. Each click takes you further down the tree and should uncover more details. Again, string.Concat() seems to be the problem!

Hotpath tracking

Fixing the smell

We are about to fix the smell. Let’s rewrite our application code using StringBuilder:

[code:c#]

private void Form1_Load(object sender, EventArgs e)
{
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 1500; i++)
    {
        sb.Append(" test");
    }
    textBox1.Text = sb.ToString();
}

[/code]

In theory, this should perform better. Let’s run our performance session again and have a look at the results:

Performance Report Summary 2

Compare Performance Reports

Seems like we fixed the glitch! You can now investigate further if there are other problems, but for this walkthrough, the application is healthy now. One extra feature though: performance session comparison ("diff"). Simply pick two performance reports, right-click and pick Compare performance reports. This tool will show all delta values (= differences) between the two sessions we ran earlier:

Comparison report 

Update 2008-02-14: Some people commented on not finding the Analyze menu. This is only available in the Developer or Team Edition of Visual Studio. Click here for a full comparison of all versions.

Update 2008-05-29: Make sure to check my post on NDepend as well, as it offers even more insight in your code!
