Maarten Balliauw {blog}

ASP.NET, ASP.NET MVC, Windows Azure, PHP, ...


Remote profiling Windows Azure Cloud Services with dotTrace

Here’s another cross-post from our JetBrains .NET blog. It’s focused around dotTrace but there are a lot of tips and tricks around Windows Azure Cloud Services in it as well, especially around working with the load balancer. Enjoy the read!

With dotTrace Performance, we can profile applications running on our local computer as well as on remote machines. The latter can be very useful when some performance problems only occur on the staging server (or even worse: only in production). And what if that remote server is a Windows Azure Cloud Service?

Note: in this post we’ll be exploring how to set up a Windows Azure Cloud Service (the “platform-as-a-service” side of Windows Azure) for remote profiling with dotTrace. If you are working with regular virtual machines (“infrastructure-as-a-service”), the only thing you have to do is open up a port in the load balancer, redirect it to the machine’s port 9000 (dotTrace’s default) and follow the regular remote profiling workflow.

Preparing your Windows Azure Cloud Service for remote profiling

Since we don’t have system administrators at hand when working with cloud services, we have to do some of their work ourselves. The most important piece of work is making sure the load balancer in Windows Azure lets dotTrace’s traffic through to the server instance we want to profile.

We can do this by adding an InstanceInput endpoint type in the web- or worker role’s configuration:

Windows Azure InstanceInput endpoint

By default, the Windows Azure load balancer uses a round-robin approach to route traffic to role instances: in essence, any request may end up on any instance. When profiling later on, we want to target a specific machine, and that’s exactly what the InstanceInput endpoint allows us to do: it opens up a range of ports on the load balancer and forwards each of them to a local port on one specific instance. In the example above, we’re opening ports 9000-9019 in the load balancer and forwarding them to port 9000 on the server. If we want to connect to a specific instance, we pick a port number from this range: port 9000 connects to port 9000 on role instance 0, port 9001 connects to port 9000 on role instance 1, and so on.
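For reference, the same endpoint can also be declared by hand in ServiceDefinition.csdef. The snippet below is a sketch based on the Windows Azure SDK schema, assuming the 9000-9019 range shown above and an endpoint name of my own choosing; it goes inside the <WebRole> or <WorkerRole> element:

[code:xml]

<Endpoints>
  <!-- Opens ports 9000-9019 on the load balancer and forwards each of them
       to local port 9000 on one specific role instance. -->
  <InstanceInputEndpoint name="dotTraceRemoteAgent" protocol="tcp" localPort="9000">
    <AllocatePublicPortFrom>
      <FixedPortRange min="9000" max="9019" />
    </AllocatePublicPortFrom>
  </InstanceInputEndpoint>
</Endpoints>

[/code]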

When deploying, make sure to enable remote desktop for the role as well. This will allow us to connect to a specific machine and start dotTrace’s remote agent there.

Windows Azure Remote Desktop RDP

That’s it. Whenever we want to start remote profiling on a specific role instance, we can now connect to the machine directly.

Starting a remote profiling session with a specific instance

And then that moment is there: we need to profile production!

First of all, we want to open a remote desktop connection to one of our role instances. In the Windows Azure management portal, we can connect to a specific instance by selecting it and clicking the Connect button. Save the file that’s being downloaded somewhere on your system: we need to change it before connecting.

Windows Azure connect to specific role instance

The reason for saving and not immediately opening the .rdp file is that we have to copy the dotTrace Remote Agent to the machine. In order to do that we want to enable access to our local drives. Right-click the downloaded .rdp file and select Edit from the context menu. Under the Local Resources tab, check the Drives option to allow access to our local filesystem.

Windows Azure access local filesystem

Save the changes and connect to the remote machine. We can now copy the dotTrace Remote Agent to the role instance by copying all files from our local dotTrace installation. The Remote Agent can be found in C:\Program Files (x86)\JetBrains\dotTrace\v5.3\Bin\Remote, but since the machine in Windows Azure has no clue about that path we have to specify \\tsclient\C\Program Files (x86)\JetBrains\dotTrace\v5.3\Bin\Remote instead.
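By way of illustration, a command-prompt one-liner along these lines should do the copy (a sketch assuming the default dotTrace installation path on the local machine and a C:\RemoteAgent target folder of our own choosing on the instance):

xcopy "\\tsclient\C\Program Files (x86)\JetBrains\dotTrace\v5.3\Bin\Remote" "C:\RemoteAgent" /E /I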

From the copied folder, launch the RemoteAgent.exe. A console window similar to the one below will appear:


Not there yet: we did open the load balancer in Windows Azure to let traffic flow to our machine, but the machine’s own firewall will still block the incoming connection. To solve this, configure Windows Firewall to allow access on port 9000. Here’s a one-liner that can be run from a command prompt:

netsh advfirewall firewall add rule name="Profiler" dir=in action=allow protocol=TCP localport=9000

 

Since we’ve opened ports 9000 through 9019 in the Windows Azure load balancer and every role instance gets its own port number from that range, we can now connect to the machine using dotTrace. We’ve connected to instance 1, which means we have to use port 9001 in dotTrace’s Attach to Process window. The Remote Agent URL will look like http://<yourservice>.cloudapp.net:PORT/RemoteAgent/AgentService.asmx.

Attach to process
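To make the instance-to-port mapping explicit, here’s a tiny hypothetical helper (my own sketch, not part of dotTrace) that builds the Remote Agent URL for a given instance index, assuming the 9000-9019 range configured earlier:

[code:c#]

public static string BuildRemoteAgentUrl(string cloudServiceName, int instanceIndex)
{
    // Instance 0 maps to port 9000, instance 1 to port 9001, and so on.
    int port = 9000 + instanceIndex;

    // Example: BuildRemoteAgentUrl("yourservice", 1)
    //   -> http://yourservice.cloudapp.net:9001/RemoteAgent/AgentService.asmx
    return string.Format(
        "http://{0}.cloudapp.net:{1}/RemoteAgent/AgentService.asmx",
        cloudServiceName, port);
}

[/code]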

Next, we can select the process we want to do performance tracing on. I’ve deployed a web application so I’ll be connecting to IIS’s w3wp.exe.

Profile application dotTrace

We can now use our application and try to reproduce the performance issues. Once we feel we have enough data, the Get Snapshot button will download all required data from the server for local inspection.

dotTrace get performance snapshot

We can now perform our performance analysis tasks and hunt for performance issues. We can analyze the snapshot data just as if we had recorded the snapshot locally. After determining the root cause and deploying a fix, we can repeat the process to collect another snapshot and verify that we have resolved the performance problem. Note that all steps in this post should be executed again in the next profiling session: Windows Azure’s Cloud Service machines are stateless and will probably discard everything we’ve done on them so far.

Analyze snapshot data

Bonus tip: get the instance being profiled out of the load balancer

Since we are profiling a production application, we may get in the way of our users by collecting profiling data. Another issue is that our own test data and our live users’ data will show up in the performance snapshot. And if we’re running a lot of instances, not every action we perform in the application will be handled by the role instance we’ve connected to, because of Windows Azure’s round-robin load balancing.

Ideally we want to temporarily remove the role instance we’re profiling from the load balancer to overcome these issues. The good news is: we can do this! The only thing we have to do is add a small piece of code to our WebRole.cs or WorkerRole.cs class.

[code:c#]

using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

        RoleEnvironment.StatusCheck += (sender, args) =>
        {
            // Report the instance as busy whenever the marker file exists.
            if (File.Exists("C:\\Config\\profiling.txt"))
            {
                args.SetBusy();
            }
        };

        return base.OnStart();
    }
}

[/code]

Essentially what we’re doing here is handling the load balancer’s status probes, which check whether our node is still healthy. We can choose to tell the load balancer that the current instance is busy and should not receive any new requests. In the example code above, we check whether the file C:\Config\profiling.txt exists. If it does, we respond to the load balancer with a busy status.

When we want to start profiling, we can create the C:\Config\profiling.txt file to take the instance we’re profiling out of the server pool. After about a minute, the management portal will report the instance as “Busy”.

Role instance marked Busy

The best thing is that the instance-specific endpoint keeps working, so we can still attach dotTrace to this instance. Just keep in mind that using the application should now happen in the remote desktop session we opened earlier, since the machine is no longer reachable from the Internet through the regular endpoints.


Once finished, we can simply remove the C:\Config\profiling.txt file and Windows Azure will add the machine back to the server pool. Don’t forget to do this, as otherwise you’ll be paying for a machine that can’t serve your application. Reimaging the machine will also add it back to the pool.

Enjoy!

A Glimpse at Windows Identity Foundation claims

For a current project, I’m using Glimpse to inspect what’s going on behind the ASP.NET covers. I really hope you have heard about the greatest ASP.NET module of 2011: Glimpse. If not, shame on you! Install-Package Glimpse immediately! And if you don’t know what I mean by that, go get NuGet now (the greatest .NET addition since sliced bread).

This project is also using Windows Identity Foundation. It’s really a PITA to get a look at the claims being passed around. Usually, I do this by putting a breakpoint somewhere and inspecting the current IPrincipal’s internals. But with Glimpse, using a small plugin to just show me the claims and their values is a no-brainer. Check the bottom right of this (partial) screenshot:

Glimpse Windows Identity Foundation

Want to have this too? Simply copy the following class in your project and you’re done:

[code:c#]

[GlimpsePlugin()]
public class GlimpseClaimsInspectorPlugin : IGlimpsePlugin
{
    public object GetData(HttpApplication application)
    {
        // Return the data you want to display on your tab
        var data = new List<object[]> { new[] { "Identity", "Claim", "Value", "OriginalIssuer", "Issuer" } };

        // Add all claims found
        var claimsPrincipal = application.User as ClaimsPrincipal;
        if (claimsPrincipal != null)
        {
            foreach (var identity in claimsPrincipal.Identities)
            {
                foreach (var claim in identity.Claims)
                {
                    data.Add(new object[] { identity.Name, claim.ClaimType, claim.Value, claim.OriginalIssuer, claim.Issuer });
                }
            }
        }

        return data;
    }

    public void SetupInit(HttpApplication application)
    {
    }

    public string Name
    {
        get { return "WIF Claims"; }
    }
}

[/code]

Enjoy! And if you feel like NuGet-packaging this (or including it with Glimpse), feel free.

Book review: Software Testing with Visual Studio Team System 2008

Another book review, this time for Packt’s “Software Testing with Visual Studio Team System 2008”. The book introduces you to the main types of testing available in Visual Studio Team System 2008 for both desktop and web applications, and then walks you through deploying, running, and interpreting the results of tests.

The book starts with an overview of why you need testing and then lists all available test types in Visual Studio 2008. It also explains the differences between a standalone Visual Studio 2008 installation and one backed by Team Foundation Server. Each chapter thereafter covers one of the test types in detail: unit tests, web tests, advanced web tests, load tests, manual tests, …

The book also covers how to deploy and run tests on a VSTS build server. And when you work with a VSTS build server, chances are that reporting is enabled. These reports are covered in detail as well, showing you how to interpret the data displayed.

I’ve been working with and giving training on Visual Studio 2008 for quite a while now, including a large part on Visual Studio and testing. To be honest, I think this book really covers all aspects of testing in Visual Studio 2008, making it an ideal reference for any development team working with VSTS. Here's the official product page at Packt.


Detailed code metrics with NDepend

A while ago, I blogged about code performance analysis in Visual Studio 2008. Using profiling and hot path tracking, I measured code performance and was able to react to it. Last week, Patrick Smacchia contacted me asking if I wanted to test his project NDepend. He promised me NDepend would provide more insight into my applications. Let's test that!

After downloading, extracting and starting NDepend, an almost familiar interface appears. Unfortunately, the interface you get after analyzing a set of assemblies is a little bit overwhelming... Note that this feeling fades after 15 minutes: the interface shows the information you want in a very efficient way! Here's the analysis of a personal "wine tracking" application I wrote two years ago.

Am I independent?

Let's start with the obvious... One of the graphs NDepend generates is a dependency map. This diagram shows all dependencies of my "WijnDatabase" project.

Dependencies mapped

One thing I can see from this is that there is probably one assembly too many! WijnDatabase.Classes could be a candidate for merging into WijnDatabase, the GUI project. These dependencies are also shown in the dependency window.

Dependencies mapped

You can see (in the upper right corner) that 38 methods of the WijnDatabase assembly are using 5 members of WijnDatabase.Classes. Left-click this cell to get more details: a diagram of boxes clearly shows my methods in a specific form calling into WijnDatabase.Classes.

More detail on dependencies

In my opinion, these kinds of views are really useful for seeing dependencies in a project without reading code! The fun part is that you can widen this view and get a full dependency overview of all members of all assemblies in the project. Cool! This makes it possible to check whether I should be refactoring towards something more (or less) abstract, which is also analysed in the next diagram:

Is my application in the zone of pain?

What you can see here is the following:

  • The zone of pain contains assemblies which are not very extensible (no interfaces, no abstract classes, nor virtual methods, stuff like that). Also, these assemblies tend to have lots of dependent assemblies.
  • The zone of uselessness is occupied by very abstract assemblies which have almost no dependent assemblies.

Most of my assemblies don't seem to be very abstract, and the dependencies are OK (the domain objects are widely used, so they sit closer to the zone of pain). Conclusion: I should be doing some refactoring to make assemblies more abstract (or replaceable, if you prefer it that way).
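For those wondering what is behind this chart: it plots the classic abstractness versus instability metrics per assembly. Here's a small sketch (my own illustration, using the standard textbook definitions rather than NDepend's actual code) of how the distance from the "main sequence" is computed:

[code:c#]

using System;

public static class MainSequence
{
    // Abstractness A = abstract types / total types in the assembly.
    // Instability  I = efferent couplings / (efferent + afferent couplings).
    // Distance from the main sequence D = |A + I - 1|:
    //   A ~ 0 and I ~ 0 -> zone of pain (concrete and heavily depended upon)
    //   A ~ 1 and I ~ 1 -> zone of uselessness (abstract, nothing depends on it)
    public static double DistanceFromMainSequence(
        int abstractTypes, int totalTypes,
        int efferentCouplings, int afferentCouplings)
    {
        double abstractness = (double)abstractTypes / totalTypes;
        double instability = (double)efferentCouplings /
                             (efferentCouplings + afferentCouplings);
        return Math.Abs(abstractness + instability - 1);
    }
}

[/code]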

CQL - Code Query Language

Next to all these graphs and diagrams, there's another powerful utility: CQL, or Code Query Language. It's sort of a "SQL to code" thing. Let's find out some things about my application...

 

Methods poorly commented

It's always fun to check whether there are enough comments in your code. Some developers tend to comment more than they write code, others don't write any comments at all. Here's a (standard) CQL query:

[code:c#]

// <Name>Methods poorly commented</Name>
WARN IF Count > 0 IN SELECT TOP 10 METHODS WHERE PercentageComment < 20 AND NbLinesOfCode > 10  ORDER BY PercentageComment ASC
// METHODS WHERE %Comment < 20 and that have at least 10 lines of code should be more commented.
// See the definition of the PercentageComment metric here http://www.ndepend.com/Metrics.aspx#PercentageComment

[/code]

This query searches the top 10 methods containing more than 10 lines of code where the percentage of comments is less than 20%.

CQL result 

Good news! I did quite well at commenting! The result of this query shows only Visual Studio-generated code (the InitializeComponent() sort of methods) and some other, smaller methods I wrote myself. Less than 20% comments in a method consisting of only 11 lines of code (btnVoegItemToe_Click in the image) is not bad!

Quick summary of methods to refactor

Another cool CQL query is the "quick summary of methods to refactor". Only one method shows up, but I should probably refactor it. Quiz: why?

CQL result

Answer: there are 395 IL instructions in this method (and if I drill down, 57 lines of code). I said "probably" because it might be OK after all. But when I drill down, I see some more information that is worrying: the cyclomatic complexity is high, there are many variables used, ... Refactoring is indeed the answer for this method!

Methods that use boxing/unboxing

Are you familiar with the concept of boxing/unboxing? If not, check this article. One of the CQL queries in NDepend finds all methods using boxing and unboxing. Seems like my data access layer is boxing a lot! Perhaps some refactoring is needed here too.

CQL result
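For anyone unfamiliar with the concept, here's a small illustrative snippet (my own example, not code from the wine application): storing value types in a non-generic collection such as ArrayList boxes every value, whereas a generic List<int> stores them directly.

[code:c#]

using System.Collections;
using System.Collections.Generic;

public class BoxingExample
{
    public static void Run()
    {
        // Boxing: every int added to the ArrayList is wrapped in an object
        // on the heap, and unboxed again when read back.
        var boxed = new ArrayList();
        boxed.Add(42);              // box
        int valueA = (int)boxed[0]; // unbox

        // No boxing: the generic list stores the ints directly.
        var notBoxed = new List<int>();
        notBoxed.Add(42);
        int valueB = notBoxed[0];
    }
}

[/code]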

Conclusion

Over the past hour, I've been analysing only a small tip of the information in my project. But there's LOTS more information gathered by NDepend! Too much information, you think? Not sure whether a specific metric applies to your application? There's good documentation on all metrics as well as short, to-the-point video demos.

In my opinion, each development team should gather some metrics from NDepend with every build and do a more detailed analysis once in a while. This detailed analysis will give you greater insight into how your assemblies are linked together and offer a great review of how you can improve your software design. Now grab that trial copy!


Code performance analysis in Visual Studio 2008

Visual Studio developer, did you know you have a great performance analysis (profiling) tool at your fingertips? In Visual Studio 2008 this profiling tool has been placed in a separate menu item to increase visibility and usage. Allow me to show you what this tool can do for you in this walkthrough.

An application with a smell…

Before we can get started, we need a (simple) application with a “smell”. Create a new Windows application, drag a TextBox on the surface, and add the following code:

[code:c#]

private void Form1_Load(object sender, EventArgs e)
{
    string s = "";
    for (int i = 0; i < 1500; i++)
    {
        s = s + " test";
    }
    textBox1.Text = s;
}

[/code]

You should immediately see the smell in the above code. If you don’t: the + operator compiles down to string.Concat(), which we call 1,500 times! This means a new string is created 1,500 times, and all the old, intermediate strings have to be garbage collected. Smells like a nice memory issue to investigate!

Profiling

Performance wizard

The profiling tool is hidden under the Analyze menu in Visual Studio. After launching the Performance Wizard, you will see that two options are available: sampling and instrumentation. In a “real-life” situation, you’ll first want to sample the entire application, searching for performance spikes. Afterwards, you can investigate these spikes using instrumentation. Since we only have one simple application, let’s go for instrumentation immediately.

Upon completing the wizard, the first thing we’ll do is change some settings. Right-click the root node and select Properties. Check the “Collect .NET object allocation information” and “Also collect .NET object lifetime information” options to make our profiling session as complete as possible:

Profiling property pages

Launch with profiling

You can now start the performance session from the tool pane. Note that you have two options to start: Launch with profiling and Launch with profiling paused. The first will immediately start profiling; the latter will first start your application and wait for your signal to start profiling. This can be useful if you do not want to profile your application’s startup but only a certain event that happens afterwards.

After the application has run, simply close it and wait for the summary report to appear:

Performance Report Summary 1

WOW! It seems string.Concat() is taking 97% of the application’s memory! That’s a smell indeed... But where is it coming from? In a larger application, it might not be clear which method is calling string.Concat() this many times. To discover where the problem is situated, there are two options…

Discovering the smell – option 1

Option 1 for discovering the smell is quite straightforward. Right-click the item in the summary and pick Show functions calling Concat:

Functions allocating most memory

You are now transferred to the “Caller / Callee” view, where all methods calling string.Concat() are shown, including memory usage and allocations. In this particular case, it’s easy to see where the issue might be situated. You can now right-click the entry and pick View source to be taken to this possible performance killer.

Possible performance killer

Discovering the smell – option 2

Visual Studio 2008 introduced a cool new way of discovering smells: hotpath tracking. When you move to the Call Tree view, you’ll notice a small flame icon in the toolbar. After clicking it, Visual Studio moves down the call tree following the high inclusive numbers. Each click takes you further down the tree and should uncover more details. Again, string.Concat() seems to be the problem!

Hotpath tracking

Fixing the smell

We are about to fix the smell. Let’s rewrite our application code using StringBuilder:

[code:c#]

private void Form1_Load(object sender, EventArgs e)
{
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 1500; i++)
    {
        sb.Append(" test");
    }
    textBox1.Text = sb.ToString();
}

[/code]

In theory, this should perform better. Let’s run our performance session again and have a look at the results:

Performance Report Summary 2

Compare Performance Reports

Seems like we fixed the glitch! You can now investigate further whether there are other problems, but for this walkthrough, the application is healthy now. One extra feature though: performance session comparison (“diff”). Simply pick two performance reports, right-click and pick Compare performance reports. This tool will show all delta values (= differences) between the two sessions we ran earlier:

Comparison report 

Update 2008-02-14: Some people commented on not finding the Analyze menu. This is only available in the Developer or Team Edition of Visual Studio. Click here for a full comparison of all versions.

Update 2008-05-29: Make sure to check my post on NDepend as well, as it offers even more insight in your code!
