Maarten Balliauw {blog}

ASP.NET, ASP.NET MVC, Windows Azure, PHP, ...


Source Control considered harmful

TL;DR: Using source control is a really bad idea. Or is it? Skip to Conclusion for the meat of this post.

One of the first things I do with a new project in Visual Studio is not add it to source control. There are many reasons, but it all boils down to this: Source Control introduces more problems than it solves.

Before I dive into this, I'll share the solution with you. Put your sources on a USB drive. Yes, it's that simple.

Implications

If you're like most other people, you don't like that solution, because it feels inefficient:

  • USB drives can get lost
  • USB drives can end up in the dishwasher
  • I have to buy a USB drive for every developer on the team
  • Sharing sources with distributed teams is more difficult: USB drives have to be shipped by snail mail

All of that is true, but then again...

  • You can always make a copy of a USB drive to safeguard against loss
  • Sharing USB drives is really easy: plug and play! Ease of use!
  • You can have lots of coffee waiting for a USB drive to arrive with that contribution to your OSS project

Still, many people go for source control: source control and a central repository solve all of the drawbacks of a USB drive, so why not use it?

Fragility

Have you ever let a junior developer loose on a git repository? I can promise you, it's not pretty.

  • Merges will go wrong
  • They will find out about rebasing and mess up the entire system
  • Pull requests on GitHub? One click to merge, no need to test or review!
  • Developers will forget to check in specific files

Again: all of this is easy with a USB drive: one location to store the project. Yes, merging is slightly more difficult too, but then again, replaying history in source control is much worse.

And I haven't even talked about having to have a network share or a GitHub account in which you can have private repositories. That's all extra cost and extra risk. What if the Internet connection goes down? What if a dev's laptop breaks? You might even say a USB drive is too advanced and a typewriter is an even better way to write code!

Cost

Did I mention the cost of USB drives? At most conferences and shops you will get them for free. Even if you buy them, they cost around $0.10 per GB. USB drives are very inexpensive.

Compare that with source control: we need an Internet connection, a GitHub repository, and most importantly: devs will have to read documentation on using git or be coached by someone on the team. That's really inefficient and costs a lot of time!

Conclusion

You may have noted that this is a slightly strange post. You are correct, it is. I’m responding to some of the outrage regarding yesterday’s NuGet.org outage. Tweets and blog posts suggest not using NuGet, or using NuGet but definitely not using package restore. That’s perfectly fine, but I don’t think the reasons for not using it are well founded, hence the above sarcasm. If it wasn’t clear: you should be using source control.

Should you use NuGet package restore? I think it mostly depends on your preference. It should not depend on NuGet.org outages, nor on the microwave destroying your WiFi signal and failing the builds that use package restore. Should you add packages to your repository or use package restore? It depends on what you want to achieve and how you want to work. I prefer not to add them, because packages are dependencies that are already versioned (the package version and packages.config), so why version them again? We don’t add the issues from our issue tracker to source control either, right?

We put issues in a specialized system for managing issues. In my opinion, the same should be true for software and component dependencies. But then again: if you want to add packages to source control, fine by me. As some tweets said, you don’t have to do it for the minimal disk space optimization. All that matters is whether it makes sense for your process.

Just like with source control, issue trackers and other things (like package restore) in your build process, you should read up on them, play with them and know the risks. Do we know that our Internet connection can break during solar storms? Well, yes. It’s a minor risk, but if it’s important to your shop, do mitigate that risk. Do laptops break? Yes. If it’s important that you can keep working even when a laptop crashes, buy some more and keep them up-to-date with your main development machine. If you rely on GitHub and want to get work done when they have issues, make sure you have an up-to-date fork somewhere on a file share. Make that two file shares!

And if you rely on NuGet package restore… you get the point, right? For NuGet, there are private repositories available that can host your in-house packages and the ones you are using from upstream sources like NuGet.org. Use them, if they matter for your development process. Know about NuGet 2.8’s automatic fallback to the local cache you have on disk, and if something goes wrong, use that cache until the package source is back up.

The development process and the tools are part of your system. Know your tools. Even if it requires you to read crazy books like how to work with git. Or Pro NuGet 2.

Introducing MyGet package source proxy (beta)

My blog already has quite a number of posts about MyGet, our NuGet-as-a-Service solution which my colleague Xavier and I are running. There are a lot of reasons to host your own personal NuGet feed (such as protecting your intellectual property or only adding approved packages to the feed, but there are many more, as you can <plug>read in our book</plug>). We’ve added support for another scenario: MyGet now supports proxying remote feeds.

Up until now, MyGet required you to upload your own NuGet packages and to include packages from the NuGet feed. The problem with this is that you either require your team to register multiple NuGet feeds in Visual Studio (which is still a good option) or register just your MyGet feed and add every package your team uses to it. Which, again, is also a good option.

With our package source proxy in place, we now provide a third option: MyGet can proxy upstream NuGet feeds. Let’s start with a quick diagram and afterwards walk through a scenario that elaborates on this:

MyGet Feed Proxy Aggregate Feed Connector

You are seeing this correctly: you can now register just your MyGet feed in Visual Studio and we’ll add upstream packages to your feed automatically, optionally filtered as well.

Enabling MyGet package source proxy

Enabling the MyGet package source proxy is very straightforward. Navigate to your feed of choice (or create a new one) and click the Package Sources item. This will present you with a screen similar to this:

MyGet hosted package source

From there, you can add external (or MyGet) feeds to your personal feed and add packages directly from them using the Add package dialog. More on that in Xavier’s blog post. What’s more: with the tick of a checkbox, these external feeds can also be aggregated with your feed in Visual Studio’s search results. Here’s the magical add dialog and the proxy checkbox:

Add package source proxy

As you can see, we also offer the option to filter upstream packages. For example, the filter string substringof('wp7', Tags) eq true that we used will filter all upstream packages whose tags contain “wp7”.

What will Visual Studio display? Just the Windows Phone 7 packages from NuGet, served through our single-endpoint MyGet feed.

Conclusion

Instead of working with a number of NuGet feeds, your development team will just work with one feed that is aggregating packages from both MyGet and other package sources out there (NuGet, Orchard Gallery, Chocolatey, …). This centralizes managing external packages and makes it easier for your team members to find the packages they can use in your projects.

Do let us know what you think of this feature! Our UserVoice is there for you, and in fact, that’s where we got the idea for this feature from in the first place. Your voice is heard!

Don’t brag about your Visual Studio achievements! (yet?)

The Channel 9 folks seem to have released the first beta of their Visual Studio Achievements project. The idea of Visual Studio Achievements is pretty awesome:

Bring Some Game To Your Code!

A software engineer’s glory so often goes unnoticed. Attention seems to come either when there are bugs or when the final project ships. But rarely is a developer appreciated for all the nuances and subtleties of a piece of code–and all the heroics it took to write it. With Visual Studio Achievements Beta, your talents are recognized as you perform various coding feats, unlock achievements and earn badges.

Find the announcement here and the beta from the Visual Studio Gallery here.

The bad

The idea behind Visual Studio Achievements is awesome! Unfortunately, the current achievements series is pure crap and will get you into trouble. A simple example:

Regional Manager (7 points)

Add 10 regions to a class. Your code is so readable, if I only didn't have to keep collapsing and expanding!

Are they serious? 10 regions in a class means bad code design. It should crash your Visual Studio and only allow you to restart it if you swear you’ll read a book on modern OO design.

Another example:

Job Security (0 points)

Write 20 single letter class level variables in one file. Kudos to you for being cryptic! Uses FxCop

While I’m sure this one is meant to be sarcastic (hence the 0 points), it still encourages people to write unreadable code.

There are a number of bad coding habits in the list of achievements. And I really hope no one on my team ever “achieves” some items on that list. If they do, I’m pretty sure that project is doomed.

The good

The good thing is: there are some positive achievements. For example, encouraging people to organize usings, or to try out some extensions. Unfortunately, there are almost no “good” achievements. What I would like to see is a bunch more achievements that make it fun to discover new features in Visual Studio or learn about good coding habits.

Don’t get me wrong: I do like the idea of achievements very much. In fact, I feel an urge to get the Go To Hell achievement (and delete the code afterwards, promise!), but why not use them to teach people to be better at coding or to be more productive? How about achievements that encourage people to use CTRL + , which a lot of people don’t know about? Or teach people to write a unit test. Heck, you can even become Disposable by correctly implementing IDisposable!
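
Since we’re at it: for reference, here’s a minimal sketch of the classic dispose pattern such an achievement could reward (the LogWriter class and the log.txt path are made up for illustration):

[code:c#]

public class LogWriter : IDisposable
{
    private readonly StreamWriter writer = new StreamWriter("log.txt");
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed)
        {
            return;
        }

        if (disposing)
        {
            // Release managed resources deterministically.
            writer.Dispose();
        }

        disposed = true;
    }
}

[/code]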

So in conclusion: your resume will look very bad if you are a Regional Manager or gained the Turtles All The Way Down achievement. Don’t brag about those. Come up with some good habits that can be rewarded with achievements and please, ask the Channel 9 guys to include those.

[edit]This one does have positive achievements: https://github.com/jonasswiatek/strokes [/edit]
[edit]http://channel9.msdn.com/niners/maartenba/achievements/visualstudio/GotoAchievement [/edit]

Book review: Refactoring with Visual Studio 2010

Yet again, Packt Publishing has sent me a book for review. For once, one without the typical orange/black cover but instead a classy white/black cover: Refactoring with Visual Studio 2010 by Peter Ritchie.

Since my book shelf is quite heavy on the Packt side (really, I almost have their complete collection by now; they keep sending me books), I was a bit in doubt whether I should write yet another review for one of their books, as I’m starting to sound like a Packt marketing guy. After reading it though, I thought this book deserves some credit!

I’m going to skip the official wording on what the book is all about: the title suggests refactoring with Visual Studio 2010, but that title covers only 5% of the book’s contents. This is also reflected in the book: it describes a refactoring and, in 8 out of 10 cases, follows it with a note that the refactoring is not supported in Visual Studio 2010. However, all refactorings are clearly explained with practical, easy to grasp sample code.

So this book is partially about refactoring and a little bit about Visual Studio 2010. However, the main content that makes this book valuable to me is that it covers a lot of design patterns, software design principles and object-oriented concepts. As an example, check the sample chapter, Chapter 6, “Improving Class Quality”. It talks about the single responsibility principle and refactors an ugly, tightly coupled class into a nice, easy to maintain class with lots of practical tips and sample code.
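
To give you an idea of the kind of transformation discussed there (my own simplified illustration, not the book’s sample code): a class that both formats a report and writes it to disk has two reasons to change; splitting those responsibilities gives two small classes that are easy to test and maintain.

[code:c#]

// Before: one class, two responsibilities (formatting and persistence).
public class ReportManager
{
    public void CreateReport(string title)
    {
        string contents = "== " + title + " ==";
        File.WriteAllText("report.txt", contents);
    }
}

// After: each class has a single responsibility.
public class ReportFormatter
{
    public string Format(string title)
    {
        return "== " + title + " ==";
    }
}

public class ReportWriter
{
    public void Write(string path, string contents)
    {
        File.WriteAllText(path, contents);
    }
}

[/code]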

My recommendation for anyone: must read! Not for the VS2010 refactoring part, but for the design patterns & object-oriented principles clearly explained in the book.

Mocking - VISUG session (screencast)

A new screencast has just been uploaded to the MSDN Belgium Chopsticks page. Don't forget to rate the video!

Mocking - VISUG session (screencast)

Abstract: "This session provides an introduction to unit testing using mock objects. It builds a small application using TDD (test driven development). To enable easier unit testing, all dependencies are removed from code and introduced as mock objects. Afterwards, a mocking framework by the name of Moq (mock you) is used to shorten unit tests and create a maintainable set of unit tests for the example application. "

Slides and example code can be found in my previous blog post on this session: Mocking - VISUG session
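
If you can’t wait for the recording, here’s the flavor of a Moq-based test. This is a minimal sketch; the IMailSender and RegistrationService types are made up for this example and are not the code from the session.

[code:c#]

public interface IMailSender
{
    void Send(string to, string subject, string body);
}

[TestMethod]
public void Register_SendsWelcomeMail()
{
    // Arrange: mock the dependency instead of using a real mail server.
    var mailSender = new Mock<IMailSender>();
    var service = new RegistrationService(mailSender.Object);

    // Act
    service.Register("user@example.com");

    // Assert: verify the dependency was called exactly once.
    mailSender.Verify(
        m => m.Send("user@example.com", It.IsAny<string>(), It.IsAny<string>()),
        Times.Once());
}

[/code]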


Mocking - VISUG session

Thursday evening, I did a session on Mocking for the VISUG (Visual Studio User Group Belgium). As promised, here is the slide deck I used. The session will be available online soon; in the meantime you'll have to go with the slide deck.

Demo code can also be downloaded: MockingDemoCode.zip (1.64 mb)

Thank you for attending the session!


More ASP.NET MVC Best Practices

In this post, I’ll share some of the best practices and guidelines I have come across while developing ASP.NET MVC web applications. I will not cover all best practices that are out there; instead, I’ll add some specific things that have not been mentioned in other blog posts.

Existing best practices can be found on Kazi Manzur Rashid’s blog and Simone Chiaretta’s blog.

After reading the best practices over there, read on for the following ones.


Use model binders where possible

I assume you are familiar with the concept of model binders. If not, here’s a quick model binder 101: instead of having to write action methods like this (or a variant using FormCollection form["xxxx"]):

[code:c#]

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Save()
{
    // ...

    Person newPerson = new Person();
    newPerson.Name = Request["Name"];
    newPerson.Email = Request["Email"];

    // ...
}

[/code]

You can now write action methods like this:

[code:c#]

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Save(FormCollection form)
{
    // ...

    Person newPerson = new Person();
    if (this.TryUpdateModel(newPerson, form.ToValueProvider()))
    {
        // ...
    }

    // ...
}

[/code]

Or even cleaner:

[code:c#]

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Save(Person newPerson)
{
    // ...
}

[/code]

What’s the point of writing action methods using model binders?

  • Your code is cleaner and less error-prone
  • Your actions are a lot easier to test: just construct a Person and pass it in (see the test sketch below)
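
A quick sketch of such a test (assuming an MSTest-style test, a PersonController hosting the Save action above, and that Save redirects to Index after a successful save; the exact assertions depend on your action’s behavior):

[code:c#]

[TestMethod]
public void Save_WithValidPerson_RedirectsToIndex()
{
    // Arrange: no Request["..."] plumbing needed, just build the model.
    var controller = new PersonController();
    var newPerson = new Person { Name = "Maarten", Email = "maarten@example.com" };

    // Act
    var result = controller.Save(newPerson) as RedirectToRouteResult;

    // Assert
    Assert.IsNotNull(result);
    Assert.AreEqual("Index", result.RouteValues["action"]);
}

[/code]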

Be careful when using model binders

I know, I’ve just said you should use model binders. And now, I still say it, except with a disclaimer: use them wisely! The model binders are extremely powerful, but they can cause severe damage…

Let’s say we have a Person class that has an Id property. Someone posts data to your ASP.NET MVC application and tries to hurt you: that someone also posts an Id form field! Using the following code, you are screwed…

[code:c#]

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Save(Person newPerson)
{
    // ...
}

[/code]

Instead, use blacklisting or whitelisting of properties that should be bound where appropriate:

[code:c#]

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Save([Bind(Prefix = "", Exclude = "Id")] Person newPerson)
{
    // ...
}

[/code]

Or whitelisted (safer, but harder to maintain):

[code:c#]

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Save([Bind(Prefix = "", Include = "Name,Email")] Person newPerson)
{
    // ...
}

[/code]

Yes, that can be ugly code. But…

  • Not being careful may cause harm
  • Setting blacklists or whitelists can help you sleep in peace

Never re-invent the wheel

Never reinvent the wheel. Want to use an IoC container (like Unity or Spring)? Use the controller factories that are available in MvcContrib. Need validation? Check xVal. Need sitemaps? Check MvcSiteMap.

Point is: reinventing the wheel will slow you down if you just need basic functionality. On top of that, it will cause you headaches when something is wrong in your own code. Note that creating your own wheel can be the better option when you need something that would otherwise be hard to achieve with existing projects. This is not a hard guideline; you’ll have to find the right balance between custom code and existing projects for every application you build.

Avoid writing decisions in your view

Well, the title says it all. Don’t do this in your view:

[code:c#]

<% if (ViewData.Model.User.IsLoggedIn()) { %>
  <p>...</p>
<% } else { %>
  <p>...</p>
<% } %>

[/code]

Instead, do this in your controller:

[code:c#]

public ActionResult Index()
{
    // ...

    if (myModel.User.IsLoggedIn())
    {
        return View("LoggedIn");
    }
    return View("NotLoggedIn");
}

[/code]

Ok, the first example I gave is not that bad if it only contains one paragraph… But if there are many paragraphs and huge snippets of HTML and ASP.NET syntax involved, then use the second approach. Really, it can be a PITA to deal with large chunks of markup in an if-then-else structure.

Another option would be to create an HtmlHelper extension method that renders partial view X when a condition is true and partial view Y when it is false (see the sketch below). But still, having this logic in the controller is the best approach.
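
A minimal sketch of what such a helper could look like (the method and view names are made up; RenderPartial comes from System.Web.Mvc.Html):

[code:c#]

public static class ConditionalPartialExtensions
{
    public static void RenderPartialIf(this HtmlHelper helper, bool condition,
        string partialViewIfTrue, string partialViewIfFalse, object model)
    {
        // Pick the partial view based on the condition and render it to the response.
        helper.RenderPartial(condition ? partialViewIfTrue : partialViewIfFalse, model);
    }
}

[/code]

In the view, this boils down to a single call, for example <% Html.RenderPartialIf(Model.User.IsLoggedIn(), "LoggedInBox", "AnonymousBox", Model); %>.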

Don’t do lazy loading in your ViewData

I’ve seen this one often, mostly with people using LINQ to SQL or LINQ to Entities. Sure, you can do lazy loading of a person’s orders:

[code:c#]

<%=Model.Orders.Count()%>

[/code]

This Count() method will go to your database if the model is something that came out of a LINQ to SQL data context… Instead of doing this, retrieve any value you will need in your view within the controller and create a model appropriate for the view.

[code:c#]

public ActionResult Index()
{
    // ...

    var p = ...;

    var myModel = new {
        Person = p,
        OrderCount = p.Orders.Count()
    };
    return View(myModel);
}

[/code]

Note: this one is really for illustration purposes only. The point is not to pass the DataContext-bound IQueryable to your view, but to pass a List or similar instead.

And the view for that:

[code:c#]

<%=Model.OrderCount%>

[/code]

Motivation for this is:

  • Accessing your data store in a view means you are actually breaking the MVC design pattern.
  • If you don't care about the above: when you are using a LINQ to SQL data context, for example, and you've already closed it in your controller, your view will throw an error when it tries to access your data store.

Put your controllers on a diet

Controllers should be really thin: they accept an incoming request, dispatch an action to a service or business layer and eventually respond to the request with the result from that layer, nicely wrapped and translated into a simple view model object.

In short: don’t put business logic in your controller!
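
As a rough sketch of what that looks like in practice (IOrderService, Order and OrderSummaryModel are made-up types for illustration):

[code:c#]

public class OrderController : Controller
{
    private readonly IOrderService orderService;

    public OrderController(IOrderService orderService)
    {
        this.orderService = orderService;
    }

    public ActionResult Details(int id)
    {
        // Dispatch to the service layer...
        Order order = orderService.GetOrder(id);

        // ...and translate the result into a simple view model.
        var model = new OrderSummaryModel
        {
            OrderId = order.Id,
            Total = order.Total
        };

        return View(model);
    }
}

[/code]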

Compile your views

Yes, you can do that. Compile your views for any release build you do. This will make sure everything compiles and your users don’t see an “Error 500” when accessing a view. Of course, errors can still happen, but at least it will not be the view’s fault anymore.

Here’s how you compile your views:

1. Open the project file in a text editor. For example, start Notepad and open the project file for your ASP.NET MVC application (that is, MyMvcApplication.csproj).

2. Find the top-most <PropertyGroup> element and add a new element <MvcBuildViews>:

[code:c#]

<PropertyGroup>
  ...
  <MvcBuildViews>true</MvcBuildViews>
</PropertyGroup>

[/code]

3. Scroll down to the end of the file and uncomment the <Target Name="AfterBuild"> element. Update its contents to match the following:

[code:c#]

<Target Name="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
  <AspNetCompiler VirtualPath="temp"
                  PhysicalPath="$(ProjectDir)\..\$(ProjectName)" />
</Target>

[/code]

4. Save the file and reload the project in Visual Studio.

Enabling view compilation may add some extra time to the build process. It is recommended not to enable this during development, as you typically build very often while developing.

More best practices

There are some more best practices over at LosTechies.com. These are all a bit more advanced and may cause performance issues on larger projects. An interesting read, but do use them with care.


ASP.NET MVC and the Managed Extensibility Framework (MEF)

Microsoft’s Managed Extensibility Framework (MEF) is a .NET library (released on CodePlex) that enables greater re-use of application components. You can do this by dynamically composing your application based on a set of classes and methods that can be combined at runtime. Think of it as building an application that can host plugins, which in turn can also be composed of different plugins. Since examples say more than a thousand words, let’s go ahead with a sample leveraging MEF in an ASP.NET MVC web application.


Getting started…

The Managed Extensibility Framework can be downloaded from the CodePlex website. In the download, you’ll find the full source code, binaries and some examples demonstrating different use cases for MEF.

Now here’s what we are going to build: an ASP.NET MVC application consisting of typical components (model, view, controller), containing a folder “Plugins” in which you can dynamically add more models, views and controllers using MEF. Schematically:

Sample Application Architecture

Creating a first plugin

Before we build our host application, let’s first create a plugin. Create a new class library in Visual Studio, add a reference to the MEF assembly (System.ComponentModel.Composition.dll) and to System.Web.Mvc and System.Web.Abstractions. Next, create the following project structure:

Sample Plugin Project

That is right: a DemoController and a Views folder containing a Demo folder with an Index.aspx view. Looks a bit like a regular ASP.NET MVC application, no? Anyway, the DemoController class looks like this:

[code:c#]

[Export(typeof(IController))]
[ExportMetadata("controllerName", "Demo")]
[PartCreationPolicy(CreationPolicy.NonShared)]
public class DemoController : Controller
{
    public ActionResult Index()
    {
        return View("~/Plugins/Views/Demo/Index.aspx");
    }
}

[/code]

Nothing special, except… what are those three attributes (Export, ExportMetadata and PartCreationPolicy) doing there? In short:

  • Export tells the MEF framework that our DemoController class implements the IController contract and can be used when the host application is requesting an IController implementation.
  • ExportMetadata attaches some metadata to the export, which can be used to query plugins afterwards.
  • PartCreationPolicy tells the MEF framework that it should always create a new instance of DemoController whenever we require this type of controller. By default, a single instance would be shared across the application, which is not what we want here. CreationPolicy.NonShared tells MEF to create a new instance every time.

Now we are ready to go to our host application, in which this plugin will be hosted.

Creating our host application

The ASP.NET MVC application hosting these plugin controllers is a regular ASP.NET MVC application, in which we’ll add a reference to the MEF assembly (System.ComponentModel.Composition.dll). Next, edit the Global.asax.cs file and add the following code in Application_Start:

[code:c#]

ControllerBuilder.Current.SetControllerFactory(
    new MefControllerFactory(
        Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Plugins")));

[/code]

What we are doing here is telling the ASP.NET MVC framework to create controller instances using the MefControllerFactory instead of ASP.NET MVC’s default DefaultControllerFactory. Remember how everyone keeps saying ASP.NET MVC is very extensible? It is: we are now swapping out a core component of ASP.NET MVC for our custom MefControllerFactory class. We’re also telling our own MefControllerFactory class to check the “Plugins” folder in our web application for new plugins. By the way, here’s the code for the MefControllerFactory:

[code:c#]

public class MefControllerFactory : IControllerFactory
{
    private string pluginPath;
    private DirectoryCatalog catalog;
    private CompositionContainer container;

    private DefaultControllerFactory defaultControllerFactory;

    public MefControllerFactory(string pluginPath)
    {
        this.pluginPath = pluginPath;
        this.catalog = new DirectoryCatalog(pluginPath);
        this.container = new CompositionContainer(catalog);

        this.defaultControllerFactory = new DefaultControllerFactory();
    }

    #region IControllerFactory Members

    public IController CreateController(System.Web.Routing.RequestContext requestContext, string controllerName)
    {
        IController controller = null;

        if (controllerName != null)
        {
            string controllerClassName = controllerName + "Controller";
            Export<IController> export = this.container.GetExports<IController>()
                                             .Where(c => c.Metadata.ContainsKey("controllerName")
                                                 && c.Metadata["controllerName"].ToString() == controllerName)
                                             .FirstOrDefault();
            if (export != null) {
                controller = export.GetExportedObject();
            }
        }

        if (controller == null)
        {
            return this.defaultControllerFactory.CreateController(requestContext, controllerName);
        }

        return controller;
    }

    public void ReleaseController(IController controller)
    {
        IDisposable disposable = controller as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }

    #endregion
}

[/code]

Too much? Time for a breakdown. Let’s start with the constructor:

[code:c#]

public MefControllerFactory(string pluginPath)
{
    this.pluginPath = pluginPath;
    this.catalog = new DirectoryCatalog(pluginPath);
    this.container = new CompositionContainer(catalog);

    this.defaultControllerFactory = new DefaultControllerFactory();
}

[/code]

In the constructor, we are storing the path where plugins can be found (the “Plugins” folder in our web application). Next, we are telling MEF to create a catalog of plugins based on what it can find in that folder using the DirectoryCatalog class. Afterwards, a CompositionContainer is created which will be responsible for matching plugins in our application.

Next, the CreateController method we need to implement for IControllerFactory:

[code:c#]

public IController CreateController(System.Web.Routing.RequestContext requestContext, string controllerName)
{
    IController controller = null;

    if (controllerName != null)
    {
        string controllerClassName = controllerName + "Controller"; 
        Export<IController> export = this.container.GetExports<IController>()
                                         .Where(c => c.Metadata.ContainsKey("controllerName") 
                                             && c.Metadata["controllerName"].ToString() == controllerName)
                                         .FirstOrDefault();
        if (export != null) {
            controller = export.GetExportedObject();
        }
    }

    if (controller == null)
    {
        return this.defaultControllerFactory.CreateController(requestContext, controllerName);
    }

    return controller;
}

[/code]

This method handles the creation of a controller, based on the current request context and the controller name that is required. What we are doing here is checking MEF’s container for all “Exports” (plugins, if you wish) that match the controller name. If one is found, we return that one. If not, we fall back to ASP.NET MVC’s DefaultControllerFactory.

The ReleaseController method is not really exciting: it's used by ASP.NET MVC to correctly dispose a controller after use.

Running the sample

First of all, the sample code can be downloaded here: MvcMefDemo.zip (270.82 kb)

When launching the application, you’ll notice nothing funny. That is, until you navigate to the http://localhost:xxxx/Demo URL: there is no DemoController to handle that request! Now compile the plugin we’ve just created (in the MvcMefDemo.Plugins.Sample project) and copy the contents of the \bin\Debug folder to the \Plugins folder of our host application. Now, when the application restarts (for example by modifying web.config), the plugin will be picked up and the http://localhost:xxxx/Demo URL will render the contents from our DemoController plugin:

Sample run MEF ASP.NET MVC

Conclusion

The MEF (Managed Extensibility Framework) offers a rich way of dynamically composing applications. Not only does it allow you to create a plugin based on a class, it also allows exporting methods and even properties as a plugin (see the samples in the CodePlex download).
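
As a small taste of a property export (a sketch based on the MEF export/import attributes, not taken from the CodePlex samples; the contract name and class names are arbitrary):

[code:c#]

public class WelcomePlugin
{
    // Export a property value under a named contract...
    [Export("WelcomeMessage")]
    public string WelcomeMessage
    {
        get { return "Hello from a plugin!"; }
    }
}

public class WelcomeConsumer
{
    // ...and import it elsewhere; the CompositionContainer wires both up.
    [Import("WelcomeMessage")]
    public string WelcomeMessage { get; set; }
}

[/code]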

By the way, sample code can be downloaded here: MvcMefDemo.zip (270.82 kb)


Book review: ASP.NET 3.5 Social Networking

Last week, I found another book from Packt in my letterbox. This time, the title is ASP.NET 3.5 Social Networking, written by Andrew Siemer.

On the back cover, I read that this book shows you how to create a scalable, maintainable social network that can support hundreds of thousands of users, multimedia features and stuff like that. The words scalable and maintainable seem to have triggered me: I started reading ASAP. The first chapter talks about what a social network is and proposes a new social network: Fisharoo.com, a web site for salt water aquarium fanatics, complete with blogs, forums, personal web sites, …

The book starts by building a framework containing several features such as logging, mail sending, …, all backed up by a dependency injection framework to enable fast replacement of individual components. Afterwards, each feature of the Fisharoo.com site is described in a separate chapter: what is the feature, how will we store data, and what do we need to do in our application to make it work?

A good thing about this book is that it demonstrates several concepts in application design using a sample application that anyone who has used a site like Facebook is familiar with. The concepts demonstrated are ones that any application can benefit from: Domain Driven Design, Test Driven Development (TDD), Dependency Injection, Model-View-Presenter, … Next to this, some third-party components like Lucene.NET are demonstrated. This is all very readable and understandable, really a must-read for anyone interested in these concepts!

Bottom line of the story: it has been a while since I was enthusiastic about a book, and this one clearly made me enthusiastic. Sure, it describes building a social network, but I think that is only a cover for what this book is really about: building good software that is easy to maintain, test and extend.

Verifying code and testing with Pex

Pex, Automated White box testing for .NET

Earlier this week, Katrien posted an update on the list of Belgian TechDays 2009 speakers. This post featured a summary of all sessions, one of which was titled “Pex – Automated White Box Testing for .NET”. Here’s the abstract:

“Pex is an automated white box testing tool for .NET. Pex systematically tries to cover every reachable branch in a program by monitoring execution traces, and using a constraint solver to produce new test cases with different behavior. Pex can be applied to any existing .NET assembly without any pre-existing test suite. Pex will try to find counterexamples for all assertion statements in the code. Pex can be guided by hand-written parameterized unit tests, which are API usage scenarios with assertions. The result of the analysis is a test suite which can be persisted as unit tests in source code. The generated unit tests integrate with Visual Studio Team Test as well as other test frameworks. By construction, Pex produces small unit test suites with high code and assertion coverage, and reported failures always come with a test case that reproduces the issue. At Microsoft, this technique has proven highly effective in testing even an extremely well-tested component.”

After reading the second sentence in this abstract, I was thinking: “SWEET! Let’s try!”. So here goes…

Getting started

First of all, download the academic release of Pex at http://research.microsoft.com/en-us/projects/Pex/. After installing it into Visual Studio 2008 (or 2010 if you are Mr. or Mrs. Cool), some context menus should have been added. We will explore these later on in this post.

What we will do next is analyze a piece of code in a fictitious library of string extension methods. The following method is intended to mimic VB6’s Left method.

[code:c#]

/// <summary>
/// Return leftmost characters from string for a certain length
/// </summary>
/// <param name="current">Current string</param>
/// <param name="length">Length to take</param>
/// <returns>Leftmost characters from string</returns>
public static string Left(this string current, int length)
{
    if (length < 0)
    {
        throw new ArgumentOutOfRangeException("length", "Length should be >= 0");
    }

    return current.Substring(0, length);
}

[/code]

Great coding! I even throw an ArgumentOutOfRangeException if I receive a faulty length parameter.

Pexify this!

Analyzing this with Pex can be done in two ways: by running Pex Explorations, which will open a new add-in in Visual Studio and show me some results, or by generating a unit test for this method. Since I know this is good code, unit tests are not needed. I’ll pick the first option: right-click the above method and pick “Run Pex Explorations”.

Run Pex Explorations

A new add-in window opens in Visual Studio, showing me the output of calling my method with 4 different parameter combinations:

Pex Exploration Results

Frustrated, I scream: “WHAT?!? I did write good code! Pex schmex!” According to Pex, I didn’t. And actually, it is right. Pex explored all code execution paths in my Left method, two of which do not return the correct results. For example, calling Substring(0, 2) on an empty string will throw an uncaught ArgumentOutOfRangeException. Luckily, Pex is also there to help.

When I right-click the first failing exploration, I can choose from some menu options. For example, I could assign this as a task to someone in Team Foundation Server.

Pex Exploration Options

In this case, I’ll just pick “Add precondition”. This will show me a window of code which might help avoid this uncaught exception.

Preview and Apply updates

Nice! It actually avoids the uncaught exception and provides the user of my code with a new ArgumentException thrown at the right location and with the right reason. After doing this for both failing explorations, my code looks like this:

[code:c#]

/// <summary>
/// Return leftmost characters from string for a certain length
/// </summary>
/// <param name="current">Current string</param>
/// <param name="length">Length to take</param>
/// <returns>Leftmost characters from string</returns>
public static string Left(this string current, int length)
{
    // <pex>
    if (current == (string)null)
        throw new ArgumentNullException("current");
    if (length < 0 || current.Length < length)
        throw new ArgumentException("length < 0 || current.Length < length");
    // </pex>

    return current.Substring(0, length);
}

[/code]

Great! This should work for any input now, returning a clear exception message when someone does provide faulty parameters.

Note that I could also run these explorations as a unit test. If someone introduces a new error, Pex will let me know.
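
Such a hand-written parameterized unit test could look something like this (a sketch using the Microsoft.Pex.Framework attributes together with MSTest; Pex generates the concrete inputs, and the exact shape may differ from what the wizard produces):

[code:c#]

[PexClass, TestClass]
public partial class StringExtensionsTest
{
    [PexMethod]
    public void Left(string current, int length)
    {
        // Constrain the inputs Pex may generate to the valid domain...
        PexAssume.IsNotNull(current);
        PexAssume.IsTrue(length >= 0 && length <= current.Length);

        // ...and assert the postcondition for every generated test case.
        string result = current.Left(length);
        Assert.AreEqual(length, result.Length);
    }
}

[/code]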

More information

More information on Pex can be found on http://research.microsoft.com/en-us/projects/Pex/.
