Maarten Balliauw {blog}

ASP.NET, ASP.NET MVC, Windows Azure, PHP, ...

ASP.NET MVC TDD using Visual Studio 2010

Phil Haack announced yesterday that tooling support for ASP.NET MVC is available for Visual Studio 2010. Troy Goode has already blogged about the designer snippets (which are really, really cool, just like other parts of the roadmap for ASP.NET MVC 2.0). I’ll take the new TDD workflow introduced in VS2010 for a spin.

Creating a new controller, the TDD way

First of all, I’ll create a new ASP.NET MVC application in VS2010. After installing the project template (and the designer snippets, if you are cool), this is easy:

Step 1

Proceed and make sure to create a unit test project as well.

Next, in your unit test project, add a new unit test class and name it DemoControllerTests.cs.

Step 2

Go ahead and start typing the following test:

Step 3

Now when you type CTRL-. (or right-click the unknown DemoController class), you can pick “Generate other…”:

Step 4

A new window will appear, where you can select the project in which the new DemoController class should be created. Make sure to select the MvcApplication project here (and not your test project).

Step 5

Great, that class has been generated. But how about the constructor accepting List<string>? Press CTRL-. and proceed with the suggested action.

Step 6

Continue typing your test and let VS2010 also implement the Index() action method.

Step 7

You can now finish the test code:
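
As a reference, here’s a minimal sketch of what the finished test might look like (the exact test body is an assumption; the real code is in the screenshots, and the usual MSTest and System.Web.Mvc namespaces are assumed):

[code:c#]

[TestClass]
public class DemoControllerTests
{
    [TestMethod]
    public void IndexShouldReturnViewWithData()
    {
        // Arrange: build the controller with some test data
        List<string> data = new List<string> { "one", "two", "three" };
        DemoController controller = new DemoController(data);

        // Act: invoke the Index() action
        ViewResult result = controller.Index() as ViewResult;

        // Assert: a view is returned and our data is its model
        Assert.IsNotNull(result);
        Assert.AreSame(data, result.ViewData.Model);
    }
}

[/code]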

Step 8

The cool thing is: we did not have to leave the DemoControllerTests.cs editor while writing this test class; VS2010 took care of stubbing the DemoController in the background:

Step 9

Run your tests and see them fail. That’s the TDD approach: first make it fail, then implement what’s needed:

Step 10
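
To make the test pass, the DemoController implementation could look something like this (a sketch; the screenshot shows the actual version):

[code:c#]

public class DemoController : Controller
{
    private readonly List<string> data;

    public DemoController(List<string> data)
    {
        this.data = data;
    }

    public ActionResult Index()
    {
        // Hand the injected data to the view as its model
        return View(data);
    }
}

[/code]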

If you run your tests now, you’ll see the test pass.

Conclusion

I like this new TDD approach and ASP.NET MVC! It’s not ReSharper yet, but I think it’s a fine step that the Visual Studio team has taken.

A view from the cloud (or: locate your ASP.NET MVC views on Windows Azure Blob Storage)

Hosting and deploying ASP.NET MVC applications on Windows Azure works like a charm. However, if you have been reading my blog for a while, you may have noticed that I don’t like the fact that my ASP.NET MVC views are stored in the deployed package as well… Why? If I want to change some text or fix a typo, I have to re-deploy my entire application. That takes a while, the application is down during deployment, … And all of that for a typo…

Luckily, Windows Azure also provides blob storage, on which you can host any blob of data (or any file, if you don’t like saying “blob”). These blobs can easily be managed with a tool like Azure Blob Storage Explorer. Now let’s see if we can abuse blob storage for storing the views of an ASP.NET MVC web application, making it easier to modify the text and stuff. We’ll do this by creating a new VirtualPathProvider.

Note that this approach can also be used to create a CMS based on ASP.NET MVC and Windows Azure.

Putting our views in the cloud

Of course, we need a new ASP.NET MVC web application. You can prepare this for Azure, but that’s not really needed for testing purposes. Download and run Azure Blob Storage Explorer, and put all views in a blob storage container. Make sure to include the full virtual path in the blob’s name, like so:

Azure Blob Storage Explorer
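
The screenshot shows blob names mirroring the application’s virtual paths. For example (illustrative names, matching how the path provider below strips the leading “~/”):

  • Views/Home/Index.aspx
  • Views/Shared/Site.Master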

Note that I did not upload every view to blob storage. In the approach we’ll take, you do not need to put every view in there: we’ll support a mixed mode where some views are deployed with the application and others live in blob storage.

Creating a VirtualPathProvider

You may or may not know the concept of ASP.NET VirtualPathProviders, so allow me to explain quickly: ASP.NET 2.0 introduced VirtualPathProviders, which let you create a virtual file system that can be used by your application. A VirtualPathProvider has to be registered before ASP.NET will make use of it. After registration, ASP.NET automatically iterates all VirtualPathProviders to check whether one can provide the contents of a specific virtual file. In ASP.NET MVC, for example, the (default) VirtualPathProviderViewEngine uses this concept to look for its views. Ideal, since we do not have to plug in a custom ASP.NET MVC view engine when we create our BlobStorageVirtualPathProvider!

A VirtualPathProvider contains some methods that are used to determine whether it can serve a specific virtual file. We’ll only be implementing FileExists() and GetFile(), but there are also methods like DirectoryExists() and GetDirectory(). I suppose you can tell what all these methods do just by looking at their names…

In order for our BlobStorageVirtualPathProvider class to access Windows Azure Blob Storage, we need to reference the StorageClient project you can find in the Windows Azure SDK. Next, our class has to inherit from VirtualPathProvider and needs some fields holding useful information:

[code:c#]

public class BlobStorageVirtualPathProvider : VirtualPathProvider
{
    protected readonly StorageAccountInfo accountInfo;
    protected readonly BlobContainer container;
    protected BlobStorage blobStorage;

    // ...

    public BlobStorageVirtualPathProvider(StorageAccountInfo storageAccountInfo, string containerName)
    {
        accountInfo = storageAccountInfo;
        blobStorage = BlobStorage.Create(accountInfo); // assign the field declared above, not a new local
        container = blobStorage.GetBlobContainer(containerName);
    }

    // ...

}

[/code]

Alright! We can now hold everything that is needed for accessing Windows Azure Blob Storage: the account info (including credentials) and a BlobContainer holding our views. Our constructor accepts these and makes sure everything is prepared for accessing blob storage.

Next, we’ll have to make sure we can serve a file, by adding FileExists() and GetFile() method overrides:

[code:c#]

public override bool FileExists(string virtualPath)
{
    // Check if the file exists on blob storage
    string cleanVirtualPath = virtualPath.Replace("~", "").Substring(1);
    if (container.DoesBlobExist(cleanVirtualPath))
    {
        return true;
    }
    else
    {
        return Previous.FileExists(virtualPath);
    }
}

public override VirtualFile GetFile(string virtualPath)
{
    // Check if the file exists on blob storage
    string cleanVirtualPath = virtualPath.Replace("~", "").Substring(1);
    if (container.DoesBlobExist(cleanVirtualPath))
    {
        return new BlobStorageVirtualFile(virtualPath, this);
    }
    else
    {
        return Previous.GetFile(virtualPath);
    }
}

[/code]

These methods simply check the BlobContainer for the existence of the virtual file path passed in. GetFile() returns a new BlobStorageVirtualFile instance. That class provides the functionality for actually returning the file’s contents, in its Open() method:

[code:c#]

public override System.IO.Stream Open()
{
    string cleanVirtualPath = this.VirtualPath.Replace("~", "").Substring(1);
    BlobContents contents = new BlobContents(new MemoryStream());
    parent.BlobContainer.GetBlob(cleanVirtualPath, contents, true);
    contents.AsStream.Seek(0, SeekOrigin.Begin);
    return contents.AsStream;
}

[/code]

We’ve just made it possible to download a blob from Windows Azure Blob Storage into a MemoryStream and pass this on to ASP.NET for further action.

Here’s the full BlobStorageVirtualPathProvider class:

[code:c#]

public class BlobStorageVirtualPathProvider : VirtualPathProvider
{
    protected readonly StorageAccountInfo accountInfo;
    protected readonly BlobContainer container;

    public BlobContainer BlobContainer
    {
        get { return container; }
    }

    public BlobStorageVirtualPathProvider(StorageAccountInfo storageAccountInfo, string containerName)
    {
        accountInfo = storageAccountInfo;
        BlobStorage blobStorage = BlobStorage.Create(accountInfo);
        container = blobStorage.GetBlobContainer(containerName);
    }

    public override bool FileExists(string virtualPath)
    {
        // Check if the file exists on blob storage
        string cleanVirtualPath = virtualPath.Replace("~", "").Substring(1);
        if (container.DoesBlobExist(cleanVirtualPath))
        {
            return true;
        }
        else
        {
            return Previous.FileExists(virtualPath);
        }
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        // Check if the file exists on blob storage
        string cleanVirtualPath = virtualPath.Replace("~", "").Substring(1);
        if (container.DoesBlobExist(cleanVirtualPath))
        {
            return new BlobStorageVirtualFile(virtualPath, this);
        }
        else
        {
            return Previous.GetFile(virtualPath);
        }
    }

    public override System.Web.Caching.CacheDependency GetCacheDependency(string virtualPath, System.Collections.IEnumerable virtualPathDependencies, DateTime utcStart)
    {
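        // Returning null disables ASP.NET's file-based cache dependency checks,
        // since these virtual files do not live on disk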
        return null;
    }
}

[/code]

And here’s BlobStorageVirtualFile:

[code:c#]

public class BlobStorageVirtualFile : VirtualFile
{
    protected readonly BlobStorageVirtualPathProvider parent;

    public BlobStorageVirtualFile(string virtualPath, BlobStorageVirtualPathProvider parentProvider) : base(virtualPath)
    {
        parent = parentProvider;
    }

    public override System.IO.Stream Open()
    {
        string cleanVirtualPath = this.VirtualPath.Replace("~", "").Substring(1);
        BlobContents contents = new BlobContents(new MemoryStream());
        parent.BlobContainer.GetBlob(cleanVirtualPath, contents, true);
        contents.AsStream.Seek(0, SeekOrigin.Begin);
        return contents.AsStream;
    }
}

[/code]

Registering BlobStorageVirtualPathProvider with ASP.NET

We’re not completely done yet. We still have to tell ASP.NET that it can get virtual files through the BlobStorageVirtualPathProvider. We’ll do this in the Application_Start event in Global.asax.cs:

[code:c#]

protected void Application_Start()
{
    RegisterRoutes(RouteTable.Routes);

    // Register the virtual path provider with ASP.NET
    System.Web.Hosting.HostingEnvironment.RegisterVirtualPathProvider(new BlobStorageVirtualPathProvider(
        new StorageAccountInfo(
            new Uri("http://blob.core.windows.net"),
            false,
            "your_storage_account_name_here",
            "your_storage_account_key_here"),
            "your_container_name_here"));
}

[/code]

Add your own Azure storage account name, key and the name of the container that holds your views, and you are set! Development storage will work as well, as long as you enter the required info.
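
For development storage, that would look something like this (a sketch using the standard local blob endpoint and the well-known devstoreaccount1 development credentials):

[code:c#]

// Development storage uses path-style URIs against the local endpoint
System.Web.Hosting.HostingEnvironment.RegisterVirtualPathProvider(new BlobStorageVirtualPathProvider(
    new StorageAccountInfo(
        new Uri("http://127.0.0.1:10000"),
        true,
        "devstoreaccount1",
        "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="),
        "your_container_name_here"));

[/code]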

Running the example code

Download the sample code here: MvcViewInTheCloud.zip (58.72 kb)

Some instructions for running the sample code:

  • Upload all views from the Views folder to a blob container (as described earlier in this post)
  • Change your Azure credentials in Application_Start

PHP and Silverlight - DevDays session

I just returned from The Hague where Kevin and I delivered a session on PHP and Silverlight. As promised, we are putting our slides and demos online. Download the demo code from here: PHP and Silverlight - DevDays.zip (1.00 mb)

Abstract:

"So you have an existing PHP application and would like to spice it up with a rich and attractive front-end. Next to Adobe Flex, you can also choose Silverlight as a solution. This session shows you around in Silverlight and shows that PHP and Silverlight can go together easily."

We really enjoyed DevDays and want to thank everyone who was there (and who attended our session even though drinking a beer seemed more appropriate at that time of day).

ConnectedShow Podcast - PHP SDK for Windows Azure

The fifth episode of the ConnectedShow podcast is up. This podcast is all about cloud computing, Windows Azure, … Recently, they asked me if I wanted to be in one of their episodes, talking about the PHP SDK for Windows Azure.

In this episode Dmitry welcomes a new co-host, Peter Laudati. Next, we speak to Maarten Balliauw about the new PHP SDK for Windows Azure which is designed to help PHP developers use Windows Azure services.

Here’s the link to the podcast: http://www.connectedshow.com/default.aspx?Episode=5

ASP.NET MVC Domain Routing

Ever since the release of ASP.NET MVC and its routing engine (System.Web.Routing), Microsoft has been trying to convince us that we have full control over our URLs and routing. This is true to a certain extent: as long as it’s related to your application path, everything works out nicely. If you need to take care of data tokens in your (sub)domain, however, you’re screwed by default.

Earlier this week, Juliën Hanssens did a blog post on his approach to subdomain routing. While this is a good approach, it has some drawbacks:

  • All routing logic is hard-coded: if you want to add a new possible route, you’ll have to code for it.
  • The VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values) method is not implemented, resulting in “strange” URLs when using the HtmlHelper ActionLink helpers. Think of http://live.localhost/Home/Index/?liveMode=false where you would have wanted just http://develop.localhost/Home/Index.

Unfortunately, the ASP.NET MVC infrastructure is based around this VirtualPathData class. That’s right: only tokens in the URL’s path are used for routing… Check my entry on the ASP.NET MVC forums on that one.

Now for a solution… Here are some scenarios we would like to support:

  • Scenario 1: Application is multilingual, where www.nl-be.example.com should map to a route like “www.{language}-{culture}.example.com”.
  • Scenario 2: Application is multi-tenant, where www.acmecompany.example.com should map to a route like “www.{clientname}.example.com”.
  • Scenario 3: Application is using subdomains for controller mapping: www.store.example.com should map to a route like “www.{controller}.example.com/{action}/…”.

Sit back, have a deep breath and prepare for some serious ASP.NET MVC plumbing…

Defining routes

Here are some sample route definitions we want to support. An example where we do not want to specify the controller anywhere, as long as we are on home.example.com:

[code:c#]

routes.Add("DomainRoute", new DomainRoute(
    "home.example.com", // Domain with parameters
    "{action}/{id}",    // URL with parameters
    new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
));

[/code]

Another example where we have our controller in the domain name:

[code:c#]

routes.Add("DomainRoute", new DomainRoute(
    "{controller}.example.com",     // Domain with parameters< br />    "{action}/{id}",    // URL with parameters
    new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
));

[/code]

Want the full controller and action in the domain?

[code:c#]

routes.Add("DomainRoute", new DomainRoute(
    "{controller}-{action}.example.com",     // Domain with parameters
    "{id}",    // URL with parameters
    new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
));

[/code]

Here’s the multicultural route:

[code:c#]

routes.Add("DomainRoute", new DomainRoute(
    "{language}.example.com",     // Domain with parameters
    "{controller}/{action}/{id}",    // URL with parameters
    new { language = "en", controller = "Home", action = "Index", id = "" }  // Parameter defaults
));

[/code]

HtmlHelper extension methods

Since we do not want every URL generated by the HtmlHelper ActionLink helpers to be a full URL, the first thing we’ll add is a set of new ActionLink helpers that take a boolean flag indicating whether you want a full URL or not. Using these, you can add a link to an action as follows:

[code:c#]

<%= Html.ActionLink("About", "About", "Home", true)%>

[/code]

Not too different from what you are used to, no?

Here’s a snippet of code that powers the above line of code:

[code:c#]

public static class LinkExtensions
{
    public static string ActionLink(this HtmlHelper htmlHelper, string linkText, string actionName, string controllerName, bool requireAbsoluteUrl)
    {
        return htmlHelper.ActionLink(linkText, actionName, controllerName, new RouteValueDictionary(), new RouteValueDictionary(), requireAbsoluteUrl);
    }

    // more of these...

    public static string ActionLink(this HtmlHelper htmlHelper, string linkText, string actionName, string controllerName, RouteValueDictionary routeValues, IDictionary<string, object> htmlAttributes, bool requireAbsoluteUrl)
    {
        if (requireAbsoluteUrl)
        {
            HttpContextBase currentContext = new HttpContextWrapper(HttpContext.Current);
            RouteData routeData = RouteTable.Routes.GetRouteData(currentContext);

            routeData.Values["controller"] = controllerName;
            routeData.Values["action"] = actionName;

            DomainRoute domainRoute = routeData.Route as DomainRoute;
            if (domainRoute != null)
            {
                DomainData domainData = domainRoute.GetDomainData(new RequestContext(currentContext, routeData), routeData.Values);
                return htmlHelper.ActionLink(linkText, actionName, controllerName, domainData.Protocol, domainData.HostName, domainData.Fragment, routeData.Values, null);
            }
        }
        return htmlHelper.ActionLink(linkText, actionName, controllerName, routeValues, htmlAttributes);
    }
}

[/code]

Nothing special in here: a lot of extension methods, and some logic to add the domain name into the generated URL. Yes, I’m abusing one of the default ActionLink helpers here, feeding it some data from my DomainRoute class (see: Dark magic).

Dark magic

You may have seen the DomainRoute class in my code snippets from time to time. This class is what actually drives the extraction of the (sub)domain and adds token support to the domain portion of your incoming URLs.

We will be extending the Route base class, which already gives us some properties and methods we don’t want to implement ourselves. There are some, though, that we will define ourselves:

[code:c#]

public class DomainRoute : Route
{

    // ...

    public string Domain { get; set; }

    // ...

    public override RouteData GetRouteData(HttpContextBase httpContext)
    {
        // Build regex
        domainRegex = CreateRegex(Domain);
        pathRegex = CreateRegex(Url);

        // Request information
        string requestDomain = httpContext.Request.Headers["host"];
        if (!string.IsNullOrEmpty(requestDomain))
        {
            if (requestDomain.IndexOf(":") > 0)
            {
                requestDomain = requestDomain.Substring(0, requestDomain.IndexOf(":"));
            }
        }
        else
        {
            requestDomain = httpContext.Request.Url.Host;
        }
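        // Strip the leading "~/" from the app-relative path and append any extra path info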
        string requestPath = httpContext.Request.AppRelativeCurrentExecutionFilePath.Substring(2) + httpContext.Request.PathInfo;

        // Match domain and route
        Match domainMatch = domainRegex.Match(requestDomain);
        Match pathMatch = pathRegex.Match(requestPath);

        // Route data
        RouteData data = null;
        if (domainMatch.Success && pathMatch.Success)
        {
            data = new RouteData(this, RouteHandler);

            // Add defaults first
            if (Defaults != null)
            {
                foreach (KeyValuePair<string, object> item in Defaults)
                {
                    data.Values[item.Key] = item.Value;
                }
            }

            // Iterate matching domain groups
            for (int i = 1; i < domainMatch.Groups.Count; i++)
            {
                Group group = domainMatch.Groups[i];
                if (group.Success)
                {
                    string key = domainRegex.GroupNameFromNumber(i);
                    if (!string.IsNullOrEmpty(key) && !char.IsNumber(key, 0))
                    {
                        if (!string.IsNullOrEmpty(group.Value))
                        {
                            data.Values[key] = group.Value;
                        }
                    }
                }
            }

            // Iterate matching path groups
            for (int i = 1; i < pathMatch.Groups.Count; i++)
            {
                Group group = pathMatch.Groups[i];
                if (group.Success)
                {
                    string key = pathRegex.GroupNameFromNumber(i);
                    if (!string.IsNullOrEmpty(key) && !char.IsNumber(key, 0))
                    {
                        if (!string.IsNullOrEmpty(group.Value))
                        {
                            data.Values[key] = group.Value;
                        }
                    }
                }
            }
        }

        return data;
    }

    public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
    {
        return base.GetVirtualPath(requestContext, RemoveDomainTokens(values));
    }

    public DomainData GetDomainData(RequestContext requestContext, RouteValueDictionary values)
    {
        // Build hostname
        string hostname = Domain;
        foreach (KeyValuePair<string, object> pair in values)
        {
            hostname = hostname.Replace("{" + pair.Key + "}", pair.Value.ToString());
        }

        // Return domain data
        return new DomainData
        {
            Protocol = "http",
            HostName = hostname,
            Fragment = ""
        };
    }

    // ...
}

[/code]

Wow! That’s a bunch of code! What we are doing here is mapping the incoming request URL onto the tokens we defined in our route, at both the domain and the path level. We do this by converting tokens like {controller} into a regex, and matching the named groups into the route values dictionary. There are some other helper methods in our DomainRoute class, but these are the most important ones.
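
For completeness, the CreateRegex() and RemoveDomainTokens() helpers referenced above could be implemented along these lines (a sketch; the exact escaping rules are an assumption based on the token syntax used in the route definitions):

[code:c#]

private Regex CreateRegex(string source)
{
    // Escape the separators and turn every {token} into a named capture group
    source = source.Replace("/", @"\/?");
    source = source.Replace(".", @"\.?");
    source = source.Replace("-", @"\-?");
    source = source.Replace("{", @"(?<");
    source = source.Replace("}", @">([a-zA-Z0-9_]*))");

    return new Regex("^" + source + "$", RegexOptions.IgnoreCase);
}

private RouteValueDictionary RemoveDomainTokens(RouteValueDictionary values)
{
    // Remove route values that are already represented in the domain,
    // so they do not end up in the generated path or query string
    Regex tokenRegex = new Regex(@"{([a-zA-Z0-9_]*)}");
    foreach (Match tokenMatch in tokenRegex.Matches(Domain))
    {
        string key = tokenMatch.Groups[1].Value;
        if (values.ContainsKey(key))
        {
            values.Remove(key);
        }
    }
    return values;
}

[/code]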

Download the full code here: MvcDomainRouting.zip (250.72 kb)

(If you want to try this using the development web server in Visual Studio, make sure to add some fake (sub)domains to your hosts file.)
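
For example, entries like these map the fake domains to localhost (on Windows, the hosts file lives at %SystemRoot%\system32\drivers\etc\hosts):

  • 127.0.0.1 home.example.com
  • 127.0.0.1 store.example.com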

Document Interoperability Workshop, London, May 18 2009

Microsoft building London, Cardinal Place

After a pleasant flight with VLM Airlines (Antwerp – London City) and a trip under half the city of London, I arrived at the Microsoft offices in Victoria for their third (?) DII workshop; I attended a previous edition in Brussels last year.

If you are wondering, “What are you doing there???”, here’s a short intro. I’ve been working on Microsoft interop projects for quite a few years now: PHPExcel, PHPPowerPoint, PHPLinq, PHPAzure, … While working on PHPExcel and PHPPowerPoint, I hit the term “document interoperability” quite a lot. OpenXML (the underlying file format) is well documented, but some work remains in making sure a document generated by any of those tools is fully compatible with the standard. And that’s what these DII workshops are all about.

The previous DII workshop introduced the OpenXML document viewer, which converts DOCX to HTML. Great to see there’s a new version available today; read more on Microsoft’s interop blog.

This blog post gives an overview of my experience during the DII day.

By the way, here’s a cool blog post about interop on an Excel document between PHP, Java and .NET. Nice read!

Validation of OpenXML resources

Some talks on this topic, including one by Alex Brown, introduced what would be needed to make sure a document conforms to the standard. This is quite a complicated topic, because validation should occur at multiple levels: the ZIP package level, relations, XML markup, … Using W3C’s XProc is one possible solution, where a pipeline of different validations on XML can be linked and executed. The cool thing is that it is a non-Microsoft approach to validating documents.

Another problem: there are lots of things that are not captured by an XML schema, for example custom XML data in Word documents. How do you validate those? Schematron is the answer (nice read).

Making sure documents are accessible in the future

Matevz Gacnik gave a great presentation on all the problems involved in making sure that documents stored in a document management system remain accessible in the future. There are technical issues (making sure you do not lose information: keep the text and do not convert everything to TIFF), but there are legal issues as well: the document should be signed, you cannot store alternative copies of a document, …

From legal back to technical: Matevz also showed us some technical implementations of their OpenXML-based document management system (eDMS): cool! They parse content, add extra information using custom XML and bookmarks, … A great showcase of what you can do with OOXML.

Discussion: OpenXML SDK

Next, we had a discussion on the OOXML SDK. Some feel that raw XML markup is clearer and just as verbose as the SDK; others point out that there are people in this world who don’t like XML and want to use code anyway. I’m going with the latter idea. But one point remains: source code working with the SDK is still very verbose, and I don’t like to type a lot. Luckily there’s also the document reflector in the SDK, which writes a lot of code for you based on a document you want generated.

PHPPowerPoint

Thanks to the people at Microsoft, I also had the opportunity to give a short demo of PHPPowerPoint. The demo scenario was quite simple: a short overview of the architecture behind PHPPowerPoint, followed by a demo of the API and what it currently can do.

Community interoperability

Gerd Schürmann from the Fraunhofer Institute gave a talk on their role in document interoperability in Germany and how they advise the government through various R&D and proof-of-concept projects. Their main purpose is to be a neutral mediator in open-source use. To that end, they participate in lots of community projects like SourceForge, BerliOS, … As an example, Gerd showed us a community site demonstrating various scenarios around eID in Germany.

PLANETS and document conversion tools

Wolfgang Keber gave his talk on PLANETS and document conversion tools. PLANETS is a project aimed at preserving your digital assets by making sure they can always be converted into other document formats. There are some subprojects available, for example one that characterises a document: it determines which document format a file is in and, for example, whether tables are used. These characteristics can then be used to convert the document into a required format using any conversion tool available (extensibility!). For example, libraries can use PLANETS to automatically characterise old scanned books in, say, TIFF, and convert them to PDF or OOXML.

Extensibility within Standards

One of the great talks at the DII event was Stephen Peront’s talk on extensibility, targeting a lesser-known part of the OpenXML standard: markup compatibility. Basically, this allows you to embed your own custom XML markup inside OpenXML documents without disturbing the application that opens your document (if done right). The presentation led to a discussion about whether this is a good thing or a bad thing. Some say that extending a standard is creating a new standard, while others agree that this markup-compatibility way of adding extra information to a document is a good thing. My guess is that it really depends on what you are doing. Adding some extra attributes should be fine. Adding extra nested elements embedding OOXML elements embedding more custom tags may be a road you don’t really want to take.

Other coverage

Other coverage on the DII event in London:

Mocking - VISUG session (screencast)

A new screencast has just been uploaded to the MSDN Belgium Chopsticks page. Don't forget to rate the video!

Mocking - VISUG session (screencast)

Abstract: "This session provides an introduction to unit testing using mock objects. It builds a small application using TDD (test driven development). To enable easier unit testing, all dependencies are removed from code and introduced as mock objects. Afterwards, a mocking framework by the name of Moq (mock you) is used to shorten unit tests and create a maintainable set of unit tests for the example application. "

Slides and example code can be found in my previous blog post on this session: Mocking - VISUG session

Announcing PHP SDK for Windows Azure

As part of Microsoft’s commitment to interoperability, a new open-source project has just been released on CodePlex: PHP SDK for Windows Azure, bridging PHP developers to Windows Azure. PHPAzure is an open-source project providing a software development kit for Windows Azure and Windows Azure Storage – Blobs, Tables & Queues. I’m pleased that Microsoft has chosen RealDolmen and me to work on the PHP SDK for Windows Azure.

Windows Azure provides an open, standards-based and interoperable environment with support for multiple internet protocols. This helps reduce the cost of running a mixed IT environment. Azure building-block services use the XML, REST and SOAP standards, so they can be called from other platforms and programming languages. Developers can create their own services and applications that conform to internet standards. Next to the new PHP SDK for Windows Azure, Microsoft also shipped Java and Ruby SDKs for .NET Services, demonstrating how heterogeneous languages and frameworks can take advantage of the interoperable Identity Service (Access Control) and Service Bus using SOAP- and REST-based frameworks.

  • Overview
    • Enables PHP developers to take advantage of the Microsoft Cloud Services Platform  – Windows Azure
    • Provides consistent programming model for Windows Azure Storage (Blobs, Tables & Queues)
  • Features
    • PHP classes for Windows Azure Blobs, Tables & Queues (for CRUD operations)
    • Helper Classes for HTTP transport, AuthN/AuthZ, REST & Error Management
    • Manageability, Instrumentation & Logging support

The logical architecture of PHP SDK for Windows Azure is as follows: it provides access to Windows Azure's storage, computation and management interfaces by abstracting the REST/XML interface Windows Azure provides into a simple PHP API.

(Diagram: logical architecture of the PHP SDK for Windows Azure)

An application built using the PHP SDK for Windows Azure can access Windows Azure’s features, no matter whether it is hosted on the Windows Azure platform or on an on-premise web server.

(Diagram: deployment scenarios)

You can contribute, file feature requests and test your own enhancements to the toolkit by joining the user forum.

Mocking - VISUG session

On Thursday evening, I did a session on mocking for VISUG (the Visual Studio User Group Belgium). As promised, here is the slide deck I used. The session recording will be available online soon; in the meantime, you’ll have to make do with the slide deck.

Demo code can also be downloaded: MockingDemoCode.zip (1.64 mb)

Thank you for attending the session!

More ASP.NET MVC Best Practices

In this post, I’ll share some best practices and guidelines I have come across while developing ASP.NET MVC web applications. I will not cover all the best practices that are already available elsewhere; instead, I’ll add some specific things that have not been mentioned in other blog posts.

Existing best practices can be found on Kazi Manzur Rashid’s blog and Simone Chiaretta’s blog.

After reading the best practices above, read the following best practices.

Use model binders where possible

I assume you are familiar with the concept of model binders. If not, here’s a quick model binder 101: instead of having to write action methods like this (or a variant using FormCollection form["xxxx"]):

[code:c#]

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Save()
{
    // ...

    Person newPerson = new Person();
    newPerson.Name = Request["Name"];
    newPerson.Email = Request["Email"];

    // ...
}

[/code]

You can now write action methods like this:

[code:c#]

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Save(FormCollection form)
{
    // ...

    Person newPerson = new Person();
    if (this.TryUpdateModel(newPerson, form.ToValueProvider()))
    {
        // ...
    }

    // ...
}

[/code]

Or even cleaner:

[code:c#]

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Save(Person newPerson)
{
    // ...
}

[/code]

What’s the point of writing action methods using model binders?

  • Your code is cleaner and less error-prone
  • They are a LOT easier to test (just construct a Person and pass it in)

Be careful when using model binders

I know, I’ve just said you should use model binders. And I still say it, but with a disclaimer: use them wisely! Model binders are extremely powerful, but they can cause severe damage…

Let’s say we have a Person class that has an Id property. Someone posts data to your ASP.NET MVC application and tries to hurt you: that someone also posts an Id form field! Using the following code, you are screwed…

[code:c#]

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Save(Person newPerson)
{
    // ...
}

[/code]

Instead, use blacklisting or whitelisting of properties that should be bound where appropriate:

[code:c#]

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Save([Bind(Prefix="", Exclude="Id")] Person newPerson)
{
    // ...
}

[/code]

Or use whitelisting (safer, but harder to maintain):

[code:c#]

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Save([Bind(Prefix="", Include="Name,Email")] Person newPerson)
{
    // ...
}

[/code]

Yes, that can be ugly code. But…

  • Not being careful may cause harm
  • Setting blacklists or whitelists can help you sleep in peace

Never re-invent the wheel

Never reinvent the wheel. Want to use an IoC container (like Unity or Spring)? Use the controller factories that are available in MvcContrib. Need validation? Check xVal. Need sitemaps? Check MvcSiteMap.

Point is: reinventing the wheel will slow you down if you just need basic functionality. On top of that, it will cause you headaches when something is wrong in your own code. Note that creating your own wheel can be the better option when you need something that would otherwise be hard to achieve with existing projects. This is not a hard guideline; you’ll have to find the right balance between custom code and existing projects for every application you build.

Avoid writing decisions in your view

Well, the title says it all.. Don’t do this in your view:

[code:c#]

<% if (ViewData.Model.User.IsLoggedIn()) { %>
  <p>...</p>
<% } else { %>
  <p>...</p>
<% } %>

[/code]

Instead, do this in your controller:

[code:c#]

public ActionResult Index()
{
    // ...

    if (myModel.User.IsLoggedIn())
    {
        return View("LoggedIn");
    }
    return View("NotLoggedIn");
}

[/code]

Ok, the first example I gave is not that bad if it only contains one paragraph… But if there are many paragraphs and huge chunks of HTML and ASP.NET syntax involved, use the second approach. Really, it can be a PITA to deal with large chunks of markup in an if-then-else structure.

Another option would be to create an HtmlHelper extension method that renders partial view X when a condition is true, and partial view Y when it is false, as sketched below. But still, having this logic in the controller is the best approach.
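
A minimal sketch of such a helper (RenderPartialIf is a made-up name, and System.Web.Mvc.Html is assumed to be imported):

[code:c#]

public static class ConditionalPartialExtensions
{
    // Renders the first partial view when the condition is true,
    // the second one otherwise
    public static void RenderPartialIf(this HtmlHelper htmlHelper, bool condition, string truePartialName, string falsePartialName)
    {
        htmlHelper.RenderPartial(condition ? truePartialName : falsePartialName);
    }
}

[/code]

In a view, this would be used as <% Html.RenderPartialIf(Model.User.IsLoggedIn(), "LoggedIn", "NotLoggedIn"); %>.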

Don’t do lazy loading in your ViewData

I’ve seen this one often, mostly with people using Linq to SQL or Linq to Entities. Sure, you can lazy-load a person’s orders:

[code:c#]

<%=Model.Orders.Count()%>

[/code]

This Count() method will hit your database if Model is something that came straight out of a Linq to SQL data context… Instead, retrieve any value you will need in your view within the controller, and create a model appropriate for that view.

[code:c#]

public ActionResult Index()
{
    // ...

    var p = ...;

    var myModel = new {
        Person = p,
        OrderCount = p.Orders.Count()
    };
    return View(myModel);
}

[/code]

Note: this one is really for illustration purposes only. The point is not to pass a datacontext-bound IQueryable to your view, but to pass a List or similar instead.

And the view for that:

[code:c#]

<%=Model.OrderCount%>

[/code]

Motivation for this is:

  • Accessing your data store in a view means you are actually breaking the MVC design pattern.
  • If you don't care about the above: when you are using a Linq to SQL datacontext, for example, and you've already disposed of it in your controller, your view will throw an error when it tries to access your data store.

Put your controllers on a diet

Controllers should be really thin: they accept an incoming request, dispatch an action to a service or business layer, and eventually respond to the incoming request with the result from that layer, nicely wrapped and translated into a simple view model object.

In short: don’t put business logic in your controller!
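
To illustrate (Order, IOrderService and OrderDetailsViewModel are hypothetical names), a controller on a diet looks something like this:

[code:c#]

public class OrderController : Controller
{
    private readonly IOrderService orderService;

    public OrderController(IOrderService orderService)
    {
        this.orderService = orderService;
    }

    public ActionResult Details(int id)
    {
        // Accept the request and dispatch to the service layer...
        Order order = orderService.GetOrder(id);

        // ...then wrap the result in a simple view model for the view
        return View(new OrderDetailsViewModel(order));
    }
}

[/code]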

Compile your views

Yes, you can do that. Compile your views for any release build you do. This will make sure everything compiles and that your users don’t see an “Error 500” when accessing a view. Of course, errors can still happen, but at least they will not be the view’s fault anymore.

Here’s how you compile your views:

1. Open the project file in a text editor. For example, start Notepad and open the project file for your ASP.NET MVC application (that is, MyMvcApplication.csproj).

2. Find the top-most <PropertyGroup> element and add a new element <MvcBuildViews>:

[code:c#]

<PropertyGroup>
  ...
  <MvcBuildViews>true</MvcBuildViews>
</PropertyGroup>

[/code]

3. Scroll down to the end of the file and uncomment the <Target Name="AfterBuild"> element. Update its contents to match the following:

[code:c#]

<Target Name="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
  <AspNetCompiler VirtualPath="temp"
                  PhysicalPath="$(ProjectDir)\..\$(ProjectName)" />
</Target>

[/code]

4. Save the file and reload the project in Visual Studio.

Enabling view compilation may add some extra time to the build process. It is recommended not to enable this during development, as a lot of compilation typically happens during the development process.
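
One way to get the best of both worlds is to condition the property on the build configuration, so views are only compiled for release builds (a sketch):

[code:c#]

<PropertyGroup Condition="'$(Configuration)'=='Release'">
  <MvcBuildViews>true</MvcBuildViews>
</PropertyGroup>

[/code]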

More best practices

There are some more best practices over at LosTechies.com. These are all a bit more advanced and may cause performance issues on larger projects. An interesting read, but do use them with care.
