Maarten Balliauw {blog}

ASP.NET, ASP.NET MVC, Windows Azure, PHP, ...


MEF will not get easier, it’s cool as ICE

Over the past few weeks, several people asked me to show them how to use MEF (the Managed Extensibility Framework); some of them seemed to have difficulties with the concept. I tried explaining that it will not get easier than it currently is, hence the title of this blog post. MEF is based on 3 keywords: export, import, compose. Since the first letters of these three words can be combined into a word, and MEF is cool, here's a hint on how to remember it: MEF is cool as ICE!


Imagine the following:

You want to construct a shed somewhere in your back yard. There are tools to accomplish that, such as a hammer and a saw. There is also material, such as nails and wooden boards.

Let’s go for this! Here’s a piece of code to build the shed:

[code:c#]

public class Maarten
{
    public void Execute(string command)
    {
        if (command == "build-a-shed")
        {
          List<ITool> tools = new List<ITool>
          {
            new Hammer(),
            new Saw()
          };

          List<IMaterial> material = new List<IMaterial>
          {
            new BoxOfNails(),
            new WoodenBoards()
          };

          BuildAShedCommand task = new BuildAShedCommand(tools, material);
          task.Execute();
        }
    }
}

[/code]

That’s a lot of work, building a shed! Imagine you had someone to do all of the above for you, someone who gathers your tools spread around the house, goes to the DIY store and gets a box of nails, … This is where MEF comes into play.

Compose

Let’s start with the last component of the MEF paradigm: composition. Let’s not look for tools in the garage (and the attic), let’s not go to the DIY store, let’s “outsource” this task to someone cheap: MEF. Cheap because it will be in .NET 4.0, not because it’s, well, “cheap”. Here’s how the outsourcing would be done:

[code:c#]

public class Maarten
{
    public void Execute(string command)
    {
        if (command == "build-a-shed")
        {
          // Tell MEF to look for stuff in my house, maybe I still have nails and wooden boards as well
          AssemblyCatalog catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
          CompositionContainer container = new CompositionContainer(catalog);

          // Start the job, and ask MEF to find my tools and material
          BuildAShedCommand task = new BuildAShedCommand();
          container.ComposeParts(task);
          task.Execute();
        }
    }
}

[/code]

Cleaner, no? The only thing I have to do is start the job, which is more fun when my tools and material are within reach. The ComposeParts() call figures out where my tools and material are. Note that MEF's stable composition promise means it will only do so if it can find ("satisfy") all required imports. And MEF will not know all of this automatically: tools and material should be labeled. That's where the next word comes into play: export.

Export

Export, or the ExportAttribute to be precise, is a marker telling MEF that you want to export the type or property on which the attribute is placed. Really think of it as a label. Let’s label our hammer:

[code:c#]

[Export(typeof(ITool))]
public class Hammer : ITool
{
  // ...
}

[/code]

The same should be done for the saw, the box of nails and the wooden boards. Remember to put a different label color on the tools and the material, otherwise MEF will think that sawing should be done with a box of nails.

Import

Of course, MEF can go ahead and gather tools and material, but it will not know what to do with them unless you give it a hint. That’s where the ImportAttribute (and the ImportManyAttribute) come in handy. I have to tell MEF that the tools go on one stack and the material on another. Here’s how:

[code:c#]

public class BuildAShedCommand : ICommand
{
  [ImportMany(typeof(ITool))]
  public IEnumerable<ITool> Tools { get; set; }

  [ImportMany(typeof(IMaterial))]
  public IEnumerable<IMaterial> Materials { get; set; }

  // ...
}

[/code]

Conclusion

Easy, no? Of course, MEF can do a lot more than this. For instance, you can specify that a certain import is only valid for exports of a specific type with specific metadata: I can have a small and a large hammer, both ITool, but for building a shed I will require the large one.
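A hedged sketch of what such a metadata-based export could look like (the "Size" metadata key, the class names and the GetLargeHammer() helper are illustrative, not from the post):

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.Linq;

public interface ITool { }

// Hypothetical metadata view; MEF generates an implementation at runtime.
public interface IToolMetadata
{
    string Size { get; }
}

[Export(typeof(ITool))]
[ExportMetadata("Size", "Large")]
public class LargeHammer : ITool { }

[Export(typeof(ITool))]
[ExportMetadata("Size", "Small")]
public class SmallHammer : ITool { }

public class BuildAShedWithMetadata
{
    // Import all hammers lazily, together with their metadata...
    [ImportMany]
    public IEnumerable<Lazy<ITool, IToolMetadata>> Hammers { get; set; }

    // ...and only instantiate the large one when the work starts.
    public ITool GetLargeHammer()
    {
        return Hammers.First(h => h.Metadata.Size == "Large").Value;
    }
}
```

Because the imports are Lazy&lt;T, TMetadata&gt;, MEF can hand us the metadata without instantiating every export first.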

Another cool feature is creating your own export provider (example at TheCodeJunkie.com). And if ICE does not make sense, try the Zoo example.


Introducing RealDolmenBlogs.com

Here’s something I would like to share with you. A few months ago, our company (RealDolmen) started a new website, RealDolmenBlogs.com. This site syndicates content from employee blogs, written by people with lots of experience in their topics. These people have plenty of knowledge to share, but their individual blogs do not always get the attention they deserve. Since we would really love to share employee knowledge, RealDolmenBlogs.com was born.

The following topics are covered:

  • .NET
  • Application Lifecycle Management
  • Architecture
  • ASP.NET
  • Biztalk
  • PHP
  • Sharepoint
  • Silverlight
  • Visual Studio

Make sure to subscribe to the syndicated RSS feed and have quality content delivered to your RSS reader.

The technical side

Since I do not like blog posts that lack a technical touch, and since the first few lines of this post are pure marketing in a sense, here’s the technical bit.

RealDolmenBlogs.com is built on Windows Azure and SQL Azure. As a company we believe there is value in cloud computing; in this case we chose cloud computing because the setup costs for the website were very small (pay-per-use) and because we can easily scale up the website if needed.

The software behind the site is a customized version of BlogEngine.NET. It has been extended with a syndication feature, pulling content from employee blogs with a little help from the Argotic syndication framework. Running BlogEngine.NET on Windows Azure is not that hard, especially when you are using SQL Azure as well: the only thing to modify is the connection string to your database and you are done. Well… that is, if you don’t care about images and attachments. We had to modify how BlogEngine.NET handles file uploads and made sure everything is now stored safe and sound in Windows Azure blob storage.
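As an illustration only (this is not the actual RealDolmenBlogs code; the container and blob names are made up), storing an uploaded file in blob storage with the StorageClient library boils down to something like this:

```csharp
// Sketch: persist an uploaded file to Windows Azure blob storage instead of
// the local file system. Uses development storage for simplicity.
CloudStorageAccount storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

CloudBlobContainer container = blobClient.GetContainerReference("files");
container.CreateIfNotExist();

// uploadedFileContents would be the byte[] received from the upload
CloudBlob blob = container.GetBlobReference("image.png");
blob.UploadByteArray(uploadedFileContents);
```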

That being said: enjoy the content that my colleagues are sharing, posts are definitely worth a read!

Sharpy - an ASP.NET MVC view engine based on Smarty

Are you one of those ASP.NET MVC developers who prefer a view engine other than the default Webforms view engine? Have you tried Spark, NHaml, …? If you are familiar with the PHP world as well, chances are you know Smarty, a great engine for creating views that can easily be read and understood by both developers and designers. And here’s the good news: Sharpy provides the same syntax for ASP.NET MVC!

If you want more details on Sharpy, visit Jaco Pretorius’ blog.


A simple example…

Here’s a simple example:

[code:c#]

{master file='~/Views/Shared/Master.sharpy' title='Hello World sample'}

<h1>Blog entries</h1>

{foreach from=$Model item="entry"}
    <tr>
        <td>{$entry.Name|escape}</td>       
        <td align="right">{$entry.Date|date_format:"dd/MMM/yyyy"}</td>       
    </tr>
    <tr>
        <td colspan="2" bgcolor="#dedede">{$entry.Body|escape}</td>
    </tr>
{foreachelse}
    <tr>
        <td colspan="2">No records</td>
    </tr>
{/foreach}

[/code]

The above example first specifies the master page to use. Next, a foreach-loop is executed for each blog post (aliased “entry”) in the $Model. Printing the entry’s body is done using {$entry.Body|escape}. Note the pipe “|” and the word escape after it. These are variable modifiers that can be used to escape content, format dates, …

Extensibility

Sharpy is all about extensibility: every function in a view is actually a plugin of a specific type (there are four types, IInlineFunction, IBlockFunction, IExpressionFunction and IVariableModifier). These plugins are all exposed through MEF. This means that Sharpy will always use any of your custom functions that are exposed through MEF. For example, here’s a custom function named “content”:

[code:c#]

[Export(typeof(IInlineFunction))]
public class Content : IInlineFunction
{
    public string Name
    {
        get { return "content"; }
    }

    public string Evaluate(IDictionary<string, object> attributes, IFunctionEvaluator evaluator)
    {
        // Fetch attributes
        var file = attributes.GetRequiredAttribute<string>("file");

        // Write output
        return evaluator.EvaluateUrl(file);
    }
}

[/code]

Here’s how to use it:

[code:c#]

{content file='~/Content/SomeFile.txt'}

[/code]

Sharpy uses MEF to allow developers to implement their own functions and modifiers.  All the built-in functions are also built using this exact same framework – the same functionality is available to both internal and external functions.

Extensibility is one of the strongest features in Sharpy.  This should allow us to leverage any functionality available in a normal ASP.NET view while maintaining simple views and straightforward markup.

Give it a spin!

Do give Sharpy a spin, you will learn to love it.

Using Windows Azure Drive (aka X-Drive)

With today’s release of the Windows Azure Tools and SDK version 1.1, the Windows Azure Drive feature has also been released. Announced at last year’s PDC as X-Drive (which has nothing to do with a well-known German car manufacturer), this new feature enables a Windows Azure application to use existing NTFS APIs to access a durable drive. A Windows Azure application can mount a page blob as a drive letter, such as X:, which enables easy migration of existing NTFS applications to the cloud.

This blog post will describe the necessary steps to create and/or mount a virtual hard disk on a Windows Azure role instance.


Using Windows Azure Drive

In a new or existing cloud service, make sure you have a LocalStorage definition in ServiceDefinition.csdef. This local storage, defined with the name InstanceDriveCache below, will be used by the Windows Azure Drive API to cache virtual hard disks on the virtual machine that is running, enabling faster access times. Here’s the ServiceDefinition.csdef for my project:

[code:c#]

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="MyWorkerRole" enableNativeCodeExecution="true">
    <LocalResources>
      <LocalStorage name="InstanceDriveCache"
                    cleanOnRoleRecycle="false"
                    sizeInMB="300" />
    </LocalResources>
    <ConfigurationSettings>
      <!-- … -->
   </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>

[/code]

Next, in code, fire up a CloudStorageAccount; I’m using development storage settings here:

[code:c#]

CloudStorageAccount storageAccount = CloudStorageAccount.DevelopmentStorageAccount;

[/code]

After that, the Windows Azure Drive environment has to be initialized. Remember the LocalStorage definition we made earlier? This is where it comes into play:

[code:c#]

LocalResource localCache = RoleEnvironment.GetLocalResource("InstanceDriveCache");
CloudDrive.InitializeCache(localCache.RootPath, localCache.MaximumSizeInMegabytes);

[/code]

Just to be sure, let’s create a blob storage container with any desired name, for instance “drives”:

[code:c#]

CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
blobClient.GetContainerReference("drives").CreateIfNotExist();

[/code]

OK, now that we have that, it’s time to get a reference to a Windows Azure Drive. Here’s how:

[code:c#]

CloudDrive myCloudDrive = storageAccount.CreateCloudDrive(
    blobClient
        .GetContainerReference("drives")
        .GetPageBlobReference("mysupercooldrive.vhd")
        .Uri.ToString()
);

[/code]

Our cloud drive will be stored in a page blob on the “drives” blob container, named “mysupercooldrive.vhd”. Note that when using development settings, the page blob will not be created on development storage. Instead, files will be located at C:\Users\<your.user.name>\AppData\Local\dftmp\wadd\devstoreaccount1.

Next up: making sure our virtual hard disk exists.  Note that this should only be done once in a virtual disk’s lifetime. Let’s create a giant virtual disk of 64 MB:

[code:c#]

try
{
    myCloudDrive.Create(64);
}
catch (CloudDriveException ex)
{
    // handle exception here
    // exception is also thrown if all is well but the drive already exists
}

[/code]

Great, our disk is created. Now let’s mount it, i.e. assign a drive letter to it. The drive letter cannot be chosen; instead, it is returned by the Mount() method. The 25 is the cache size (in MB) that will be used on the virtual machine instance. The DriveMountOptions can be None, Force and FixFileSystemErrors.

[code:c#]

string driveLetter = myCloudDrive.Mount(25, DriveMountOptions.None);

[/code]

Great! Do whatever you like with your disk! For example, create some files:

[code:c#]

for (int i = 0; i < 1000; i++)
{
    System.IO.File.WriteAllText(driveLetter + "\\" + i.ToString() + ".txt", "Test");
}

[/code]

One thing is left when the role instance is being shut down: unmounting the disk and making sure all contents are on blob storage again:

[code:c#]

myCloudDrive.Unmount();

[/code]

Now, just for fun: you can also create a snapshot from a Windows Azure Drive by calling the Snapshot() method on it. A new Uri with the snapshot location will be returned.
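The snapshot call itself is a one-liner (sketch; myCloudDrive is the drive from the samples above):

```csharp
// Creates a read-only, point-in-time copy of the drive's page blob.
// The returned Uri points at the snapshot location.
Uri snapshotUri = myCloudDrive.Snapshot();
```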

Full code sample

The code sample described above looks like this when not going through each line of code separately:

[code:c#]

public override void Run()
{
    CloudStorageAccount storageAccount = CloudStorageAccount.DevelopmentStorageAccount;

    LocalResource localCache = RoleEnvironment.GetLocalResource("InstanceDriveCache");
    CloudDrive.InitializeCache(localCache.RootPath, localCache.MaximumSizeInMegabytes);

    // Just checking: make sure the container exists
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    blobClient.GetContainerReference("drives").CreateIfNotExist();

    // Create cloud drive
    CloudDrive myCloudDrive = storageAccount.CreateCloudDrive(
        blobClient
        .GetContainerReference("drives")
        .GetPageBlobReference("mysupercooldrive.vhd")
        .Uri.ToString()
    );

    try
    {
        myCloudDrive.Create(64);
    }
    catch (CloudDriveException ex)
    {
        // handle exception here
        // exception is also thrown if all is well but the drive already exists
    }

    string driveLetter = myCloudDrive.Mount(25, DriveMountOptions.Force);

    for (int i = 0; i < 1000; i++)
    {
        System.IO.File.WriteAllText(driveLetter + "\\" + i.ToString() + ".txt", "Test");
    }

    myCloudDrive.Unmount();
}

[/code]

Enjoy!


Translating routes (ASP.NET MVC and Webforms)

For one of the first blog posts of the new year, I thought about doing something cool. And being someone working with ASP.NET MVC, I thought about a cool thing related to that: let’s do something with routes! Since System.Web.Routing is not limited to ASP.NET MVC, this post will also play nice with ASP.NET Webforms. But what’s the cool thing? How about… translating route values?

Allow me to explain… I’m tired of seeing URLs like http://www.example.com/en/products and http://www.example.com/nl/products. Or something similar with query parameters, like “?culture=en-US”. Or even worse. Wouldn’t it be nice to have http://www.example.com/products mapping to the English version of the site and http://www.example.com/producten mapping to the Dutch version? Easier to remember when giving a link to someone, and better for SEO as well.

Of course, we do want both URLs above to map to the ProductsController in our ASP.NET MVC application. We do not want to duplicate logic because of a language change, right? And what’s more: it’s not fun if this would mean having to switch from <%=Html.ActionLink(…)%> to something else because of this.

Let’s see if we can leverage the routing engine in System.Web.Routing for this…

Want the sample code? Check LocalizedRouteExample.zip (23.25 kb)

Mapping a translated route

First things first: here’s how I see a translated route being mapped in Global.asax.cs:

[code:c#]

routes.MapTranslatedRoute(
    "TranslatedRoute",
    "{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = "" },
    new { controller = translationProvider, action = translationProvider },
    true
);

[/code]

Looks pretty much the same as mapping a normal route, right? There’s only one difference: the new { controller = translationProvider, action = translationProvider } line of code. This line tells the routing engine to use the translationProvider object for translating route values. In this case, the same translation provider will handle both controller names and action names.
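MapTranslatedRoute is not part of ASP.NET MVC itself; it is a small extension method over RouteCollection that ships with the sample download. Here is a sketch of how it could be implemented (the exact constructor shape of TranslatedRoute is assumed here):

```csharp
public static class RouteCollectionExtensions
{
    public static Route MapTranslatedRoute(this RouteCollection routes, string name,
        string url, object defaults, object routeValueTranslationProviders, bool setDetectedCulture)
    {
        // Wrap the standard MVC route handler in the TranslatedRoute
        // implementation shown later in this post.
        TranslatedRoute route = new TranslatedRoute(
            url,
            new RouteValueDictionary(defaults),
            new RouteValueDictionary(routeValueTranslationProviders),
            setDetectedCulture,
            new MvcRouteHandler());

        routes.Add(name, route);
        return route;
    }
}
```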

Translation providers

The translation provider being used can actually be anything, as long as it conforms to the following contract:

[code:c#]

public interface IRouteValueTranslationProvider
{
    RouteValueTranslation TranslateToRouteValue(string translatedValue, CultureInfo culture);
    RouteValueTranslation TranslateToTranslatedValue(string routeValue, CultureInfo culture);
}

[/code]

This contract provides two method definitions: one for mapping a translated value to a route value (like mapping the Dutch “Thuis” to “Home”), and one for doing the opposite.
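A DictionaryRouteValueTranslationProvider is used later in this post without its code being shown; here is a hedged, self-contained sketch of how such a provider (and the RouteValueTranslation class it returns) could look. The real implementation ships in the sample download.

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

// Contract from the post, repeated here to keep the sample self-contained.
public interface IRouteValueTranslationProvider
{
    RouteValueTranslation TranslateToRouteValue(string translatedValue, CultureInfo culture);
    RouteValueTranslation TranslateToTranslatedValue(string routeValue, CultureInfo culture);
}

// A culture plus a route value ("Home") and its translation ("Thuis").
public class RouteValueTranslation
{
    public CultureInfo Culture { get; private set; }
    public string RouteValue { get; private set; }
    public string TranslatedValue { get; private set; }

    public RouteValueTranslation(CultureInfo culture, string routeValue, string translatedValue)
    {
        Culture = culture;
        RouteValue = routeValue;
        TranslatedValue = translatedValue;
    }
}

public class DictionaryRouteValueTranslationProvider : IRouteValueTranslationProvider
{
    private readonly List<RouteValueTranslation> translations;

    public DictionaryRouteValueTranslationProvider(List<RouteValueTranslation> translations)
    {
        this.translations = translations;
    }

    public RouteValueTranslation TranslateToRouteValue(string translatedValue, CultureInfo culture)
    {
        // The translated URL token determines the culture, so we match on the
        // token alone and let the caller pick up the translation's culture.
        return translations.FirstOrDefault(t =>
                t.TranslatedValue.Equals(translatedValue, StringComparison.OrdinalIgnoreCase))
            ?? new RouteValueTranslation(culture, translatedValue, translatedValue);
    }

    public RouteValueTranslation TranslateToTranslatedValue(string routeValue, CultureInfo culture)
    {
        // Here the culture is known, so match both culture and route value.
        return translations.FirstOrDefault(t =>
                t.Culture.Equals(culture) &&
                t.RouteValue.Equals(routeValue, StringComparison.OrdinalIgnoreCase))
            ?? new RouteValueTranslation(culture, routeValue, routeValue);
    }
}
```

Unknown tokens fall back to a pass-through translation, so routes without a dictionary entry keep working.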

TranslatedRoute

The “core” of this solution is the TranslatedRoute class. It’s basically an extended implementation of the System.Web.Routing.Route class, using an IRouteValueTranslationProvider for translating a route. As a bonus, it also tries to set the current thread culture to the CultureInfo detected from the route being called. Note that this is just a reasonable guess, not the absolute truth: it will not distinguish nl-NL from nl-BE, for example. Here’s the code:

[code:c#]

public class TranslatedRoute : Route
{
    // ...

    public RouteValueDictionary RouteValueTranslationProviders { get; private set; }

    // ...

    public override RouteData GetRouteData(HttpContextBase httpContext)
    { 
        RouteData routeData = base.GetRouteData(httpContext);
        if (routeData == null) return null;

        // Translate route values
        foreach (KeyValuePair<string, object> pair in this.RouteValueTranslationProviders)
        {
            IRouteValueTranslationProvider translationProvider = pair.Value as IRouteValueTranslationProvider;
            if (translationProvider != null
                && routeData.Values.ContainsKey(pair.Key))
            {
                RouteValueTranslation translation = translationProvider.TranslateToRouteValue(
                    routeData.Values[pair.Key].ToString(),
                    CultureInfo.CurrentCulture);

                routeData.Values[pair.Key] = translation.RouteValue;

                // Store detected culture
                if (routeData.DataTokens[DetectedCultureKey] == null)
                {
                    routeData.DataTokens.Add(DetectedCultureKey, translation.Culture);
                }

                // Set detected culture
                if (this.SetDetectedCulture)
                {
                    System.Threading.Thread.CurrentThread.CurrentCulture = translation.Culture;
                    System.Threading.Thread.CurrentThread.CurrentUICulture = translation.Culture;
                }
            }
        }

        return routeData;
    }

    public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
    {
        RouteValueDictionary translatedValues = values;

        // Translate route values
        foreach (KeyValuePair<string, object> pair in this.RouteValueTranslationProviders)
        {
            IRouteValueTranslationProvider translationProvider = pair.Value as IRouteValueTranslationProvider;
            if (translationProvider != null
                && translatedValues.ContainsKey(pair.Key))
            {
                RouteValueTranslation translation =
                    translationProvider.TranslateToTranslatedValue(
                        translatedValues[pair.Key].ToString(), CultureInfo.CurrentCulture);

                translatedValues[pair.Key] = translation.TranslatedValue;
            }
        }

        return base.GetVirtualPath(requestContext, translatedValues);
    }
}

[/code]

The GetRouteData method finds the corresponding route translation when I enter “/Thuis/Over” as the URL. The GetVirtualPath method does the opposite and is used to map a call like <%=Html.ActionLink("About", "About", "Home")%> to a route like “/Thuis/Over” when the current thread culture is nl-NL. This is not rocket science: it simply tries to translate every token in the requested path and updates the route data so the ASP.NET MVC subsystem knows that “Thuis” maps to HomeController.

Tying everything together

We already saw the route definition in Global.asax.cs earlier in this blog post, but let’s do it again with a sample DictionaryRouteValueTranslationProvider that will be used for translating routes. This goes in Global.asax.cs:

[code:c#]

public static void RegisterRoutes(RouteCollection routes)
{
    CultureInfo cultureEN = CultureInfo.GetCultureInfo("en-US");
    CultureInfo cultureNL = CultureInfo.GetCultureInfo("nl-NL");
    CultureInfo cultureFR = CultureInfo.GetCultureInfo("fr-FR");

    DictionaryRouteValueTranslationProvider translationProvider = new DictionaryRouteValueTranslationProvider(
        new List<RouteValueTranslation> {
            new RouteValueTranslation(cultureEN, "Home", "Home"),
            new RouteValueTranslation(cultureEN, "About", "About"),
            new RouteValueTranslation(cultureNL, "Home", "Thuis"),
            new RouteValueTranslation(cultureNL, "About", "Over"),
            new RouteValueTranslation(cultureFR, "Home", "Demarrer"),
            new RouteValueTranslation(cultureFR, "About", "Infos")
        }
    );

    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapTranslatedRoute(
        "TranslatedRoute",
        "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = "" },
        new { controller = translationProvider, action = translationProvider },
        true
    );

    routes.MapRoute(
        "Default",      // Route name
        "{controller}/{action}/{id}",   // URL with parameters
        new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
    );

}

[/code]

This is basically it! I can now set the current thread’s culture to, say, fr-FR, and all action links generated by ASP.NET MVC will be in French. Easy? Yes! Cool? Yes!


Want the sample code? Check LocalizedRouteExample.zip (23.25 kb)

PHPMEF 0.1.0 released!

A while ago, I did a conceptual blog post on a PHP Managed Extensibility Framework: PHPMEF. Today, I’m proud to announce the first public release of PHPMEF! After PHPExcel, PHPLinq, PHPPowerPoint and the Windows Azure SDK for PHP, PHPMEF is the 5th open-source project I started on (conceptual) interoperability between the Microsoft world and the PHP world. Nobel Peace Prize upcoming :-)

What is this thing?

PHPMEF is a PHP port of the .NET Managed Extensibility Framework, allowing easy composition and extensibility in an application using the Inversion of Control principle and 2 easy keywords: @export and @import.

PHPMEF is based on a .NET library, MEF, targeting extensibility of projects. It allows you to declaratively extend your application instead of doing a lot of plumbing. All this is done with three concepts in mind: export, import and compose. PHPMEF uses the same concepts in order to provide the same extensibility features.

Show me an example!

Ok, I will. But not here. Head over to http://phpmef.codeplex.com and have a look at the principles and features behind PHPMEF.

Enjoy!

Creating an external facing Azure Worker Role endpoint

When Windows Azure was first released, only Web Roles were able to have an externally facing endpoint. Since PDC 2009, Worker Roles can also have an externally facing endpoint, allowing a custom application server to be hosted in a Worker Role. Another option is to run your own WCF service and have it hosted in a Worker Role. Features like load balancing and multiple Worker instances are all available. Let’s see how you can create a simple TCP service that displays the current date and time.

Here’s what I want to see when I connect to my Azure Worker Role using telnet (“telnet efwr.cloudapp.net 1234”):

Telnet Azure Worker Role

Let’s go ahead and build this thing. Example code can be downloaded here: EchoCloud.zip (9.92 kb)


Configuring the external endpoint

Fire up your Visual Studio and create a new Cloud Service, named EchoCloud, with one Worker Role (named EchoWorker). After you complete this, you should have a Windows Azure solution containing one Worker Role. Right-click the worker role and select Properties. Browse to the Endpoints tab and add a new endpoint, like so:

Configuring an external endpoint on a Windows Azure Worker Role

This new endpoint (named EchoEndpoint) listens on an external TCP port with port number 1234. Note that you can also make this an internal endpoint, i.e. an endpoint that can only be reached from within your Windows Azure solution and not from an external PC. This can be useful if you want to host a custom application server in your project and make it available to other Web and Worker Roles in your solution.
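For reference, a sketch of what the resulting endpoint declaration in ServiceDefinition.csdef could look like (the commented-out internal variant is illustrative):

```xml
<WorkerRole name="EchoWorker">
  <Endpoints>
    <!-- Externally reachable TCP endpoint, load balanced across instances -->
    <InputEndpoint name="EchoEndpoint" protocol="tcp" port="1234" />
    <!-- Alternative: only reachable by other roles inside the deployment -->
    <!-- <InternalEndpoint name="EchoInternalEndpoint" protocol="tcp" /> -->
  </Endpoints>
</WorkerRole>
```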

Building the worker role

As you know, a Worker Role (in the WorkerRole.cs file in your newly created solution) consists of 3 methods that can be implemented: OnStart, Run and OnStop. There’s also an event handler RoleEnvironmentChanging available. The method names sort of speak for themselves, but allow me to explain quickly:

  • OnStart() is executed when the Worker Role is starting. Initializations and some checks can be done here.
  • Run() is the method which contains the actual Worker Role logic. The cool stuff goes in here :-)
  • OnStop() can be used to do things that should be done when the Worker Role is stopped.
  • RoleEnvironmentChanging() is the event handler that gets called when the environment changes: a configuration change, extra instances being fired up, … are possible triggers for this.

Our stuff will go in the Run() method. We’ll be creating a new TcpListener which will sit and accept connections. Whenever a connection is available, it will be dispatched on a second thread that will be communicating with the client. Let’s see how we can start the TcpListener:

[code:c#]

public class WorkerRole : RoleEntryPoint
{
    private AutoResetEvent connectionWaitHandle = new AutoResetEvent(false);

    public override void Run()
    {
        TcpListener listener = null;
        try
        {
            listener = new TcpListener(
                RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["EchoEndpoint"].IPEndpoint);
            listener.ExclusiveAddressUse = false;
            listener.Start();
        }
        catch (SocketException)
        {
            Trace.Write("Echo server could not start.", "Error");
            return;
        }

        while (true)
        {
            IAsyncResult result = listener.BeginAcceptTcpClient(HandleAsyncConnection, listener);
            connectionWaitHandle.WaitOne();
        }
    }
}

[/code]

First thing to notice is that the TcpListener is initialized using the IPEndpoint from the current Worker Role instance:

[code:c#]

listener = new TcpListener(
                RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["EchoEndpoint"].IPEndpoint);

[/code]

We could have started the TcpListener with a static configuration telling it to listen on TCP port 1234, but that would not play well with the Windows Azure platform: the actual port an instance listens on is assigned at runtime. Instead, we start the TcpListener using the current IPEndpoint configuration for the endpoint we defined earlier in this blog post. This allows the application to run on the Windows Azure production environment as well as on the development environment available in the Windows Azure SDK. Here’s how it would work if we had multiple Worker Roles hosting this application:

Multiple worker roles running a custom TCP server 

Second thing we are doing is starting the infinite loop that accepts connections and dispatches the connection to the HandleAsyncConnection method that will sit on another thread. This allows for having multiple connections into one Worker Role. Let’s have a look at the HandleAsyncConnection method:

[code:c#]

private void HandleAsyncConnection(IAsyncResult result)
{
    // Accept connection
    TcpListener listener = (TcpListener)result.AsyncState;
    TcpClient client = listener.EndAcceptTcpClient(result);
    connectionWaitHandle.Set();

    // Accepted connection
    Guid clientId = Guid.NewGuid();
    Trace.WriteLine("Accepted connection with ID " + clientId.ToString(), "Information");

    // Setup reader/writer
    NetworkStream netStream = client.GetStream();
    StreamReader reader = new StreamReader(netStream);
    StreamWriter writer = new StreamWriter(netStream);
    writer.AutoFlush = true;

    // Show application
    string input = string.Empty;
    while (input != "9")
    {
        // Show menu
        writer.WriteLine("…");

        input = reader.ReadLine();
        writer.WriteLine();

        // Do something
        if (input == "1")
        {
            writer.WriteLine("Current date: " + DateTime.Now.ToShortDateString());
        }
        else if (input == "2")
        {
            writer.WriteLine("Current time: " + DateTime.Now.ToShortTimeString());
        }

        writer.WriteLine();
    }

    // Done!
    client.Close();
}

[/code]

Code speaks for itself, no? One thing that you may find awkward is the connectionWaitHandle.Set();. In the previous code sample, we did connectionWaitHandle.WaitOne();. This means that we are not accepting any new connection until the current one is up and running. connectionWaitHandle.Set(); signals the original thread to start accepting new connections again.

Running the worker role

When running the application using the development fabric, you can fire up multiple instances. I fired up 4 Worker Roles that provide the simple TCP service that we just created. This means that my application will be load balanced, and every incoming connection will be distributed over these 4 Worker Role instances. Nifty!

Here’s a screenshot of my development fabric with two Worker Roles that I crashed intentionally. The service is still available, thanks to the fabric controller dispatching connections only to available Worker Role instances.

Development fabric with crashed worker roles

Example code

Example code can be downloaded here: EchoCloud.zip (9.92 kb)

Just a quick note: the approach described here can also be used to run a custom WCF host that has other bindings than for example basicHttpBinding.


Ordering fields in ASP.NET MVC 2 templated helpers

Ever worked with the templated helpers provided by ASP.NET MVC 2? Templated helpers provide a way to automatically build UI based on a data model that is marked with attributes defined in the System.ComponentModel.DataAnnotations namespace. For example, a property in the model can be decorated with the attribute [DisplayFormat(DataFormatString = "{0:c}")], and the templated helpers will always render this field formatted as currency.
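As a quick illustration of such an annotated model (the Product class and its properties are made up for this example):

```csharp
using System.ComponentModel.DataAnnotations;

public class Product
{
    public string Name { get; set; }

    // The templated helpers will always render this field formatted as currency.
    [DisplayFormat(DataFormatString = "{0:c}")]
    public decimal Price { get; set; }
}
```

Html.DisplayForModel() would then render Price as currency without any formatting code in the view.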

If you have worked with templated helpers, you must agree: they can be useful! There's one thing, though, that is impossible in the current version: ordering fields.


Take the following class and the rendered form using templated helpers:

ASP.NET MVC EditorForModel()

[code:c#]

public class Person
{
    public string Email { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

[/code]

Nice, but I would rather have the "Email" field displayed third. It would be nice if the field order could be specified using the same attribute-based approach as the System.ComponentModel.DataAnnotations namespace: let's build an attribute for it!

Building the OrderAttribute

Assuming you have already built an attribute once in your life, let’s go over this quickly:

[code:c#]

[global::System.AttributeUsage(AttributeTargets.Property, Inherited = true, AllowMultiple = false)]
public sealed class OrderAttribute : Attribute
{
    readonly int order;

    public OrderAttribute(int order)
    {
        this.order = order;
    }

    public int Order
    {
        get { return order; }
    }
}

[/code]

The OrderAttribute can be applied to any property of a model and takes exactly one parameter: order. This order will be used to sort the fields being rendered. Here's how our Person class might look after applying the OrderAttribute:

[code:c#]

public class Person
{
    [Order(3)]
    public string Email { get; set; }

    [Order(1)]
    public string FirstName { get; set; }

    [Order(2)]
    public string LastName { get; set; }
}

[/code]

Speaks for itself, no? Now, before you stop reading: this will not work yet. The reason is that the default ModelMetadataProvider from the ASP.NET MVC framework, which provides the templated helpers with all the information they need about the model, does not know about this OrderAttribute. Let's see what we can do about that…

Building the OrderedDataAnnotationsModelMetadataProvider

In order for the ASP.NET MVC framework to know and use the OrderAttribute created previously, we’re going to extend the default DataAnnotationsModelMetadataProvider provided with ASP.NET MVC 2. Here’s the code:

[code:c#]

public class OrderedDataAnnotationsModelMetadataProvider : DataAnnotationsModelMetadataProvider
{
    public override IEnumerable<ModelMetadata> GetMetadataForProperties(
        object container, Type containerType)
    {
        SortedDictionary<int, ModelMetadata> returnValue =
            new SortedDictionary<int, ModelMetadata>();

        int key = 20000; // sort order for "unordered" keys

        IEnumerable<ModelMetadata> metadataForProperties =
            base.GetMetadataForProperties(container, containerType);

        foreach (ModelMetadata metadataForProperty in metadataForProperties)
        {
            PropertyInfo property = metadataForProperty.ContainerType.GetProperty(
                metadataForProperty.PropertyName);

            object[] propertyAttributes = property.GetCustomAttributes(
                typeof(OrderAttribute), true);

            if (propertyAttributes.Length > 0)
            {
                OrderAttribute orderAttribute = propertyAttributes[0] as OrderAttribute;
                returnValue.Add(orderAttribute.Order, metadataForProperty);
            }
            else
            {
                returnValue.Add(key++, metadataForProperty);
            }
        }

        return returnValue.Values.AsEnumerable();
    }
}

[/code]

By overriding GetMetadataForProperties, we're hooking into the DataAnnotationsModelMetadataProvider's moment of truth: the method where all properties of the model are returned as ModelMetadata instances. First, we use the ModelMetadata the base class provides. Next, we use a little bit of reflection to get to the OrderAttribute (if specified) and use it to build a SortedDictionary of ModelMetadata. Easy!

Two small caveats: non-decorated properties will always come last in the rendered output, and two properties sharing the same Order value will cause an exception, since SortedDictionary.Add does not allow duplicate keys.

One thing left…

One thing left: registering the OrderedDataAnnotationsModelMetadataProvider with the ModelMetadataProviders infrastructure offered by ASP.NET MVC. Here’s how:

[code:c#]

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    RegisterRoutes(RouteTable.Routes);

    ModelMetadataProviders.Current = new OrderedDataAnnotationsModelMetadataProvider();
}

[/code]

I guess you know this one goes into Global.asax.cs. If all goes according to plan, your rendered view should now look like the following, with the e-mail field third:
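For completeness: nothing changes on the view side. An ASPX view can keep rendering the whole model with the templated helper one-liner, and the custom provider takes care of the ordering behind the scenes (a minimal sketch, assuming a view strongly typed to Person):

[code:c#]

<%= Html.EditorForModel() %>

[/code]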

image


Vote to help me speak at the MIX 2010 conference!

Everybody knows the Microsoft MIX event, right? The one in Las Vegas? The one with all the fancy web-related stuff? Rings a bell? Ok, great. In early December 2009, Microsoft did an open call for speakers, which I answered with some session proposals. Who doesn't want to go to Vegas, right?

The open call proposals have been processed (150+ sessions submitted, wow!) and voting has started. Yes, you can see where this is going: please go ahead and vote for one of the sessions I submitted. Voting ends January 15th, 2010.

Since I could not decide which color of voting banner matched my blog's theme best, I decided to put all three online:

image

Thanks in advance!

Maarten

PS: There's also Elijah Manor, Justin Etheredge, K. Scott Allen, and many others who submitted good looking sessions.


Microsoft Web Development Summit 2009

Being in the US twice in one month (PDC09 and the Web Development Summit) is fun, tiring and rewarding. WDS09 was an invite-only event organized by Microsoft, focusing on interaction between Microsoft and the PHP community. I must say: the event was helpful and interesting for both parties!

  • The Heathman Hotel in Kirkland is a nice hotel!
  • Traveling to the US is far more productive than flying back: I worked on PHPMEF westbound, and crashed (half asleep/half awake) on the eastbound flight…
  • If you just traveled over 26 hours: do NOT go shopping immediately when you arrive home! It’s really frustrating and tiring.
  • Did a session on Windows Azure SDK for PHP, PHPExcel and PHPLinq.
  • Did an interview for the Connected Show.
  • Met a lot of people I knew from Twitter and e-mail, and met a lot of new people, both Microsoft and PHP community. Nice to meet you all!
  • Event focus was on feedback between Microsoft and PHP community, overall I think the dialogue was respectful and open and helpful to both parties.

Standing at the Microsoft logo

This was actually my first time at the WDS which has been around for 5 years already. The Interop team invited me there, and I want to thank them for doing that: it was a great trip, a great event and I got the chance to meet lots of new people.

Attendees were mostly people from the PHP community, like Cal Evans, Rafael Dohms, Chris Cornutt, Romain Bourdon (WAMP server, anyone?), Alison "snipe" Gianotto, … In addition, lots of Microsoft people came by during various sessions. Some of them even reserved the whole week and attended all sessions to make sure they were in the feedback loop all the time.

We’ve seen Microsoft sessions on IIS, Web Platform Installer, Silverlight, SQL Server, Bing, PowerShell (sorry, Scott Hanselman, for disturbing your presentation with a tweet :-)). Interesting sessions with some info I did not know. PHP community sessions were also available: WordPress, Joomla, Drupal, the PHP community perspective, feedback sessions, PHPLinq, PHPExcel, interoperability bridges, … A good mix of content with knowledgeable speakers and good communication between speakers, product groups and the audience. Well done!