Maarten Balliauw {blog}

ASP.NET MVC, Microsoft Azure, PHP, web development ...


TechDays 2010 Portugal slides and demo code

First of all: thank you for attending the sessions Kevin Dockx and I gave at TechDays 2010 Portugal! It's a wonder we made it there at all, what with the ash clouds spreading from the volcanic eruption in Iceland.

Just Another Wordpress Weblog, But More Cloudy

Abstract: “While working together with Microsoft on the Windows Azure SDK for PHP, we found that we needed a popular example application hosted on Microsoft’s Windows Azure. Wordpress was an obvious choice, but not an obvious task. Learn more about Windows Azure, the PHP SDK that we developed, SQL Azure and the problems we faced porting an existing PHP application to Windows Azure.”

I cannot disclose the demo code at this time, sorry. Here’s a list of good resources to get you started though:

PHP and Silverlight

Abstract: “So you have an existing PHP application and would like to spice it up with a rich and attractive front-end. Next to Adobe Flex, you can also choose Silverlight as a solution. This session shows you around in Silverlight and shows that PHP and Silverlight can go together easily.”

Demo code: PHP and Silverlight - DevDays.zip (1.00 mb) (based on Silverlight 2, bug Kevin for a recent version :-))

Using Windows Azure Drive in PHP (or Ruby)

At the JumpIn Camp in Zürich this week, we are trying to get some of the more popular PHP applications running on Windows Azure. As you may know, Windows Azure has different storage options like blobs, tables, queues and drives. There’s the Windows Azure SDK for PHP for most of this, except for drives. Which is normal: drives are at the operating system level and have nothing to do with the REST calls that are used for the other storage types. By the way: I did a post on using Windows Azure Drive (or “XDrive”) a while ago if you want more info.

Unfortunately, .NET code is currently the only way to create and mount these virtual hard drives from Windows Azure. But luckily, IIS7 has an integrated pipeline model, which Windows Azure also uses. Among other things, this means that services provided by managed modules (written in .NET) can be applied to all requests to the server, not just the ones handled by ASP.NET! In other words: you can have .NET code running in the same request pipeline as the FastCGI process running PHP (or Ruby). Which made me think: it should be possible to create and mount a Windows Azure Drive in a .NET HTTP module and pass the drive letter of this thing to PHP through a server variable. And here’s how...

Note: I’ll start with the implementation part first, the usage part comes after that. If you don’t care about the implementation, scroll down...

Download source code and binaries at http://phpazurecontrib.codeplex.com.


Building the Windows Azure Drive HTTP module

Building HTTP modules in .NET is easy! Simply reference the System.Web assembly and create a class implementing IHttpModule:

[code:c#]

public class AzureDriveModule : IHttpModule
{
    void IHttpModule.Dispose()
    {
        throw new NotImplementedException();
    }

    void IHttpModule.Init(HttpApplication context)
    {
        throw new NotImplementedException();
    }
}

[/code]

There’s our skeleton! Now for the implementation… (Note: insane amount of code coming!)

[code:c#]

public class AzureDriveModule : IHttpModule
{
    #region IHttpModule Members

    public void Init(HttpApplication context)
    {
        // Initialize config environment
        CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
        {
            configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
        });

        // Initialize local cache
        CloudDrive.InitializeCache(
            RoleEnvironment.GetLocalResource("cloudDriveCache").RootPath,
            RoleEnvironment.GetLocalResource("cloudDriveCache").MaximumSizeInMegabytes);

        // Determine drives to map
        for (int i = 0; i < 10; i++)
        {
            string driveConnectionString = null;
            try
            {
                driveConnectionString = RoleEnvironment.GetConfigurationSettingValue("CloudDrive" + i);
            }
            catch (RoleEnvironmentException) { }

            if (string.IsNullOrEmpty(driveConnectionString))
            {
                continue;
            }

            string[] driveConnection = driveConnectionString.Split(new char[] { ';' });

            // Create storage account
            CloudStorageAccount storageAccount = CloudStorageAccount.FromConfigurationSetting(driveConnection[0]);
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            // Mount requested drive
            blobClient.GetContainerReference(driveConnection[1]).CreateIfNotExist();

            var drive = storageAccount.CreateCloudDrive(
                blobClient
                    .GetContainerReference(driveConnection[1])
                    .GetPageBlobReference(driveConnection[2])
                    .Uri.ToString()
            );

            try
            {
                drive.Create(int.Parse(driveConnection[3]));
            }
            catch (CloudDriveException ex)
            {
                // handle exception here
                // exception is also thrown if all is well but the drive already exists
            }

            string driveLetter = drive.Mount(
                RoleEnvironment.GetLocalResource("cloudDriveCache").MaximumSizeInMegabytes, DriveMountOptions.None);

            // Add the drive letter to the environment
            Environment.SetEnvironmentVariable("CloudDrive" + i, driveLetter);
        }
    }

    public void Dispose()
    {
    }

    #endregion
}

[/code]

Configuring and using Windows Azure Drive

There are 4 steps involved in using the Windows Azure Drive HTTP module:

  1. Copy the .NET assemblies into your project
  2. Edit ServiceConfiguration.cscfg (and ServiceDefinition.csdef)
  3. Edit Web.config
  4. Use the thing!

The Windows Azure tooling for Eclipse will be used in the following example.

Copy the .NET assemblies into your project

Create a /bin folder in your web role project and copy in all .DLL files provided. Here’s a screenshot of how this looks:

.NET assemblies for XDrive in PHP on Azure

Edit ServiceConfiguration.cscfg (and ServiceDefinition.csdef)

In order to be able to mount drives, some modifications to ServiceConfiguration.cscfg (and ServiceDefinition.csdef) are required. The ServiceDefinition.csdef file should contain the following additional entries:

[code:c#]

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="TestCustomModules" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole" enableNativeCodeExecution="true">
    <ConfigurationSettings>
      <Setting name="CloudDriveConnectionString" />  
      <Setting name="CloudDrive0" />   
    </ConfigurationSettings>
    <InputEndpoints>
      <!-- Must use port 80 for http and port 443 for https when running in the cloud -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <LocalStorage name="cloudDriveCache" sizeInMB="128"/>
  </WebRole>
</ServiceDefinition>

[/code]

Things to note: the cloudDriveCache local storage entry is needed for caching access to the virtual drive, and the configuration settings declared here are given values in ServiceConfiguration.cscfg:

[code:c#]

<?xml version="1.0"?>
<ServiceConfiguration serviceName="TestCustomModules" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <Instances count="1"/>
    <ConfigurationSettings>
      <Setting name="CloudDriveConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="CloudDrive0" value="CloudDriveConnectionString;drives;sampledrive.vhd;64" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

[/code]

The configuration specifies that a cloud drive “CloudDrive0” (up to “CloudDrive9”) should be mounted using the storage account in “CloudDriveConnectionString”, a storage container named “drives” and a virtual hard disk file named “sampledrive.vhd”. Oh, and the drive should be 64 MB in size.
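Mounting a second drive is then just a matter of adding another setting following the same format (a hypothetical CloudDrive1 entry, which would also need to be declared in ServiceDefinition.csdef):

[code:c#]

<Setting name="CloudDrive1" value="CloudDriveConnectionString;drives;seconddrive.vhd;128" />

[/code]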

Edit Web.config

Before the HTTP module is used by IIS7 or Windows Azure, the following should be added to Web.config:

[code:c#]

<modules>
  <add name="AzureDriveModule" type="PhpAzureExtensions.AzureDriveModule, PhpAzureExtensions"/>
</modules>

[/code]

Here’s my complete Web.config:

[code:c#]

<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <!-- DO NOT REMOVE: PHP FastCGI Module Handler -->
    <handlers>
      <clear />
      <add name="PHP via FastCGI"
           path="*.php"
           verb="*"
           modules="FastCgiModule"
           scriptProcessor="%RoleRoot%\approot\php\php-cgi.exe"
           resourceType="Unspecified" />
      <add name="StaticFile" path="*" verb="*" modules="StaticFileModule,DefaultDocumentModule,DirectoryListingModule" resourceType="Either" requireAccess="Read" />
    </handlers>
    <!-- Example WebRole IIS 7 Configuration -->
    <defaultDocument>
      <files>
        <clear />
        <add value="index.php" />
      </files>
    </defaultDocument>

    <modules>
      <add name="AzureDriveModule" type="PhpAzureExtensions.AzureDriveModule, PhpAzureExtensions"/>
    </modules>
  </system.webServer>
</configuration>

[/code]

Use the thing!

Next thing to do is use your virtual Windows Azure Drive. The HTTP module adds an entry to the $_SERVER variable, named after the CloudDrive0-9 settings defined earlier. The following code example stores a file on a virtual Windows Azure Drive and reads it back afterwards:

[code:php]

<?php
file_put_contents($_SERVER['CloudDrive0'] . '\sample.txt', 'Hello World!');

echo file_get_contents($_SERVER['CloudDrive0'] . '\sample.txt');

[/code]

Source code in PHPAzureContrib (CodePlex)

Since there already is a project for PHP and Azure contributions, I decided to add this module to that project. The binaries and source code can be found on http://phpazurecontrib.codeplex.com.

Other possible usages

The approach I demonstrated above may be used for other scenarios as well:

  • Modifying php.ini before PHP runs. The HTTP module runs before the FastCGI process does, which makes it an ideal moment to modify php.ini settings and such.
  • Using .NET authentication modules in PHP, check this site for an example.
  • Downloading updates to PHP automatically if a new version is available and deploying it into your application at runtime. This probably needs some performance tuning, but the trick may work. The same goes for static content and script updates, by the way. Imagine pulling a website dynamically from blob storage and deploying it onto your web role without any hassle…

In short: endless possibilities!


Put your existing application in the cloud!

As promised during my talk, here's the slide deck for "Put your existing application in the cloud!".

Abstract: "Leverage the highly scalable Windows Azure platform and deploy your existing ASP.NET application to a new home in the clouds. This demo filled session will guide you in how to make successful use of Windows Azure’s hosting and storage platform as well as SQL Azure, the relational database in the cloud, by moving an existing ASP.NET application to a higher level."

And here's the live recording:

Thanks for joining TechDays 2010 and my session!

Using FTP to access Windows Azure Blob Storage

A while ago, I did a blog post on creating an external facing Azure Worker Role endpoint, listening for incoming TCP connections. After doing that post, I had the idea of building a Windows Azure FTP server that served as a bridge to blob storage. Lack of time, other things to do, you name it: I did not work on that idea. Until now, that is.

Being a lazy developer, I did not start from scratch: writing an FTP server is something that has been done before, and yes: “Binging” for “C# FTP server” led me to this article on CodeGuru.com. Luckily, the author of the article had abstraction in mind: he did not build his software on top of a real file system, but on top of an abstraction. This meant I would only have to host this thing in a worker role somehow and add some classes working with blobs instead of files. Cool!


Demo of the FTP to Blob Storage bridge

Well, you can try this one yourself actually… But let’s start with a disclaimer: I’m not logging your account details when you log in. Next, I’m not allowing you to transfer more than 10 MB of data per day. If you require more, feel free to contact me and I’ll increase your traffic quota.

Open up your favourite FTP client (like FileZilla), and open up an FTP connection to ftp.cloudapp.net. Don’t forget to use your Windows Azure storage account name as the username and the storage account key as the password. Connect, and you’ll be greeted in a nice way:

Windows Azure Blob FileZilla

The folders you are seeing are your blob storage containers. Feel free to browse your storage account and:

  • Create, remove and rename blob containers.
  • Create, remove and rename folders inside a container. Note that a .placeholder file will be created when doing this.
  • Upload and download blobs.

Feels like regular FTP, right? There’s more though… Using the Windows Azure storage API, you can also choose whether a blob container is private or public. Why not do this using the FTP client? Right-click a blob container, pick “File permissions…” and here you are: the public read permission is the one you can use to control access to a blob container.

Change container permission through FTP
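Under the hood, this maps onto the blob container’s access policy. Here’s a minimal sketch of the equivalent call with the .NET StorageClient library (assuming a CloudStorageAccount named storageAccount; the container name is a placeholder):

[code:c#]

CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");

// allow public read access to the container and its blobs
container.SetPermissions(new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Container
});

[/code]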

Show me the code!

Well… no! I don’t think it’s stable enough to release to the public yet. But what I will do is share some of the struggles I faced while developing it.

Struggle #1: Quotas

As you may have noticed: I’m not allowing data transfers of more than 10 MB per day per storage account. This is not much, but I did not want to end up paying for other people’s traffic that comes through a demo app. However: every command you send, every action you take, generates traffic. I had to choose how this would be logged and persisted.

The strategy used is that all transferred bytes are counted and stored in a cache in the worker role. I created a dedicated thread that monitors this cache and regularly persists the traffic log in blob storage. There is no fixed interval at which this happens; it just happens. I’m not sure yet that this is the best way to do it, but it feels like a good trade-off between the intensity of logging and the cost of an expensive write to blob storage.
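A minimal sketch of that idea (PersistLog is hypothetical; the real implementation may differ):

[code:c#]

// in-memory counter, updated on every data transfer
private static long bytesTransferred = 0;

public static void CountTraffic(int byteCount)
{
    Interlocked.Add(ref bytesTransferred, byteCount);
}

// dedicated thread body: periodically persist the counter to blob storage
private static void MonitorTraffic()
{
    while (true)
    {
        long current = Interlocked.Read(ref bytesTransferred);
        if (current > 0)
        {
            PersistLog(current); // hypothetical: writes the traffic log to a blob
        }
        Thread.Sleep(TimeSpan.FromSeconds(30));
    }
}

[/code]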

Struggle #2: Sleep

This is not a technical struggle. Since I had fun, I dedicated a lot of time to this thing, mainly in fine-tuning, testing, testing with multiple concurrent clients, … I learnt that System.Net has some cool classes, and also that TcpClient instances that are closed should be disposed as well. Otherwise, the socket is not released and no new connections will be accepted after a while. Anyway: it caused a lack of sleep. The solution to this was drinking more coffee, just at the moment where I actually was drinking less coffee for over a month or two. I will have to go to coffee-rehab again…
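The lesson in code form, a minimal sketch (assuming listener is the command-port TcpListener):

[code:c#]

using (TcpClient client = listener.AcceptTcpClient())
{
    // ... handle the FTP conversation on this connection ...
} // disposing releases the underlying socket, so new connections keep being accepted

[/code]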

Struggle #3: FTP PASV mode

I will not assume you know this, because I didn’t know the exact difference either… When a client connects to an FTP server, it has two connections with that server: one on the standard FTP TCP port 21, used for sending commands back and forth, and one on another TCP port, used for transferring data. This second connection can be an active one or a passive one.

The main difference between active and passive FTP lies in the direction of the connection: with active FTP, the FTP server opens a connection to a TCP port on the client, while with passive FTP, the client opens a connection to another TCP port on the server. Here are more details on that:

The Basics

FTP is a TCP based service exclusively. There is no UDP component to FTP. FTP is an unusual service in that it utilizes two ports, a 'data' port and a 'command' port (also known as the control port). Traditionally these are port 21 for the command port and port 20 for the data port. The confusion begins however, when we find that depending on the mode, the data port is not always on port 20.

Active FTP

In active mode FTP the client connects from a random unprivileged port (N > 1023) to the FTP server's command port, port 21. Then, the client starts listening to port N+1 and sends the FTP command PORT N+1 to the FTP server. The server will then connect back to the client's specified data port from its local data port, which is port 20.

From the server-side firewall's standpoint, to support active mode FTP the following communication channels need to be opened:

  • FTP server's port 21 from anywhere (Client initiates connection)
  • FTP server's port 21 to ports > 1023 (Server responds to client's control port)
  • FTP server's port 20 to ports > 1023 (Server initiates data connection to client's data port)
  • FTP server's port 20 from ports > 1023 (Client sends ACKs to server's data port)

When drawn out, the connection appears as follows:

In step 1, the client's command port contacts the server's command port and sends the command PORT 1027. The server then sends an ACK back to the client's command port in step 2. In step 3 the server initiates a connection on its local data port to the data port the client specified earlier. Finally, the client sends an ACK back as shown in step 4.

The main problem with active mode FTP actually falls on the client side. The FTP client doesn't make the actual connection to the data port of the server--it simply tells the server what port it is listening on and the server connects back to the specified port on the client. From the client side firewall this appears to be an outside system initiating a connection to an internal client--something that is usually blocked.

Passive FTP

In order to resolve the issue of the server initiating the connection to the client a different method for FTP connections was developed. This was known as passive mode, or PASV, after the command used by the client to tell the server it is in passive mode.

In passive mode FTP the client initiates both connections to the server, solving the problem of firewalls filtering the incoming data port connection to the client from the server. When opening an FTP connection, the client opens two random unprivileged ports locally (N > 1023 and N+1). The first port contacts the server on port 21, but instead of then issuing a PORT command and allowing the server to connect back to its data port, the client will issue the PASV command. The result of this is that the server then opens a random unprivileged port (P > 1023) and sends the PORT P command back to the client. The client then initiates the connection from port N+1 to port P on the server to transfer data.

From the server-side firewall's standpoint, to support passive mode FTP the following communication channels need to be opened:

  • FTP server's port 21 from anywhere (Client initiates connection)
  • FTP server's port 21 to ports > 1023 (Server responds to client's control port)
  • FTP server's ports > 1023 from anywhere (Client initiates data connection to random port specified by server)
  • FTP server's ports > 1023 to remote ports > 1023 (Server sends ACKs (and data) to client's data port)

When drawn, a passive mode FTP connection looks like this:

In step 1, the client contacts the server on the command port and issues the PASV command. The server then replies in step 2 with PORT 2024, telling the client which port it is listening to for the data connection. In step 3 the client then initiates the data connection from its data port to the specified server data port. Finally, the server sends back an ACK in step 4 to the client's data port.

While passive mode FTP solves many of the problems from the client side, it opens up a whole range of problems on the server side. The biggest issue is the need to allow any remote connection to high numbered ports on the server. Fortunately, many FTP daemons, including the popular WU-FTPD allow the administrator to specify a range of ports which the FTP server will use. See Appendix 1 for more information.

The second issue involves supporting and troubleshooting clients which do (or do not) support passive mode. As an example, the command line FTP utility provided with Solaris does not support passive mode, necessitating a third-party FTP client, such as ncftp.

With the massive popularity of the World Wide Web, many people prefer to use their web browser as an FTP client. Most browsers only support passive mode when accessing ftp:// URLs. This can either be good or bad depending on what the servers and firewalls are configured to support.

(from http://slacksite.com/other/ftp.html)

Clear enough? Good! In order to support passive FTP, the Windows Azure worker role should be listening on more ports than just port 21. After doing some research, I found that most FTP servers allow specifying the passive FTP port range, and opening a range of over 1000 TCP ports is something most FTP servers seem to do. So I tried this on Windows Azure, deployed it and… found out that you can only define a maximum of five public endpoints per deployment.

This led me to re-implement PASV mode, opening a new port on demand from a pool of four defined public endpoints. Again, I deployed this one, but it failed as well: there was too much of a delay in opening a new TcpListener on the fly.
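For reference, such an endpoint pool would look roughly like this in ServiceDefinition.csdef (endpoint names and port numbers are illustrative):

[code:c#]

<InputEndpoints>
  <InputEndpoint name="FtpCommand" protocol="tcp" port="21" />
  <InputEndpoint name="FtpData1" protocol="tcp" port="20" />
  <InputEndpoint name="FtpData2" protocol="tcp" port="1024" />
  <InputEndpoint name="FtpData3" protocol="tcp" port="1025" />
  <InputEndpoint name="FtpData4" protocol="tcp" port="1026" />
</InputEndpoints>

[/code]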

Option three seemed to work: I have a TcpListener open on TCP port 20 all the time and try to dispatch incoming connections immediately. There’s also a downside to this: if users send a lot of PASV requests, there will be a lot of unused connections that may cause the application to crash. So I did a trick here as well: close listening connections after a short delay.
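A rough sketch of that third approach (the hand-off to the pending transfer is hypothetical):

[code:c#]

TcpListener dataListener = new TcpListener(IPAddress.Any, 20);
dataListener.Start();

// wait briefly for the client to connect to the data port
IAsyncResult result = dataListener.BeginAcceptTcpClient(null, null);
if (result.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(10), false))
{
    using (TcpClient dataClient = dataListener.EndAcceptTcpClient(result))
    {
        // ... dispatch the connection to the transfer in progress ...
    }
}

// close the listening connection after a short delay so unused
// PASV listeners do not pile up
dataListener.Stop();

[/code]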

Conclusion

Feel free to use the service, and if you require more than 10 MB of traffic a day, contact me: traffic quotas are specified per storage account, and I may increase yours.


MEF will not get easier, it’s cool as ICE

Over the past few weeks, several people asked me to show them how to use MEF (Managed Extensibility Framework); some of them seemed to have difficulties with the concept. I tried explaining that it will not get easier than it currently is, hence the title of this blog post. MEF is based on three keywords: export, import, compose. Since the first letters of these words can be rearranged into another word, and MEF is cool, here’s a hint on how to remember it: MEF is cool as ICE!


Imagine the following:

You want to construct a shed somewhere in your back yard. There are tools to accomplish that, such as a hammer and a saw. There is also material, such as nails and wooden boards.

Let’s go for this! Here’s a piece of code to build the shed:

[code:c#]

public class Maarten
{
    public void Execute(string command)
    {
        if (command == "build-a-shed")
        {
          List<ITool> tools = new List<ITool>
          {
            new Hammer(),
            new Saw()
          };

          List<IMaterial> material = new List<IMaterial>
          {
            new BoxOfNails(),
            new WoodenBoards()
          };

          BuildAShedCommand task = new BuildAShedCommand(tools, material);
          task.Execute();
        }
    }
}

[/code]

That’s a lot of work, building a shed! Imagine you had someone to do the above for you, someone who gathers the tools spread around your house, goes to the DIY store and gets a box of nails, … This is where MEF comes into play.

Compose

Let’s start with the last component of the MEF paradigm: composition. Let’s not look for tools in the garage (and the attic), let’s not go to the DIY store, let’s “outsource” this task to someone cheap: MEF. Cheap because it will be in .NET 4.0, not because it’s, well, “cheap”. Here’s how the outsourcing would be done:

[code:c#]

public class Maarten
{
    public void Execute(string command)
    {
        if (command == "build-a-shed")
        {
          // Tell MEF to look for stuff in my house, maybe I still have nails and wooden boards as well
          AssemblyCatalog catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
          CompositionContainer container = new CompositionContainer(catalog);

          // Start the job, and ask MEF to find my tools and material
          BuildAShedCommand task = new BuildAShedCommand();
          container.ComposeParts(task);
          task.Execute();
        }
    }
}

[/code]

Cleaner, no? The only thing I have to do is start the job, which is more fun when my tools and material are within reach. The ComposeParts() call figures out where my tools and material are. However, MEF’s stable composition promise means it will only do that if it can find (“satisfy”) all required imports. And MEF will not know all of this automatically: tools and material should be labeled. That’s where the next word comes into play: export.

Export

Export, or the ExportAttribute to be precise, is a marker that tells MEF you want to export the type or property on which the attribute is placed. Really think of it like a label. Let’s label our hammer:

[code:c#]

[Export(typeof(ITool))]
public class Hammer : ITool
{
  // ...
}

[/code]

The same should be done for the saw, the box of nails and the wooden boards. Remember to put a different label color on the tools and the material; otherwise MEF will think that sawing should be done with a box of nails.
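For instance, material carries its own label type, following the same pattern as the hammer:

[code:c#]

[Export(typeof(IMaterial))]
public class BoxOfNails : IMaterial
{
  // ...
}

[/code]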

Import

Of course, MEF can go ahead and gather tools and material, but it will not know what to do with them unless you give it a hint. And that’s where the ImportAttribute (and the ImportManyAttribute) comes in handy. I have to tell MEF that the tools go on one stack and the material on another. Here’s how:

[code:c#]

public class BuildAShedCommand : ICommand
{
  [ImportMany(typeof(ITool))]
  public IEnumerable<ITool> Tools { get; set; }

  [ImportMany(typeof(IMaterial))]
  public IEnumerable<IMaterial> Materials { get; set; }

  // ...
}

[/code]

Conclusion

Easy, no? Of course, MEF can do a lot more than this. For instance, you can specify that a certain import is only valid for exports of a specific type with specific metadata: I can have a small and a large hammer, both ITool. For building a shed, I will require the large hammer though.
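Here’s a minimal sketch of how that could look using export metadata (the LargeHammer class and the “Size” key are made up for this example):

[code:c#]

[Export(typeof(ITool))]
[ExportMetadata("Size", "Large")]
public class LargeHammer : ITool
{
  // ...
}

public class BuildAShedCommand : ICommand
{
    [ImportMany]
    public IEnumerable<Lazy<ITool, IDictionary<string, object>>> Tools { get; set; }

    public ITool FindLargeHammer()
    {
        // pick the export whose metadata says it is the large one
        return Tools.First(t => (string)t.Metadata["Size"] == "Large").Value;
    }
}

[/code]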

Another cool feature is creating your own export provider (example at TheCodeJunkie.com). And if ICE does not make sense, try the Zoo example.


Introducing RealDolmenBlogs.com

Here’s something I would like to share with you. A few months ago, our company (RealDolmen) started a new website, RealDolmenBlogs.com. This site syndicates content from employee blogs, written by people with lots of experience in their respective topics. These guys have lots of knowledge to share, but sometimes their blogs do not get a lot of attention from, well, you. Since we would really love to share employee knowledge, RealDolmenBlogs.com was born.

The following topics are covered:

  • .NET
  • Application Lifecycle Management
  • Architecture
  • ASP.NET
  • Biztalk
  • PHP
  • Sharepoint
  • Silverlight
  • Visual Studio

Make sure to subscribe to the syndicated RSS feed and have quality content delivered to your RSS reader.

The technical side

Since I do not like blog posts that lack a technical touch, and considering that the first few lines of this post are pure marketing in a sense, here’s the technical bit.

RealDolmenBlogs.com is built on Windows Azure and SQL Azure. As a company we believe there is value in cloud computing; in this case we chose cloud computing because the setup costs for the website were very small (pay-per-use) and because we can easily scale up the website if needed.

The software behind the site is a customized version of BlogEngine.NET. It has been extended with a syndication feature, pulling content from employee blogs with a little help from the Argotic syndication framework. Running BlogEngine.NET on Windows Azure is not that hard, especially when you are using SQL Azure as well: the only thing to modify is the connection string to your database and you are done. Well… that is, if you don’t care about images and attachments. We had to modify how BlogEngine.NET handles file uploads and made sure everything is now stored safe and sound in Windows Azure blob storage.
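For completeness, a SQL Azure connection string follows the standard SQL Server format (server name and credentials below are placeholders):

[code:c#]

<connectionStrings>
  <add name="BlogEngine"
       connectionString="Server=tcp:yourserver.database.windows.net;Database=BlogEngine;User ID=youruser@yourserver;Password=yourpassword;Trusted_Connection=False;Encrypt=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>

[/code]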

That being said: enjoy the content my colleagues are sharing, the posts are definitely worth a read!

Sharpy - an ASP.NET MVC view engine based on Smarty

Are you also one of those ASP.NET MVC developers who prefer a view engine other than the default Webforms one? You tried Spark, NHaml, …? If you are familiar with the PHP world as well, chances are you know Smarty, a great engine for creating views that can easily be read and understood by both developers and designers. And here’s the good news: Sharpy provides the same syntax for ASP.NET MVC!

If you want more details on Sharpy, visit Jaco Pretorius’ blog.


A simple example…

Here’s a simple example:

[code:c#]

{master file='~/Views/Shared/Master.sharpy' title='Hello World sample'}

<h1>Blog entries</h1>

{foreach from=$Model item="entry"}
    <tr>
        <td>{$entry.Name|escape}</td>       
        <td align="right">{$entry.Date|date_format:"dd/MMM/yyyy"}</td>       
    </tr>
    <tr>
        <td colspan="2" bgcolor="#dedede">{$entry.Body|escape}</td>
    </tr>
{foreachelse}
    <tr>
        <td colspan="2">No records</td>
    </tr>
{/foreach}

[/code]

The above example first specifies the master page to use. Next, a foreach-loop is executed for each blog post (aliased “entry”) in the $Model. Printing the entry’s body is done using {$entry.Body|escape}. Note the pipe “|” and the word escape after it. These are variable modifiers that can be used to escape content, format dates, …

Extensibility

Sharpy is all about extensibility: every function in a view is actually a plugin of a specific type (there are four types, IInlineFunction, IBlockFunction, IExpressionFunction and IVariableModifier). These plugins are all exposed through MEF. This means that Sharpy will always use any of your custom functions that are exposed through MEF. For example, here’s a custom function named “content”:

[code:c#]

[Export(typeof(IInlineFunction))]
public class Content : IInlineFunction
{
    public string Name
    {
        get { return "content"; }
    }

    public string Evaluate(IDictionary<string, object> attributes, IFunctionEvaluator evaluator)
    {
        // Fetch attributes
        var file = attributes.GetRequiredAttribute<string>("file");

        // Write output
        return evaluator.EvaluateUrl(file);
    }
}

[/code]

Here’s how to use it:

[code:c#]

{content file='~/Content/SomeFile.txt'}

[/code]

Sharpy uses MEF to allow developers to implement their own functions and modifiers.  All the built-in functions are also built using this exact same framework – the same functionality is available to both internal and external functions.

Extensibility is one of the strongest features in Sharpy.  This should allow us to leverage any functionality available in a normal ASP.NET view while maintaining simple views and straightforward markup.

Give it a spin!

Do give Sharpy a spin; you will learn to love it.

Using Windows Azure Drive (aka X-Drive)

With today’s release of the Windows Azure Tools and SDK version 1.1, the Windows Azure Drive feature has also been released. Announced at last year’s PDC as X-Drive, which has nothing to do with a well-known German car manufacturer, this new feature enables a Windows Azure application to use existing NTFS APIs to access a durable drive. This allows a Windows Azure application to mount a page blob as a drive letter, such as X:, and enables easy migration of existing NTFS applications to the cloud.

This blog post will describe the necessary steps to create and/or mount a virtual hard disk on a Windows Azure role instance.


Using Windows Azure Drive

In a new or existing cloud service, make sure you have a LocalStorage definition in ServiceDefinition.csdef. This local storage, defined with the name InstanceDriveCache below, will be used by the Windows Azure Drive API to cache virtual hard disks on the virtual machine that is running, enabling faster access times. Here’s the ServiceDefinition.csdef for my project:

[code:c#]

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="MyWorkerRole" enableNativeCodeExecution="true">
    <LocalResources>
      <LocalStorage name="InstanceDriveCache"
                    cleanOnRoleRecycle="false"
                    sizeInMB="300" />
    </LocalResources>
    <ConfigurationSettings>
      <!-- … -->
   </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>

[/code]

Next, in code, fire up a CloudStorageAccount; I’m using development storage settings here:

[code:c#]

CloudStorageAccount storageAccount = CloudStorageAccount.DevelopmentStorageAccount;

[/code]

After that, the Windows Azure Drive environment has to be initialized. Remember the LocalStorage definition we made earlier? This is where it comes into play:

[code:c#]

LocalResource localCache = RoleEnvironment.GetLocalResource("InstanceDriveCache");
CloudDrive.InitializeCache(localCache.RootPath, localCache.MaximumSizeInMegabytes);

[/code]

Just to be sure, let’s create a blob storage container with any desired name, for instance “drives”:

[code:c#]

CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
blobClient.GetContainerReference("drives").CreateIfNotExist();

[/code]

OK, now that we have that, it’s time to get a reference to a Windows Azure Drive. Here’s how:

[code:c#]

CloudDrive myCloudDrive = storageAccount.CreateCloudDrive(
    blobClient
        .GetContainerReference("drives")
        .GetPageBlobReference("mysupercooldrive.vhd")
        .Uri.ToString()
);

[/code]

Our cloud drive will be stored in a page blob on the “drives” blob container, named “mysupercooldrive.vhd”. Note that when using development settings, the page blob will not be created on development storage. Instead, files will be located at C:\Users\<your.user.name>\AppData\Local\dftmp\wadd\devstoreaccount1.

Next up: making sure our virtual hard disk exists.  Note that this should only be done once in a virtual disk’s lifetime. Let’s create a giant virtual disk of 64 MB:

[code:c#]

try
{
    myCloudDrive.Create(64);
}
catch (CloudDriveException ex)
{
    // handle exception here
    // exception is also thrown if all is well but the drive already exists
}

[/code]

Great, our disk is created. Now let’s mount it, i.e. assign a drive letter to it. The drive letter cannot be chosen; instead, it is returned by the Mount() method. The 25 is the cache size, in megabytes, that will be used on the virtual machine instance. The DriveMountOptions can be None, Force or FixFileSystemErrors.

[code:c#]

string driveLetter = myCloudDrive.Mount(25, DriveMountOptions.None);

[/code]

Great! Do whatever you like with your disk! For example, create some files:

[code:c#]

for (int i = 0; i < 1000; i++)
{
    System.IO.File.WriteAllText(driveLetter + "\\" + i.ToString() + ".txt", "Test");
}

[/code]

One thing left for when the role instance is being shut down: unmounting the disk and making sure all contents are on blob storage again:

[code:c#]

myCloudDrive.Unmount();

[/code]

Now, just for fun: you can also create a snapshot of a Windows Azure Drive by calling the Snapshot() method on it. A new Uri with the snapshot location will be returned.
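That call is a one-liner:

[code:c#]

Uri snapshotUri = myCloudDrive.Snapshot();

[/code]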

Full code sample

The code sample described above looks like this when not going through each line of code separately:

[code:c#]

public override void Run()
{
    CloudStorageAccount storageAccount = CloudStorageAccount.DevelopmentStorageAccount;

    LocalResource localCache = RoleEnvironment.GetLocalResource("InstanceDriveCache");
    CloudDrive.InitializeCache(localCache.RootPath, localCache.MaximumSizeInMegabytes);

    // Just checking: make sure the container exists
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    blobClient.GetContainerReference("drives").CreateIfNotExist();

    // Create cloud drive
    CloudDrive myCloudDrive = storageAccount.CreateCloudDrive(
        blobClient
        .GetContainerReference("drives")
        .GetPageBlobReference("mysupercooldrive.vhd")
        .Uri.ToString()
    );

    try
    {
        myCloudDrive.Create(64);
    }
    catch (CloudDriveException ex)
    {
        // handle exception here
        // exception is also thrown if all is well but the drive already exists
    }

    string driveLetter = myCloudDrive.Mount(25, DriveMountOptions.Force);

    for (int i = 0; i < 1000; i++)
    {
        System.IO.File.WriteAllText(driveLetter + "\\" + i.ToString() + ".txt", "Test");
    }

    myCloudDrive.Unmount();
}

[/code]

Enjoy!


Translating routes (ASP.NET MVC and Webforms)

For one of the first blog posts of the new year, I thought about doing something cool. And being someone working with ASP.NET MVC, I thought about a cool thing related to that: let’s do something with routes! Since System.Web.Routing is not limited to ASP.NET MVC, this post will also play nice with ASP.NET Webforms. But what’s the cool thing? How about… translating route values?

Allow me to explain… I’m tired of seeing URLs like http://www.example.com/en/products and http://www.example.com/nl/products. Or something similar, with query parameters like “?culture=en-US”. Or even worse. Wouldn’t it be nice to have http://www.example.com/products map to the English version of the site and http://www.example.com/producten map to the Dutch version? Easier to remember when giving someone a link, and better for SEO as well.

Of course, we do want both URLs above to map to the ProductsController in our ASP.NET MVC application. We do not want to duplicate logic because of a language change, right? And what’s more: it’s no fun if this means having to switch from <%=Html.ActionLink(…)%> to something else.

Let’s see if we can leverage the routing engine in System.Web.Routing for this…

Want the sample code? Check LocalizedRouteExample.zip (23.25 kb)

Mapping a translated route

First things first: here’s how I see a translated route being mapped in Global.asax.cs:

[code:c#]

routes.MapTranslatedRoute(
    "TranslatedRoute",
    "{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = "" },
    new { controller = translationProvider, action = translationProvider },
    true
);

[/code]

Looks pretty much the same as you would normally map a route, right? There’s only one difference: the new { controller = translationProvider, action = translationProvider } line of code. This line basically tells the routing engine to use the translationProvider object as a provider that can translate a route value. In this case, the same translation provider will handle translating controller names and action names.

Translation providers

The translation provider being used can actually be anything, as long as it conforms to the following contract:

[code:c#]

public interface IRouteValueTranslationProvider
{
    RouteValueTranslation TranslateToRouteValue(string translatedValue, CultureInfo culture);
    RouteValueTranslation TranslateToTranslatedValue(string routeValue, CultureInfo culture);
}

[/code]

This contract provides two method definitions: one for mapping a translated value to a route value (like mapping the Dutch “Thuis” to “Home”), and one doing the opposite.
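To make this concrete, here is a minimal sketch of what a dictionary-based implementation could look like (the actual DictionaryRouteValueTranslationProvider in the sample download may differ):

[code:c#]

public class DictionaryRouteValueTranslationProvider : IRouteValueTranslationProvider
{
    private readonly List<RouteValueTranslation> translations;

    public DictionaryRouteValueTranslationProvider(List<RouteValueTranslation> translations)
    {
        this.translations = translations;
    }

    public RouteValueTranslation TranslateToRouteValue(string translatedValue, CultureInfo culture)
    {
        // look for a matching translation in any culture ("Thuis" -> "Home");
        // the matched entry carries the culture that was detected
        return translations.FirstOrDefault(t =>
                string.Equals(t.TranslatedValue, translatedValue, StringComparison.OrdinalIgnoreCase))
            ?? new RouteValueTranslation(culture, translatedValue, translatedValue);
    }

    public RouteValueTranslation TranslateToTranslatedValue(string routeValue, CultureInfo culture)
    {
        // translate a route value for the requested culture ("Home" -> "Thuis" for nl-NL)
        return translations.FirstOrDefault(t =>
                t.Culture.Equals(culture)
                && string.Equals(t.RouteValue, routeValue, StringComparison.OrdinalIgnoreCase))
            ?? new RouteValueTranslation(culture, routeValue, routeValue);
    }
}

[/code]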

TranslatedRoute

The “core” of this solution is the TranslatedRoute class. It’s basically an overridden implementation of the System.Web.Routing.Route class, using the IRouteValueTranslationProvider for translating a route. As a bonus, it also tries to set the current thread culture to the CultureInfo detected from the route being called. Note that this is just a reasonable guess, not the absolute truth: it will not distinguish nl-NL from nl-BE, for example. Here’s the code:

[code:c#]

public class TranslatedRoute : Route
{
    // ...

    public RouteValueDictionary RouteValueTranslationProviders { get; private set; }

    // ...

    public override RouteData GetRouteData(HttpContextBase httpContext)
    { 
        RouteData routeData = base.GetRouteData(httpContext);
        if (routeData == null) return null;

        // Translate route values
        foreach (KeyValuePair<string, object> pair in this.RouteValueTranslationProviders)
        {
            IRouteValueTranslationProvider translationProvider = pair.Value as IRouteValueTranslationProvider;
            if (translationProvider != null
                && routeData.Values.ContainsKey(pair.Key))
            {
                RouteValueTranslation translation = translationProvider.TranslateToRouteValue(
                    routeData.Values[pair.Key].ToString(),
                    CultureInfo.CurrentCulture);

                routeData.Values[pair.Key] = translation.RouteValue;

                // Store detected culture
                if (routeData.DataTokens[DetectedCultureKey] == null)
                {
                    routeData.DataTokens.Add(DetectedCultureKey, translation.Culture);
                }

                // Set detected culture
                if (this.SetDetectedCulture)
                {
                    System.Threading.Thread.CurrentThread.CurrentCulture = translation.Culture;
                    System.Threading.Thread.CurrentThread.CurrentUICulture = translation.Culture;
                }
            }
        }

        return routeData;
    }

    public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
    {
        RouteValueDictionary translatedValues = values;

        // Translate route values
        foreach (KeyValuePair<string, object> pair in this.RouteValueTranslationProviders)
        {
            IRouteValueTranslationProvider translationProvider = pair.Value as IRouteValueTranslationProvider;
            if (translationProvider != null
                && translatedValues.ContainsKey(pair.Key))
            {
                RouteValueTranslation translation =
                    translationProvider.TranslateToTranslatedValue(
                        translatedValues[pair.Key].ToString(), CultureInfo.CurrentCulture);

                translatedValues[pair.Key] = translation.TranslatedValue;
            }
        }

        return base.GetVirtualPath(requestContext, translatedValues);
    }
}

[/code]

The GetRouteData method finds the corresponding route translation when, for example, “/Thuis/Over” is entered in the URL. The GetVirtualPath method does the opposite and will be used for mapping a call to <%=Html.ActionLink(“About”, “About”, “Home”)%> to a route like “/Thuis/Over” when the current thread culture is nl-NL. This is not rocket science: it simply tries to translate every token in the requested path and updates the route data with it, so the ASP.NET MVC subsystem knows that “Thuis” maps to HomeController.

Tying everything together

We already saw the route mapping in Global.asax.cs earlier in this blog post, but let’s do it again with a sample DictionaryRouteValueTranslationProvider that will be used for translating routes. This one goes in Global.asax.cs:

[code:c#]

public static void RegisterRoutes(RouteCollection routes)
{
    CultureInfo cultureEN = CultureInfo.GetCultureInfo("en-US");
    CultureInfo cultureNL = CultureInfo.GetCultureInfo("nl-NL");
    CultureInfo cultureFR = CultureInfo.GetCultureInfo("fr-FR");

    DictionaryRouteValueTranslationProvider translationProvider = new DictionaryRouteValueTranslationProvider(
        new List<RouteValueTranslation> {
            new RouteValueTranslation(cultureEN, "Home", "Home"),
            new RouteValueTranslation(cultureEN, "About", "About"),
            new RouteValueTranslation(cultureNL, "Home", "Thuis"),
            new RouteValueTranslation(cultureNL, "About", "Over"),
            new RouteValueTranslation(cultureFR, "Home", "Demarrer"),
            new RouteValueTranslation(cultureFR, "About", "Infos")
        }
    );

    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapTranslatedRoute(
        "TranslatedRoute",
        "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = "" },
        new { controller = translationProvider, action = translationProvider },
        true
    );

    routes.MapRoute(
        "Default",      // Route name
        "{controller}/{action}/{id}",   // URL with parameters
        new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
    );

}

[/code]

This is basically it! I can now set the current thread’s culture to, let’s say, fr-FR, and all action links generated by ASP.NET MVC will use French. Easy? Yes! Cool? Yes!
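For example, with the French translations registered above (a quick sketch):

[code:c#]

// switch the current thread to French...
Thread.CurrentThread.CurrentCulture = CultureInfo.GetCultureInfo("fr-FR");
Thread.CurrentThread.CurrentUICulture = CultureInfo.GetCultureInfo("fr-FR");

// ...and a regular action link such as
//     <%=Html.ActionLink("About", "About", "Home")%>
// will now render the translated URL "/Demarrer/Infos"

[/code]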


PHPMEF 0.1.0 released!

A while ago, I did a conceptual blog post on a PHP Managed Extensibility Framework – PHPMEF. Today, I’m proud to announce the first public release of PHPMEF! After PHPExcel, PHPLinq, PHPPowerPoint and the Windows Azure SDK for PHP, PHPMEF is the fifth open-source project I started on interoperability (or conceptual interoperability) between the Microsoft world and the PHP world. Nobel Peace Prize upcoming :-)

What is this thing?

PHPMEF is a PHP port of the .NET Managed Extensibility Framework, allowing easy composition and extensibility in an application using the Inversion of Control principle and 2 easy keywords: @export and @import.

PHPMEF is based on MEF, a .NET library targeting extensibility of projects. It allows you to declaratively extend your application instead of requiring you to do a lot of plumbing. All this is done with three concepts in mind: export, import and compose. PHPMEF uses the same concepts in order to provide these extensibility features.

Show me an example!

Ok, I will. But not here. Head over to http://phpmef.codeplex.com and have a look at the principles and features behind PHPMEF.

Enjoy!