Maarten Balliauw {blog}

ASP.NET MVC, Microsoft Azure, PHP, web development ...

Using FTP to access Windows Azure Blob Storage

A while ago, I did a blog post on creating an external-facing Azure Worker Role endpoint, listening for incoming TCP connections. After doing that post, I had the idea of building a Windows Azure FTP server that would serve as a bridge to blob storage. Lack of time, other things to do, you name it: I did not work on that idea. Until now, that is.

Being a lazy developer, I did not start from scratch: writing an FTP server is something that has been done before, and yes: “Binging” for “C# FTP server” led me to this article on CodeGuru.com. Luckily, the author of the article had abstraction in mind: instead of building his software directly on top of a real file system, he coded against an abstraction of one. This meant I would only have to host the thing in a worker role somehow and add some classes that work with blobs instead of files. Cool!

Demo of the FTP to Blob Storage bridge

Well, you can actually try this one yourself… But let’s start with a disclaimer: I’m not logging your account details when you log in. Also, I’m not allowing you to transfer more than 10 MB of data per day. If you need more, feel free to contact me and I’ll increase your traffic quota.

Open up your favourite FTP client (like FileZilla), and open up an FTP connection to ftp.cloudapp.net. Don’t forget to use your Windows Azure storage account name as the username and the storage account key as the password. Connect, and you’ll be greeted in a nice way:

Windows Azure Blob FileZilla

The folders you are seeing are your blob storage containers. Feel free to browse your storage account and:

  • Create, remove and rename blob containers.
  • Create, remove and rename folders inside a container. Note that a .placeholder file will be created when doing this.
  • Upload and download blobs.

Feels like regular FTP, right? There’s more though… Using the Windows Azure storage API, you can also choose whether a blob container is private or public. Why not do this through the FTP client? Right-click a blob container, pick “File permissions…” and there you are: the public read permission is the one you can use to control access to a blob container.
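
Under the hood, that FTP trick maps to a container-level permission in the storage API. For reference, here is what toggling it looks like directly from .NET, as a sketch using the StorageClient library; the account credentials and container name are placeholders:

[code:c#]

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Sketch: flip a blob container between public read access and private.
// The account credentials and container name below are placeholders.
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=xxxxxx;AccountKey=yyyyyy");
CloudBlobContainer container = account.CreateCloudBlobClient()
    .GetContainerReference("mycontainer");

BlobContainerPermissions permissions = new BlobContainerPermissions();
permissions.PublicAccess = BlobContainerPublicAccessType.Container; // or .Off for private
container.SetPermissions(permissions);

[/code]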

Change container permission through FTP

Show me the code!

Well… No! I don’t think it’s stable enough to release to the public yet. But what I will do is share some of the struggles I faced while developing this.

Struggle #1: Quotas

As you may have noticed, I’m not allowing data transfers of more than 10 MB per day per storage account. This is not much, but I did not want to end up paying for other people’s traffic coming through a demo app. However, every command you send and every action you take generates traffic. I had to choose how this would be logged and persisted.

The strategy is that all transferred bytes are counted and stored in a cache in the worker role. A dedicated thread monitors this cache and regularly persists the traffic log to blob storage. There is no fixed interval at which this happens; it just happens. I’m not sure yet this is the best way to do it, but it feels like a good trade-off between the frequency of logging and the cost of an expensive write to blob storage.
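
To give you an idea of that strategy, here is a minimal sketch; this is not the actual code, and TrafficLog.Persist() is a hypothetical helper standing in for the real write to blob storage:

[code:c#]

using System;
using System.Collections.Concurrent;
using System.Threading;

// Minimal sketch: transferred bytes are counted in an in-memory cache and a
// dedicated background thread persists them every so often.
// TrafficLog.Persist() is a hypothetical helper, not part of the real project.
public static class TrafficQuota
{
    private static readonly ConcurrentDictionary<string, long> BytesPerAccount =
        new ConcurrentDictionary<string, long>();

    public static void Record(string storageAccount, long byteCount)
    {
        BytesPerAccount.AddOrUpdate(storageAccount, byteCount, (account, total) => total + byteCount);
    }

    public static void StartMonitor()
    {
        var monitor = new Thread(() =>
        {
            while (true)
            {
                Thread.Sleep(TimeSpan.FromSeconds(30)); // no fixed contract: "it just happens"
                foreach (var entry in BytesPerAccount)
                {
                    TrafficLog.Persist(entry.Key, entry.Value); // the expensive write to blob storage
                }
            }
        }) { IsBackground = true };
        monitor.Start();
    }
}

[/code]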

Struggle #2: Sleep

This is not a technical struggle. Since I had fun, I dedicated a lot of time to this thing, mainly fine-tuning, testing, testing with multiple concurrent clients, … I learnt that System.Net has some cool classes, and also that TcpClient instances that are closed should be disposed as well. Otherwise, the socket is not released and no new connections are accepted after a while. Anyway: it caused a lack of sleep. The solution was drinking more coffee, just at the moment where I had actually been drinking less coffee for a month or two. I will have to go to coffee rehab again…
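
In code, the disposal lesson boils down to something like this (illustrative only):

[code:c#]

using System.Net;
using System.Net.Sockets;

// Wrapping an accepted TcpClient in a using block guarantees it is disposed,
// not just closed, so the underlying socket is released and new connections
// keep being accepted.
TcpListener listener = new TcpListener(IPAddress.Any, 21);
listener.Start();

using (TcpClient client = listener.AcceptTcpClient())
{
    // ... handle the FTP conversation ...
}

[/code]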

Struggle #3: FTP PASV mode

I will not assume you know this, because I didn’t know the exact difference either… When a client connects to an FTP server, it establishes two connections with that server: one on the standard FTP TCP port 21, used for sending commands back and forth, and one on another TCP port, used for transferring data. This second connection can be either active or passive.

The main difference between active and passive FTP lies in the direction of the connection: with active FTP, the FTP server opens a connection to a TCP port on the client, while with passive FTP, the client opens a connection to another TCP port on the server. Here are more details on that:

The Basics

FTP is a TCP based service exclusively. There is no UDP component to FTP. FTP is an unusual service in that it utilizes two ports, a 'data' port and a 'command' port (also known as the control port). Traditionally these are port 21 for the command port and port 20 for the data port. The confusion begins however, when we find that depending on the mode, the data port is not always on port 20.

Active FTP

In active mode FTP the client connects from a random unprivileged port (N > 1023) to the FTP server's command port, port 21. Then, the client starts listening to port N+1 and sends the FTP command PORT N+1 to the FTP server. The server will then connect back to the client's specified data port from its local data port, which is port 20.

From the server-side firewall's standpoint, to support active mode FTP the following communication channels need to be opened:

  • FTP server's port 21 from anywhere (Client initiates connection)
  • FTP server's port 21 to ports > 1023 (Server responds to client's control port)
  • FTP server's port 20 to ports > 1023 (Server initiates data connection to client's data port)
  • FTP server's port 20 from ports > 1023 (Client sends ACKs to server's data port)

When drawn out, the connection appears as follows:

In step 1, the client's command port contacts the server's command port and sends the command PORT 1027. The server then sends an ACK back to the client's command port in step 2. In step 3 the server initiates a connection on its local data port to the data port the client specified earlier. Finally, the client sends an ACK back as shown in step 4.

The main problem with active mode FTP actually falls on the client side. The FTP client doesn't make the actual connection to the data port of the server--it simply tells the server what port it is listening on and the server connects back to the specified port on the client. From the client side firewall this appears to be an outside system initiating a connection to an internal client--something that is usually blocked.

Passive FTP

In order to resolve the issue of the server initiating the connection to the client a different method for FTP connections was developed. This was known as passive mode, or PASV, after the command used by the client to tell the server it is in passive mode.

In passive mode FTP the client initiates both connections to the server, solving the problem of firewalls filtering the incoming data port connection to the client from the server. When opening an FTP connection, the client opens two random unprivileged ports locally (N > 1023 and N+1). The first port contacts the server on port 21, but instead of then issuing a PORT command and allowing the server to connect back to its data port, the client will issue the PASV command. The result of this is that the server then opens a random unprivileged port (P > 1023) and sends the PORT P command back to the client. The client then initiates the connection from port N+1 to port P on the server to transfer data.

From the server-side firewall's standpoint, to support passive mode FTP the following communication channels need to be opened:

  • FTP server's port 21 from anywhere (Client initiates connection)
  • FTP server's port 21 to ports > 1023 (Server responds to client's control port)
  • FTP server's ports > 1023 from anywhere (Client initiates data connection to random port specified by server)
  • FTP server's ports > 1023 to remote ports > 1023 (Server sends ACKs (and data) to client's data port)

When drawn, a passive mode FTP connection looks like this:

In step 1, the client contacts the server on the command port and issues the PASV command. The server then replies in step 2 with PORT 2024, telling the client which port it is listening to for the data connection. In step 3 the client then initiates the data connection from its data port to the specified server data port. Finally, the server sends back an ACK in step 4 to the client's data port.

While passive mode FTP solves many of the problems from the client side, it opens up a whole range of problems on the server side. The biggest issue is the need to allow any remote connection to high numbered ports on the server. Fortunately, many FTP daemons, including the popular WU-FTPD allow the administrator to specify a range of ports which the FTP server will use. See Appendix 1 for more information.

The second issue involves supporting and troubleshooting clients which do (or do not) support passive mode. As an example, the command line FTP utility provided with Solaris does not support passive mode, necessitating a third-party FTP client, such as ncftp.

With the massive popularity of the World Wide Web, many people prefer to use their web browser as an FTP client. Most browsers only support passive mode when accessing ftp:// URLs. This can either be good or bad depending on what the servers and firewalls are configured to support.

(from http://slacksite.com/other/ftp.html)

Clear enough? Good! To support passive FTP, the Windows Azure worker role should listen on more ports than just port 21. After doing some research, I found that most FTP servers allow specifying the passive FTP port range, and opening a range of over 1000 TCP ports is something most FTP servers seem to do as well. Good, I tried this on Windows Azure, deployed it and… found out that you can only define a maximum of 5 public endpoints per deployment.

This led me to re-implementing PASV mode, opening a new port on demand from a pool of four defined public endpoints. Again, I deployed this, but it failed as well: there was too much delay in opening a new TcpListener on the fly.

Option three seemed to work: I keep a TcpListener open on TCP port 20 all the time and try to dispatch incoming connections immediately. There’s a downside to this as well: if users send a lot of PASV requests, a lot of unused connections pile up, which may cause the application to crash. So I added a trick here too: listening connections are closed after a short delay.
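
Here is a sketch of that third option; the structure and timings are illustrative, not the actual implementation:

[code:c#]

using System;
using System.Net;
using System.Net.Sockets;

// One TcpListener stays open on the data port all the time; a pending PASV
// data connection is abandoned when no client shows up within a short delay.
TcpListener dataListener = new TcpListener(IPAddress.Any, 20);
dataListener.Start();

IAsyncResult pendingAccept = dataListener.BeginAcceptTcpClient(asyncResult =>
{
    TcpListener listener = (TcpListener)asyncResult.AsyncState;
    using (TcpClient dataConnection = listener.EndAcceptTcpClient(asyncResult))
    {
        // ... dispatch the data connection to the FTP session that issued PASV ...
    }
}, dataListener);

// The trick: give up on the pending accept after a short delay, so unused
// PASV requests do not pile up and crash the application.
if (!pendingAccept.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(10)))
{
    // ... no data connection arrived in time: abandon this PASV request ...
}

[/code]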

Conclusion

Feel free to use the service, and if you require more than 10 MB of traffic a day, feel free to contact me. I can specify traffic quotas per storage account and may increase yours.

Introducing RealDolmenBlogs.com

Here’s something I would like to share with you. A few months ago, our company (RealDolmen) started a new website, RealDolmenBlogs.com. This site syndicates content from employee blogs, written by people with lots of experience in their respective topics. These guys have lots of knowledge to share, but sometimes their blogs do not get a lot of attention from, well, you. Since we would really love to share employee knowledge, RealDolmenBlogs.com was born.

The following topics are covered:

  • .NET
  • Application Lifecycle Management
  • Architecture
  • ASP.NET
  • BizTalk
  • PHP
  • SharePoint
  • Silverlight
  • Visual Studio

Make sure to subscribe to the syndicated RSS feed and have quality content delivered to your RSS reader.

The technical side

Since I do not like doing blog posts on topics that do not have a technical touch, and considering that the first few lines of this post are pure marketing in a sense, here’s the technical bit.

RealDolmenBlogs.com is built on Windows Azure and SQL Azure. As a company we believe there is value in cloud computing; in this case we chose cloud computing because the setup costs for the website were very small (pay-per-use) and because we can easily scale up the website if needed.

The software behind the site is a customized version of BlogEngine.NET. It has been extended with a syndication feature, pulling content from employee blogs with a little help from the Argotic syndication framework. Running BlogEngine.NET on Windows Azure is not that hard, especially when you are using SQL Azure as well: the only thing to modify is the connection string to your database and you are done. Well… that is, if you don’t care about images and attachments. We had to modify how BlogEngine.NET handles file uploads and made sure everything is now stored safe and sound in Windows Azure blob storage.
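
To give you an idea of the kind of change involved (this is not the actual BlogEngine.NET patch), storing an uploaded file in blob storage with the .NET StorageClient library boils down to something like this; container, blob and file names are placeholders:

[code:c#]

using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Illustrative sketch: persist an uploaded file to blob storage instead of
// the local file system. All names below are placeholders.
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=xxxxxx;AccountKey=yyyyyy");
CloudBlobContainer container = account.CreateCloudBlobClient()
    .GetContainerReference("files");
container.CreateIfNotExist();

CloudBlob blob = container.GetBlobReference("image.png");
using (Stream uploadedFile = File.OpenRead(@"C:\Temp\image.png"))
{
    blob.UploadFromStream(uploadedFile);
}

[/code]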

That being said: enjoy the content that my colleagues are sharing, posts are definitely worth a read!

Translating routes (ASP.NET MVC and Webforms)

For one of the first blog posts of the new year, I thought about doing something cool. And being someone working with ASP.NET MVC, I thought about a cool thing related to that: let’s do something with routes! Since System.Web.Routing is not limited to ASP.NET MVC, this post will also play nice with ASP.NET Webforms. But what’s the cool thing? How about… translating route values?

Allow me to explain… I’m tired of seeing URLs like http://www.example.com/en/products and http://www.example.com/nl/products. Or something similar, with query parameters like “?culture=en-US”. Or even worse. Wouldn’t it be nice to have http://www.example.com/products map to the English version of the site and http://www.example.com/producten map to the Dutch version? Easier to remember when giving a link to someone, and better for SEO as well.

Of course, we do want both URLs above to map to the ProductsController in our ASP.NET MVC application. We do not want to duplicate logic because of a language change, right? And what’s more: it’s not fun if this would mean having to switch from <%=Html.ActionLink(…)%> to something else because of this.

Let’s see if we can leverage the routing engine in System.Web.Routing for this…

Want the sample code? Check LocalizedRouteExample.zip (23.25 kb)

Mapping a translated route

First things first: here’s how I see a translated route being mapped in Global.asax.cs:

[code:c#]

routes.MapTranslatedRoute(
    "TranslatedRoute",
    "{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = "" },
    new { controller = translationProvider, action = translationProvider },
    true
);

[/code]

Looks pretty much the same as a regular route mapping, right? There’s only one difference: the new { controller = translationProvider, action = translationProvider } line of code. This line tells the routing engine to use the translationProvider object as a provider that can translate route values. In this case, the same translation provider handles translating both controller names and action names.
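
MapTranslatedRoute itself is not listed in this post; conceptually it is a small extension method on RouteCollection along these lines (a sketch, with the TranslatedRoute constructor signature assumed):

[code:c#]

using System.Web.Mvc;
using System.Web.Routing;

// Sketch of the MapTranslatedRoute extension method; the TranslatedRoute
// constructor signature is assumed, not taken from the real implementation.
public static class RouteCollectionExtensions
{
    public static Route MapTranslatedRoute(this RouteCollection routes, string name, string url,
        object defaults, object routeValueTranslationProviders, bool setDetectedCulture)
    {
        var route = new TranslatedRoute(
            url,
            new RouteValueDictionary(defaults),
            new RouteValueDictionary(routeValueTranslationProviders),
            setDetectedCulture,
            new MvcRouteHandler());
        routes.Add(name, route);
        return route;
    }
}

[/code]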

Translation providers

The translation provider can actually be anything, as long as it conforms to the following contract:

[code:c#]

public interface IRouteValueTranslationProvider
{
    RouteValueTranslation TranslateToRouteValue(string translatedValue, CultureInfo culture);
    RouteValueTranslation TranslateToTranslatedValue(string routeValue, CultureInfo culture);
}

[/code]

This contract defines two methods: one for mapping a translated value to a route value (like mapping the Dutch “Thuis” to “Home”), and one doing the opposite.
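
The RouteValueTranslation class and the DictionaryRouteValueTranslationProvider used further down are not listed in this post either. Here is a simplified sketch of both; the exact lookup rules of the real implementation may differ:

[code:c#]

using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

// Simplified sketch of the supporting types; lookup rules are assumptions.
public class RouteValueTranslation
{
    public RouteValueTranslation(CultureInfo culture, string routeValue, string translatedValue)
    {
        Culture = culture;
        RouteValue = routeValue;
        TranslatedValue = translatedValue;
    }

    public CultureInfo Culture { get; private set; }
    public string RouteValue { get; private set; }
    public string TranslatedValue { get; private set; }
}

public class DictionaryRouteValueTranslationProvider : IRouteValueTranslationProvider
{
    private readonly List<RouteValueTranslation> translations;

    public DictionaryRouteValueTranslationProvider(List<RouteValueTranslation> translations)
    {
        this.translations = translations;
    }

    // Map a translated value (e.g. "Thuis") back to its route value (e.g. "Home").
    public RouteValueTranslation TranslateToRouteValue(string translatedValue, CultureInfo culture)
    {
        return translations.FirstOrDefault(t =>
                string.Equals(t.TranslatedValue, translatedValue, StringComparison.OrdinalIgnoreCase))
            ?? new RouteValueTranslation(culture, translatedValue, translatedValue);
    }

    // Map a route value (e.g. "Home") to its translation for the given culture.
    public RouteValueTranslation TranslateToTranslatedValue(string routeValue, CultureInfo culture)
    {
        return translations.FirstOrDefault(t =>
                t.Culture.Equals(culture)
                && string.Equals(t.RouteValue, routeValue, StringComparison.OrdinalIgnoreCase))
            ?? new RouteValueTranslation(culture, routeValue, routeValue);
    }
}

[/code]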

TranslatedRoute

The “core” of this solution is the TranslatedRoute class. It’s basically an overridden implementation of the System.Web.Routing.Route class, using an IRouteValueTranslationProvider for translating a route. As a bonus, it also tries to set the current thread culture to the CultureInfo detected from the route being called. Note that this is just a reasonable guess, not a guarantee: it will not distinguish nl-NL from nl-BE, for example. Here’s the code:

[code:c#]

public class TranslatedRoute : Route
{
    // ...

    public RouteValueDictionary RouteValueTranslationProviders { get; private set; }

    // ...

    public override RouteData GetRouteData(HttpContextBase httpContext)
    { 
        RouteData routeData = base.GetRouteData(httpContext);
        if (routeData == null) return null;

        // Translate route values
        foreach (KeyValuePair<string, object> pair in this.RouteValueTranslationProviders)
        {
            IRouteValueTranslationProvider translationProvider = pair.Value as IRouteValueTranslationProvider;
            if (translationProvider != null
                && routeData.Values.ContainsKey(pair.Key))
            {
                RouteValueTranslation translation = translationProvider.TranslateToRouteValue(
                    routeData.Values[pair.Key].ToString(),
                    CultureInfo.CurrentCulture);

                routeData.Values[pair.Key] = translation.RouteValue;

                // Store detected culture
                if (routeData.DataTokens[DetectedCultureKey] == null)
                {
                    routeData.DataTokens.Add(DetectedCultureKey, translation.Culture);
                }

                // Set detected culture
                if (this.SetDetectedCulture)
                {
                    System.Threading.Thread.CurrentThread.CurrentCulture = translation.Culture;
                    System.Threading.Thread.CurrentThread.CurrentUICulture = translation.Culture;
                }
            }
        }

        return routeData;
    }

    public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
    {
        RouteValueDictionary translatedValues = values;

        // Translate route values
        foreach (KeyValuePair<string, object> pair in this.RouteValueTranslationProviders)
        {
            IRouteValueTranslationProvider translationProvider = pair.Value as IRouteValueTranslationProvider;
            if (translationProvider != null
                && translatedValues.ContainsKey(pair.Key))
            {
                RouteValueTranslation translation =
                    translationProvider.TranslateToTranslatedValue(
                        translatedValues[pair.Key].ToString(), CultureInfo.CurrentCulture);

                translatedValues[pair.Key] = translation.TranslatedValue;
            }
        }

        return base.GetVirtualPath(requestContext, translatedValues);
    }
}

[/code]

The GetRouteData method finds the corresponding route translation when, for example, “/Thuis/Over” is entered in the URL. The GetVirtualPath method does the opposite, and is used for mapping a call to <%=Html.ActionLink(“About”, “About”, “Home”)%> to a route like “/Thuis/Over” when the current thread culture is nl-NL. This is not rocket science: it simply tries to translate every token in the requested path and updates the route data with it, so the ASP.NET MVC subsystem knows that “Thuis” maps to HomeController.

Tying everything together

We already mapped the route definition in Global.asax.cs earlier in this blog post, but let’s do it again with the sample DictionaryRouteValueTranslationProvider that will be used for translating routes. This goes in Global.asax.cs:

[code:c#]

public static void RegisterRoutes(RouteCollection routes)
{
    CultureInfo cultureEN = CultureInfo.GetCultureInfo("en-US");
    CultureInfo cultureNL = CultureInfo.GetCultureInfo("nl-NL");
    CultureInfo cultureFR = CultureInfo.GetCultureInfo("fr-FR");

    DictionaryRouteValueTranslationProvider translationProvider = new DictionaryRouteValueTranslationProvider(
        new List<RouteValueTranslation> {
            new RouteValueTranslation(cultureEN, "Home", "Home"),
            new RouteValueTranslation(cultureEN, "About", "About"),
            new RouteValueTranslation(cultureNL, "Home", "Thuis"),
            new RouteValueTranslation(cultureNL, "About", "Over"),
            new RouteValueTranslation(cultureFR, "Home", "Demarrer"),
            new RouteValueTranslation(cultureFR, "About", "Infos")
        }
    );

    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapTranslatedRoute(
        "TranslatedRoute",
        "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = "" },
        new { controller = translationProvider, action = translationProvider },
        true
    );

    routes.MapRoute(
        "Default",      // Route name
        "{controller}/{action}/{id}",   // URL with parameters
        new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
    );

}

[/code]

This is basically it! I can now set the current thread’s culture to, let’s say, fr-FR, and all action links generated by ASP.NET MVC will use the French route values. Easy? Yes! Cool? Yes!
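
For example, switching cultures somewhere in application code (a hypothetical snippet) is all it takes:

[code:c#]

using System.Globalization;
using System.Threading;

// Hypothetical snippet: with the thread culture set to fr-FR, action links
// generated through the TranslatedRoute render URLs like /Demarrer/Infos.
CultureInfo culture = CultureInfo.GetCultureInfo("fr-FR");
Thread.CurrentThread.CurrentCulture = culture;
Thread.CurrentThread.CurrentUICulture = culture;

[/code]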

Want the sample code? Check LocalizedRouteExample.zip (23.25 kb)

PHPMEF 0.1.0 released!

A while ago, I did a conceptual blog post on a PHP Managed Extensibility Framework – PHPMEF. Today, I’m proud to announce the first public release of PHPMEF! After PHPExcel, PHPLinq, PHPPowerPoint and the Windows Azure SDK for PHP, PHPMEF is the fifth open-source project I started on interoperability (or conceptual interoperability) between the Microsoft world and the PHP world. Nobel Peace Prize upcoming :-)

What is this thing?

PHPMEF is a PHP port of the .NET Managed Extensibility Framework, allowing easy composition and extensibility in an application using the Inversion of Control principle and two easy annotations: @export and @import.

PHPMEF is based on MEF, a .NET library targeting extensibility of projects. It allows you to declaratively extend your application instead of requiring you to do a lot of plumbing. All this is done with three concepts in mind: export, import and compose. PHPMEF uses the same concepts to provide these extensibility features.

Show me an example!

Ok, I will. But not here. Head over to http://phpmef.codeplex.com and have a look at the principles and features behind PHPMEF.

Enjoy!

PHP Managed Extensibility Framework – PHPMEF

While sitting in the airplane to the Microsoft Web Developer Summit in Seattle, I was watching some PDC09 sessions on my laptop. During the MEF session, an idea popped up: there is no MEF for PHP! 3500 kilometers after that moment, PHP got its own MEF…

What is MEF about?

MEF is a .NET library targeting extensibility of projects. It allows you to declaratively extend your application instead of requiring you to do a lot of plumbing. All this is done with three concepts in mind: export, import and compose. (Glenn, I stole the previous sentence from your blog.) PHPMEF uses the same concepts to provide these extensibility features.

Let’s start with a story… Imagine you are building a Calculator. Yes, shoot me, this is not a sexy sample. Remember I wrote this one on a plane with snoring people next to me… The Calculator is built of zero or more ICalculationFunction instances. Think command pattern. Here’s what such an interface can look like:

[code:c#]

interface ICalculationFunction
{
    public function execute($a, $b);
}

[/code]

Nothing special yet. Now let’s implement an instance which does sums:

[code:c#]

class Sum implements ICalculationFunction
{
    public function execute($a, $b)
    {
        return $a + $b;
    }
}

[/code]

Now how would you go about using this in the following Calculator class:

[code:c#]

class Calculator
{
    public $CalculationFunctions;
}

[/code]

Yes, you would do plumbing: either instantiating the Sum object and adding it in the Calculator constructor, or something similar. Imagine you also have a Division object, and other calculation functions. How would you go about building this in a maintainable and extensible way? Easy: use exports…

Export

Exports are one of the three fundamentals of PHPMEF. Basically, you can specify that you want class X to be “exported” for extensibility. Let’s export Sum:

[code:c#]

/**
  * @export ICalculationFunction
  */
class Sum implements ICalculationFunction
{
    public function execute($a, $b)
    {
        return $a + $b;
    }
}

[/code]

Sum is exported as Sum by default, but in this case I want PHPMEF to know that it is also exported as ICalculationFunction. We’ll see why in the import part…

Import

Import is the concept PHPMEF uses to know where to instantiate and inject specific objects. Here’s an example:

[code:c#]

class Calculator
{
    /**
      * @import ICalculationFunction
      */
    public $SomeFunction;
}

[/code]

In this case, PHPMEF will simply instantiate the first ICalculationFunction instance it can find and assign it to the Calculator::SomeFunction variable. Now think of our first example: we want different calculation functions in our calculator! Here’s how:

[code:c#]

class Calculator
{
    /**
      *  @import-many ICalculationFunction
      */
    public $CalculationFunctions;
}

[/code]

Easy, no? PHPMEF will ensure that all possible ICalculationFunction instances are added to the Calculator::CalculationFunctions array. Now how is all this being plumbed together? It’s not plumbed! It’s composed!

Compose

Composing matches all exports and imports in a specific application path. How? Easy! Use the PartInitializer!

[code:c#]

// Create new Calculator instance
$calculator = new Calculator();

// Satisfy dynamic imports
$partInitializer = new Microsoft_MEF_PartInitializer();
$partInitializer->satisfyImports($calculator);

[/code]

Easy, no? Ask the PartInitializer to satisfy all imports and you are done!

Advanced usage scenarios

The above sample was used to demonstrate what PHPMEF is all about. I’m sure you can imagine more complex scenarios. Here are some other possibilities…

Single instance exports

By default, PHPMEF instantiates a new object every time an import has to be satisfied. However, imagine you want our Sum class to be re-used: you want PHPMEF to assign the same instance over and over again, no matter where and how often it is imported. Again, no plumbing. Just add a declarative comment:

[code:c#]

/**
  * @export ICalculationFunction
  * @export-metadata singleinstance
  */
class Sum implements ICalculationFunction
{
    public function execute($a, $b)
    {
        return $a + $b;
    }
}

[/code]

Export/import metadata

Imagine you want to work with interfaces as mentioned above, but want to use a specific implementation that has certain metadata defined. Again: easy and no plumbing!

My calculator might look like the following:

[code:c#]

class Calculator
{
    /**
      *  @import-many ICalculationFunction
      */
    public $CalculationFunctions;

    /**
      *  @import ICalculationFunction
      *  @import-metadata CanDoSums
      */
    public $SomethingThatCanDoSums;
}

[/code]

Calculator::SomethingThatCanDoSums is now constrained: I only want to import something that has the metadata “CanDoSums” attached. Here’s how to create such an export:

[code:c#]

/**
  * @export ICalculationFunction
  * @export-metadata CanDoSums
  */
class Sum implements ICalculationFunction
{
    public function execute($a, $b)
    {
        return $a + $b;
    }
}

[/code]

Here’s an answer to a question you may have: yes, multiple metadata definitions are possible and will be used to determine if an export matches an import.

One small note left: you can also ask the PartInitializer for the metadata defined on a class.

[code:c#]

// Create new Calculator instance
$calculator = new Calculator();

// Satisfy dynamic imports
$partInitializer = new Microsoft_MEF_PartInitializer();
$partInitializer->satisfyImports($calculator);

// Get metadata
$metadata = $partInitializer->getMetadataForClass('Sum');

[/code]

Can I get the source?

No, not yet, for a number of reasons. I first want to make this thing a bit more stable, and decide whether all MEF features should be ported. Also, I’m looking for an appropriate name/library to put this in. You may have noticed the Microsoft_* naming, a small hint to the Interop team to incorporate this as another Microsoft library in the PHP world. Yes Vijay, talking to you :-)

Simple API for Cloud Application Services

Zend, in co-operation with IBM, Microsoft, Rackspace, GoGrid and other cloud leaders, today released their Simple API for Cloud Application Services project. The Simple Cloud API project empowers developers to use one interface to interact with the cloud services offered by different vendors. These vendors are all contributing to this open source project, making sure the Simple Cloud API “fits like a glove” on top of their services.

Zend Cloud adapters will be available for services such as:

  • File storage services, including Windows Azure blobs, Rackspace Cloud Files, Nirvanix Storage Delivery Network and Amazon S3
  • Document Storage services, including Windows Azure tables and Amazon SimpleDB
  • Simple queue services, including Amazon SQS and Windows Azure queues

Note that the Simple Cloud API is focused on providing a simple, re-usable interface across different cloud services. This implies that service-specific features will not be available through the Simple Cloud API.

Here’s a quick code sample for the Simple Cloud API. Let’s upload some data and list the items in a Windows Azure Blob Storage container using the Simple Cloud API:

[code:c#]

require_once('Zend/Cloud/Storage/WindowsAzure.php');

// Create an instance
$storage = new Zend_Cloud_Storage_WindowsAzure(
    'zendtest',
    array(
        'host'        => 'blob.core.windows.net',
        'accountname' => 'xxxxxx',
        'accountkey'  => 'yyyyyy'
    )
);

// Create some data and upload it
$item1 = new Zend_Cloud_Storage_Item('Hello World!', array('creator' => 'Maarten'));
$storage->storeItem($item1, 'data/item.txt');

// Now download it!
$item2 = $storage->fetchItem('data/item.txt', array('returntype' => 2));
var_dump($item2);

// List items
var_dump(
    $storage->listItems()
);

[/code]

It’s quite fun to be a part of this kind of thing: I started working for Microsoft on the Windows Azure SDK for PHP, we contributed the same codebase to Zend Framework, and now I’m building the Windows Azure implementations for the Simple Cloud API.

The full press release can be found at the Simple Cloud API website.

ASP.NET MVC MvcSiteMapProvider 1.0 released

Back in March, I blogged about an experimental MvcSiteMap provider I was building. Today, I am proud to announce that it is stable enough to call it version 1.0! Download MvcSiteMapProvider 1.0 over at CodePlex.

Ever since the source code release I did back in March, a lot of new features have been added, such as HtmlHelper extension methods, attributes, dynamic parameters, … I’ll leave most of them up to you to discover, but there are some I want to quickly highlight.

ACL module extensibility

By default, MvcSiteMap will make nodes visible or invisible based on [Authorize] attributes that are placed on controllers or action methods. If you have implemented your own authentication mechanism, this may no longer be the best way to show or hide sitemap nodes. By implementing and registering the IMvcSiteMapAclModule interface, you can now plug in your own visibility logic.

[code:c#]

public interface IMvcSiteMapAclModule
{
    /// <summary>
    /// Determine if a node is accessible for a user
    /// </summary>
    /// <param name="provider">The MvcSiteMapProvider requesting the current method</param>
    /// <param name="context">Current HttpContext</param>
    /// <param name="node">SiteMap node</param>
    /// <returns>True/false if the node is accessible</returns>
    bool IsAccessibleToUser(MvcSiteMapProvider provider, HttpContext context, SiteMapNode node);
}

[/code]
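
As an illustration (this example does not ship with the release), a role-based ACL module that reads a custom “requiredRole” attribute from the sitemap node could look like this:

[code:c#]

using System.Web;

// Hypothetical ACL module: nodes carrying a "requiredRole" custom attribute in
// the sitemap XML are only visible to authenticated users in that role.
public class RoleBasedAclModule : IMvcSiteMapAclModule
{
    public bool IsAccessibleToUser(MvcSiteMapProvider provider, HttpContext context, SiteMapNode node)
    {
        string requiredRole = node["requiredRole"];
        if (string.IsNullOrEmpty(requiredRole))
        {
            return true; // no role required: the node is visible to everyone
        }

        return context.User != null
            && context.User.Identity.IsAuthenticated
            && context.User.IsInRole(requiredRole);
    }
}

[/code]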

Dynamic parameters

Quite often, action methods have parameters that are not really bound to a sitemap node. For instance, take a paging parameter. You may ignore this one safely when determining the active sitemap node: /Products/List?page=1 and /Products/List?page=2 should both have the same menu item highlighted. This is where dynamic parameters come in handy: MvcSiteMap will completely ignore the specified parameters when determining the current node.

[code:c#]

<mvcSiteMapNode title="Products" controller="Products" action="List" isDynamic="true" dynamicParameters="page" />

[/code]

The above sitemap node will always be highlighted, whatever the value of “page” is.

SiteMapTitle action filter attribute

In some situations, you may want to dynamically change SiteMap.CurrentNode.Title in an action method. This can be done manually by setting SiteMap.CurrentNode.Title, or by adding the SiteMapTitle action filter attribute.

Imagine you are building a blog and want to use the Blog’s Headline property as the site map node title. You can use the following snippet:

[code:c#]

[SiteMapTitle("Headline")]
public ViewResult Show(int blogId) {
   var blog = _repository.Find(blogId);
   return View(blog);
}

[/code]

You can also use a non-strongly-typed ViewData value as the site map node title:

[code:c#]

[SiteMapTitle("SomeKey")]
public ViewResult Show(int blogId) {
   ViewData["SomeKey"] = "This will be the title";

   var blog = _repository.Find(blogId);
   return View(blog);
}

[/code]

HtmlHelper extension methods

MvcSiteMap provides different HtmlHelper extension methods which you can use to generate SiteMap-specific HTML code on your ASP.NET MVC views. Here's a list of available HtmlHelper extension methods.

  • HtmlHelper.Menu() - Can be used to generate a menu
  • HtmlHelper.SiteMap() - Can be used to generate a list of all pages in your sitemap
  • HtmlHelper.SiteMapPath() - Can be used to generate a so-called "breadcrumb trail"

The MvcSiteMap release can be found on CodePlex.

SQL Azure Manager

A few days ago, the SQL Server team announced the availability of three major CTPs and several new upcoming projects in the SQL-related family tree: SQL Server 2008 R2, SQL Server StreamInsight and SQL Azure. Now that last one is interesting: Microsoft will offer a 1GB or 10GB database “in the cloud” for a good price.

Currently, SQL Azure is in CTP and will undergo some more development. Of course, I wanted to play with this, but… connecting to the thing using SQL Server Management Studio is not the most intuitive and straightforward task; it’s more of a workaround. Juliën Hanssens, a colleague of mine, was going crazy over this. Being a good colleague, I poured some tea into the guy and he came up with the SQL Azure Manager: a community effort to quickly enable connecting to your SQL Azure database(s) and performing basic tasks.

SQL Azure Manager

And it does just that. Note that it is a first conceptual release. And that it is still a bit unstable. But it does the trick. At least at a bare minimum. And for the time being that is enough. Want to play with it? Check Juliën’s ClickOnce page!

Note that this thing will become open-source in the future, after he finds a good WF designer to do the main UI. Want to help him? Use the submit button!

Shared Access Signatures and PHP SDK for Windows Azure

The latest Windows Azure storage release featured a new concept: “Shared Access Signatures”. The idea is that you can create signatures for specific resources in blob storage, providing more granular access than the default all-or-nothing approach taken by Azure blob storage. Steve Marx posted a sample on this, demonstrating how you can provide read access to a blob for a specified number of minutes, after which the access is revoked.

The PHP SDK for Windows Azure is now equipped with a credentials mechanism based on Shared Access Signatures. Let’s see if we can demonstrate how this works…

A quick start…

Let’s take Steve’s Wazdrop sample and upload a few files; we get back a set of signed URLs:

https://wazdrop.blob.core.windows.net/files/7bf9417f-c405-4042-8f99-801acb1ea494?st=2009-08-17T08%3A52%3A48Z&se=2009-08-17T09%3A52%3A48Z&sr=b&sp=r&sig=Zcngfaq60OXtLxcsTjmPXUL9Q4Rj3zTPmW40eARVYxU%3D

https://wazdrop.blob.core.windows.net/files/d30769f6-35b9-4337-8c34-014ff590b18f?st=2009-08-17T08%3A54%3A19Z&se=2009-08-17T09%3A54%3A19Z&sr=b&sp=r&sig=Mm8CnmI3XXVbJ6y0FN9WfAOknVySfsF5jIA55drZ6MQ%3D

If we take a detailed look, the Azure account name used is “wazdrop”, and we have access to two files in Steve’s storage account, namely “7bf9417f-c405-4042-8f99-801acb1ea494” and “d30769f6-35b9-4337-8c34-014ff590b18f” in the “files” container.

Great! But if I want to use the PHP SDK for Windows Azure to access the resources above, how would I do that? Well, that should not be difficult. Instantiate a new Microsoft_Azure_Storage_Blob client and pass it a new Microsoft_Azure_SharedAccessSignatureCredentials instance:

[code:c#]

$storageClient = new Microsoft_Azure_Storage_Blob('blob.core.windows.net', 'wazdrop', '');
$storageClient->setCredentials(
    new Microsoft_Azure_SharedAccessSignatureCredentials('wazdrop', '')
);

[/code]

One thing to notice here: we do know the storage account name (“wazdrop”), but not Steve’s shared key for that storage account. Which is good for him; otherwise I would be able to manage all containers and blobs in his account.

With only the code above, every action I invoke will fail: every getBlob(), putBlob(), createContainer(), … fails because I cannot authenticate. Fortunately, Steve’s application provided me with two URLs that I can use to read two blobs. Let’s set these as permissions on our storage client:

[code:c#]

$storageClient->getCredentials()->setPermissionSet(array(
    'https://wazdrop.blob.core.windows.net/files/7bf9417f-c405-4042-8f99-801acb1ea494?st=2009-08-17T08%3A52%3A48Z&se=2009-08-17T09%3A52%3A48Z&sr=b&sp=r&sig=Zcngfaq60OXtLxcsTjmPXUL9Q4Rj3zTPmW40eARVYxU%3D',
    'https://wazdrop.blob.core.windows.net/files/d30769f6-35b9-4337-8c34-014ff590b18f?st=2009-08-17T08%3A54%3A19Z&se=2009-08-17T09%3A54%3A19Z&sr=b&sp=r&sig=Mm8CnmI3XXVbJ6y0FN9WfAOknVySfsF5jIA55drZ6MQ%3D'
));

[/code]

We have now instructed the PHP SDK for Windows Azure that we have read permissions on two blobs, and we can use the regular API calls to retrieve them:

[code:c#]

$storageClient->getBlob('files', '7bf9417f-c405-4042-8f99-801acb1ea494', 'C:\downloadedfile1.txt');
$storageClient->getBlob('files', 'd30769f6-35b9-4337-8c34-014ff590b18f', 'C:\downloadedfile2.txt');

[/code]

The PHP SDK for Windows Azure will now take care of checking if a permission URL matches the call that is being made, and inject the signatures automatically.

A bit more advanced…

The above sample demonstrated how Shared Access Signatures are implemented in the PHP SDK for Windows Azure, but it did not yet demonstrate all the “coolness”. Let’s say the owner of a storage account named “phpstorage” has a private container named “phpazuretestshared1”, and that this owner wants to allow you to put some blobs in this container. Since the owner does not want to give you full access, nor wants to make the container public, he issues a Shared Access Signature:

http://phpstorage.blob.core.windows.net/phpazuretestshared1?st=2009-08-17T09%3A06%3A17Z&se=2009-08-17T09%3A56%3A17Z&sr=c&sp=w&sig=hscQ7Su1nqd91OfMTwTkxabhJSaspx%2BD%2Fz8UqZAgn9s%3D

This one allows us to write in the container “phpazuretestshared1” on the account “phpstorage”. Now let’s see if we can put some blobs in there!

[code:c#]

$storageClient = new Microsoft_Azure_Storage_Blob('blob.core.windows.net', 'phpstorage', '');
$storageClient->setCredentials(
    new Microsoft_Azure_SharedAccessSignatureCredentials('phpstorage', '')
);

$storageClient->getCredentials()->setPermissionSet(array(
    'http://phpstorage.blob.core.windows.net/phpazuretestshared1?st=2009-08-17T09%3A06%3A17Z&se=2009-08-17T09%3A56%3A17Z&sr=c&sp=w&sig=hscQ7Su1nqd91OfMTwTkxabhJSaspx%2BD%2Fz8UqZAgn9s%3D'
));

$storageClient->putBlob('phpazuretestshared1', 'NewBlob.txt', 'C:\Files\dataforazure.txt');

[/code]

Did you see what happened? We did not specify an explicit permission to write to a specific blob. Instead, the PHP SDK for Windows Azure determined that a permission was required to either write to that specific blob, or to write to its container. Since we only had a signature for the latter, it chose those credentials to perform the request on Windows Azure blob storage.

Query the cloud with PHP (PHPLinq and Windows Azure)

I’m pleased to announce that PHPLinq now supports basic querying of Windows Azure Table Storage. PHPLinq is a class library for PHP, based on the idea of Microsoft’s LINQ technology. LINQ is short for Language Integrated Query, a component in the .NET framework which enables you to perform queries on a variety of data sources like arrays, XML, SQL Server, … These queries are defined using a syntax very similar to SQL.

Next to querying arrays, XML and objects, which was already supported, PHPLinq now enables you to query Windows Azure Table Storage in the same manner as you would query a list of employees: simply pass PHPLinq a Table Storage client, a table name and an entity class as a storage hint in the in() method:

[code:c#]

$result = from('$employee')->in( array($storageClient, 'employees', 'AzureEmployee') )
            ->where('$employee => $employee->Name == "Maarten"')
            ->select('$employee');

[/code]

The Windows Azure Table Storage layer is provided by Microsoft’s PHP SDK for Windows Azure and leveraged by PHPLinq to enable querying “the cloud”.

kick it on DotNetKicks.com