Maarten Balliauw {blog}

ASP.NET, ASP.NET MVC, Windows Azure, PHP, ...


A phone call from the cloud: Windows Azure, SignalR & Twilio

Note: this blog post used to be an article for the Windows Azure Roadtrip website. Since that one no longer exists, I decided to post the articles on my blog as well. Find the source code for this post here: 05 ConfirmPhoneNumberDemo.zip (1.32 mb).
It was written earlier this year, so some versions of the packages used (like jQuery or SignalR) may be outdated in this post. Live with it.

In the previous blog post we saw how you can send e-mails from Windows Azure. Why not take communication a step further and make a phone call from Windows Azure? I’ve already mentioned that Windows Azure is a platform which will run your code, topped with some awesomesauce in the form of a large number of components that will speed up development. One of those components is the API provided by Twilio, a third-party service.

Twilio is a telephony web-service API that lets you use your existing web languages and skills to build voice and SMS applications. Twilio Voice allows your applications to make and receive phone calls. Twilio SMS allows your applications to make and receive SMS messages. We’ll use Twilio Voice in conjunction with jQuery and SignalR to spice up a sign-up process.

The scenario

The idea is simple: we want users to sign up using a username and password. In addition, they’ll have to provide their phone number. The user submits the sign-up form and is shown a confirmation code. In the background, the user is called and asked to enter this confirmation code in order to validate his phone number. Once finished, the browser automatically continues the sign-up process. Here’s a visual:

Sign-up and phone number validation flow

Sounds too good to be true? Get ready, as it’s relatively simple using Windows Azure and Twilio.

Let’s start…

Before we begin, make sure you have a Twilio account. Twilio offers some free credits, enough to test with. After registering, make sure that you enable international calls and that your phone number is registered as a developer. Twilio takes this step in order to ensure that their service isn’t misused for making abusive phone calls using free developer accounts.

Next, create a Windows Azure project containing an ASP.NET MVC 4 web role. Install the following NuGet packages in it (right-click, Library Package Manager, go wild):

  • jQuery
  • jQuery.UI.Combined
  • jQuery.Validation
  • json2
  • Modernizr
  • SignalR
  • Twilio
  • Twilio.Mvc
  • Twilio.TwiML

It may also be useful to develop some familiarity with the concepts behind SignalR.

The registration form

Let’s create our form. Using a simple model class, SignUpModel, create the following action method:

public ActionResult Index()
{
    return View(new SignUpModel());
}
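The SignUpModel class itself isn’t shown in this post. A minimal sketch could look like the following (the exact properties and validation attributes are an assumption, based on the username, password and phone number fields described above):

using System.ComponentModel.DataAnnotations;

public class SignUpModel
{
    [Required]
    public string UserName { get; set; }

    [Required, DataType(DataType.Password)]
    public string Password { get; set; }

    [Required, DataType(DataType.PhoneNumber)]
    public string Phone { get; set; }
}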

This action method is accompanied with a view, a simple form requesting the required information from our user:

@using (Html.BeginForm("SignUp", "Home", FormMethod.Post))
{
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>Sign Up for this awesome service</legend>

        @* etc etc etc *@

        <div class="editor-label">
            @Html.LabelFor(model => model.Phone)
        </div>
        <div class="editor-field">
            @Html.EditorFor(model => model.Phone)
            @Html.ValidationMessageFor(model => model.Phone)
        </div>

        <p>
            <input type="submit" value="Sign up!" />
        </p>
    </fieldset>
}

We’ll spice up this form with a dialog first. Using jQuery UI, we can create a simple <div> element which will serve as the dialog’s content. Note the ui-helper-hidden class, which keeps it hidden from view.

<div id="phoneDialog" class="ui-helper-hidden">
    <h1>Keep an eye on your phone...</h1>
    <p>Pick up the phone and follow the instructions.</p>
    <p>You will be asked to enter the following code:</p>
    <h2>1743</h2>
</div>

This is a simple dialog in which we’ll show a hardcoded confirmation code which the user will have to provide when called using Twilio.

Next, let’s code our JavaScript logic which will spice up this form. First, add the required JavaScript libraries for SignalR (more on that later):

<script src="@Url.Content("~/Scripts/jquery.signalR-0.5.0.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/signalr/hubs")" type="text/javascript"></script>

Next, capture the form’s submit event and, if the phone number has not been validated yet, cancel the submit event and show our dialog instead:

$('form:first').submit(function (e) {
    if ($(this).valid() && $('#Phone').data('validated') != true) {
        // Show a dialog
        $('#phoneDialog').dialog({
            title: '',
            modal: true,
            width: 400,
            height: 400,
            resizable: false,
            beforeClose: function () {
                if ($('#Phone').data('validated') != true) {
                    return false;
                }
            }
        });

        // Don't submit. Yet.
        e.preventDefault();
    }
});

Nothing fancy yet. If you now run this code, you’ll see that a dialog opens and remains open for eternity. Let’s craft some SignalR code now. SignalR uses a concept of Hubs to enable client-server communication, but also server-client communication. We’ll need the latter to inform our view whenever the user has confirmed his phone number. In the project, add the following class:

[HubName("phonevalidator")]
public class PhoneValidatorHub : Hub
{
    public void StartValidation(string phoneNumber)
    {
    }
}

This class defines a service that the client can call. SignalR will also keep the connection with the client open so that this PhoneValidatorHub can later send a message to the client as well. Let’s connect our view to this hub. In the form submit event handler, add the following line of code:

// Validate the phone number using Twilio
$.connection.phonevalidator.startValidation($('#Phone').val());

We’ve created a C# class with a StartValidation method and we’re calling the startValidation message from JavaScript. Coincidence? No. SignalR makes this possible. But we’re not finished yet. We can now call a method on the server side, but how would the server inform the client when the phone number has been validated? I’ll get to that point later. First, let’s make sure our JavaScript code can receive that call from the server. To do so, connect to the PhoneValidator hub and add a callback function to it:

var validatorHub = $.connection.phonevalidator;
validatorHub.validated = function (phoneNumber) {
    if (phoneNumber == $('#Phone').val()) {
        $('#Phone').data('validated', true);
        $('#phoneDialog').dialog('destroy');
        $('form:first').trigger('submit');
    }
};
$.connection.hub.start();

What we’re doing here is adding a client-side function named validated to the SignalR hub. We can call this function, sitting at the client side, from our server-side code later on. The function itself is easy: it checks whether the phone number that was validated matches the one the user entered and, if so, it submits the form and completes the signup.

All that’s left is calling the user and, when the confirmation succeeds, we’ll have to inform our client by calling the validated message on the hub.

Initiating a phone call

The phone call to our user will be initiated in the PhoneValidatorHub’s StartValidation method. Add the following code there:

var twilioClient = new TwilioRestClient("api user", "api password");

string url = "http://mas.cloudapp.net/Home/TwilioValidationMessage?passcode=1743"
    + "&phoneNumber=" + HttpContext.Current.Server.UrlEncode(phoneNumber);

// Instantiate the call options that are passed to the outbound call
CallOptions options = new CallOptions();
options.From = "+14155992671"; // Twilio's developer number
options.To = phoneNumber;
options.Url = url;

// Make the call.
twilioClient.InitiateOutboundCall(options);

Using the TwilioRestClient class, we create a request to Twilio. We also pass on a URL which points to our application. Twilio uses TwiML, an XML format to instruct their phone services. When calling the InitiateOutboundCall method, Twilio will issue a request to the URL we are hosting (http://.....cloudapp.net/Home/TwilioValidationMessage) to fetch the TwiML which tells Twilio what to say, ask, record, gather, … on the phone.

Next up: implementing the TwilioValidationMessage action method.

public ActionResult TwilioValidationMessage(string passcode, string phoneNumber)
{
    var response = new TwilioResponse();
    response.Say("Hi there, welcome to Maarten's Awesome Service.");
    response.Say("To validate your phone number, please enter the 4 digit"
        + " passcode displayed on your screen followed by the pound sign.");
    response.BeginGather(new
    {
        numDigits = 4,
        action = "http://mas.cloudapp.net/Home/TwilioValidationCallback?phoneNumber="
            + Server.UrlEncode(phoneNumber),
        method = "GET"
    });
    response.EndGather();

    return new TwiMLResult(response);
}

That’s right. We’re creating some TwiML here. Our ASP.NET MVC action method is telling Twilio to say some text and to gather 4 digits from the user’s phone pad. These 4 digits will be posted by the Twilio service to the TwilioValidationCallback action method, which is the next method we’ll have to implement.
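For reference, the TwiML returned by TwilioValidationMessage looks roughly like this (an approximation; the exact markup the Twilio helper library emits may differ slightly):

<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Say>Hi there, welcome to Maarten's Awesome Service.</Say>
  <Say>To validate your phone number, please enter the 4 digit passcode displayed on your screen followed by the pound sign.</Say>
  <Gather numDigits="4" method="GET"
          action="http://mas.cloudapp.net/Home/TwilioValidationCallback?phoneNumber=..." />
</Response>

Here’s the TwilioValidationCallback action method: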

public ActionResult TwilioValidationCallback(string phoneNumber)
{
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<PhoneValidatorHub>();
    hubContext.Clients.validated(phoneNumber);

    var response = new TwilioResponse();
    response.Say("Thank you! Your browser should automatically continue. Bye!");
    response.Hangup();

    return new TwiMLResult(response);
}

The TwilioValidationCallback action method does two things. First, it gets a reference to our SignalR hub and calls the validated function on it. As you may recall, we created this method on the hub’s client side, so in fact our ASP.NET MVC server application is calling a method on the client side. Doing this triggers the client to hide the validation dialog and complete the user sign-up process.

The second thing we do here is generate some more TwiML (it’s fun!). We thank the user for validating his phone number and, after that, we hang up the call.

You see? Working with voice (and text messages too, if you want) isn’t that hard. It enables additional scenarios that can make your application stand out from all the many others out there. Enjoy!

05 ConfirmPhoneNumberDemo.zip (1.32 mb)

Sending e-mail from Windows Azure

Note: this blog post used to be an article for the Windows Azure Roadtrip website. Since that one no longer exists, I decided to post the articles on my blog as well. Find the source code for this post here: 04 SendingEmailsFromTheCloud.zip (922.27 kb).

When a user subscribes, you send him a thank-you e-mail. When his account expires, you send him a warning message containing a link to purchase a new subscription. When he places an order, you send him an order confirmation. I think you get the picture: a fairly common scenario in almost any application is sending out e-mails.

Now, why would I spend a blog post on sending out an e-mail? Well, for two reasons. First, Windows Azure doesn’t have a built-in mail server. No reason to panic! I’ll explain why and how to overcome this in a minute. Second, I want to demonstrate a technique that will make your applications a lot more scalable and resilient to errors.

E-mail services for Windows Azure

Windows Azure doesn’t have a built-in mail server. And for good reasons: if you deploy your application to Windows Azure, it will be hosted on an IP address which previously belonged to someone else. It’s a shared platform, after all. Now, what if some obscure person used Windows Azure to send out a number of spam messages? Chances are your newly-acquired IP address has already been blacklisted, and any e-mail you send from it ends up in people’s spam filters.

All that is fine, but of course, you still want to send out e-mails. If you have your own SMTP server, you can simply configure your .NET application hosted on Windows Azure to make use of it. There are a number of so-called SMTP relay services out there as well; even Belgian hosters like Combell, Hostbasket or OVH offer this service. Microsoft has also partnered with SendGrid to provide an officially-supported service for sending out e-mails: Windows Azure customers receive a special offer of 25,000 free e-mails per month from them, which makes it a great service to get started with. I’ll be using SendGrid in this blog post.

Asynchronous operations

I said earlier that I wanted to show you two things: sending e-mails and building scalable and fault-resilient applications. This can be done using asynchronous operations. No, I don’t mean AJAX. What I mean is that you should create loosely-coupled applications.

Imagine that I were to send out an e-mail whenever a user registers. If the mail server is not available for that millisecond when I want to use it, the send fails and I might have to show an error message to my user (or even worse: a YSOD). Why would that happen? Because my application logic expects that it can communicate with a mail server in a synchronous manner.

Synchronous communication with the mail server

Now let’s remove that expectation. If we introduce a queue in between both services, the front-end can keep accepting registrations even when the mail server is down. And when it’s back up, the queue will be processed and e-mails will be sent out. Also, if you experience high loads, simply scale out the front-end and add more servers there. More e-mail messages will end up in the queue, but they are guaranteed to be processed in the future at the pace of the mail server. With synchronous communication, the mail service would probably experience high loads or even go down when a large number of front-end servers is added.

Asynchronous communication using a queue

Show me the code!

Let’s combine the two approaches described earlier in this post: sending out e-mails over an asynchronous service. Before we start, make sure you have a SendGrid account (free!). Next, familiarise yourself with Windows Azure storage queues using this simple tutorial.

In a fresh Windows Azure web role, I’ve created a quick-and-dirty user registration form:

User registration form

Nothing fancy, just a form that takes a post to an ASP.NET MVC action method. This action method stores the user in a database and adds a message to a queue named emailconfirmation. Here’s the code for this action method:

[HttpPost, ActionName("Register")]
public ActionResult Register_Post(RegistrationModel model)
{
    if (ModelState.IsValid)
    {
        // ... store the user in the database ...

        // serialize the model
        var serializer = new JavaScriptSerializer();
        var modelAsString = serializer.Serialize(model);

        // emailconfirmation queue
        var account = CloudStorageAccount.FromConfigurationSetting("StorageConnection");
        var queueClient = account.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference("emailconfirmation");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(modelAsString));

        return RedirectToAction("Thanks");
    }

    return View(model);
}

As you can see, it’s not difficult to work with queues. You just enter some data in a message and push it onto the queue. In the code above, I’ve serialized the registration model containing my newly-created user’s name and e-mail to the JSON format (using JavaScriptSerializer). A message can contain binary or textual data: as long as it’s less than 64 KB in data size, the message can be added to a queue.
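The RegistrationModel isn’t shown in this post; judging from how it is used further on (model.Name and model.Email), a minimal sketch could look like this (property names and validation attributes are an assumption):

using System.ComponentModel.DataAnnotations;

public class RegistrationModel
{
    [Required]
    public string Name { get; set; }

    [Required, DataType(DataType.EmailAddress)]
    public string Email { get; set; }
}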

Being cheap with Web Workers

When boning up on Windows Azure, you’ve probably read about so-called Worker Roles, virtual machines that are able to run your back-end code. The problem I see with Worker Roles is that they are expensive to start with. If your application has 100 users and your back-end load is low, why would you reserve an entire server to run that back-end code? The cloud and Windows Azure are all about scalability and using a “Web Worker” will be much more cost-efficient to start with - until you have a large user base, that is.

A Worker Role consists of a class that inherits the RoleEntryPoint class. It looks something along these lines:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // ...

        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            // ...
        }
    }
}

You can run this same code in a Web Role too! And that’s what I mean by a Web Worker: by simply adding this class, which inherits RoleEntryPoint, to your Web Role, it will act as both a Web and a Worker role on one machine.

Call me cheap, but I think this is a nice hidden gem. The best part about this is that whenever your application’s load requires a separate virtual machine running the worker role code, you can simply drag and drop this file from the Web Role to the Worker Role and scale out your application as it grows.

Did you send that e-mail already?

Now that we have a pending e-mail message in our queue and we know we can reduce costs using a web worker, let’s get our e-mail across the wire. First of all, using SendGrid as our external e-mail service offers us a giant development speed advantage, since they are distributing their API client as a NuGet package. In Visual Studio, right-click your web role project and click the “Library Package Manager” menu. In the dialog (shown below), search for Sendgrid and install the package found. This will take a couple of seconds: it will download the SendGrid API client and will add an assembly reference to your project.

Library Package Manager dialog with the SendGrid package

All that’s left to do is write the code that reads out the messages from the queue and sends the e-mails using SendGrid. Here’s the queue reading:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
        {
            string value = "";
            if (RoleEnvironment.IsAvailable)
            {
                value = RoleEnvironment.GetConfigurationSettingValue(configName);
            }
            else
            {
                value = ConfigurationManager.AppSettings[configName];
            }
            configSetter(value);
        });

        return base.OnStart();
    }

    public override void Run()
    {
        // emailconfirmation queue
        var account = CloudStorageAccount.FromConfigurationSetting("StorageConnection");
        var queueClient = account.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference("emailconfirmation");
        queue.CreateIfNotExist();

        while (true)
        {
            var message = queue.GetMessage();
            if (message != null)
            {
                // ...

                // mark the message as processed
                queue.DeleteMessage(message);
            }
            else
            {
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }
    }
}

As you can see, reading from the queue is very straightforward. You use a storage account, get a queue reference from it and then, in an infinite loop, you fetch a message from the queue. If a message is present, process it. If not, sleep for 30 seconds. On a side note: why wait 30 seconds for every poll? Well, Windows Azure will bill you per 100,000 requests to your storage account. It’s a small amount, around 0.01 cent, but it may add up quickly if this code is polling your queue continuously on an 8-core machine… Bottom line: on any cloud platform, try to architect for cost as well.

Now that we have our message, we can deserialize it and create a new e-mail that can be sent out using SendGrid:

// deserialize the model
var serializer = new JavaScriptSerializer();
var model = serializer.Deserialize<RegistrationModel>(message.AsString);

// create a new email object using SendGrid
var email = SendGrid.GenerateInstance();
email.From = new MailAddress("maarten@example.com", "Maarten");
email.AddTo(model.Email);
email.Subject = "Welcome to Maarten's Awesome Service!";
email.Html = string.Format(
    "<html><p>Hello {0},</p><p>Welcome to Maarten's Awesome Service!</p>" +
    "<p>Best regards, <br />Maarten</p></html>", model.Name);

var transportInstance = REST.GetInstance(new NetworkCredential("username", "password"));
transportInstance.Deliver(email);

// mark the message as processed
queue.DeleteMessage(message);

Sending e-mail using SendGrid boils down to getting a new e-mail message instance from the SendGrid API client, passing the e-mail details (from, to, body, etc.) on to it and handing it your SendGrid username and password upon sending.

One last thing: you’ll notice we’re only deleting the message from the queue after processing has succeeded. This is to ensure the message is actually processed. If for some reason the worker role crashes during processing, the message will become visible again on the queue and will be picked up by another worker role instance processing this queue. That way, messages are never lost and are guaranteed to be processed at least once.

(Almost) time for something new...

September 1st, 2005. Fresh from school, I got the opportunity to start at RealDolmen (Dolmen, back then). Not just a “welcome, here’s your customer, cya!” start, but one where my fresh colleagues and I got a four-month deep dive from people in the industry. Entirely different from what school taught me, focused “on the job”. Four months later, I started at my first customer, then the second, third, a project developed in-house, did some TFS customizations, some Windows Azure, …

During these past 7 years, I actually have never looked at a different job. Not once. Call me naïve, but I actually very much liked working at RealDolmen. Of course from time to time a project wasn’t as pleasant as you wanted it to be, but not everything can be rosy all the time, right? I’ve had a lot of opportunities (“Hey, would you mind diving in this Windows Azure thing?” – “Hey, public speaking, is that something you want to do?” – “Writing a book? Cool idea!”), all thanks to an awesome group of managers who really value personal growth in a direction you value yourself, not a direction which the company would value. I wasn’t planning on going away.

Until Hadi Hariri, who I met 4 years ago drinking wodka and beer water on one of those public speaking opportunities, asked me if I would like to join JetBrains as a technical evangelist. I found this a tough question. It was something I had in the back of my mind as a “job I would love to do”, but I liked what I was doing at RealDolmen and all the opportunities that I’ve been able to jump on during the past years. I would even dare to say that this new opportunity at JetBrains is partly the result of opportunities at RealDolmen in the past. RealDolmen is a great company to work for and I wouldn’t hesitate for a moment to go back.

Back to the question. It made my mind go into overdrive. Weighing pros and cons, both at the job level and at a practical level (from a local company to an international company, from consultancy to evangelism, from some travel to a lot of travel, …). It meant considering leaving behind a company which felt (and still feels) like home, with colleagues I’ve come to consider friends. After a lot of pondering, I decided that taking this plunge would be the next step. Leaving behind these 7 years still causes mixed feelings. On the other hand, an opportunity that checks off a box on your bucket list is one you can’t ignore. Hence…

I’m leaving RealDolmen. I will start working for JetBrains as a technical evangelist starting December 10, 2012.

I’m both excited and nervous about this change as it’s different from what I’ve been doing until now. For starters: my Twitter stream will contain less complaining about traffic. Why? Because my office is where my Internet connection is. At home, at my parents, while traveling, when parked near your house with an open WiFi, …

And then, of course, the job itself. Going from “technical consultant”, doing mainly the deep-tech parts in the architectural and project start-up phases, I’m now moving to “technical evangelist”, sharing technology and knowledge about .NET and PHP with others through various platforms. I’ll be gathering product feedback, doing tutorials, demos, screencasts and a lot of conferences, interacting with the community. I’ve been doing some of these as a “side job” (thank you, RealDolmen, for being able to do all this in the past!) but now it’ll become my main job. Something very, very appealing.

Does this mean I’ll no longer be involved with the Belgian or international community? On the contrary! I’m planning to keep doing the things I do today on Windows Azure and ASP.NET-related technologies and have a chance to renew my focus on PHP as well. They are at the heart of my interest and I’ll keep them there.

So in short… I’m leaving RealDolmen with mixed emotions, trading one great partnership for another. Thank you for these past 7 years. And let’s go for at least 7 years at JetBrains. I’m confident that this will be a great partnership as well.

From API key to user with ASP.NET Web API

ASP.NET Web API is a great tool to build an API with. Or as my buddy Kristof Rennen (and the French) always say: “it makes you ‘api”. One of the things I like a lot is the fact that you can do very powerful things that you know and love from the ASP.NET MVC stack, like, for example, using filter attributes. Action filters, result filters and… authorization filters.

Say you wanted to protect your API and make use of the controller’s User property to return user-specific information. You probably will add an [Authorize] attribute (to ensure the user is authenticated) to either the entire API controller or to one of its action methods, like this:

[Authorize]
public class SuperSecretController : ApiController
{
    public string Get()
    {
        return string.Format("Hello, {0}", User.Identity.Name);
    }
}

Great! But… how will your application know who’s calling? Forms authentication doesn’t really make sense for a lot of APIs. Configuring IIS and switching to Windows authentication or basic authentication may be an option. But not every ASP.NET Web API will live in IIS, right? And maybe you want to use some other form of authentication for your API, for example one that uses a custom HTTP header containing an API key? Let’s see how you can do that…

Our API authentication? An API key

API keys may make sense for your API. They provide an easy means of authenticating your API consumers based on a simple token that is passed around in a custom header. OAuth2 may make sense as well, but even that one boils down to a custom Authorization header at the HTTP level. (hint: the approach outlined in this post can be used for OAuth2 tokens as well)

Let’s build our API and require every API consumer to pass in a custom header, named “X-ApiKey”. Calls to our API will look like this:

GET http://localhost:60573/api/v1/SuperSecret HTTP/1.1
Host: localhost:60573
X-ApiKey: 12345

In our SuperSecretController above, we want to make sure that we’re working with a traditional IPrincipal which we can query for username, roles and possibly even claims if needed. How do we get that identity there?

Translating the API key using a DelegatingHandler

The title already gives you a pointer. We want to add a plugin into ASP.NET Web API’s pipeline which replaces the current thread’s IPrincipal with one that is mapped from the incoming API key. That plugin will come in the form of a DelegatingHandler, a class that’s plugged in really early in the ASP.NET Web API pipeline. I’m not going to elaborate on what DelegatingHandler does and where it fits, there’s a perfect post on that to be found here.

Our handler, which I’ll call AuthorizationHeaderHandler, will inherit ASP.NET Web API’s DelegatingHandler. The method we’re interested in is SendAsync, which will be called on every request into our API.

public class AuthorizationHeaderHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // ...
    }
}

This method offers access to the HttpRequestMessage, which contains everything you’ll probably be needing such as… HTTP headers! Let’s read out our X-ApiKey header, convert it to a ClaimsIdentity (so we can add additional claims if needed) and assign it to the current thread:

public class AuthorizationHeaderHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        IEnumerable<string> apiKeyHeaderValues = null;
        if (request.Headers.TryGetValues("X-ApiKey", out apiKeyHeaderValues))
        {
            var apiKeyHeaderValue = apiKeyHeaderValues.First();

            // ... your authentication logic here ...

            var username = (apiKeyHeaderValue == "12345" ? "Maarten" : "OtherUser");
            var usernameClaim = new Claim(ClaimTypes.Name, username);
            var identity = new ClaimsIdentity(new[] { usernameClaim }, "ApiKey");
            var principal = new ClaimsPrincipal(identity);
            Thread.CurrentPrincipal = principal;
        }

        return base.SendAsync(request, cancellationToken);
    }
}

Easy, no? The only thing left to do is registering this handler in the pipeline during your application’s start:

GlobalConfiguration.Configuration.MessageHandlers.Add(new AuthorizationHeaderHandler());
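In an ASP.NET-hosted Web API, that line typically goes in Application_Start (or in a configuration class called from there), for example:

protected void Application_Start()
{
    // ... route and filter configuration ...

    // plug our handler into the Web API pipeline
    GlobalConfiguration.Configuration.MessageHandlers.Add(new AuthorizationHeaderHandler());
}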

From now on, any request coming in with the X-ApiKey header will be translated into an IPrincipal which you can easily use throughout your web API. Enjoy!
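As a quick consumer-side sketch (using System.Net.Http’s HttpClient; the URL and API key are simply the example values from above):

var client = new HttpClient();
client.DefaultRequestHeaders.Add("X-ApiKey", "12345");

// calls the SuperSecretController shown earlier; should return "Hello, Maarten"
var greeting = client.GetStringAsync("http://localhost:60573/api/v1/SuperSecret").Result;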

PS: if you’re looking into OAuth2, I’ve used a similar approach in  “ASP.NET Web API OAuth2 delegation with Windows Azure Access Control Service” to handle OAuth2 tokens.

Welcome to BlogEngine.NET 2.8

If you see this post it means that BlogEngine.NET 2.8 is running and the hard part of creating your own blog is done. There are only a few things left to do.

Write Permissions

To be able to log in to the blog and write posts, you need to enable write permissions on the App_Data folder. If your blog is hosted at a hosting provider, you can either log into your account’s admin page or call the support. You need write permissions on the App_Data folder because all posts, comments, and blog attachments are saved as XML files and placed in the App_Data folder.

If you wish to use a database to store your blog data, we still encourage you to enable this write access for any images you may wish to store for your blog posts. If you are interested in using Microsoft SQL Server, MySQL, SQL CE, or other databases, please see the BlogEngine wiki to get started.

Security

When you've got write permissions to the App_Data folder, you need to change the username and password. Find the sign-in link located either at the bottom or top of the page depending on your current theme and click it. Now enter "admin" in both the username and password fields and click the button. You will now see an admin menu appear. It has a link to the "Users" admin page. From there you can change the username and password.  Passwords are hashed by default so if you lose your password, please see the BlogEngine wiki for information on recovery.

Configuration and Profile

Now that you have your blog secured, take a look through the settings and give your new blog a title. BlogEngine.NET 2.8 is set up to take full advantage of many semantic formats and technologies such as FOAF, SIOC and APML. It means that the content stored in your BlogEngine.NET installation will be fully portable and auto-discoverable. Be sure to fill in your author profile to take better advantage of this.

Themes, Widgets & Extensions

One last thing to consider is customizing the look of your blog. We have a few themes available right out of the box, including two fully set up to use our new widget framework. The widget framework allows drag and drop placement on your sidebar as well as editing and configuration right in the widget while you are logged in. Extensions allow you to extend and customize the behavior of your blog. Be sure to check the BlogEngine.NET Gallery at dnbegallery.org as the go-to location for downloading widgets, themes and extensions.

On the web

You can find BlogEngine.NET on the official website. Here you'll find tutorials, documentation, tips and tricks and much more. The ongoing development of BlogEngine.NET can be followed at CodePlex where the daily builds will be published for anyone to download.  Again, new themes, widgets and extensions can be downloaded at the BlogEngine.NET gallery.

Good luck and happy writing.

The BlogEngine.NET team

What PartitionKey and RowKey are for in Windows Azure Table Storage

For the past few months, I’ve been coaching a “Microsoft Student Partner” (who has a great blog on Kinect for Windows by the way!) on Windows Azure. One of the questions he recently had was around PartitionKey and RowKey in Windows Azure Table Storage. What are these for? Do I have to specify them manually? Let’s explain…

Windows Azure storage partitions

All Windows Azure storage abstractions (Blob, Table, Queue) are built upon the same stack (whitepaper here). While there’s much more to tell about it, the reason it scales is its partitioning logic. Whenever you store something on Windows Azure storage, it is located on some partition in the system. Partitions are used for scale out in the system. Imagine that there are only three physical machines that are used for storing data in Windows Azure storage:

Windows Azure Storage partition

Based on the size and load of a partition, partitions are fanned out across these machines. Whenever a partition gets a high load or grows in size, the Windows Azure storage management can kick in and move a partition to another machine:

Windows Azure storage partition

By doing this, Windows Azure can ensure a high throughput as well as its storage guarantees. If a partition gets busy, it’s moved to a server which can support the higher load. If it gets large, it’s moved to a location where there’s enough disk space available.

Partitions are different for every storage mechanism:

  • In blob storage, each blob is in a separate partition. This means that every blob can get the maximal throughput guaranteed by the system.
  • In queues, every queue is a separate partition.
  • In tables, it’s different: you decide how data is co-located in the system.

PartitionKey in Table Storage

In Table Storage, you have to decide on the PartitionKey yourself. In essence, you are responsible for the throughput you’ll get on your system. If you put every entity in the same partition (by using the same partition key), you’ll be limited to the size of the storage machines for the amount of storage you can use. Plus, you’ll be constraining the maximal throughput, as there are lots of entities in the same partition.

Should you set the PartitionKey to the same value for every entity stored? No. You’ll end up with scaling issues at some point.
Should you set the PartitionKey to a unique value for every entity stored? No. You can do this and every entity stored will end up in its own partition, but you’ll find that querying your data becomes more difficult. And that’s where our next concept kicks in…

RowKey in Table Storage

A RowKey in Table Storage is a very simple thing: it’s your “primary key” within a partition. PartitionKey + RowKey form the composite unique identifier for an entity. Within one PartitionKey, you can only have unique RowKeys. If you use multiple partitions, the same RowKey can be reused in every partition.

So in essence, a RowKey is just the identifier of an entity within a partition.

PartitionKey and RowKey and performance

Before building your code, it’s a good idea to think about both properties. Don’t just assign them a guid or a random string as it does matter for performance.

The fastest way of querying? Specifying both PartitionKey and RowKey. By doing this, table storage will immediately know which partition to query and can simply do an ID lookup on RowKey within that partition.

Less fast but still fast enough will be querying by specifying PartitionKey: table storage will know which partition to query.

Less fast: querying on only RowKey. Doing this will give table storage no pointer on which partition to search in, resulting in a query that possibly spans multiple partitions, possibly multiple storage nodes as well. Within a partition, searching on RowKey is still pretty fast as it’s a unique index.

Slow: searching on other properties (again, this possibly spans multiple partitions and storage nodes, and those properties are not indexed).
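To make that a bit more tangible, here’s a minimal sketch using the 1.x storage client library (Microsoft.WindowsAzure.StorageClient); the entity type, table name and key values are made up for illustration:

// an entity inherits PartitionKey, RowKey and Timestamp from TableServiceEntity
public class CustomerEntity : TableServiceEntity
{
    public string Name { get; set; }
}

// ...

var account = CloudStorageAccount.FromConfigurationSetting("StorageConnection");
var context = account.CreateCloudTableClient().GetDataServiceContext();

// fastest: point lookup on PartitionKey + RowKey
var customer = context.CreateQuery<CustomerEntity>("Customer")
    .Where(c => c.PartitionKey == "region-eu" && c.RowKey == "customer-123")
    .AsTableServiceQuery()
    .FirstOrDefault();

// still fast: a scan that stays within a single partition
var customersInRegion = context.CreateQuery<CustomerEntity>("Customer")
    .Where(c => c.PartitionKey == "region-eu")
    .AsTableServiceQuery()
    .ToList();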

Note that Windows Azure storage may decide to group partitions in so-called "Range partitions" - see http://msdn.microsoft.com/en-us/library/windowsazure/hh508997.aspx.

In order to improve query performance, think about your PartitionKey and RowKey upfront, as they are the fast way into your datasets.

Deciding on PartitionKey and RowKey

Here’s an exercise: say you want to store customers, orders and orderlines. What will you choose as the PartitionKey (PK) / RowKey (RK)?

Let’s use three tables: Customer, Order and Orderline.

An ideal setup may be this one, depending on how you want to query everything:

Customer (PK: sales region, RK: customer id) – enables fast searches on region and on customer id
Order (PK: customer id, RK: order id) – allows me to quickly fetch all orders for a specific customer (as they are colocated in one partition), while still allowing fast querying on a specific order id
Orderline (PK: order id, RK: order line id) – allows fast querying on both order id and order line id
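As a sketch of what the Order entity from that design could look like in code (the type, the extra properties and the TableServiceEntity base class from the 1.x storage client are assumptions for illustration):

public class OrderEntity : TableServiceEntity
{
    public OrderEntity() { }

    // PK = customer id, RK = order id: all orders for one customer live in one partition
    public OrderEntity(string customerId, string orderId)
        : base(customerId, orderId)
    {
    }

    public DateTime OrderDate { get; set; }
    public decimal Total { get; set; }
}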

Of course, depending on the system you are building, the following may be a better setup:

Customer (PK: customer id, RK: display name) – enables fast searches on customer id and display name
Order (PK: customer id, RK: order id) – allows me to quickly fetch all orders for a specific customer (as they are colocated in one partition), while still allowing fast querying on a specific order id
Orderline (PK: order id, RK: item id) – allows fast querying on both order id and the item bought, of course given that one order can only contain one order line for a specific item (PK + RK should be unique)

You see? Choose them wisely, depending on your queries. And maybe an important sidenote: don’t be afraid of denormalizing your data and storing data twice in a different format, supporting more query variations.

There’s one additional “index”

That’s right! People have been asking Microsoft for a secondary index. And it’s already there… The table name itself! Take our customer – order – orderline sample again…

Having a Customer table containing all customers may be interesting to search within that data. But having an Orders table containing every order for every customer may not be the ideal solution. Maybe you want to create an order table per customer? Doing that, you can easily query by customer (it’s part of the table name) and within that order table, you can have more detail in PK and RK.

And there's one more: your account name. Split data over multiple storage accounts and you have yet another "partition".

Conclusion

In conclusion? Choose PartitionKey and RowKey wisely. The more meaningful to your application or business domain, the faster querying will be and the more efficient table storage will work in the long run.

MyGet Build Services - Join the private beta!

Good news! Over the past 4 weeks we’ve been sending out tweets about our secret MyGet project “wonka”. Today is the day Wonka shows his great stuff to the world… In short: MyGet Build Services enable you to add packages to your feed by just giving us your GitHub repo. We build it, we package it, we publish it.

Our build server searches for a file called MyGet.sln and builds that. No problem if it's not there: we'll try and build other projects then. We'll run unit tests (NUnit, XUnit, MSTest and some more) and fail when those fail. We'll search for packages generated by your solution and if none are generated, we take a wild guess and create them for you.

To make it more visual, here are some screenshots. First, you have to add a build source, for example a GitHub repository (in fact, GitHub is all we currently support):

MyGet Add build source

After that, you simply click “Build”. A couple of seconds or minutes later, your fresh package is available on your feed:

MyGet build package

MyGet package result

If you want to see what happened, the build log is available for review as well:

MyGet build log

Enroll now!

Starting today, you can enroll for our private beta. You’ll get on a waiting list and as we improve build capacity, you will be granted access to the beta. If you’re in, tell us how it behaves. What works, what doesn’t, what would you like to see improved. Enroll for this private beta now via http://www.myget.org/buildservices. Limited seats!

Do note it’s still a beta, and as Willy Wonka would say… “Little surprises around every corner, but nothing dangerous.”

Happy packaging!

Get your Windows 8 up to speed fast

With the release of Windows 8 on MSDN yesterday, I have a gut feeling that today, around the globe, people are installing this fresh operating system on their machine. I’ve done so too and I wanted to share with you two tools: one that helped me get up to speed fast, and one that will help me get up to speed even faster the next time I want to reset my PC.

Chocolatey

One of the best things created for Windows, ever, is Chocolatey. If you are familiar with Ninite, you will find that both serve the same purpose, however Chocolatey is more developer focused.

Chocolatey provides a catalog of software packages like Notepad++, ReSharper, Paint.Net and a whole lot more. After installing Chocolatey, all you have to do to install such a package is invoke, from the command line, “cinst <package>”. The keyword command line is pretty important: what if you could just create a batch file containing all packages you need, like I did here?
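Something along these lines, for example (the exact package ids may differ slightly in the Chocolatey gallery):

@echo off
rem my usual toolbox, installed in one go
cinst notepadplusplus
cinst 7zip
cinst git
cinst fiddler
cinst resharper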

Batch files are great, but even easier is creating a custom Chocolatey feed on www.myget.org (create a feed, go to package sources, add Chocolatey): you can simply add whatever you need on a fresh system to this feed and whenever you want to install every package from your custom feed, like I did yesterday evening, you invoke

cinst All -source "http://www.myget.org/F/chocolateymaarten"

and go to bed. In the morning, everything is on your PC.

Windows 8 - Reset Your PC

There’s a new feature in Windows 8 called “Refresh/reset Your PC”. What it does is revert to a certain baseline whenever you feel the need of a format C: coming up. This baseline, by default, is a fresh install. Now what if you could just set your own baseline and revert back to that one next time you need a reinstall? The good news: you can do this!

  • Configure your PC at will
  • From an elevated command prompt, issue:
    mkdir C:\SoFreshThatItSmellsGreat
    recimg -CreateImage C:\SoFreshThatItSmellsGreat

Done!

ASP.NET Web API OAuth2 delegation with Windows Azure Access Control Service

If you are familiar with OAuth2’s protocol flow, you know there’s a lot of things you should implement if you want to protect your ASP.NET Web API using OAuth2. To refresh your mind, here’s what’s required (at least):

  • OAuth authorization server
  • Keep track of consuming applications
  • Keep track of user consent (yes, I allow application X to act on my behalf)
  • OAuth token expiration & refresh token handling
  • Oh, and your API

That’s a lot to build there. Wouldn’t it be great to outsource part of that list to a third party? A little-known feature of the Windows Azure Access Control Service is that you can use it to keep track of applications, user consent and token expiration & refresh token handling. That leaves you with implementing:

  • OAuth authorization server
  • Your API

Let’s do it!

On a side note: I’m aware of the road-to-hell post released last week on OAuth2. I still think that whoever offers OAuth2 should be responsible enough to implement the protocol in a secure fashion. The protocol gives you the options to do so, and, as with regular web page logins, you as the implementer should think about security.

Building a simple API

I’ve been doing some demos lately using www.brewbuddy.net, a sample application (sources here) which enables hobby beer brewers to keep track of their recipes and current brews. There are a lot of applications out there that may benefit from being able to consume my recipes. I love the smell of a fresh API in the morning!

Here’s an API which would enable access to my BrewBuddy recipes:

[Authorize]
public class RecipesController : ApiController
{
    protected IRecipeService RecipeService { get; private set; }

    public RecipesController(IRecipeService recipeService)
    {
        RecipeService = recipeService;
    }

    public IQueryable<RecipeViewModel> Get()
    {
        var recipes = RecipeService.GetRecipes(User.Identity.Name);
        var model = AutoMapper.Mapper.Map(recipes, new List<RecipeViewModel>());

        return model.AsQueryable();
    }
}

Nothing special, right? We’re just querying our RecipeService for the current user’s recipes. And the current user should be logged in as specified using the [Authorize] attribute.  Wait a minute! The current user?

I’ve built this API on the standard ASP.NET Web API features such as the [Authorize] attribute and the expectation that the User.Identity.Name property is populated. The reason for that is simple: my API requires a user and should not care how that user is populated. If someone wants to consume my API by authenticating over Forms authentication, fine by me. If someone configures IIS to use Windows authentication or even hacks in basic authentication, fine by me. My API shouldn’t care about that.

OAuth2 is a different state of mind

OAuth2 adds a layer of complexity. Mental complexity that is. Your API consumer is not your end user. Your API consumer is acting on behalf of your end user. That’s a huge difference! Here’s what really happens:

OAuth2 protocol flow

The end user loads a consuming application (a mobile app or a web app, that doesn’t really matter). That application requests a token from an authorization server trusted by your application. The user has to log in and usually accept the fact that the app can perform actions on the user’s behalf (think of Twitter’s “Allow/Deny” screen). If successful, the authorization server returns a code to the app, which the app can then exchange for an access token containing the user’s username and potentially other claims.

Now remember what we started this post with? We want to get rid of part of the OAuth2 implementation. We don’t want to be bothered by too much of this. Let’s try to accomplish the following:

OAuth2 protocol flow with Windows Azure

Let’s introduce you to…

WindowsAzure.Acs.Oauth2

“That looks like an assembly name. Heck, even like a NuGet package identifier!” You’re right about that. I’ve done a lot of the integration work for you (sources / NuGet package).

WindowsAzure.Acs.Oauth2 is currently in alpha status, so you will have to register this package in your ASP.NET MVC Web API project using the package manager console, issuing the following command:

Install-Package WindowsAzure.Acs.Oauth2 -IncludePrerelease

This command will bring some dependencies to your project and installs the following source files:

  • App_Start/AppStart_OAuth2API.cs - Makes sure that OAuth2-signed SWT tokens are transformed into a ClaimsIdentity for use in your API. Remember where I used User.Identity.Name in my API? Populating that is performed by this guy.

  • Controllers/AuthorizeController.cs - A standard authorization server implementation which is configured by the Web.config settings. You can override certain methods here, for example if you want to show additional application information on the consent page.

  • Views/Shared/_AuthorizationServer.cshtml - A default consent page. This can be customized at will.

Next to these files, the following entries are added to your Web.config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="WindowsAzure.OAuth.SwtSigningKey" value="[your 256-bit symmetric key configured in the ACS]" />
    <add key="WindowsAzure.OAuth.RelyingPartyName" value="[your relying party name configured in the ACS]" />
    <add key="WindowsAzure.OAuth.RelyingPartyRealm" value="[your relying party realm configured in the ACS]" />
    <add key="WindowsAzure.OAuth.ServiceNamespace" value="[your ACS service namespace]" />
    <add key="WindowsAzure.OAuth.ServiceNamespaceManagementUserName" value="ManagementClient" />
    <add key="WindowsAzure.OAuth.ServiceNamespaceManagementUserKey" value="[your ACS service management key]" />
  </appSettings>
</configuration>

These settings should be configured based on the Windows Azure Access Control settings. Details on this can be found on the Github page.

Consuming the API

After populating Windows Azure Access Control Service with a client_id and client_secret for my consuming app (which you can do using the excellent FluentACS package or manually, as shown in the following screenshot), you’re good to go.

ACS OAuth2 Service Identity

The WindowsAzure.Acs.Oauth2 package adds additional functionality to your application: it provides your ASP.NET Web API with the current user’s details (after a successful OAuth2 authorization flow took place) and it adds a controller and view to your app which provides a simple consent page (that can be customized):

The default consent page

After granting access, WindowsAzure.Acs.Oauth2 will store the choice of the user in Windows Azure ACS and redirect you back to the application. From there on, the application can ask Windows Azure ACS for an access token and refresh the access token once it expires. Without your application having to interfere with that process ever again. WindowsAzure.Acs.Oauth2 transforms the incoming OAuth2 token into a ClaimsIdentity which your API can use to determine which user is accessing your API. Focus on your API, not on OAuth.

Enjoy!

Hands-on Windows Azure Services for Windows

A couple of weeks ago, Microsoft announced their Windows Azure Services for Windows Server. If you’ve ever heard about the Windows Azure Appliance (which is vaporware imho :-)), you’ll be interested to see that the Windows Azure Services for Windows Server are in fact bringing the Windows Azure Services to your datacenter. It’s still a Technical Preview, but I took the plunge and installed this on a bunch of virtual machines I had lying around. In this post, I’ll share you with some impressions, ideas, pains and speculations.

Why would you run Windows Azure Services in your own datacenter? Why not! You will make your developers happy because they have access to all services they are getting to know and getting to love. You’ll be able to provide self-service access to SQL Server, MySQL, shared hosting and virtual machines. You decide on the quota. And if you’re a server hugger like a lot of companies in Belgium: you can keep hugging your servers. I’ll elaborate more on the “why?” further in this blog post.

Note: Currently only SQL Server, MySQL, Web Sites and Virtual Machines are supported in Windows Azure Services for Windows Server. Not storage, not ACS, not Service Bus, not...

You can sign up for my “I read your blog plan” at http://cloud.balliauw.net and create your SQL Server databases on the fly! (I’ll keep this running for a couple of days, if it’s offline you’re too late). It's down.

My setup

Since I did not have enough capacity to run enough virtual machines (you need at least four!) on my machine, I decided to deploy the Windows Azure Services for Windows Server on a series of virtual machines in Windows Azure’s IaaS offering.

You will need servers for the following roles:

  • Controller node (the management portal your users will be using)
  • SQL Server (can be hosted on the controller node)
  • Storage server (can be on the controller node as well)

If you want to host Windows Azure Websites (shared hosting):

  • At least one load balancer node (will route HTTP(S) traffic to a frontend node)
  • At least one frontend node (will host web sites, more frontends = more websites / redundancy)
  • At least one publisher node (will serve FTP and Webdeploy)

If you want to host Virtual Machines:

  • A System Center 2012 SP1 CTP2 node (managing VM’s)
  • At least one Hyper-V server (running VM’s)

Being a true ITPro (forgot the <irony /> element there…), I decided I did not want to host those virtual machines on the public Internet. Instead, I created a Windows Azure Virtual Network. Knowing CIDR notation (<irony />), I quickly crafted the BalliauwCloud virtual network: 172.16.240.0/24.

So a private network… Then again: I wanted to be able to access some of the resources hosted in my cloud on the Internet, so I decided to open up some ports in Windows Azure’s load balancer and firewall so that my users could use the SQL Server both internally (172.16.240.9) and externally (sql1.cloud.balliauw.net). Same with high-density shared hosting in the form of Windows Azure Websites, by the way.

Being a Visio pro (no <irony /> there!), here’s the schematical overview of what I setup:

Windows Azure Services for Windows Server - Virtual Network

Nice, huh? Even nicer is my to-be diagram where I also link creating Hyper-V machines to this portal (not there yet…):

Virtual machines

My setup experience

I found the detailed step-by-step installation guide and completed the installation as described. Not a great success! The Windows Azure Websites feature requires a file share and I forgot to open up a firewall port for that. The result? A failed setup. I restarted setup and ended with 500 Internal Server Terror a couple of times. Help!

Being a Technical Preview product, there is no support for cleaning / restarting a failed setup. Luckily, someone hooked me up with the team at Microsoft who built this and thanks to Andrew (thanks, Andrew!), I was able to continue my setup.

If everything works out for your setup: enjoy! If not, here’s some troubleshooting tips:

Keep an eye on the C:\inetpub\MgmtSvc-ConfigSite\trace.txt  log file. It holds valuable information, as well as the event log (Applications and Services Log > Microsoft > Windows > Antares).

If you’re also experiencing issues and want to retry installation, here are the steps to clean your installation:

  1. On the controller node: stop services:
    net stop w3svc
    net stop WebFarmService
    net stop ResourceMetering
    net stop QuotaEnforcement
  2. In IIS Manager (inetmgr), clean up the Hosting Administration REST API service. Under site MgmtSvc-WebSites:
    - Remove IIS application HostingAdministration (just the app, NOT the site itself)
    - Remove physical files: C:\inetpub\MgmtSvc-WebSites\HostingAdministration
  3. Drop databases, and logins by running the SQL script: C:\inetpub\MgmtSvc-ConfigSite\Drop-MgmtSvcDatabases.sql
  4. (Optional, but helped in my case) Repair permissions
    PowerShell.exe -c "Add-PSSnapin WebHostingSnapin ; Set-ReadAccessToAsymmetricKeys IIS_IUSRS"
  5. Clean up registry keys by deleting the three folders under the following registry key (NOT the key itself, just the child folders):
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\IIS Extensions\Web Hosting Framework

    Delete these folders: HostingAdmin, Metering, Security
  6. Restart IIS
    net start w3svc
  7. Re-run the installation with https://localhost:30101/

Configuration

After installation comes configuration. Configuration depends on the services you want to offer. I’m greedy so I wanted to provide them all. First, I registered my SQL Server and told the Windows Azure Services for Windows Server management portal that I have about 80 GB to spare for hosting my user’s databases. I did the same with MySQL (setup is similar):

Windows Azure Services for Windows Server SQL Server

You can add more SQL Servers and even define groups. For example, if you have a SQL Server which can be used for development purposes, add that one. If you have a high-end, failover setup for production, you can add that as a separate group so that only designated users can create databases on that SQL Server cluster of yours.

For Windows Azure Web Sites, I deployed one node of every role that was required:

Windows Azure Services for Windows Server Web Sites

What I liked in this setup is that if I want to add one of these roles, the only thing required is a fresh Windows Server 2008 R2 or 2012. No need to configure the machine: the Windows Azure Services for Windows Server management portal does that for me. All I have to do as an administrator in order to grow my pool of shared resources is spin up a machine and enter its IP address. The Windows Azure Services for Windows Server management portal takes care of the installation, linking, etc.

Windows Azure Services for Windows Server - Adding a role

The final step in offering services to my users is creating at least one plan they can subscribe to. Plans define the services provided as well as the quota on these services. Here’s an example quota configuration for SQL Server in my “Cloud Basics” plan:

Windows Azure Services for Windows Server Manage plans

Plans can be private (you assign them to a user) or public (users can self-subscribe, optionally only when they have a specific access code).

End-user experience

As an end user, I can have a plan. Either I enroll myself or an administrator enrolls me. You can sign up for my “I read your blog plan” at http://cloud.balliauw.net and create your SQL Server databases on the fly! (I’ll keep this running for a couple of days, if it’s offline you’re too late).

Sign up for Windows Azure Services for Windows Server

Side note: as an administrator, you can modify this page. It’s a bunch of ASP.NET MVC .cshtml files located under C:\inetpub\MgmtSvc-TenantSite\Views.

After signing in, you’ll be given access to a portal which resembles Windows Azure’s portal. You’ll have an at-a-glance look at all services you are using and can optionally just delete your account. Here’s the initial portal:

Windows Azure Services for Windows Server customer portal

You’ll be able to manage services yourself, for example create a new SQL Server database:

Windows Azure Services for Windows Server create database

After creating a database, you can see the connection information from within the portal:

Windows Azure Services for Windows Server connection string

Just imagine you could create databases on-the-fly, whenever you need them, in your internal infrastructure. Without an administrator having to interfere. Without creating a support ticket or a formal request…

Speculations

I’m not sure if I’m supposed to disclose this information, but… The following paragraphs are based on what I can see in the installation of my “private cloud” using Windows Azure Services for Windows Server.

  • I have a suspicion that the public cloud services can be plugged into Windows Azure Services for Windows Server. The SQL Server database for this management portal contains various additional tables, such as a table in which SQL Azure servers can be added to a pool linked to a plan. My guess is that you’ll be able to spread users and plans between the public cloud (maybe your cheap test databases can go there) and the private cloud (production applications run on a SQL Server cluster in your basement).
  • The management portals are clearly built with extensibility in mind. Yes, I’ve cracked open some assemblies using ILSpy, and yes, I’ve opened some of the XML configuration files in there. I expect the recently announced Service Bus for Windows Server to pop up in this product as well. And who knows, maybe a nice SDK to create your own services embedded in this portal so that users can create mailboxes as they please. Or link to a VMWare cloud, I know they have management APIs.

Conclusion

I’ve opened this post with a “Why?”, let’s end it with that question. Why would you want to use this? The product was announced on Microsoft’s hosting subsite, but the product name (Windows Azure Services for Windows Server) and my experience with it so far make me tend to think that this product is a fit for any enterprise!

You will make your developers happy because they have access to all services they are getting to know and getting to love. You’ll be able to provide self-service access to SQL Server, MySQL, shared hosting and virtual machines. You decide on the quota. You manage this. The only thing you don’t have to manage is the actual provisioning of services: users can use the self-service possibilities in Windows Azure Services for Windows Server.

Want your departments to be able to quickly set up a Wordpress or Drupal site? No problem: using Web Sites, they are up and running. And depending on the front-end role you assign them, you can even put them on the internet, intranet or both. (Note: this is possible through some PowerShell scripting; by default it's just one pool of servers there.)

The fact that there is support for server groups (say, development servers and high-end SQL Server clusters or 8-core IIS machines running your web applications) makes it easy for administrators to grant access to specific resources while some other resources are reserved for production applications. And I suspect this will extend to the public cloud making it possible to go hybrid if you wish. Some services out there, some in your basement.

I’m keeping an eye on this one.

Note: You can sign up for my “I read your blog plan” at http://cloud.balliauw.net and create your SQL Server databases on the fly! (I’ll keep this running for a couple of days, if it’s offline you’re too late). It's down.