At Tech Ed this year, another talk I did was titled “Writing Robust Azure Applications using P&P guidance”. The title doesn’t make it easy to work out what the talk actually contains: it is split into two separate topics – one on the Windows Azure Autoscaling Application Block, aka WASABi, and one on the Transient Fault Handling Application Block, aka Topaz.

Since these are two separate topics, I thought I’d cover them separately in the blog as well, starting with WASABi in this first post. But even before I get into WASABi, I want to give a general introduction to scaling, and why you need it.

To discuss scaling, let’s take the example of the recently concluded Olympics. The Olympics, as you know, runs once every four years, and traffic to the website hosting it spikes for the duration of the games, plus perhaps a couple of weeks either side – about 4 weeks in every 4 years. Sure, you have traffic coming into the site at other times, but you don’t want to be catering for peak loads during the entire 4 years. That’s where scaling comes into play.

If you look at why people move their applications to the cloud, it usually comes down to the following reasons –

  • low initial and low ongoing costs, in most cases metered by the hour
  • seemingly infinite resources at your disposal
  • elasticity or scaling on demand

The third one “elasticity or scaling on demand” is what makes the cloud so very appealing to a lot of customers. Scaling is all about balancing running cost with load and performance. If you had an infinite budget you wouldn’t really worry about scaling – in the real world though, it makes sense to get the cost saving benefit and scale only when needed.

Although your site may not experience the “once every four years” kind of surge the Olympics does, most sites still have a predictable usage pattern – sometimes even on a day to day basis – based on which you can scale and get some cost benefits.

Types of scaling

There are two types of scaling:

  • Vertical Scaling: this is when you scale up or down – Windows Azure comes with 5 different instance sizes – XS, S, M, L, and XL. Each instance size comes with a different number of cores, CPU speed and memory, along with other differences including cost/hour.
  • Horizontal Scaling: this is when you scale in or out – By increasing or decreasing the number of instances, you can get true elasticity on demand.

In addition to this, it is also important to know when you are going to scale. Here again, you can scale in two ways:

  • Proactive: You can scale proactively if you know exactly when to scale. For instance, when your website is hosting a big sale and you are expecting heaps of traffic, you can scale out just before the sale starts. Even if you don’t have a sale, you may see a consistent pattern in your site’s traffic (even on a day to day basis) and scale based on that.
  • Reactive: Sometimes you may not know when the traffic to your site is going to surge. For instance, you may be running a website that on-sells flight tickets, and an airline may suddenly announce a sale starting at midnight, leading to a lot of traffic on your site. If you did not know about the sale and your site can’t handle the sudden surge, you could lose out on good business. So proactive scaling does not always work – you may also need to be reactive, and scale depending on server load, traffic, etc.
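To make the reactive idea concrete, here is a minimal sketch of the kind of decision a reactive scaler makes. The function name and thresholds are made up for illustration – a real autoscaler (WASABi included) drives this from configurable rules, not hard-coded values:

```python
# A minimal sketch of a reactive scaling decision. The function name and
# thresholds are illustrative only - real autoscalers drive this from
# configurable rules rather than hard-coded values.
def desired_instance_delta(avg_cpu_percent, scale_out_above=60, scale_in_below=30):
    """Return +1 to add an instance, -1 to remove one, 0 to leave as-is."""
    if avg_cpu_percent > scale_out_above:
        return 1   # surge in load: scale out
    if avg_cpu_percent < scale_in_below:
        return -1  # load has dropped: scale in to save cost
    return 0       # within the comfortable band: do nothing
```

Everything between the two thresholds is a deliberate dead band, so the scaler doesn’t react to every small fluctuation in load.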

Scaling can be done manually. To scale horizontally by hand, all you have to do is go to the Scale tab in the Azure Management portal (http://manage.windowsazure.com) and increase or decrease the number of instances by dragging the slider. Manual scaling may work well if you only have to do it once in a while, but if you need to adjust scaling on a day to day basis, then you have to start looking at Autoscaling.

AutoScaling Options

If you want to do Autoscaling, then you have 3 main options –

Roll your own: This is easier said than done. You are probably better off writing applications that provide business value than writing an autoscaling framework from scratch.

Use a Service provider: There are companies out there that specialize in Autoscaling. All you have to do is pay them some money, provide your subscription details, and they will take care of all the autoscaling bits. Of course you still have control over the scaling configuration. One such provider is Paraleap with their AzureWatch product offering.

Use an existing framework: This, IMO, is probably the best option for autoscaling. The patterns & practices (p&p) group at Microsoft has done all the hard work and released an Application Block for autoscaling. This application block is called WASABi, which stands for Windows Azure AutoScaling Application Block (with an i thrown in for Infrastructure – and to make the acronym sound nice). WASABi supports the following features out of the box:

  • Scale based on a time table
  • Scale based on Performance counters, queue size, etc
  • Work to SLAs + budget
  • Keeps configuration separate from code
  • Allows throttling
  • Allows working to billing cycles
  • Supports various hosting options including running in a Worker role
  • In addition, it can also cover multiple roles with one Application

There are three main things you need to know about WASABi: How to install it, how to configure it and how to implement it, which I cover in the following sections.


Installing WASABi

WASABi is available as a NuGet package. To install it, just start up the NuGet Package Manager Console from within Visual Studio and type in the following command:

Install-Package EnterpriseLibrary.WindowsAzure.Autoscaling

This will bring in all the libraries and dependencies you will need for your Autoscaling. Typically, I run this against a new Worker role, but in theory you don’t have to – you can even run it as a Console App – which is great for testing purposes.

In addition, I would also recommend getting the Enterprise Library Configuration Editor, which allows editing the App or Web config file easily.


Configuring WASABi

There are three pieces of configuration you need to do to get WASABi to work:

  1. The App or Web config file: This specifies where all the configuration information for the Application Block is stored. It is best to update this information using the EntLib Configuration Editor.
  2. A service configuration file: This contains information about the cloud services you wish to scale, including the subscription id, the certificates needed for authentication, the different roles that need to be scaled, etc.
  3. A rule configuration file: This contains all the rules needed to scale the services.

One point to note is that although the service and rule configurations can be stored as actual files, it is better to store them in a blob, so that you don’t have to redeploy your worker role (assuming of course that you are deploying your Autoscaler in a worker role) every time they change. WASABi has watchers that will pick up any changes you make to the blob.


Implementing autoscaling

There are 4 main things you need to do to actually implement autoscaling:

  • Changes to code: You need to create a worker role (or use an existing one), use NuGet to get WASABi, and make changes to your WorkerRole.cs file
  • Changes to your app.config: As mentioned in the previous section, you need to update your app.config file to specify where the configuration information for WASABi exists
  • Configure your service information: Specify the service information configuration, so that the Autoscaler knows how to connect to the services that need to be scaled
  • Configure the rules: Specify the rules based on which the Autoscaler scales the services

Changes to code

The code changes required to get autoscaling working with WASABi are minimal – just about 4 lines of code, as shown below:

public class WorkerRole : RoleEntryPoint
{
    private Autoscaler _autoscaler;

    public override bool OnStart()
    {
        // Resolve the Autoscaler from the Enterprise Library container and start it
        _autoscaler = EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();
        _autoscaler.Start();
        return base.OnStart();
    }

    public override void OnStop()
    {
        // Stop the Autoscaler when the role shuts down
        _autoscaler.Stop();
        base.OnStop();
    }
}

Changes to App.config

To make changes to the config file, right click the file from the Solution Explorer and choose “Edit Configuration File” from the menu. This will open up the Entlib Configuration Editor.

In the EntLib configuration editor, choose Blocks->Add Autoscaling settings from the menu. This will add the necessary sections in the app.config file. You need to expand and fill out all the necessary fields in the Autoscaling settings section – this includes specifying where the blobs holding the service information and rules are.


Creating the Rules store configuration

The rules configuration file is nothing more than an XML file that holds a bunch of constraint and reactive rules. Constraint rules let you set rules in a proactive fashion. Sample constraint rules are shown in the XML snippet below:

<rules xmlns="http://schemas.microsoft.com/practices/2011/entlib/autoscaling/rules">
  <constraintRules>
    <rule name="Default" rank="1">
      <actions>
        <range min="2" max="6" target="SM.Website"/>
      </actions>
    </rule>
    <rule name="Peak" rank="10">
      <timetable startTime="08:00:00" duration="08:00:00" utcOffset="+10:00">
        <!--<weekly days="Monday Tuesday Wednesday Thursday Friday"/>-->
        <daily/>
      </timetable>
      <actions> ... </actions>
    </rule>
  </constraintRules>
</rules>

The XML is fairly self explanatory, but I would like to point out a few things:

  • The constraintRules element contains a bunch of rules
  • Each rule has actions and an optional timetable on which the actions need to be performed
  • actions contain a set of range (min and max) elements that specify the minimum and maximum instance count of the role that it is targeting
  • When you have conflicting rules, the rule with the higher rank takes precedence.
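To illustrate how ranked constraint rules interact, here is a small sketch. The rule data is made up to mirror the XML above, and this is not WASABi’s actual evaluation code: the highest-ranked rule currently in effect supplies the min/max range, and any requested instance count is clamped to it.

```python
# Sketch of constraint-rule resolution: the highest-ranked rule that is
# currently in effect wins, and its min/max range bounds the instance count.
# The rule data is made up; WASABi reads this from the rules XML instead.
def effective_range(active_rules):
    winner = max(active_rules, key=lambda r: r["rank"])
    return winner["min"], winner["max"]

def clamp(count, lo, hi):
    return max(lo, min(hi, count))

rules = [
    {"name": "Default", "rank": 1,  "min": 2, "max": 6},
    {"name": "Peak",    "rank": 10, "min": 4, "max": 8},
]
lo, hi = effective_range(rules)  # "Peak" outranks "Default" while active
```

So if a reactive rule asked for 1 instance during the peak window, the constraint would push it back up to the minimum of 4.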

If you want to use reactive rules, the XML in rules looks something like this:

  <reactiveRules>
    <rule name="ScaleUpOnHighUtilization" rank="15">
      <when>
        <greater operand="CPU" than="60"/>
      </when>
      <actions>
        <scale target="SM.Website" by="1"/>
      </actions>
    </rule>
    <rule name="ScaleDownOnLowUtilization" rank="20">
      <when>
        <less operand="CPU" than="30"/>
      </when>
      <actions>
        <scale target="SM.Website" by="-1"/>
      </actions>
    </rule>
  </reactiveRules>

When you specify reactiveRules, instead of specifying a timetable as you did with constraintRules, you specify a when, followed by the actions to perform. A typical item in actions is scale, which is used to specify the number of instances you want to increase/decrease the role by. In addition to scale, you can also use changeSetting to change the value of a setting that you have defined in your Service configuration file. The application can then check that setting to perform some kind of throttling operation.
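The throttling side of this can be sketched as follows. The setting name "ThrottleMode" and its values are made up for this example – the real mechanism is a changeSetting action updating a setting in your service configuration, which your application then reads:

```python
# Sketch of application-side throttling driven by a setting the autoscaler
# flips via a changeSetting action. The setting name "ThrottleMode" and its
# values are hypothetical.
settings = {"ThrottleMode": "Normal"}

def handle_request():
    if settings["ThrottleMode"] == "Reduced":
        return "lightweight response"  # skip expensive work while under load
    return "full response"
```

The point of throttling is that it gives you a cheaper, faster-acting lever than scaling: changing a setting takes effect almost immediately, whereas spinning up new instances takes minutes.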

The operand referred to in the when clause is defined separately, and looks something like this:

   <operands>
      <performanceCounter alias="CPU"
         performanceCounterName="\Processor(_Total)\% Processor Time"
         ... />
   </operands>


Under operands, you can specify the performance counters or queues you want to monitor.

Creating the Service configuration

The service configuration is another XML file that looks something like this:

   <subscription name="3-Month Free Trial" ...>
      <services>
         <service dnsPrefix="TechEdAuScaling"
                  notificationRecipients="mahesh@blah.net">
            <roles>
               <role alias="WebApp" .../>
            </roles>
         </service>
      </services>
      <storageAccounts>
         <storageAccount ...>
            <queues>
               <queue alias="dummy-text-queue" .../>
            </queues>
         </storageAccount>
      </storageAccounts>
   </subscription>
   <stabilizer ...>
      <role roleAlias="WebApp" scaleDownCooldown="00:01:00" .../>
   </stabilizer>

As you can see from the XML, the service configuration is used to specify details about your subscription, the service you intend to scale, the storage accounts you are using, etc.

You will also notice a section called stabilizer. The stabilizer prevents the oscillations that occur when scale out and scale in operations alternate in quick succession.
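The idea behind the stabilizer can be sketched as a simple cooldown check (illustrative Python, not WASABi code):

```python
from datetime import datetime, timedelta

# Sketch of the stabilizer's cooldown idea: after a scaling operation,
# further operations are suppressed until the cooldown period has elapsed,
# which stops rapid scale-out/scale-in oscillation.
def can_scale(now, last_scaled_at, cooldown=timedelta(minutes=1)):
    return now - last_scaled_at >= cooldown
```

Without something like this, a CPU reading hovering around a threshold could cause the autoscaler to add and remove instances every evaluation cycle – and since Azure bills by the instance hour, that churn costs real money.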

In closing

Hopefully this post has given you enough to get started with WASABi. If you want to find out more, the p&p documentation for the Autoscaling Application Block is a good place to start.

Over to the cloud

(Reproduced from an article I wrote for a newsletter)

Gartner identified Cloud Computing as the No. 1 strategic technology for 2011. Companies like Microsoft, Google and Amazon are pumping lots of money into it. And more and more companies are moving their online presence and applications to the cloud.

So, what is this “cloud”? And what is all this hype surrounding it?

If you look up Wikipedia, you will find the National Institute of Standards and Technology (NIST)’s definition of what Cloud computing is:

“Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

Confused? Here is my simplistic definition –

“It is computing power that can be shared over the internet, with several advantages like reliability, scalability, ease of maintenance and most importantly – low cost.”

Anything that you make available over the internet, whether it is a simple website, a complex application or set of services can be hosted in the cloud.

The concept of cloud computing can be best explained with an analogy. Let us assume that you are a tour guide showing people around Melbourne. What you bring to the table is your knowledge of the city of Melbourne, but you are dependent on a transport mechanism (let’s say, a car, van or even bus) to show people around. Obviously, buying your own van or bus requires a lot of upfront cost. And you may have days where there are a lot of people interested in going on a tour and days where you hardly have anyone. And what if your vehicle breaks down when you have a lot of people signed up?

Wouldn’t it be awesome, if you could just pay someone for a vehicle every day? And if you could get a bigger or smaller vehicle depending on the number of people on the tour – and pay accordingly? Maybe even have more than one vehicle if there is a big group interested in a tour. And what if you even had a backup for these vehicles in case something goes wrong? This is the kind of scenario in which cloud computing comes to your aid. Companies that offer cloud computing provide the “vans” and “back-up vehicles” for you to run your “tour-guide” business. You take care of your “tour-guide” business, while the cloud company takes care of the size and number of vehicles (scaling) and providing backups (availability/reliability), and makes sure that the vehicles are maintained in top condition. As a tour operator, you get very good pricing, and the cloud company uses some of the seats in the van to service other customers, thereby increasing their own profitability.

Pricing, reliability and on-demand scalability is what makes cloud a very attractive option.

The analogy provided is a simplistic view of the cloud, and when people talk about the cloud, they generally refer to three specific models – Platform as a Service (PaaS), Infrastructure as a Service (IaaS) and Software as a Service (SaaS).

Cloud implementations such as Windows Azure fall into the Platform as a Service model. Windows Azure provides a platform on which you can write and deploy applications. These applications need to follow a set of rules and be designed in a specific way to run on this platform. Conforming to these rules is easy if you use the right tools (such as Visual Studio) and the SDK provided by Microsoft. Once you deploy an application that plays by the platform rules, the platform takes care of other things like load balancing, reliability, etc. To keep up with the tour guide analogy, the company providing the van lays down rules saying that only people are allowed into their vans. If you want to transport other things, such as equipment, you can’t do it.

This is where Infrastructure as a service comes into play. This is the equivalent of the cloud company saying – “I am going to give you a van to run your tours and I don’t care what you do with the van – you run and manage it yourself”. You can then transport anything you want – people, equipment, whatever. But this brings additional responsibility in managing things such as availability and scalability, which you didn’t have to worry with the PaaS model. Amazon EC2 is an example of a very popular IaaS implementation that charges customers a certain amount based on the virtual machine they want. In addition, you also pay for things like storage costs and data transfer cost.

The Software as a Service model, on the other hand, lets cloud companies provide software that can be used for a fee. This is particularly useful if running certain software involves a lot of infrastructure and running costs, which could be prohibitive for smaller companies. But because of the cost savings and scalability this model provides, it is also useful for medium to large size companies. Popular SaaS implementations such as SalesForce, provide users with advantages such as high adoption across multiple devices, lower initial costs, easier upgrades, low maintenance and high scalability. SaaS applications typically have no or very little installation that customers have to do – instead they access these applications as a service across the web.

Explaining SaaS using the tour guide analogy may be difficult, but think of it as the tour guide company providing its services for tour operators or airlines like Qantas to use. The airlines can then focus on their core business of running an airline, while it uses the tour guide company to run local tours.

The Windows Azure Platform

Microsoft has several offerings in the cloud space that are either already out or soon to be out – Office 365 and Dynamics CRM Online are two such offerings. But the one developers need to be aware of is Windows Azure. Windows Azure is essentially Microsoft’s operating system for the cloud.

Windows Azure provides you the platform to develop and deploy applications that are hosted in Microsoft data centres around the world. The Windows Azure platform consists of the following components:

· Compute Services: This is responsible for running applications, whether they are web applications, services or long running code that needs to reside on the server. These applications are deployed into a concept called Roles. Roles are nothing but an abstraction of an application type running on a set of machines. Web applications run on a Web Role, while background tasks and long running operations run on a Worker Role.

· Storage Services: All applications need a place to store data. Azure storage provides the capability to do that in the cloud. Azure storage supports Table Storage, which is a highly scalable way of storing data; and Blobs, which are a way of storing files – large or small. It also provides Queues, which can be used as an asynchronous communication mechanism between different roles. Beyond this, SQL Azure can be used to store relational data in the form of tables. SQL Azure is a version of SQL Server that has been modified to run in the cloud.

· Networking: When you start hosting applications in the cloud, you will soon find that you cannot move all your organisation’s applications and services there. Some of them have dependencies or other good reasons preventing them from leaving the organisation’s data centre. But what if your roles in the cloud need to communicate with these services? Windows Azure Connect provides a way of connecting the two so that they can communicate. Apart from Connect services, Azure also provides the Azure Traffic Manager. The Traffic Manager provides a way of distributing incoming traffic to different hosted services, whether they are hosted in the same data centre or spread across different data centres around the world. This acts like a load balancer that diverts traffic at a more global scale.

· Caching: Apart from issues like scalability and reliability, performance is usually a very common requirement in most applications and it becomes even more important in the context of the cloud. Caching in Azure is provided using Content Delivery Networks and data caches. Azure’s Content Delivery Network (CDN) is used to cache blobs at edge networks to speed up network access, and AppFabric Caching can provide caching within Azure to speed up access to data.

· Security: Azure’s Access Control Service or ACS provides an infrastructure for Federation and user authentication using ADFS, Windows Live ID, Google, Facebook, etc.
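The queue pattern described under Storage Services – a web role dropping work into a queue for a worker role to pick up asynchronously – can be sketched like this. Plain Python serves as an in-process stand-in here (the Azure Queue storage API is different, and the function names are made up), but the shape of the interaction is the same:

```python
import queue

# In-process stand-in for an Azure queue: the web role enqueues a work item
# and returns immediately; a worker role drains the queue in the background.
work_queue = queue.Queue()

def web_role_handle_upload(image_id):
    # Fire-and-forget: the web role doesn't wait for the thumbnail
    work_queue.put(f"generate-thumbnail:{image_id}")

def worker_role_poll():
    # The worker picks up the next work item when it is ready
    return work_queue.get_nowait()  # raises queue.Empty when idle
```

Because the two roles only share the queue, you can scale the web role and the worker role independently – which is exactly the kind of decoupling the autoscaling story above relies on.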

Although, Azure provides a number of these components, when developers start working with Azure, they initially need to focus on just Azure Compute and Azure Storage.

So, how does one get started?

To get started, you need to download the Windows Azure Tools for Microsoft Visual Studio. This contains the Windows Azure SDK and extends Visual Studio by adding the necessary project templates and tools to run and test Azure applications. The Compute Emulator and Storage Emulator that are part of the download help you run and test an Azure application locally on your own computer before you deploy it to the cloud. You also need to create an account and configure your subscriptions at http://windows.azure.com so that you can deploy your application to the cloud. The website provides a portal to manage all your Azure subscriptions and Azure services in one place.

So, what are you waiting for? Get cracking and move your first app to the cloud. I promise you, it won’t be your last.

Separate certificates for Transport and Message security in WCF

I’ve been busy of late writing my first book and doing so many other things that I haven’t had time to post anything on my blog. Now that I’ve got the book out of the way, I thought I should post something here. And what better topic than WCF 🙂

Recently, I had to interact with a financial institution using WCF for a customer. The service that the financial institution exposed was not written in WCF or .NET – not that it matters – but there were a number of specific things that had to be done to get it to work:

  • We needed to use transport security (https) that had to be encrypted using a specific X509 certificate
  • The body of the message had to be signed using another X509 certificate
  • The reply from the service did not have any security credentials attached to it – i.e. the transport was secure, but the message was not signed or encrypted

This may seem pretty straightforward – all you have to do is create a custom binding and specify something like this –

  <binding name="Custom">
    <security messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"
              ... />
    <httpsTransport requireClientCertificate="true" ... />
  </binding>

The problem is that you can specify only one certificate in ClientCredentials, and both message security and transport security will use that same certificate – we want to use two separate ones.

The solution to this is to add a new behavior that takes care of this. But rather than creating the behavior from scratch, an easier alternative is to extend the ClientCredentials class to cater for this additional certificate. So, I decided to use the existing certificate stored in ClientCredentials for message security and to add a separate property to hold the certificate for the Transport as shown in the code below –

/// <summary>
/// Class that extends ClientCredentials so that the certificate for the
/// transport layer encryption can be separate
/// </summary>
public class MyCredentials : ClientCredentials
{
  /// <summary>
  /// The X509 certificate that is to be used for https
  /// </summary>
  public X509Certificate2 TransportCertificate { get; set; }

  public MyCredentials(ClientCredentials existingCredentials)
    : base(existingCredentials)
  {
  }

  protected MyCredentials(MyCredentials other)
    : base(other)
  {
    TransportCertificate = other.TransportCertificate;
  }

  protected override ClientCredentials CloneCore()
  {
    return new MyCredentials(this);
  }

  public override SecurityTokenManager CreateSecurityTokenManager()
  {
    return new MyCredentialsSecurityTokenManager(this);
  }

  public void SetTransportCertificate(string subjectName, StoreLocation storeLocation, StoreName storeName)
  {
    SetTransportCertificate(storeLocation, storeName, X509FindType.FindBySubjectDistinguishedName, subjectName);
  }

  public void SetTransportCertificate(StoreLocation storeLocation, StoreName storeName, X509FindType x509FindType, string findValue)
  {
    TransportCertificate = FindCertificate(storeLocation, storeName, x509FindType, findValue);
  }

  private static X509Certificate2 FindCertificate(StoreLocation location, StoreName name,
    X509FindType findType, string findValue)
  {
    X509Store store = new X509Store(name, location);
    try
    {
      store.Open(OpenFlags.ReadOnly);
      X509Certificate2Collection col = store.Certificates.Find(findType, findValue, true);
      return col[0]; // return the first certificate found
    }
    finally
    {
      store.Close();
    }
  }
}

As part of the class, I added some helper methods to set the Transport certificate from code and also overrode the CreateSecurityTokenManager method so that I can create my own SecurityTokenManager that figures out which certificate to use for what operation.

But again, rather than create this class from scratch, I just extended the ClientCredentialsSecurityTokenManager class that ClientCredentials uses. In it, I overrode the CreateSecurityTokenProvider method so that when a certificate is requested for transport security, we pass back the TransportCertificate stored in the MyCredentials object, as shown in the code below –

internal class MyCredentialsSecurityTokenManager : ClientCredentialsSecurityTokenManager
{
    MyCredentials credentials;

    public MyCredentialsSecurityTokenManager(MyCredentials credentials)
        : base(credentials)
    {
        this.credentials = credentials;
    }

    public override SecurityTokenProvider CreateSecurityTokenProvider(
        SecurityTokenRequirement requirement)
    {
        SecurityTokenProvider result;

        if (requirement.Properties.ContainsKey(ServiceModelSecurityTokenRequirement.TransportSchemeProperty) &&
            requirement.TokenType == SecurityTokenTypes.X509Certificate)
        {
            // Transport security: use the separate transport certificate
            result = new X509SecurityTokenProvider(credentials.TransportCertificate);
        }
        else if (requirement.KeyUsage == SecurityKeyUsage.Signature &&
            requirement.TokenType == SecurityTokenTypes.X509Certificate)
        {
            // Message signing: use the standard client certificate
            result = new X509SecurityTokenProvider(credentials.ClientCertificate.Certificate);
        }
        else
        {
            result = base.CreateSecurityTokenProvider(requirement);
        }

        return result;
    }
}

The last step is to create the pieces necessary to specify this in your config file. For that, I extended the ClientCredentialsElement class so that I can specify the transport certificate in a behavior, using the code below –

class ClientCredentialsExtensionElement : ClientCredentialsElement
{
    ConfigurationPropertyCollection properties;

    public override Type BehaviorType
    {
        get { return typeof(MyCredentials); }
    }

    public X509InitiatorCertificateClientElement TransportCertificate
    {
        get
        {
            return base["transportCertificate"]
                as X509InitiatorCertificateClientElement;
        }
    }

    protected override ConfigurationPropertyCollection Properties
    {
        get
        {
            if (this.properties == null)
            {
                ConfigurationPropertyCollection properties = base.Properties;
                properties.Add(new ConfigurationProperty(
                    "transportCertificate", typeof(X509InitiatorCertificateClientElement),
                    null, null, null, ConfigurationPropertyOptions.None));
                this.properties = properties;
            }
            return this.properties;
        }
    }

    protected override object CreateBehavior()
    {
        MyCredentials creds = new MyCredentials(
            base.CreateBehavior() as ClientCredentials);

        // Copy the transport certificate details specified in config
        // onto the new credentials instance
        creds.SetTransportCertificate(TransportCertificate.StoreLocation,
            TransportCertificate.StoreName, TransportCertificate.X509FindType,
            TransportCertificate.FindValue);

        return creds;
    }
}

With the changes made, you should be able to replace the clientCredential section in your config file with the clientCredentialsExtension section. Something like this –

     <extensions>
       <behaviorExtensions>
         <add name="clientCredentialsExtension" type="MyNamespace.ClientCredentialsExtensionElement, MyAssemblyName" />
       </behaviorExtensions>
     </extensions>

     <behaviors>
       <endpointBehaviors>
         <behavior name="SecureMessageAndTransportBehavior">
           <clientCredentialsExtension>
             <!-- This cert is used for signing the message -->
             <clientCertificate findValue="YourMessageCertName"
                                storeLocation="LocalMachine"
                                ... />
             <!-- This cert is used for the transport -->
             <transportCertificate findValue="YourTransportCertName"
                                   storeLocation="LocalMachine"
                                   ... />
           </clientCredentialsExtension>
         </behavior>
       </endpointBehaviors>
     </behaviors>

That’s it – you are all set to go. Just make sure that you set this behavior for your endpoint.

Talk on Application Architecture

I will be presenting on Application Architecture Guide on Tuesday the 12th of May, 2009 at the Victoria .NET Dev SIG. Here is the blurb for the talk –

Microsoft patterns and practices group released the Application Architecture Guide v2.0 early this year and Mahesh Krishnan walks us through what is present in the guide. He talks about the design-level guidance it offers, deployment patterns, different architectural styles, understanding quality requirements, archetypes and much much more.

You don’t have to be an architect to attend, so don’t miss out.

Attendance is free, but RSVP to info@victoriadotnet.com.au. So, if you happen to be in Melbourne, drop by to heckle or cheer 🙂

  • When: 12th May, 2009, 6:00pm. Be early for free pizzas 🙂
  • Where: Microsoft Theatre, Level 5, 4 Freshwater Place, Southbank

Tarn Barford will also be giving a talk on IronPython, which I am really looking forward to.

On the same night, we are also having a Windows 7 Install Fest with a bit of introduction on Win 7 given by Dave Glover. More details can be found on Dave Glover’s blog about the Install fest.

Visual Studio Tips, Tricks and Techniques Talk

I will be presenting on Visual Studio Tips, Tricks and Techniques at the Victoria .NET Dev SIG tomorrow. The meeting is at Innovation@257 on Collins Street in the city. If you can’t make it in person, you can also view the session online.

My co-worker Jordan Knight will be presenting on ASP.NET AJAX History in the same session.