WASABi

At Tech Ed this year, another talk I did was titled “Writing Robust Azure Applications using P&P guidance”. The title probably doesn’t make it obvious what the talk contains, but it is split into two separate topics: one on the Windows Azure Autoscaling Application Block, aka WASABi, and one on the Transient Fault Handling Application Block, aka Topaz.

Since these are two separate topics, I thought I’d cover them separately on the blog as well, starting with WASABi in this first post. But before I get into WASABi, I want to give a general introduction to scaling and why you need it.

To discuss scaling, let’s take the example of the recently concluded Olympics. The Olympics, as you know, runs once every four years, and traffic to the site hosting it spikes for the duration of the games, plus perhaps a couple of weeks before and after the event: roughly four weeks, in four years! Sure, you have traffic coming into the site at other times, but you don’t want to be catering for peak load during the entire four years. That’s where scaling comes into play.

If you look at why people move their applications to the cloud, it comes down to a few main reasons:

  • low initial and low ongoing costs, in most cases metered by the hour
  • seemingly infinite resources at your disposal
  • elasticity or scaling on demand

The third one, “elasticity or scaling on demand”, is what makes the cloud so appealing to a lot of customers. Scaling is all about balancing running cost against load and performance. If you had an infinite budget you wouldn’t really worry about scaling; in the real world, though, it makes sense to get the cost saving benefit and scale only when needed.

Although your site may not experience the “once every four years” kind of surge the Olympics does, most sites still have a predictable usage pattern, sometimes even on a day-to-day basis, based on which you can scale and get some cost benefits.

Types of scaling

There are two types of scaling:

  • Vertical Scaling: this is when you scale up or down. Windows Azure offers five instance sizes: Extra Small (XS), Small (S), Medium (M), Large (L) and Extra Large (XL). Each instance size comes with a different number of cores, CPU speed and memory, along with other characteristics, including cost per hour.
  • Horizontal Scaling: this is when you scale in or out. By increasing or decreasing the number of instances, you get true elasticity on demand.

In addition to this, it is important to know when you are going to scale. Here again, you can scale in two ways:

  • Proactive: You can scale proactively if you know exactly when to scale. For instance, when your website is hosting a big sale and you are expecting heaps of traffic, you can scale out just before the sale starts. Even without a sale, you may be able to see a consistent pattern in your site’s traffic (even on a day-to-day basis) and scale based on that.
  • Reactive: Sometimes you may not know when the traffic to your site is going to surge. For instance, you may be running a website that on-sells flight tickets, and an airline may suddenly announce a sale starting at midnight, driving a lot of traffic to your site. If you did not know about the sale and your site can’t handle the sudden surge in traffic, your website could lose out on good business. So proactive scaling does not always work; you may also need to be reactive, and scale depending on server load, traffic and so on.

Scaling can be done manually. To scale horizontally by hand, all you have to do is go to the Scale tab in the Azure Management portal (http://manage.windowsazure.com) and increase or decrease the number of instances by dragging the slider. Manual scaling may work well if you only have to do it once in a while, but if you need to adjust scaling on a day-to-day basis, then you should start looking at autoscaling.

AutoScaling Options

If you want to do autoscaling, you have three main options:

Roll your own: This is easier said than done. You are probably better off writing applications that provide business value than writing an autoscaling framework from scratch.

Use a Service provider: There are companies out there that specialize in Autoscaling. All you have to do is pay them some money, provide your subscription details, and they will take care of all the autoscaling bits. Of course you still have control over the scaling configuration. One such provider is Paraleap with their AzureWatch product offering.

Use an existing framework: This, IMO, is probably the best option for autoscaling. The patterns & practices (p&p) group at Microsoft has done all the hard work and released an application block for autoscaling. This application block is called WASABi, which stands for Windows Azure AutoScaling Application Block (with an i thrown in for Infrastructure, and to make the acronym sound nice). WASABi supports the following features out of the box:

  • Scale based on a time table
  • Scale based on Performance counters, queue size, etc
  • Work to SLAs + budget
  • Keeps configuration separate from code
  • Allows throttling
  • Allows working to billing cycles
  • Supports various hosting options including running in a Worker role
  • In addition, one autoscaler application can cover multiple roles

There are three main things you need to know about WASABi: how to install it, how to configure it and how to use it in code. I cover each in the following sections.

Installation

WASABi is available as a NuGet package. To install it, just open the NuGet Package Manager Console from within Visual Studio and type in the following command:

Install-Package EnterpriseLibrary.WindowsAzure.Autoscaling

This will bring in all the libraries and dependencies you need for autoscaling. Typically, I install it into a new worker role, but in theory you don’t have to; you can even run the autoscaler as a console app, which is great for testing purposes.
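For instance, a minimal console host is just a sketch around the same Autoscaler type that the worker-role code later in this post resolves from the Enterprise Library container. This assumes the NuGet package above has been installed and the configuration described in the next section is in place:

```csharp
using System;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling;

class Program
{
    static void Main()
    {
        // Resolve and start the autoscaler, exactly as a worker role would.
        Autoscaler autoscaler =
            EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();
        autoscaler.Start();

        Console.WriteLine("Autoscaler running; press Enter to stop.");
        Console.ReadLine();

        // Shut the autoscaler down cleanly before exiting.
        autoscaler.Stop();
    }
}
```

This makes it easy to watch the autoscaler’s log output locally while you iterate on your rules, without deploying a worker role each time.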

In addition, I would also recommend getting the Enterprise Library Configuration Editor, which makes editing the app or web config file easy.

Configuration

There are three pieces of configuration you need to do to get WASABi to work:

  1. The App or Web config file: this specifies where all the configuration information for the application block is stored. It is best to update this information using the EntLib Configuration Editor.
  2. A service configuration file: this contains information about the cloud services you wish to scale, including the subscription id, the certificates needed for authentication, the different roles that need to be scaled, and so on.
  3. A rule configuration file: This contains all the rules needed to scale the services.

One point to note: although the service and rule configurations can be stored as actual files, it is better to store them in blob storage, so that you don’t have to redeploy the worker role hosting your autoscaler every time the configuration changes. WASABi has watchers that will pick up any changes you make to the blobs.

Implementation

There are four main things you need to do to actually implement autoscaling:

  • Changes to code: You need to create a worker role (or use an existing one), use NuGet to get WASABi, and make changes to your WorkerRole.cs file
  • Changes to your app.config: As mentioned in the previous section, you need to update your app.config file to specify where the configuration information for WASABi exists
  • Configure your service information: Specify the service information configuration, so that the Autoscaler knows how to connect to the services that need to be scaled
  • Configure the rules: Specify the rules based on which the Autoscaler scales the services

Changes to code

The code changes required to get autoscaling working with WASABi are minimal; it comes down to about four lines of code, as shown below:

using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private Autoscaler _autoscaler;

    public override bool OnStart()
    {
        // Resolve the Autoscaler from the Enterprise Library
        // container and start it when the role starts.
        _autoscaler = EnterpriseLibraryContainer.Current
                          .GetInstance<Autoscaler>();
        _autoscaler.Start();

        return base.OnStart();
    }

    public override void OnStop()
    {
        // Stop the autoscaler cleanly when the role shuts down.
        _autoscaler.Stop();
        base.OnStop();
    }
}

Changes to App.config

To make changes to the config file, right-click the file in Solution Explorer and choose “Edit Configuration File” from the menu. This will open the EntLib Configuration Editor.

In the EntLib configuration editor, choose Blocks->Add Autoscaling settings from the menu. This will add the necessary sections in the app.config file. You need to expand and fill out all the necessary fields in the Autoscaling settings section – this includes specifying where the blobs holding the service information and rules are.


Creating the Rules store configuration

The rules configuration file is nothing more than an XML file containing a set of constraint and reactive rules. Constraint rules let you scale in a proactive fashion. Sample constraint rules are shown in the XML snippet below:

<rules xmlns=
   "http://schemas.microsoft.com/practices/2011/entlib/autoscaling/rules">
  <constraintRules>
    <rule name="Default" rank="1">
      <actions>
        <range min="2" max="6" target="SM.Website"/>
      </actions>
    </rule>
    <rule name="Peak" rank="10">
      <timetable startTime="08:00:00" duration="08:00:00"
                 utcOffset="+10:00">
        <!--<weekly days="Monday Tuesday Wednesday Thursday Friday"/>-->
        <daily/>
      </timetable>
      <actions> ... </actions>
    </rule>
  </constraintRules>
</rules>

The XML is fairly self explanatory, but I would like to point out a few things:

  • The constraintRules element contains a bunch of rules
  • Each rule has actions and an optional timetable on which the actions need to be performed
  • actions contain a set of range (min and max) elements that specify the minimum and maximum instance count of the role that it is targeting
  • When you have conflicting rules, the rule with the higher rank takes precedence.

If you want to use reactive rules, the XML in rules looks something like this:

<reactiveRules>
  <rule name="ScaleUpOnHighUtilization" rank="15">
    <when>
      <greater operand="CPU" than="60"/>
    </when>
    <actions>
      <scale target="SM.Website" by="1"/>
    </actions>
  </rule>
  <rule name="ScaleDownOnLowUtilization" rank="20">
    <when>
      <less operand="CPU" than="30"/>
    </when>
    <actions>
      <scale target="SM.Website" by="-1"/>
    </actions>
  </rule>
</reactiveRules>

When you specify reactiveRules, instead of a timetable as with constraintRules, you specify a when condition, followed by the actions to perform. A typical item in actions is scale, which specifies the number of instances by which to increase or decrease the role. In addition to scale, you can also use changeSetting to change the value of a setting you have defined in your service configuration file. The application can then check that setting to perform some kind of throttling operation.
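As a sketch, a throttling rule using changeSetting might look like the following. The ThrottlingMode setting name is a hypothetical example; your application would need to watch for that setting and react to it:

```xml
<reactiveRules>
  <rule name="ThrottleOnVeryHighUtilization" rank="25">
    <when>
      <greater operand="CPU" than="85"/>
    </when>
    <actions>
      <!-- Flip a service configuration setting instead of adding
           instances; the application observes the change and
           throttles itself. "ThrottlingMode" is a hypothetical
           setting name for illustration. -->
      <changeSetting target="SM.Website"
                     settingName="ThrottlingMode"
                     value="On"/>
    </actions>
  </rule>
</reactiveRules>
```

Throttling like this is handy when adding instances is too slow or too expensive, for example switching off a costly feature during a short burst of load.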

The operand referenced in the when clause is defined separately, and looks something like this:

<operands>
   <performanceCounter alias="CPU"
      performanceCounterName="\Processor(_Total)\% Processor Time"
      source="SM.Website"
      timespan="00:05:00"
      aggregate="Average"/>
</operands>

Under operands, you can specify the performance counters or queues you want to monitor.
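For instance, to react to the depth of the storage queue declared in the service configuration (covered in the next section), you could add a queue-length operand. This is a sketch based on the WASABi rules schema; the alias is an arbitrary name of your choosing:

```xml
<operands>
  <!-- Average length of the "dummy-text-queue" storage queue
       (declared in the service configuration) over 10 minutes.
       The alias "textQueue" is what reactive rules refer to. -->
  <queueLength alias="textQueue"
               queue="dummy-text-queue"
               timespan="00:10:00"
               aggregate="Average"/>
</operands>
```

A reactive rule can then use a condition such as greater operand="textQueue" to scale out the worker role that drains the queue.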

Creating the Service configuration

The service configuration is another XML file that looks something like this:

<serviceModel 
    xmlns="http://schemas.microsoft.com/practices/2011/entlib/autoscaling/serviceModel">
 <subscriptions>
   <subscription name="3-Month Free Trial" 
                 subscriptionId=…>
    <services>
     <service dnsPrefix="TechEdAuScaling" 
              slot="Production"
              scalingMode="ScaleAndNotify" 
              notificationRecipients="mahesh@blah.net" >
       <roles>
           <role alias="WebApp" …/>
       </roles>
     </service>
    </services>
    <storageAccounts>
     <storageAccount …>
       <queues>
         <queue alias="dummy-text-queue" 
                queueName="dummy-text-queue"/>
       </queues>
    </storageAccount>
   </subscription>
 </subscriptions>
 <stabilizer>
  <role roleAlias="WebApp" scaleDownCooldown="00:01:00" 
        scaleUpCooldown="00:01:00"
   />
 </stabilizer>
</serviceModel>

As you can see from the XML, the service configuration is used to specify details about your subscription, the service you intend to scale, the storage accounts you are using, and so on.

You will also notice a section called stabilizer. The stabilizer prevents oscillation, that is, alternating scale-out and scale-in operations happening in quick succession, by enforcing a cool-down period after each scaling operation.

In closing

Hopefully this post has given you enough to get started with WASABi. If you want to find out more, the Autoscaling Application Block documentation from the p&p group is a good place to start.
