Getting Started

Compute Software's VM Scaler simplifies the automatic scaling of clusters and makes optimal decisions based on your most important business factors. There are three steps to get up and running: integrating with your data source(s) so VM Scaler can access your application metrics, defining your workload, and enabling provisioning for automatic scaling decisions.

Step 1

Add metric data source

Application metrics are used to drive decision making. VM Scaler will need access to the data source where your metrics are stored. To do this, you’ll need to add an integration. Locate your Metric Service and select the guide to set up its integration.

AWS CloudWatch - Integrates with your AWS account using Role Delegation, following AWS best practices.
Datadog - Integrates with your Datadog account using Datadog API and Application keys.
New Relic - Integrates with your New Relic account using New Relic’s REST API.

You can manage your integrations in your Organization’s section under the ‘Integrations’ tab.

Note: An integration only needs to be set up once, and can be used for multiple workloads.

Step 2

Define a Workload

A workload represents a grouping of scalable work. Typically this can be thought of as a service that can be horizontally scaled.

A workload definition contains the following properties: Name, Core Metrics, and Business Factors.

Name

Provide a unique name for your workload and an optional description. This makes it easier to differentiate workloads in the side navigation.

Core Metrics

VM Scaler needs to know the real-time state of your cluster. This information is provided through metrics. All workloads require these Core Metrics: Virtual Machines in Cluster, Work Volume, Cluster State, and VM Pricing.

Virtual Machines in Cluster

Number of healthy virtual machines in your cluster. This should not include the number of pending or terminating VMs.

AWS EC2 Auto Scaling - Use the GroupTotalInstances metric defined here.
AWS ECS - Use the Service RUNNING Task Count.
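As an illustration, the CloudWatch query behind the GroupTotalInstances metric could look like the sketch below. The Auto Scaling Group name is a placeholder, and this is not VM Scaler's internal code; the resulting parameters would be passed to a boto3 CloudWatch client via `boto3.client("cloudwatch").get_metric_statistics(**params)`.

```python
from datetime import datetime, timedelta, timezone

def group_total_instances_query(asg_name: str, minutes: int = 5) -> dict:
    """Build GetMetricStatistics parameters for the Auto Scaling
    GroupTotalInstances metric over the last few minutes."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/AutoScaling",
        "MetricName": "GroupTotalInstances",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": asg_name}],
        "StartTime": now - timedelta(minutes=minutes),
        "EndTime": now,
        "Period": 60,            # one data point per minute
        "Statistics": ["Average"],
    }

params = group_total_instances_query("my-asg")  # "my-asg" is a placeholder name
```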

Work Volume

The amount of work being requested. For example, in a web application this may be the number of incoming HTTP requests. In an ETL workload, this may be the number of records or jobs queued for processing.

Cluster State

Choose a metric that represents the amount of work your cluster is performing. If your application is CPU intensive, you’d choose CPU Utilization. If your application is I/O intensive, you’d choose disk read ops or disk write ops.

VM Pricing

Select the actively tracked, instance-type-based price, or enter a custom average hourly price.
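With a custom average hourly price, the cluster's hourly spend is simply the VM count times that price. The numbers below are invented for illustration:

```python
def hourly_cluster_cost(vm_count: int, avg_hourly_price: float) -> float:
    """Hourly cluster spend given a custom average hourly VM price."""
    return vm_count * avg_hourly_price

# e.g. 12 healthy VMs at an average of $0.096/hr -> $1.152/hr
cost = hourly_cluster_cost(12, 0.096)
```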

Business Factors

Make cloud resource decisions based not only on instance costs, but also on other factors that are important to your business. These could be service level agreements, customer satisfaction measures, or financial impacts.

These factors will drive the actions taken with a provisioning-enabled workload.

Adding Factors

A factor states how your metrics relate to business value. You can think of a factor as a function of metrics yielding business value. A workload can have as many factors as needed. A factor has the following inputs:

  • Name: A descriptive name for your factor
  • Formula: Formulas are written in the familiar Excel formula syntax. The syntax is identical, except that cell references are replaced with metric names.
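For example, a latency-penalty factor might be written as IF(ResponseTimeMs > 200, (ResponseTimeMs - 200) * -0.05, 0) in the Excel-style syntax. The Python sketch below shows the same metrics-to-value mapping; the metric name, latency target, and dollar amounts are invented for illustration, not part of the product:

```python
def latency_penalty(response_time_ms: float) -> float:
    """Business value of a hypothetical SLA factor: a $0.05 penalty per
    millisecond over a 200 ms response-time target, zero otherwise."""
    if response_time_ms > 200:
        return (response_time_ms - 200) * -0.05
    return 0.0
```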

Click here for more info on setting up Business Factors.

Step 3

Activate Provisioning

Provisioning allows scaling decisions to be performed automatically. For provisioning to work, you must have a provisioning-enabled integration set up. Note that some integrations may be able to be reused across Metric Data and Provisioning. For example, AWS has a service to store your metrics (CloudWatch) and several services to provision virtual machines (EC2, Fargate, EMR, etc.).

Below is a list of provisioners that Compute Software supports. Note that currently only AWS services are supported. Have a provisioner in mind that we currently don’t support? Send us an email to put in a feature request.

AWS Auto Scaling

After creating an Auto Scaling Group in AWS, input the following:

  • Auto Scaling Group Region: The region in which your Auto Scaling Group is deployed
  • Auto Scaling Group Name: The name you created for your Auto Scaling Group

VM Scaler will validate that the group exists before enabling provisioning.

Once enabled, VM Scaler will control the number of instances in your Auto Scaling Group by adjusting the Desired Capacity field via the SetDesiredCapacity API. The min and max number of instances set for the Auto Scaling Group will be respected by this operation. If we continually hit the min or max number of instances set on your Auto Scaling Group, we will send you an Alert.
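The min/max behavior above amounts to a clamp: a requested capacity is bounded by the group's limits before the call is made. A simplified illustration (not VM Scaler's actual code):

```python
def clamp_desired_capacity(requested: int, group_min: int, group_max: int) -> int:
    """Bound a requested capacity by the Auto Scaling Group's min/max.
    The clamped value is what would be sent via the SetDesiredCapacity API,
    e.g. boto3.client("autoscaling").set_desired_capacity(
        AutoScalingGroupName=name, DesiredCapacity=clamped)."""
    return max(group_min, min(requested, group_max))
```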

AWS ECS Service or Fargate Scaling

After creating an ECS or Fargate service in AWS, input the following into the Provisioning modal:

  • AWS Region: The region in which your ECS cluster is deployed
  • Cluster Name: The ECS cluster name
  • Service Name: The name of the ECS service running within your ECS cluster. This can be a service with launch type EC2 or Fargate.

VM Scaler will validate that the cluster and service exist before enabling provisioning.

Once enabled, VM Scaler will control the number of tasks in your service by adjusting the desired count field via the UpdateService API.
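The desired-count update amounts to a request like the one sketched below. Cluster and service names are placeholders, and VM Scaler issues the equivalent call for you; the parameters would go to a boto3 ECS client via `boto3.client("ecs").update_service(**params)`.

```python
def update_service_request(cluster: str, service: str, desired_count: int) -> dict:
    """Build the UpdateService parameters that set an ECS service's
    desired task count."""
    return {
        "cluster": cluster,
        "service": service,
        "desiredCount": desired_count,
    }

params = update_service_request("my-cluster", "my-service", 8)  # placeholder names
```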