Getting Started
Before You Begin
Prerequisites:
- An active Harness Cloud Cost Management (CCM) account
- Access to your cloud provider account(s) with appropriate permissions
- Resources you want to optimize with AutoStopping
Note for Kubernetes Users: For Kubernetes clusters (Amazon EKS, Azure AKS, or Google GKE), you must first set up the appropriate cloud provider connector before configuring AutoStopping.
Setup Process
Setting up AutoStopping is a straightforward process that involves three main steps:
Step 1: Create a Cloud Connector
First, you need to connect Harness to your cloud provider account by creating a connector.
AWS
Create an AWS Connector with the following permissions:
- Amazon EC2 access
- AWS Cost and Usage Reports access
- AWS Auto Scaling access (if using Auto Scaling Groups)
- Amazon EKS access (if using Kubernetes clusters)
- Amazon RDS access (if using database instances)
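Before creating the connector, you can optionally sanity-check that the IAM identity you plan to use has the required access. The sketch below is not part of the Harness setup; it assumes boto3 is installed, AWS credentials are configured locally, and it uses a placeholder instance ID.

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")

# DryRun asks AWS to evaluate permissions without performing the action.
# "DryRunOperation" means the call would have succeeded; anything else is a problem.
try:
    ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], DryRun=True)  # placeholder ID
except ClientError as err:
    code = err.response["Error"]["Code"]
    print("ec2:StopInstances:", "allowed" if code == "DryRunOperation" else code)

# Read-only calls such as DescribeInstances simply succeed or raise AccessDenied.
try:
    ec2.describe_instances(MaxResults=5)
    print("ec2:DescribeInstances: allowed")
except ClientError as err:
    print("ec2:DescribeInstances:", err.response["Error"]["Code"])
```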
Azure
Create an Azure Connector with:
- Azure Virtual Machine access
- Azure Cost Management access
- Azure Kubernetes Service (AKS) access (if using Kubernetes clusters)
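As with AWS, you can optionally confirm that the identity backing the connector can read your VMs before continuing. A minimal sketch, assuming the azure-identity and azure-mgmt-compute packages are installed and that the subscription ID is exported in an AZURE_SUBSCRIPTION_ID environment variable (an assumption for this example).

```python
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# DefaultAzureCredential tries environment variables, managed identity,
# Azure CLI login, and so on, in order.
credential = DefaultAzureCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]  # assumed env var for this sketch

compute = ComputeManagementClient(credential, subscription_id)

# Listing VMs only needs read access; a failure here usually means the role
# assignment for the connector identity is missing or too narrow.
for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location)
```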
GCP
Create a Google Cloud Platform (GCP) Connector with:
- Google Compute Engine access
- Google Cloud Billing access
- Google Kubernetes Engine (GKE) access (if using Kubernetes clusters)
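You can run a similar read-only check for GCP to confirm that the connector's service account can enumerate Compute Engine instances. A sketch assuming the google-cloud-compute package, application default credentials, and a placeholder project ID.

```python
from google.cloud import compute_v1

PROJECT_ID = "my-gcp-project"  # placeholder, replace with your project

# aggregated_list walks every zone in the project; read access to Compute
# Engine is enough for this call to succeed.
client = compute_v1.InstancesClient()
for zone, scoped_list in client.aggregated_list(project=PROJECT_ID):
    for instance in scoped_list.instances:
        print(zone, instance.name, instance.status)
```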
Step 2: Set Up Proxy or Load Balancer
Next, set up a proxy or load balancer that will intercept and manage traffic to your resources. This component detects incoming requests and automatically starts the target resource when traffic arrives while it is stopped, which is what makes the start/stop behavior seamless.
Note for Kubernetes Clusters: For Kubernetes workloads (Amazon EKS, Azure AKS, or Google GKE), you will configure AutoStopping directly through the Harness UI without requiring a separate proxy or load balancer setup. You'll be prompted to provide your Kubernetes cluster details during the AutoStopping rule configuration.
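If you want to confirm that the cluster behind your connector is reachable before creating rules, a quick check with the official kubernetes Python client can help. This is optional and assumes a local kubeconfig pointing at the right cluster.

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config (use load_incluster_config() if running in-cluster).
config.load_kube_config()

v1 = client.CoreV1Api()

# Listing namespaces is a cheap way to confirm connectivity and basic RBAC.
for ns in v1.list_namespace().items:
    print(ns.metadata.name)
```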
AWS
Option 1: AutoStopping Proxy
Best for: Amazon EC2, Auto Scaling Groups, Amazon ECS Services, and Amazon RDS Instances
The AutoStopping Proxy acts as an intermediary that forwards traffic to your resources and automatically starts them when needed.
Set up AWS AutoStopping Proxy →
Option 2: Load Balancer Integration
Best for: Amazon EC2, Auto Scaling Groups, and Amazon ECS Services
Integrate with your existing AWS Load Balancers to enable AutoStopping functionality.
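If you plan to reuse an existing Application Load Balancer, it can help to have its name, DNS name, and ARN on hand before starting the setup. A small optional sketch, assuming boto3 and local AWS credentials.

```python
import boto3

# List Application Load Balancers along with the details you will typically
# need when pointing an AutoStopping rule at an existing load balancer.
elbv2 = boto3.client("elbv2", region_name="us-east-1")

for lb in elbv2.describe_load_balancers()["LoadBalancers"]:
    if lb["Type"] == "application":
        print(lb["LoadBalancerName"], lb["DNSName"], lb["LoadBalancerArn"])
```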
Azure
Option 1: AutoStopping Proxy
Best for: Azure Virtual Machines
Deploy an AutoStopping Proxy to manage traffic to your Azure VMs.
Set up Azure AutoStopping Proxy →
Option 2: Azure Application Gateway
Best for: Azure Virtual Machines in production-like environments
Integrate with Azure Application Gateway for enhanced routing capabilities.
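Likewise, you can list the Application Gateways your identity can see before starting the integration. A sketch assuming azure-identity and azure-mgmt-network, with the subscription ID again taken from an assumed AZURE_SUBSCRIPTION_ID environment variable.

```python
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network = NetworkManagementClient(credential, os.environ["AZURE_SUBSCRIPTION_ID"])

# Each gateway's name, location, and resource ID are typically what you need
# when wiring AutoStopping to an existing gateway.
for gw in network.application_gateways.list_all():
    print(gw.name, gw.location, gw.id)
```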
GCP
AutoStopping Proxy
Best for: Google Compute Engine VMs and Google Cloud Instance Groups
Deploy an AutoStopping Proxy to manage your GCP resources.
Set up GCP AutoStopping Proxy →
Note: For GCP, only the AutoStopping Proxy option is supported.
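Before pointing the proxy at a VM, you can check the instance's current status with a read-only call. A sketch assuming the google-cloud-compute package; the project, zone, and instance name are placeholders.

```python
from google.cloud import compute_v1

# Placeholders for this sketch; replace with your own values.
PROJECT_ID = "my-gcp-project"
ZONE = "us-central1-a"
INSTANCE = "my-instance"

client = compute_v1.InstancesClient()
instance = client.get(project=PROJECT_ID, zone=ZONE, instance=INSTANCE)

# A stopped Compute Engine VM reports the status TERMINATED; a started one
# reports RUNNING. This just shows where the instance is right now.
print(instance.name, instance.status)
```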
Step 3: Configure AutoStopping Rules
You can create AutoStopping Rules in two ways:
- Using Terraform: For detailed instructions, see Create AutoStopping Rules for Terraform.
- Using the Harness UI:
AWS
- In Harness, navigate to Cloud Costs > Cost Optimization > AutoStopping
- Click New Rule and select AWS
- Follow the guided setup to configure the rule
- Review and activate your rule
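Once a rule is active, you can verify it is working by watching the managed resource's state while it sits idle. The sketch below targets an EC2-backed rule and assumes boto3 and a placeholder instance ID; the same idea applies to the other providers with their own SDKs.

```python
import time

import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # placeholder: the instance managed by your rule

ec2 = boto3.client("ec2", region_name="us-east-1")

# Poll every few minutes; once the rule's idle time elapses with no traffic,
# the state should move from "running" to "stopping" and then "stopped".
for _ in range(12):
    resp = ec2.describe_instances(InstanceIds=[INSTANCE_ID])
    state = resp["Reservations"][0]["Instances"][0]["State"]["Name"]
    print(time.strftime("%H:%M:%S"), state)
    time.sleep(300)
```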
Azure
- In Harness, navigate to Cloud Costs > Cost Optimization > AutoStopping
- Click New Rule and select Azure
- Follow the guided setup to configure the rule
- Review and activate your rule
GCP
- In Harness, navigate to Cloud Costs > Cost Optimization > AutoStopping
- Click New Rule and select GCP
- Follow the guided setup to configure the rule
- Review and activate your rule