
Scaling workloads on Microsoft Azure with Kubernetes Event-Driven Autoscaling

In 2019, Microsoft and Red Hat announced Kubernetes Event-Driven Autoscaling (KEDA), which brings event-driven containers and functions to Kubernetes. In 2020, it was donated to the Cloud Native Computing Foundation (CNCF) as a Sandbox project, and it moved to the Incubation stage in 2021.

The goal of KEDA is to make Kubernetes-based application autoscaling dead-simple, allowing developers to focus on what matters—building applications.

KEDA reduces the complexity of infrastructure autoscaling and makes it simpler for Kubernetes cluster administrators to configure, manage, and secure the application auto-scaler.

We have seen a tremendous adoption of KEDA and are proud to have end-users such as Alibaba Cloud, Cisco, FedEx, Grafana Labs, KPMG, Reddit, Zapier, and many more. Microsoft uses KEDA in many places as well—it is included as part of some Azure services and in products such as Xbox, making it able to scale our gaming infrastructure while end-users enjoy playing games online.

In the meantime, KEDA has added even more capabilities. As of KEDA v2.11, developers can scale their apps with more than 60 scalers, including 10 scalers for Microsoft Azure, doubling the number available with KEDA 2.0.

We are excited that KEDA was accepted by the CNCF as a CNCF Graduated project, joining projects such as Kubernetes, Prometheus, Envoy, and others.

Using KEDA on Azure

As a proud co-maintainer organization for KEDA, in Microsoft Azure we strive to make it insanely simple for application developers to auto-scale their workloads. We do that by integrating KEDA as a key building-block in our serverless platform as a service (PaaS) offerings. For Kubernetes users, we heavily invest in improving the user experience for installing and operating KEDA on our Kubernetes offering.

Let’s have a look at several examples of KEDA usage.

Scaling apps on Azure Container Apps, powered by KEDA

Azure Container Apps (ACA) is a generally available PaaS product that allows developers to build, run, scale, and operate container workloads without having to manage any infrastructure details, letting customers focus on developing their applications.

Under the hood, ACA is leveraging a managed KEDA component to power container auto-scaling, including scale to zero. Scale is driven by the number of HTTP requests, TCP connections, and custom metrics including application-defined event sources. Additionally, Azure Container Apps allows customers to use any of the open-source KEDA scalers, expanding its capabilities to the wider cloud-native ecosystem.
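As a sketch of what this looks like in practice, the Azure CLI lets you attach a KEDA-style scale rule when creating a container app. The resource names below (resource group, environment, image, queue) are placeholders for illustration:

```shell
# Create a container app that scales from 0 to 10 replicas based on
# the length of an Azure Storage queue (a KEDA azure-queue scale rule).
az containerapp create \
  --name queue-worker \
  --resource-group my-resource-group \
  --environment my-environment \
  --image myregistry.azurecr.io/queue-worker:latest \
  --min-replicas 0 \
  --max-replicas 10 \
  --scale-rule-name queue-scaling \
  --scale-rule-type azure-queue \
  --scale-rule-metadata "queueName=orders" "queueLength=5" \
  --scale-rule-auth "connection=queue-connection-string"
```

With `--min-replicas 0`, the app scales to zero when the queue is empty and back out as messages arrive, without any HPA configuration on your part.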

We recommend using Azure Container Apps if you want to focus on building applications with a managed serverless container offering.

Azure Kubernetes Service add-on for KEDA is becoming generally available soon

Kubernetes has been embraced by developers and administrators as the de-facto container orchestrator in the industry. Azure offers a managed Kubernetes Service (AKS).

Customers can deploy and use KEDA on their Kubernetes clusters to scale their applications. While open-source KEDA is already easy to install, we wanted to make it even simpler to start using KEDA on AKS and have recently introduced the preview of a KEDA add-on for AKS.

With the KEDA add-on for AKS, customers can configure their clusters to have KEDA installed and managed automatically through ARM, Bicep, or the Azure Command-Line Interface (CLI).
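For example, enabling the add-on from the Azure CLI is a single flag on cluster creation or update (cluster and resource group names below are placeholders; during the preview, the `aks-preview` extension may also be required):

```shell
# Enable the managed KEDA add-on on a new AKS cluster...
az aks create --resource-group my-resource-group \
  --name my-cluster \
  --enable-keda

# ...or on an existing one.
az aks update --resource-group my-resource-group \
  --name my-cluster \
  --enable-keda
```

Once enabled, KEDA runs as a managed component in the cluster, so there is no Helm chart or manifest for you to install and keep up to date.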

The add-on is planned to become generally available in the near future. Once generally available, the add-on will be covered under the AKS support policy.

KEDA recently published an overview of the supported KEDA versions per Kubernetes version, which AKS will follow to guarantee the best performance and stability.

Scaling Azure Functions workloads on Kubernetes with KEDA

Azure Functions also leverages KEDA as an event-driven auto-scaler to make it possible to provide serverless function capabilities in any Kubernetes cluster. In this architecture, the Functions scale controller monitors the rate of events that target your function and proactively scales the number of instances through KEDA. Thanks to this seamless integration with the Functions runtime, KEDA can scale Azure Functions in to zero instances when no events occur and out to n instances when they do.

Additionally, the Azure Functions Core Tools allow developers to easily deploy applications to Kubernetes and start scaling them with KEDA based on the triggers used.
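A minimal sketch of that workflow with the Core Tools `func` CLI might look like the following (the function app name and container registry are placeholders):

```shell
# Install KEDA (and, optionally, the Functions runtime components)
# into the current Kubernetes cluster.
func kubernetes install --namespace keda

# Build the function app's container image, push it to a registry,
# and deploy it to the cluster with KEDA ScaledObjects generated
# from the function's trigger bindings.
func kubernetes deploy --name my-function-app \
  --registry myregistry.azurecr.io
```

The `deploy` command inspects the function's triggers (for example, a queue trigger) and emits the matching KEDA scaler configuration, so scale-to-zero works out of the box.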

For more information, check out Azure Functions on Kubernetes with KEDA.

Building scalable applications with low environmental footprint

As we build scalable applications, we must be mindful of the implications of resource usage. Scaling applications out means platforms consume more resources.

For this reason, the KEDA maintainers collaborated with the Technical Advisory Group (TAG) Environmental Sustainability community to focus on leveraging environmentally aware autoscaling and offer ways to reduce the application footprint.

As part of our ongoing sustainability effort at Microsoft, we want to reduce our environmental footprint and help our customers to do the same. Because of that, the AKS team launched a Carbon-Aware KEDA operator at KubeCon EU 2023 and opened a proposal to contribute it upstream to KEDA open-source to help others leverage this capability.

We strongly believe that the operator is a valuable component in the auto-scaling toolbelt of Kubernetes users and want to make sure everyone can easily use it.

The journey continues

While we are extremely proud of progress so far, this is only the beginning.

We are committed to continuing our work with the KEDA community to evolve the project further by making it more extensible and performant, and by supporting multi-tenant installations.

We are excited about this new milestone and look forward to making it even simpler to use KEDA on Microsoft Azure.