Announcing KEDA 1.0: production-grade, event-driven autoscaling for Kubernetes
Event-driven architecture is a key pattern for cloud-native applications and sits at the core of many growing trends, including serverless compute like Azure Functions. Event-driven means your application responds to events, whether business or system events. For example, a “new subscriber” event might trigger code that sends a welcome email and creates an entry in a CRM system.
Just as important as the code that responds to events is the scaling of that code and its compute. Kubernetes-based Event-Driven Autoscaling (KEDA) is an open source component that runs in a Kubernetes cluster to provide event-driven autoscaling for any container. Today we are thrilled to announce KEDA 1.0, a release that is ready for production use.
Kubernetes provides a powerful container orchestration platform, but by default it scales only on system metrics such as CPU and memory. This means there can be a significant delay in a system’s ability to respond to events. Take an event like a message landing on a message queue: thousands of queue messages may be sent and awaiting processing, but Kubernetes knows nothing about how many events are waiting. It will see the CPU and memory of the containers start to rise, but it takes time for the system to react. KEDA greatly improves this by enriching Kubernetes autoscaling with metrics on the rate of events.
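To make the contrast concrete, here is a minimal sketch of that default behavior: a standard HorizontalPodAutoscaler that reacts only to CPU utilization. The Deployment name below is a placeholder used purely for illustration.

# Default Kubernetes autoscaling: the HPA watches CPU utilization only.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: order-processor-cpu
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-processor        # placeholder Deployment name
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 60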
With KEDA installed, Kubernetes knows how many messages are waiting and can use that information to start scaling an app even before CPU begins to rise. KEDA also enables Kubernetes to scale apps to and from zero: if an event-driven app has been idle, KEDA lets Kubernetes scale it to zero so it consumes no CPU when it is not needed.
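With KEDA, the queue scenario above is expressed as a ScaledObject that ties a deployment to an event source. The sketch below is illustrative only: the Deployment, queue, and connection names are placeholders, and the exact schema and scaler metadata fields vary between KEDA versions, so check the reference at keda.sh.

apiVersion: keda.k8s.io/v1alpha1       # KEDA 1.x API group
kind: ScaledObject
metadata:
  name: order-processor-scaler
  labels:
    deploymentName: order-processor    # label expected by KEDA 1.x
spec:
  scaleTargetRef:
    deploymentName: order-processor    # the Deployment KEDA will scale
  minReplicaCount: 0                   # allow scale to zero when idle
  maxReplicaCount: 30
  triggers:
  - type: azure-queue
    metadata:
      queueName: orders                # placeholder queue name
      queueLength: "5"                 # target messages per replica
      connection: AzureWebJobsStorage  # env var holding the connection string

Under the hood, KEDA serves these event metrics to the Horizontal Pod Autoscaler for one-to-many scaling and handles activation from zero (and deactivation back to zero) itself.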
During the preview we worked closely with users and organizations that have been running event-driven workloads on Kubernetes with KEDA. Two notable examples are Swiss Re Asset Management and CycloMedia. Tom van de Meent, lead architect at CycloMedia, shared, “At CycloMedia we’ve been running Azure Functions with Azure Storage Queues in Kubernetes. KEDA has been helpful in adding dynamic scaling to handle these workloads.”
We announced KEDA in collaboration with Red Hat back in April and have been delighted with the engagement from users and the community. With hundreds of merged pull requests, dozens of contributors, and weekly community calls, KEDA has only gotten better. KEDA now supports 13 event sources, including Azure Queues, Azure Event Hubs, AWS SQS, Google Cloud Pub/Sub, RabbitMQ, NATS Streaming, Kafka, and more. KEDA also defines an extensible gRPC contract, so additional scalers can be discovered dynamically to add new capabilities. The community is also building support on top of the popular Operator SDK for ease of installation and management in many environments, including Red Hat OpenShift 4.
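If you want to try it, one common option is the Helm chart (Operator-based installs are also available for environments like OpenShift, as noted above). The commands below are a typical Helm 3 sketch; the keda namespace is just a convention.

helm repo add kedacore https://kedacore.github.io/charts
helm repo update
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda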
“Red Hat is working with the cloud-native community to enable portability of serverless applications in hybrid environments. Part of this effort is contributing to KEDA — both via the upstream project and by bringing its utility to customers using enterprise Kubernetes and containers with Red Hat OpenShift. We congratulate the community on reaching this milestone, and look forward to continued collaboration with Microsoft and others to help extend choice for developers that want to be able to build and deploy applications anywhere,” said William Markito Oliveira, Senior Manager of Product Management, Red Hat.
We couldn’t have reached this 1.0 milestone without the help of the community, and we look forward to continued effort and engagement. We recently presented KEDA to the CNCF serverless working group and are in the process of nominating and donating KEDA to the CNCF as a Sandbox project. We believe the best products are made in an open and inclusive way.
While KEDA works with any container, you can pair KEDA with the open source Azure Functions runtime to enable additional serverless capabilities within Kubernetes. Azure Functions provides a programming model that can run anywhere: in a container on-premises, fully managed in Azure, or in any Kubernetes cluster.
With Azure Functions, application developers don’t need to write the plumbing that connects to, triggers on, and pulls from an event source like RabbitMQ, Kafka, or Azure Event Hubs; instead, they focus only on their business logic. With the integrated Azure Functions tooling, you can take any Azure Functions app and deploy it to Kubernetes alongside KEDA for event-driven scale and event-driven apps.
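As a rough sketch of that programming model, here is a minimal Python queue-triggered function using the function.json-based model; the queue name and function layout are placeholders for illustration, and the trigger wiring lives in configuration rather than in code.

# __init__.py: the runtime handles the queue connection and trigger;
# only the business logic lives here.
import logging
import azure.functions as func

def main(msg: func.QueueMessage) -> None:
    body = msg.get_body().decode("utf-8")
    logging.info("Processing queue message: %s", body)
    # ... business logic goes here ...

# function.json: binds the function to a storage queue.
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "msg",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "orders",
      "connection": "AzureWebJobsStorage"
    }
  ]
}

From there, the Azure Functions Core Tools command func kubernetes deploy can build a container image and generate the Kubernetes Deployment along with a KEDA ScaledObject for the app; exact flags and behavior vary by Core Tools version.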
KEDA can also run alongside Virtual Kubelet and Azure Kubernetes Service Virtual Nodes. With Virtual Nodes you can spin up containers outside of a cluster’s dedicated nodes and run them as on-demand, serverless containers. The combination of KEDA, Azure Functions, and Virtual Nodes brings event-driven serverless functions on top of on-demand compute for a powerful serverless platform.
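Scheduling a workload onto Virtual Nodes usually comes down to a node selector and tolerations in the pod spec. The fragment below is a sketch based on common AKS Virtual Nodes examples; the exact labels and taints depend on how Virtual Nodes are configured in your cluster.

# Pod spec fragment for targeting AKS Virtual Nodes (illustrative only).
nodeSelector:
  kubernetes.io/role: agent
  beta.kubernetes.io/os: linux
  type: virtual-kubelet
tolerations:
- key: virtual-kubelet.io/provider
  operator: Exists
- key: azure.com/aci
  effect: NoSchedule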
I want to share my sincere appreciation for the community’s response and involvement with KEDA. If you haven’t used KEDA yet, you can learn more at KEDA.sh or try a step-by-step QuickStart. We are excited about how KEDA fits into cloud-native and serverless applications, and we look forward to more in this space as we continue to partner with the community.