Welcome to KubeCon + CloudNativeCon North America 2024! We’re thrilled to participate in this year’s event and showcase the latest enhancements and innovations in Azure, Azure Kubernetes Service (AKS), and our work in the cloud-native community.
At Microsoft, we are deeply committed to supporting and driving innovation in the cloud-native ecosystem through contributions and leadership from engineers across Azure.
In fact, over the last year Microsoft has been the second-highest contributor across all CNCF projects and the third-highest contributor to Kubernetes.
As an active member of CNCF leadership, including its governing and steering bodies, we create and contribute to several CNCF projects.
These open-source contributions benefit the Kubernetes ecosystem and AKS customers alike, providing cross-vendor compatibility and enabling customers to make optimal cloud investment decisions.
Our philosophy around open source is simple: everything we do, we do upstream first, then integrate into the downstream product.
Here’s a look at our contributions, why we’re committed to these open-source projects, and how they impact both Azure customers and the broader cloud-native community. You can also meet many of our contributors in the Azure booth at KubeCon!
In addition to GitHub Enterprise, which hosts the portfolio of CNCF projects, Microsoft also provides Azure credits to CNCF and its hosted projects. The credits are distributed to CNCF projects and give maintainers a cloud environment on Azure for upstream and performance testing, continuous integration and continuous delivery (CI/CD), and storage. In mid-2024, we moved existing CI jobs to this community infrastructure so the community can maintain and contribute to them. Jobs such as SIG Windows release-informing, Cluster API for Azure, and the Azure container storage interface (CSI) drivers were among those migrated.
Microsoft’s contributions to Kubernetes cluster management include projects such as Cluster API, Cluster Autoscaler, and Karpenter, and the official Azure providers for each. These projects help standardize cluster lifecycle and scalability APIs, streamline Kubernetes operations, reduce overhead, and enable efficient scaling across environments, making multicloud and hybrid-cloud strategies more accessible for enterprises. New features include the Multi-Cluster Services API (MCS) and the integration of Cluster Autoscaler and kube-scheduler with dynamic resource allocation (DRA).
Microsoft has played a significant role in numerous Kubernetes enhancements. We spearheaded dual-stack IPv4/IPv6 networking support in Kubernetes, implemented features like KMSv2 to strengthen data security with encryption at rest, and led image pull policy enhancements to ensure pods are always granularly authorized to access images on a node. We also supported Windows containers for hybrid workloads by maintaining the release-informing Windows jobs and bringing workload parity through Kubernetes enhancement proposals (KEPs) such as graceful node shutdown and the CPU and Memory Managers.
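To make the dual-stack work concrete, here is a minimal sketch of a Service that requests both an IPv4 and an IPv6 cluster IP. The service name, selector, and port are illustrative placeholders, and the cluster must have dual-stack networking enabled for RequireDualStack to succeed.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Request both address families. With RequireDualStack, the API server
	// assigns one cluster IP per family listed in IPFamilies; on a
	// single-stack cluster the Service would be rejected (PreferDualStack
	// falls back to a single family instead).
	policy := corev1.IPFamilyPolicyRequireDualStack
	svc := corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "demo-dual-stack"}, // illustrative name
		Spec: corev1.ServiceSpec{
			Selector:       map[string]string{"app": "demo"},
			IPFamilyPolicy: &policy,
			IPFamilies:     []corev1.IPFamily{corev1.IPv4Protocol, corev1.IPv6Protocol},
			Ports:          []corev1.ServicePort{{Port: 80}},
		},
	}

	// Print the manifest; in practice you would apply it with kubectl or client-go.
	out, _ := yaml.Marshal(svc)
	fmt.Print(string(out))
}
```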
These enhancements ensure Kubernetes clusters are highly adaptable, secure, and ready for enterprises, supporting both Windows and Linux workloads while maintaining flexibility in network configurations and data encryption. Upcoming features include structured authentication and structured authorization configuration to further improve identity and access control.
We also collaborated with the community to add user namespaces support to Kubernetes. User namespaces improve the isolation of Kubernetes pods, mitigating several CVEs rated HIGH/CRITICAL. This involved over three years of work across the stack (Kubernetes, containerd, runc, CRI-O, crun, and the Linux kernel). User namespaces are currently a beta feature in Kubernetes and are accompanied by improvements in the upcoming containerd 2.0 release.
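As a concrete illustration, a pod opts into a user namespace by setting hostUsers: false in its spec, so a process that is root inside the pod maps to an unprivileged UID range on the node. The sketch below builds such a spec with the Kubernetes Go types; the pod name and image are placeholders, and the cluster must have the beta UserNamespacesSupport feature enabled.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/utils/ptr"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "userns-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			// false = run the pod in its own user namespace (beta as of this writing);
			// true (the default) keeps the pod in the node's user namespace.
			HostUsers: ptr.To(false),
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.k8s.io/pause:3.9", // placeholder image
			}},
		},
	}

	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
```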
Microsoft has made key contributions to enhancing security and policy enforcement for Kubernetes environments.
Together, these tools strengthen security, reduce operational risks, and ensure enterprises can efficiently manage secure cloud-native workloads on Kubernetes. Our investments cover tools in every stage of the software supply chain, from acquisition and cataloging base container images to building, deploying, and running them. To learn more, read The Future of the Cloud-native Software Supply Chain.
Being able to see what happens inside Kubernetes clusters is crucial for Azure and our customers. We invest in open-source observability through the Headlamp and Inspektor Gadget projects.
Microsoft is the number two contributor (over the past six months) to Istio, a leading service mesh that provides traffic management, security, and observability. We also contribute heavily to Gateway API, which standardizes networking across service mesh solutions. These projects enable enterprises to seamlessly manage microservices with advanced traffic routing and security, ensuring better application resilience, network management, and interoperability across service mesh providers.
Currently in beta, Istio’s Ambient Mesh removes the need for sidecar proxies, using a lightweight, layered architecture to handle traffic management and security.
Microsoft contributes to Moby and containerd, which manage the full container lifecycle across Kubernetes environments. Projects like containerd's runwasi and the Wasm shims expand Kubernetes capabilities to support WebAssembly workloads. Microsoft collaborated across the industry to enable WebAssembly component distribution via OCI registries, allowing developers to manage containers and Wasm workloads in Kubernetes and contributing to more secure and efficient cloud-native infrastructure. Additionally, we recently released the open-source project Hyperlight, which offers VM-based security for functions at scale. We are also proud to be the lead developers of Flatcar, a lightweight Linux OS for containers, which was just accepted into the CNCF at the incubating level. Flatcar enhances the security and manageability of containerized workloads and integrates closely with projects like Cluster API.
Microsoft also contributed improvements to the containerd runc shim for version 2.0 that significantly improve performance and efficiency for container workloads. The optimizations cut the binary size in half and reduced memory usage by a third, leading to substantial memory savings on nodes running many pods or Docker containers.
Microsoft initiated the open-source Kubernetes AI Toolchain Operator (Kaito) project in November 2023. Kaito is a Kubernetes operator that uses custom resource definitions (CRDs) to streamline deploying open large language models (LLMs) such as Falcon and Phi-2/3 in a Kubernetes cluster that hosts GPU nodes. By providing a node auto-provisioner, containerized model images, and preset configurations for model parameters, Kaito reduces the time to deploy an LLM in Kubernetes from days to minutes. Kaito supports LLM serving, model fine-tuning, and retrieval-augmented generation (RAG) for inference. Learn more at Jumpstart AI Workflows With Kubernetes AI Toolchain Operator and try it on AKS.
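For a sense of the developer experience, a minimal Kaito Workspace looks roughly like the sketch below. The instance type, labels, and model preset are placeholders, and the field layout follows the project's public examples at the time of writing, so treat it as illustrative rather than authoritative.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"
)

func main() {
	// A Workspace tells the Kaito operator which GPU SKU to provision and
	// which preset model to serve; the operator handles node provisioning,
	// model image pulls, and the inference service.
	ws := unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "kaito.sh/v1alpha1", // assumed API group/version
		"kind":       "Workspace",
		"metadata":   map[string]interface{}{"name": "workspace-falcon-7b"},
		"resource": map[string]interface{}{
			"instanceType": "Standard_NC12s_v3", // placeholder GPU SKU
			"labelSelector": map[string]interface{}{
				"matchLabels": map[string]interface{}{"apps": "falcon-7b"},
			},
		},
		"inference": map[string]interface{}{
			"preset": map[string]interface{}{"name": "falcon-7b"},
		},
	}}

	out, _ := yaml.Marshal(ws.Object)
	fmt.Print(string(out))
}
```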
The standard Kubernetes APIs are evolving to better suit the requirements of workloads running on specialized hardware. We are contributing to the development of structured parameters as part of the beta release of Dynamic Resource Allocation (DRA), which will be introduced in Kubernetes v1.32.0. These changes allow native integration between DRA and kube-scheduler, and consequently Cluster Autoscaler and Karpenter.
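To show what structured parameters mean from the workload side, the sketch below requests a single device by device class through a ResourceClaim. The API group, version, and field names reflect the DRA design around Kubernetes v1.32 but are still evolving, so treat them as assumptions; the device class name is a placeholder.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"
)

func main() {
	// A ResourceClaim asks the scheduler (rather than a vendor-specific
	// device plugin) to allocate a device matching the named DeviceClass;
	// structured parameters let kube-scheduler, Cluster Autoscaler, and
	// Karpenter reason about that request natively.
	claim := unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "resource.k8s.io/v1beta1", // assumed group/version
		"kind":       "ResourceClaim",
		"metadata":   map[string]interface{}{"name": "single-gpu"},
		"spec": map[string]interface{}{
			"devices": map[string]interface{}{
				"requests": []interface{}{
					map[string]interface{}{
						"name":            "gpu",
						"deviceClassName": "gpu.example.com", // placeholder class
					},
				},
			},
		},
	}}

	out, _ := yaml.Marshal(claim.Object)
	fmt.Print(string(out))
}
```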
We are also actively contributing to the Working Group (WG) Serving effort to build a suite of end-to-end tests and reference configurations that accelerate knowledge and adoption of emerging operational standards for running AI workloads on Kubernetes.
Through our contributions to these diverse cloud-native projects, Microsoft plays a vital role in ensuring the security, scalability, and efficiency of Kubernetes and other CNCF projects. We remain committed to fostering collaboration in the open-source community and driving innovation forward in cloud-native technologies.
In addition to our open-source work, we are excited to share several new capabilities in Azure Kubernetes Service designed to enhance security, networking, and ease of use for our customers.
When it comes to the cloud, robust security measures and a reliable network infrastructure are critical to safeguarding data, maintaining compliance, and ensuring seamless connectivity. These foundational elements not only protect against threats but also enable organizations to fully leverage the cloud's capabilities, driving innovation and efficiency. Several recent enhancements in AKS target exactly these areas.
AKS is also introducing new capabilities to further enhance the user experience and streamline operations. These include the ability to choose specific VMs when scaling down and to bypass certain constraints when they do not apply, giving you greater control and flexibility.
Additionally, Azure Linux 3.0 is now available in preview. This version offers increased package availability, an updated kernel, and improvements to performance, security, and developer experience.
The Azure Kubernetes Fleet team continues to support the open-source community with new multi-cluster features available to any Kubernetes operator, including the v1alpha1 release of staged release pipelines and multi-cluster network traffic shifting.
We hope you’re as excited as we are about KubeCon + CloudNativeCon North America 2024 and all the amazing things that Azure and Kubernetes have to offer.
There are plenty of ways to get involved and learn more, from stopping by the Azure booth to exploring the projects above.
We look forward to connecting with you and hearing your feedback and suggestions. You can also follow us on X for more updates and news.
Happy KubeCon + CloudNativeCon!