What’s new with Microsoft in open-source and Kubernetes at KubeCon EU in Barcelona
Greetings and welcome to KubeCon EU in Barcelona! As always, it is wonderful to see the community come together to celebrate how Kubernetes has made cloud-native ubiquitous and changed the way that we build and manage our software.
These conferences are fantastic because they are an opportunity to meet with our users and learn both how our software is making them more successful and how it is holding them back. While learning about how we empower developers is awesome, it’s even more important (and significantly more humbling) to learn about the areas where they have unmet needs or are blocked by choices we’ve made or bugs that we’ve written.
Listening to our users and customers and focusing our energy on solving their real-world problems is the #1 (and #2 and #3) focus for my teams. In that spirit, I’m excited to announce: the first alpha release of Helm 3, the 1.0 release of the Kubernetes integration for Visual Studio Code (VS Code), the 1.0 release of the Virtual Kubelet project, and the Service Mesh Interface (SMI), our new community project for collaboration around Service Mesh infrastructure.
The first, and arguably most significant for the Kubernetes community, is the first alpha release of Helm 3. Helm 3 represents a nearly complete refactoring of the Helm package manager, evolving it from its origins into a modern application package manager. The Helm project is nearly as old as Kubernetes itself; as a result, its original design predated many advancements in Kubernetes, like CustomResourceDefinitions and even Kubernetes RBAC.
Because of this, the Helm 2 architecture was forced to implement a number of features itself, which made it less tightly integrated with Kubernetes and meant that managing things like RBAC for charts and their resources was complicated and disconnected from the rest of the cluster. Helm 3 eliminates this impedance mismatch.
Because Helm 3 replaces custom APIs for charts and deployments with CustomResourceDefinitions, features like Kubernetes RBAC apply directly to Helm, and the whole system feels significantly more tightly integrated and Kubernetes-native. You can now use the ‘kubectl’ command line to interact with your Helm charts and use Kubernetes-native RBAC to limit the access and resources that users can create.
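As a concrete illustration, this is just plain Kubernetes RBAC; nothing Helm-specific is required. The namespace, role name, user, and resource list below are hypothetical placeholders, and in practice you would scope the rules to whatever your charts actually create:

```yaml
# Hypothetical namespace-scoped Role: lets a user manage only the kinds of
# resources a typical chart creates, and only in the team-a namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: chart-installer
rules:
  - apiGroups: ["", "apps"]
    resources: ["secrets", "configmaps", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Bind the Role to an illustrative user; because Helm 3 acts with the user's
# own credentials, the same limits apply when that user installs a chart.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: chart-installer-binding
subjects:
  - kind: User
    name: jane@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: chart-installer
  apiGroup: rbac.authorization.k8s.io
```

Anything this Role denies is denied to that user’s chart installations as well, with no separate Helm-side access control to configure.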
Helm has become the de facto standard for packaging and deploying Kubernetes applications by focusing on the end user and enabling those users to be successful. The advances and improvements in Helm 3 continue this trend and make it even more useful to existing users as well as to those who may have previously tried other solutions. Many congratulations to the Helm project and community on reaching this milestone!
Making Kubernetes accessible to everyone is a challenge that I think about every day. It’s great that we’ve built a system that makes it easier to deploy distributed systems, but if we haven’t made it possible for everyone to use Kubernetes, then we’ve actually failed. In that vein, I have spent a bunch of time recently working with the team that builds the open source Kubernetes extension for Visual Studio Code.
This extension brings native Kubernetes integration to VS Code. You can easily view the contents of your cluster, see the state of pods at a glance, right-click to get a terminal in a Pod or port-forward network traffic, and easily filter your way through logs to identify problems – all within the exact same environment where your code lives.
Furthermore, in keeping with our open principles, the extension is open source on GitHub and works with Kubernetes anywhere. No matter where you are running Kubernetes, the VS Code integration makes it easier to work with your applications and clusters with fewer windows and context switches.
Today at KubeCon EU in Barcelona, I’m excited to announce that our VS Code integration has reached the 1.0 milestone and is fully supported for production management of your Kubernetes clusters. Even more importantly, we’ve also added an extensibility API that makes it possible for others, like Red Hat OpenShift, to build their own integration experiences on top of our baseline Kubernetes integration. (‘yo dawg, I heard you like extensions, so I added extensibility to your extension.’) Much like Custom Resources, adding extensibility enables collaboration at the core while letting others build rich experiences targeted to specific environments. It’s a testament to the value of an open and extensible approach to tooling.
Speaking of community, I’m also incredibly excited to see the Virtual Kubelet hit the 1.0 milestone. The Virtual Kubelet represents a unique integration of Kubernetes and serverless container technologies, like Azure Container Instances. Enabling people to free themselves from the toil of managing an operating system, while still using Kubernetes for orchestration, is a powerful value proposition to startups and enterprises alike.
I’m really excited that this community is healthy and reaching this important milestone, and that Azure was able to play a role. Earlier this month at //build, we also announced the general availability of AKS virtual nodes, powered by the open source community code in the Virtual Kubelet project, again showing the value of open source even when delivering tightly integrated features for Azure.
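For a sense of what this looks like from the application side, a pod opts into a virtual node with an ordinary node selector and toleration. The exact labels and taint keys vary by provider, so the values below are only illustrative of the common pattern:

```yaml
# Sketch of a pod that targets a Virtual Kubelet-backed node instead of a VM.
# The label value and toleration key here are provider-dependent assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: bursty-worker
spec:
  containers:
    - name: worker
      image: nginx:1.17            # placeholder image
  nodeSelector:
    type: virtual-kubelet          # label commonly advertised by the virtual node
  tolerations:
    - key: virtual-kubelet.io/provider
      operator: Exists             # tolerate the taint the virtual node carries
```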
Finally, I’m thrilled to see the community that is developing around the Service Mesh Interface specification. It has been clear to us for a while that users and customers are excited by the promise of Service Mesh to provide the next advances in cloud-native application development. However, it is also clear that the monolithic coupling of interface and implementation that previously characterized Service Mesh has limited its adoption.
The world of Service Mesh is evolving fast, and users are concerned that committing to any particular implementation will lock them into a large, complex API surface. By providing a generic API that is implemented by various service mesh providers, like Istio, Linkerd, and Consul Connect, the Service Mesh Interface frees users to adopt service mesh capabilities without being bound to any particular implementation.
This means that they are free to experiment and even change implementations without having to change their applications. It brings Service Mesh to parity with other parts of Kubernetes, like Ingress, Container Runtime (CRI), and Networking (CNI), which all have generic interfaces with pluggable implementations. We’ve really enjoyed working with our great partners on the SMI specification, and we look forward to building an even larger community as we move forward and iterate on the specification.
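To make the ‘generic interface, pluggable implementation’ idea concrete, here is a sketch of an SMI TrafficSplit that shifts a small share of traffic to a new version of a service. The service names are hypothetical, and the exact apiVersion and weight format depend on the revision of the SMI spec your mesh implements:

```yaml
# Sketch: split traffic for the 'checkout' root service across two backends.
# Any SMI-conformant mesh can act on this resource; the application and this
# resource stay the same if you swap the mesh implementation underneath.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: checkout-rollout
spec:
  service: checkout            # the root service that clients address
  backends:
    - service: checkout-v1
      weight: 900m             # roughly 90% of traffic
    - service: checkout-v2
      weight: 100m             # roughly 10% canary
```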
Enjoy KubeCon and Barcelona (and make sure to check out the Sagrada Familia). The Azure team will be busy learning from you, our community, so we can build the next set of great products and features that make you even more productive and happy!
Questions or feedback? Let us know in the comments below.