Hyperlight Nanvix: POSIX support for Hyperlight Micro-VMs
The Cloud Native Computing Foundation’s (CNCF) Hyperlight project delivers faster, more secure, and smaller workload execution to the cloud-native ecosystem.
Training deep learning models requires ever-increasing compute and memory resources. Today we are releasing torch_ort: with a simple change to your PyTorch training script, you can now speed up training large language models with torch_ort.ORTModule, running on the target hardware of your choice.
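As a rough illustration of the "simple change" described above, here is a minimal sketch of wrapping an existing PyTorch model in ORTModule; the model definition, batch shapes, and training-loop details are placeholders, not code from the post.

```python
import torch
from torch_ort import ORTModule  # pip install torch-ort

# Any ordinary PyTorch model; this two-layer net is a placeholder.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)

# The one-line change: wrap the model so its forward and backward
# passes run through ONNX Runtime's optimized training kernels.
model = ORTModule(model)

# The rest of the training loop is unchanged PyTorch.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

inputs = torch.randn(32, 784)         # dummy batch
labels = torch.randint(0, 10, (32,))  # dummy labels
optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
```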
This post was co-authored by Jeff Daily, a Principal Member of Technical Staff, Deep Learning Software for AMD. ONNX Runtime is an open-source project that is designed to accelerate machine learning across a wide range of frameworks, operating systems, and hardware platforms.
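To make the cross-framework, cross-platform claim concrete, here is a minimal sketch of running inference with the onnxruntime Python package; the model path, input name, and tensor shape are hypothetical placeholders, not details from the post.

```python
import numpy as np
import onnxruntime as ort  # pip install onnxruntime

# Load a serialized ONNX model; "model.onnx" is a placeholder path.
session = ort.InferenceSession("model.onnx")

# Inspect the model's declared input so we can feed matching data.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)

# Run inference; this assumes a model taking a (1, 3, 224, 224)
# float32 tensor, which is only an example shape.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print(outputs[0].shape)
```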
This post was co-authored by Alejandro Saucedo, Director of Machine Learning Engineering at Seldon Technologies. About the co-author: Alejandro leads teams of machine learning engineers focused on the scalability and extensibility of machine learning deployment and monitoring products with over five million installations.
With its resource-efficient and high-performance nature, ONNX Runtime helped us meet the need of deploying a large-scale multi-layer generative transformer model for code, a.k.a. GPT-C, to empower IntelliCode with whole-line code completion suggestions in Visual Studio and Visual Studio Code.
With extensive support for all major Linux distributions, including Red Hat, SUSE, Ubuntu, CentOS, and Debian, and with managed platform-as-a-service (PaaS) offerings for open source databases like Azure Database for MySQL, Azure Database for PostgreSQL, and Azure Database for MariaDB, it's no surprise that Linux is the fastest-growing platform on Azure.
At Microsoft, we are taking cloud architecture to the next level, and our open cloud reduces friction for developers getting applications up and running. We give autonomy and control to developers to flexibly choose their infrastructure, with options to build, migrate, and deploy across multiple environments on-premises, in the cloud, or at the edge. Our philosophy is to give developers the best technology as quickly as possible.
The V1.8 release of ONNX Runtime includes many exciting new features. This release launches ONNX Runtime machine learning model inferencing acceleration for Android and iOS mobile ecosystems (previously in preview) and introduces ONNX Runtime Web. The release also debuts official packages for accelerating model training workloads in PyTorch.
In partnership with developers from Red Hat, we are announcing the introduction of Local Tunnel Debugging for Kubernetes services in the Kubernetes extension for Visual Studio Code, which enables you to use your native development tools against any cluster, live.
At Microsoft, we use PyTorch to power products such as Bing and Azure Cognitive Services and we actively contribute to several PyTorch open-source projects, including PyTorch Profiler, ONNX Runtime, DeepSpeed, and more. Today, we’re announcing a new initiative in collaboration with Facebook—the PyTorch Enterprise Support Program. This new program enables service providers to develop and offer tailored enterprise-grade support to their customers.
eBPF is a well-known but revolutionary technology—providing programmability, extensibility, and agility. eBPF has been applied to use cases such as denial-of-service protection and observability. Over time, a significant ecosystem of tools, products, and experience has been built up around eBPF.
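As one small illustration of that programmability, the sketch below uses the bcc toolkit, a common front end in the eBPF ecosystem, assuming a Linux host with bcc installed and root privileges; the program text is the classic bcc hello-world pattern, not code from this post.

```python
from bcc import BPF  # requires the bcc toolkit and root privileges on Linux

# A tiny eBPF program, written in restricted C, compiled and
# loaded into the kernel at runtime.
program = r"""
int hello(void *ctx) {
    bpf_trace_printk("clone() called\n");
    return 0;
}
"""

b = BPF(text=program)

# Attach the program as a kprobe on the clone syscall entry point.
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")

# Stream the kernel trace pipe; prints a line each time any process
# on the machine calls clone().
b.trace_print()
```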
Microsoft products and services run on trust, an extension of our commitment to building healthy open source communities.
Today we are announcing the first preview of the Microsoft Build of OpenJDK, a new long-term support (LTS) distribution of OpenJDK that is open source and available for free for anyone to deploy anywhere.
This blog was co-authored with Manash Goswami, Principal Program Manager, Machine Learning Platform. ONNX Runtime, powered by Intel® Deep Learning Boost: Vector Neural Network Instructions (Intel® DL Boost: VNNI), greatly improves performance of machine learning model execution for developers.
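VNNI accelerates INT8 arithmetic, so the speedups described above are typically unlocked by quantizing a model first. Below is a minimal sketch using ONNX Runtime's dynamic quantization API; both file names are placeholders, not artifacts from the post.

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Convert an FP32 ONNX model to INT8 weights; the INT8 matrix
# multiplies can then use VNNI instructions on supporting CPUs.
# Both file names here are placeholders.
quantize_dynamic(
    model_input="model_fp32.onnx",
    model_output="model_int8.onnx",
    weight_type=QuantType.QInt8,
)
```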