Microsoft Open Source Blog

3 min read

Accelerate PyTorch training with torch-ort 

Training deep learning models requires ever-increasing compute and memory resources. Today, we are releasing torch-ort to help: with a simple change to your PyTorch training script, you can now speed up training large language models with torch_ort.ORTModule, running on the target hardware of your choice.
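The "simple change" the post describes can be sketched as follows. This is a minimal illustration, not the full torch-ort setup: the tiny linear model stands in for a large language model, and the import is guarded so the sketch still runs where the torch-ort package is not installed.

```python
import torch

# ORTModule comes from the torch-ort package (pip install torch-ort).
# Guarded import so this sketch still runs without torch-ort; in that
# case the model is simply left unwrapped.
try:
    from torch_ort import ORTModule
except ImportError:
    ORTModule = lambda m: m

# A tiny model stands in for a large language model.
model = torch.nn.Linear(16, 4)

# The one-line change: wrap the model so forward and backward
# passes execute through ONNX Runtime.
model = ORTModule(model)

# The training loop itself is unchanged.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
x = torch.randn(8, 16)
loss = model(x).sum()
loss.backward()
optimizer.step()
```

Everything after the wrap (optimizer, loss, backward) is standard PyTorch, which is the point of the design: acceleration without rewriting the training script.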

3 min read

How to migrate and modernize Linux workloads and open source databases to Azure 

With extensive support for all major Linux distributions (Red Hat, SUSE, Ubuntu, CentOS, and Debian) and managed platform-as-a-service (PaaS) offerings for open source databases such as Azure Database for MySQL, Azure Database for PostgreSQL, and Azure Database for MariaDB, it's no surprise that Linux is the fastest-growing platform on Azure.

1 min read

Empowering you to achieve more with open source on Azure 

At Microsoft, we are taking cloud architecture to the next level: our open cloud reduces the friction for developers to get applications up and running. We give developers autonomy and control to choose their infrastructure flexibly, with options to build, migrate, and deploy across multiple environments, whether on-premises, in the cloud, or at the edge. Our philosophy is to give developers the best technology as quickly as possible.

2 min read

ONNX Runtime 1.8: mobile, web, and accelerated training 

The 1.8 release of ONNX Runtime includes many exciting new features. This release launches ONNX Runtime machine learning model inferencing acceleration for the Android and iOS mobile ecosystems (previously in preview) and introduces ONNX Runtime Web. The release also debuts official packages for accelerating model training workloads in PyTorch.

3 min read

Delivering reliable production experiences with PyTorch Enterprise on Microsoft Azure 

At Microsoft, we use PyTorch to power products such as Bing and Azure Cognitive Services, and we actively contribute to several PyTorch open-source projects, including PyTorch Profiler, ONNX Runtime, DeepSpeed, and more. Today, we're announcing a new initiative in collaboration with Facebook: the PyTorch Enterprise Support Program. This new program enables service providers to develop and offer tailored enterprise-grade support to their customers.

3 min read

Making eBPF work on Windows 

eBPF is a well-known and revolutionary technology, providing programmability, extensibility, and agility. eBPF has been applied to use cases such as denial-of-service protection and observability, and over time a significant ecosystem of tools, products, and experience has been built up around it.

5 min read

Optimizing BERT model for Intel CPU cores using the ONNX Runtime default execution provider 

This blog was co-authored with Manash Goswami, Principal Program Manager, Machine Learning Platform. The performance improvements in ONNX Runtime powered by Intel® Deep Learning Boost: Vector Neural Network Instructions (Intel® DL Boost: VNNI) greatly accelerate machine learning model execution for developers.