
ONNX joins Linux Foundation

Today the Open Neural Network Exchange (ONNX) is joining the LF AI Foundation, an umbrella foundation of the Linux Foundation supporting open source innovation in artificial intelligence, machine learning, and deep learning.

ONNX was co-founded by Microsoft in 2017 to make it easier to create and deploy machine learning applications. In the past few years, the machine learning landscape has evolved at a rapid pace, resulting in increased fragmentation across frameworks, tools, and hardware. ONNX accelerates machine learning innovation by reducing the friction caused by this fragmentation, thus speeding up the process of taking models from research to production.

ONNX provides a common representation for both deep learning and traditional machine learning models, enabling interoperability between frameworks, compilers, runtimes, and hardware accelerators. Data scientists and researchers can use the frameworks they prefer, developers and ML engineers can use the hardware accelerators that best meet their needs, and hardware vendors can more easily develop accelerators that support multiple frameworks.

Leading companies from around the world have supported ONNX by making major contributions and implementing it in their own products and services. This support has resulted in a rich ecosystem of frameworks, compilers, runtimes, accelerators, and visualizers that all support ONNX.

We are excited that LF AI will host ONNX and continue its open-governance model, which encourages community participation and contributions. LF AI will provide long-term leadership for ONNX, enabling a community focused on accelerating the adoption of machine learning and fostering ONNX’s next wave of innovation.

Microsoft’s commitment to ONNX

Microsoft is committed to embracing openness and has invested heavily in tools and platforms that make the developer community more productive. One of our major contributions to the ONNX community has been making it easy to generate ONNX models from frameworks such as PyTorch, TensorFlow, Keras, and scikit-learn. These exporters and converters support state-of-the-art models for computer vision, speech, natural language processing, and more.

Aside from contributing to the ONNX codebase, we’ve also focused on increasing its accessibility. With built-in support for ONNX across our platforms, including Windows, SQL Server, .NET, and Azure, building AI-enabled solutions becomes easier for developers. We also actively leverage ONNX within our own products, including Office, Bing, Azure Cognitive Services, Windows, Skype, and more. These models span speech, image, and text.

We’ve also learned that the effort required to deploy a machine learning model for inferencing is multiplied by the number of deployment targets, spanning GPUs, CPUs, cloud, and edge. We tackled this challenge by creating ONNX Runtime, a single inference engine that is highly performant across platforms and hardware. Using ONNX Runtime, we have seen an average 2.5x performance improvement over our previous inferencing solutions. In 2018 we open sourced ONNX Runtime to make it easier for the developer community to deploy machine learning models into production. Since then, the ONNX and ONNX Runtime communities have supported its development with code contributions and ideas.

We are proud of the progress that ONNX has made and want to recognize the entire ONNX community for their contributions, ideas, and overall enthusiasm. We are excited about the future of ONNX and all that is to come.

To learn more about ONNX, visit onnx.ai.