{"id":80682,"date":"2020-05-19T08:01:47","date_gmt":"2020-05-19T15:01:47","guid":{"rendered":""},"modified":"2025-06-24T11:45:07","modified_gmt":"2025-06-24T18:45:07","slug":"announcing-support-for-accelerated-training-with-onnx-runtime","status":"publish","type":"post","link":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/","title":{"rendered":"Announcing accelerated training with ONNX Runtime\u2014train models up to 45% faster"},"content":{"rendered":"\n<p><a href=\"https:\/\/microsoft.github.io\/onnxruntime\/\" target=\"_blank\" rel=\"noopener noreferrer\">ONNX Runtime<\/a> is an open source project designed to accelerate machine learning across a wide range of frameworks, operating systems, and hardware platforms. It is used extensively in Microsoft products such as Office 365 and Bing, serving over 20 billion inferences every day with inferencing up to 17 times faster.<\/p>\n\n\n\n<p>Today we are introducing significant updates to ONNX Runtime. In addition to improvements for model inferencing, we\u2019re announcing the preview of training acceleration.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"onnx-runtime-for-training\">ONNX Runtime for training<\/h2>\n\n\n\n<p>ONNX Runtime now supports accelerated training of transformer models. Transformer models have become the building blocks for advanced language processing and generation. These models contain hundreds of millions of parameters, and training them can occupy clusters of GPUs for days. 
Reducing total training time enables rapid improvements in, and thus faster deployment of, these models.<\/p>\n\n\n\n<p>Today\u2019s preview release of training acceleration incorporates innovations from the&nbsp;<a href=\"https:\/\/aka.ms\/AA87dvg\" target=\"_blank\" rel=\"noreferrer noopener\">AI at Scale<\/a>&nbsp;initiative, such as&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters\/\" target=\"_blank\" rel=\"noreferrer noopener\">ZeRO optimization<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/parasail\/\" target=\"_blank\" rel=\"noreferrer noopener\">Project Parasail<\/a>, which improve memory utilization and parallelism on GPUs. ONNX Runtime also features a mixed precision implementation to fit more training data in a single NVIDIA GPU\u2019s available memory, helping training jobs converge faster and thereby saving time. It integrates into the existing trainer code for PyTorch and TensorFlow. ONNX Runtime is already being used to train models at Microsoft. For example:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Office 365 uses ONNX Runtime to accelerate pre-training of the Turing Natural Language Representation (T-NLR) model, a transformer model with more than 400 million parameters, powering rich end-user features like\u00a0<a href=\"https:\/\/aka.ms\/SuggestedRepliesMay2020\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Suggested Replies<\/em><\/a>,\u00a0<em>Smart Find<\/em>,\u00a0and\u00a0<a href=\"https:\/\/aka.ms\/InsideLookMay2020\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Inside Look<\/em><\/a>. Using ONNX Runtime has reduced training time by 45% on a cluster of 64 NVIDIA V100 Tensor Core GPUs in Azure Machine Learning.<\/li>\n\n\n\n<li>Bing uses large transformer models with more than 500 million parameters to train and service task-specific models. 
These models use ONNX Runtime to accelerate pre-training and fine-tuning throughput, cutting training time by 44%.<\/li>\n\n\n\n<li>Visual Studio uses ONNX Runtime to accelerate pre-training of a model similar to GPT-2 Medium, with more than 300 million parameters, to power code autocompletion in the\u00a0<a href=\"https:\/\/visualstudio.microsoft.com\/services\/intellicode\/\" target=\"_blank\" rel=\"noreferrer noopener\">IntelliCode<\/a>\u00a0feature.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2020\/05\/Accelerate-training-with-ONNX-Runtime-chart.png\" alt=\"Accelerate training with ONNX Runtime chart\" \/><\/figure>\n\n\n\n<p>To further accelerate training, we built custom kernels and graph optimizations to eliminate redundant operations. Additionally, ONNX Runtime enables larger batch sizes within the same 32 GB of memory on NVIDIA V100 Tensor Core GPUs. We tested ONNX Runtime by pre-training BERT-Large, reusing the training scripts and datasets from <a href=\"https:\/\/github.com\/NVIDIA\/DeepLearningExamples\/tree\/master\/PyTorch\/LanguageModeling\/BERT#pre-training-nvidia-dgx-1-with-32g\" target=\"_blank\" rel=\"noopener noreferrer\">benchmarking tests by NVIDIA<\/a>.<\/p>\n\n\n\n<p>The table below shows the relative training time improvements for pre-training the BERT-Large model on a 4-node NVIDIA DGX-2 cluster. The batch sizes reflect the Phase-1 and Phase-2 stages of the training experiment, using the datasets detailed in the NVIDIA <a href=\"https:\/\/github.com\/NVIDIA\/DeepLearningExamples\" target=\"_blank\" rel=\"noopener noreferrer\">repo<\/a>. 
The detailed test report is available <a href=\"https:\/\/aka.ms\/onnxruntime-technical-blog\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>4x DGX-2 <\/strong><br><strong>(64x V100 32GB)<\/strong><\/td><td><strong>PyTorch 1.5 <\/strong><br><strong>with <a href=\"https:\/\/ngc.nvidia.com\/registry\/nvidia-pytorch\" target=\"_blank\" rel=\"noopener noreferrer\">NGC 20.03-py3<\/a><\/strong><\/td><td><strong>PyTorch 1.5 <\/strong><br><strong>with ONNX Runtime<\/strong><\/td><td><strong>% Gain <\/strong><br><strong>with ONNX Runtime<\/strong><\/td><\/tr><tr><td>Phase 1 time (hours)<\/td><td>11.12<\/td><td>9.99<\/td><td>10.16%<\/td><\/tr><tr><td>Phase 2 time (hours)<\/td><td>6.62<\/td><td>5.77<\/td><td>12.84%<\/td><\/tr><tr><td>Total time (hours)<\/td><td>17.74<\/td><td>15.76<\/td><td>11.16%<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Developers can use the sample to pre-train BERT-Large with ONNX Runtime and fine-tune it on their own datasets as needed. We have also published a ready-to-use sample to start experiments in Azure Machine Learning. 
To use ONNX Runtime in custom environments, developers can build from the source code using the instructions published <a href=\"https:\/\/github.com\/microsoft\/onnxruntime-training-examples\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"onnx-runtime-for-inferencing\">ONNX Runtime for inferencing<\/h2>\n\n\n\n<p>We continue to improve inference acceleration with ONNX Runtime and are now <a href=\"https:\/\/aka.ms\/hf-ort-blog1\" target=\"_blank\" rel=\"noopener noreferrer\">partnering with Hugging Face<\/a> to make it easy to accelerate popular transformer models.<\/p>\n\n\n\n<p><em>We have seen gains from using ONNX Runtime with transformer models and are excited to release functionality that makes it easy to inference Hugging Face Transformer models with ONNX Runtime.<\/em><\/p>\n\n\n\n<p>Cl\u00e9ment Delangue, CEO of Hugging Face.<\/p>\n\n\n\n<p>Today, we are also releasing multiple updates to ONNX Runtime for inferencing. The new ONNX Runtime inference version 1.3 includes:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Compatibility with the new <a href=\"https:\/\/github.com\/onnx\/onnx\/issues\/2614\" target=\"_blank\" rel=\"noopener noreferrer\">ONNX v1.7<\/a> spec<\/li>\n\n\n\n<li><a href=\"https:\/\/aka.ms\/dml4ort\" target=\"_blank\" rel=\"noopener noreferrer\">DirectML execution provider<\/a> on the Windows 10 platform now generally available (GA)<\/li>\n\n\n\n<li>JavaScript APIs in preview, and Java APIs GA<\/li>\n\n\n\n<li>Python package for ARM64 CPUs on Ubuntu, CentOS, and variants<\/li>\n\n\n\n<li>Preview release of the OpenVINO Execution Provider 2.0 with support for the latest Intel\u00ae Distribution of OpenVINO\u2122 toolkit<\/li>\n\n\n\n<li>Preview release of integration with Vitis AI for acceleration on the Xilinx U250 FPGA platform<\/li>\n\n\n\n<li>Preview release of integration with the Rockchip RK1808 AIoT chipset<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"get-started\">Get Started<\/h2>\n\n\n\n<ul 
class=\"wp-block-list\">\n<li><a href=\"https:\/\/github.com\/microsoft\/onnxruntime\" target=\"_blank\" rel=\"noopener noreferrer\">ONNX Runtime<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/github.com\/microsoft\/onnxruntime-training-examples\" target=\"_blank\" rel=\"noopener noreferrer\">ONNX Runtime training samples<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/github.com\/microsoft\/onnxruntime\/tree\/master\/samples\" target=\"_blank\" rel=\"noopener noreferrer\">ONNX Runtime inferencing samples<\/a><\/li>\n<\/ul>\n\n\n\n<p>Questions or feedback? Please let us know in the comments below.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>ONNX Runtime is an open source project that is designed to accelerate machine learning across a wide range of frameworks, operating systems, and hardware platforms. It is used extensively in Microsoft products, like Office 365 and Bing, delivering over 20 billion inferences every day and up to 17 times faster inferencing.<\/p>\n","protected":false},"author":5562,"featured_media":95478,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"msxcm_post_with_no_image":false,"ep_exclude_from_search":false,"_classifai_error":"","_classifai_text_to_speech_error":"","footnotes":""},"post_tag":[2272,663],"content-type":[361],"topic":[2238,2241],"programming-languages":[],"coauthors":[654],"class_list":["post-80682","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","tag-microsoft","tag-onnx","content-type-project-updates","topic-ai-machine-learning","topic-cloud","review-flag-1-1593580432-963","review-flag-2-1593580437-411","review-flag-3-1593580442-169","review-flag-4-1593580448-609","review-flag-5-1593580453-725","review-flag-6-1593580457-852","review-flag-7-1593580463-151","review-flag-9-1593580473-997","review-flag-ga-1593580756-435","review-flag-machi-1680214156-53","review-flag-new-1593580248-669"],"yoast_head":"<!-- This site is optimized with the Yoast 
SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Announcing accelerated training with ONNX Runtime\u2014train models up to 45% faster | Microsoft Open Source Blog<\/title>\n<meta name=\"description\" content=\"Today we are introducing significant updates to ONNX Runtime, including the ability to train models up to 45% faster.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Announcing accelerated training with ONNX Runtime\u2014train models up to 45% faster | Microsoft Open Source Blog\" \/>\n<meta property=\"og:description\" content=\"Today we are introducing significant updates to ONNX Runtime, including the ability to train models up to 45% faster.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/\" \/>\n<meta property=\"og:site_name\" content=\"Microsoft Open Source Blog\" \/>\n<meta property=\"article:published_time\" content=\"2020-05-19T15:01:47+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-24T18:45:07+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1170\" \/>\n\t<meta property=\"og:image:height\" content=\"640\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Manash Goswami\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" 
content=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.png\" \/>\n<meta name=\"twitter:creator\" content=\"@OpenAtMicrosoft\" \/>\n<meta name=\"twitter:site\" content=\"@OpenAtMicrosoft\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Manash Goswami\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 min read\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/\"},\"author\":[{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/author\/manash-goswami\/\",\"@type\":\"Person\",\"@name\":\"Manash Goswami\"}],\"headline\":\"Announcing accelerated training with ONNX Runtime\u2014train models up to 45% 
faster\",\"datePublished\":\"2020-05-19T15:01:47+00:00\",\"dateModified\":\"2025-06-24T18:45:07+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/\"},\"wordCount\":707,\"commentCount\":1,\"publisher\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp\",\"keywords\":[\"Microsoft\",\"ONNX\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/\",\"name\":\"Announcing accelerated training with ONNX Runtime\u2014train models up to 45% faster | Microsoft Open Source Blog\",\"isPartOf\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp\",\"datePublished\":\"2020-05-19T15:01:47+00:00\",\"dateModified\":\"2025-06-24T18:45:07+00:00\",\"description\":\"Today we are 
introducing significant updates to ONNX Runtime, including the ability to train models up to 45% faster.\",\"breadcrumb\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#primaryimage\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp\",\"contentUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp\",\"width\":1170,\"height\":640},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/opensource.microsoft.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Announcing accelerated training with ONNX Runtime\u2014train models up to 45% faster\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#website\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/\",\"name\":\"Microsoft Open Source Blog\",\"description\":\"Open dialogue about openness at Microsoft \u2013 open source, standards, 
interoperability\",\"publisher\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/opensource.microsoft.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\",\"name\":\"Microsoft Open Source Blog\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png\",\"contentUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png\",\"width\":259,\"height\":194,\"caption\":\"Microsoft Open Source Blog\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/OpenAtMicrosoft\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Announcing accelerated training with ONNX Runtime\u2014train models up to 45% faster | Microsoft Open Source Blog","description":"Today we are introducing significant updates to ONNX Runtime, including the ability to train models up to 45% faster.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/","og_locale":"en_US","og_type":"article","og_title":"Announcing accelerated training with ONNX Runtime\u2014train models up to 45% faster | Microsoft Open Source Blog","og_description":"Today we are introducing significant updates to ONNX Runtime, including the ability to train models up to 45% faster.","og_url":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/","og_site_name":"Microsoft Open Source Blog","article_published_time":"2020-05-19T15:01:47+00:00","article_modified_time":"2025-06-24T18:45:07+00:00","og_image":[{"width":1170,"height":640,"url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.png","type":"image\/png"}],"author":"Manash Goswami","twitter_card":"summary_large_image","twitter_image":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.png","twitter_creator":"@OpenAtMicrosoft","twitter_site":"@OpenAtMicrosoft","twitter_misc":{"Written by":"Manash Goswami","Est. 
reading time":"3 min read"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#article","isPartOf":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/"},"author":[{"@id":"https:\/\/opensource.microsoft.com\/blog\/author\/manash-goswami\/","@type":"Person","@name":"Manash Goswami"}],"headline":"Announcing accelerated training with ONNX Runtime\u2014train models up to 45% faster","datePublished":"2020-05-19T15:01:47+00:00","dateModified":"2025-06-24T18:45:07+00:00","mainEntityOfPage":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/"},"wordCount":707,"commentCount":1,"publisher":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#organization"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#primaryimage"},"thumbnailUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp","keywords":["Microsoft","ONNX"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/","url":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/","name":"Announcing accelerated training with ONNX Runtime\u2014train models up to 45% faster | Microsoft Open Source 
Blog","isPartOf":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#primaryimage"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#primaryimage"},"thumbnailUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp","datePublished":"2020-05-19T15:01:47+00:00","dateModified":"2025-06-24T18:45:07+00:00","description":"Today we are introducing significant updates to ONNX Runtime, including the ability to train models up to 45% faster.","breadcrumb":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#primaryimage","url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp","contentUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp","width":1170,"height":640},{"@type":"BreadcrumbList","@id":"https:\/\/opensource.microsoft.com\/blog\/2020\/05\/19\/announcing-support-for-accelerated-training-with-onnx-runtime\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/opensource.microsoft.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Announcing accelerated training with ONNX Runtime\u2014train models up to 45% 
faster"}]},{"@type":"WebSite","@id":"https:\/\/opensource.microsoft.com\/blog\/#website","url":"https:\/\/opensource.microsoft.com\/blog\/","name":"Microsoft Open Source Blog","description":"Open dialogue about openness at Microsoft \u2013 open source, standards, interoperability","publisher":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/opensource.microsoft.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/opensource.microsoft.com\/blog\/#organization","name":"Microsoft Open Source Blog","url":"https:\/\/opensource.microsoft.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png","contentUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png","width":259,"height":194,"caption":"Microsoft Open Source Blog"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/OpenAtMicrosoft"]}]}},"msxcm_display_generated_audio":false,"msxcm_animated_featured_image":null,"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"Microsoft Open Source 
Blog","distributor_original_site_url":"https:\/\/opensource.microsoft.com\/blog","push-errors":false,"_links":{"self":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/80682","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/users\/5562"}],"replies":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/comments?post=80682"}],"version-history":[{"count":1,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/80682\/revisions"}],"predecessor-version":[{"id":97673,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/80682\/revisions\/97673"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/media\/95478"}],"wp:attachment":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/media?parent=80682"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/post_tag?post=80682"},{"taxonomy":"content-type","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/content-type?post=80682"},{"taxonomy":"topic","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/topic?post=80682"},{"taxonomy":"programming-languages","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/programming-languages?post=80682"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/coauthors?post=80682"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}