{"id":78324,"date":"2019-10-30T09:00:25","date_gmt":"2019-10-30T16:00:25","guid":{"rendered":"https:\/\/cloudblogs.microsoft.com\/opensource\/?p=78324"},"modified":"2025-06-27T04:50:19","modified_gmt":"2025-06-27T11:50:19","slug":"announcing-onnx-runtime-1-0","status":"publish","type":"post","link":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/","title":{"rendered":"Announcing ONNX Runtime 1.0"},"content":{"rendered":"\n<p>One year after ONNX Runtime\u2019s initial preview release, we\u2019re excited to announce v1.0 of the high-performance machine learning model inferencing engine. This release marks our commitment to API stability for the cross-platform, multi-language APIs, and introduces a breadth of performance optimizations, broad operator coverage, and pluggable accelerators to take advantage of new and exciting hardware developments.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"year-in-review\">Year in review<\/h2>\n\n\n\n<p>In its first year, ONNX Runtime was shipped to production for more than 60 models at Microsoft, with adoption from a range of consumer and enterprise products, including Office, Bing, Cognitive Services, Windows, Skype, Ads, and others. These models span from speech to image to text (including state-of-the-art models such as BERT), and ONNX Runtime has improved the performance of these models by an average of 2.5x over previous inferencing solutions.<\/p>\n\n\n\n<p>In addition to performance gains, the interoperable ONNX model format has also provided increased infrastructure flexibility, allowing teams to use a common runtime to scalably deploy a breadth of models to a range of hardware. Across Microsoft technologies, ONNX Runtime is serving hundreds of millions of devices and billions of requests daily.<\/p>\n\n\n\n<p>We also collaborated with a host of community partners to take advantage of ONNX Runtime\u2019s extensibility options to provide accelerators for a variety of hardware. 
With active contributions from Intel, NVIDIA, JD.com, NXP, and others, today ONNX Runtime can provide acceleration on the Intel\u00ae Distribution of the OpenVINO\u2122 Toolkit, the Deep Neural Network Library (DNNL, formerly Intel\u00ae MKL-DNN), nGraph, NVIDIA TensorRT, NN API for Android, the ARM Compute Library, and more.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-s-new-in-1-0\">What\u2019s new in 1.0<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"api-updates\">API updates<\/h3>\n\n\n\n<p>We\u2019ve made some changes to the C API for clarity of usage and introduced versioning to accommodate future updates.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>C APIs are ABI compatible and follow Semantic Versioning. Programs linked with the current version of the ONNX Runtime library will continue to work with subsequent releases without updating any client code or re-linking.<\/li>\n\n\n\n<li>We\u2019ve also enabled some new capabilities through the Python and C# APIs for feature parity, such as providing registration of execution providers in Python and setting additional run options in C#.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"onnx-1-6-compatibility-with-opset-11\">ONNX 1.6 compatibility with opset 11<\/h3>\n\n\n\n<p>Keeping up with the evolving ONNX spec remains a key focus for ONNX Runtime, and this update provides the most thorough operator coverage to date. 
ONNX Runtime supports all versions of ONNX since 1.2 with backward and forward compatibility to run a comprehensive variety of ONNX models.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"execution-provider-ep-updates\">Execution Provider (EP) updates<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>General Availability of the OpenVINO\u2122 EP for Intel\u00ae CPU, Intel\u00ae Integrated Graphics, <a href=\"https:\/\/software.intel.com\/en-us\/neural-compute-stick\">Intel\u00ae Neural Compute Stick 2<\/a>, and the <a href=\"https:\/\/software.intel.com\/en-us\/iot\/hardware\/vision-accelerator-movidius-vpu\">Intel\u00ae Vision Accelerator Design with Intel\u00ae Movidius\u2122 Myriad\u2122 VPU<\/a> powered by OpenVINO\u2122.<\/li>\n\n\n\n<li>nGraph EP support of new operators.<\/li>\n\n\n\n<li>TensorRT EP updated to the latest TensorRT 6.0 libraries.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"new-execution-providers-in-preview\">New Execution Providers in preview<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/aka.ms\/build-ort-nuphar\">NUPHAR<\/a> (<strong>N<\/strong>eural-network <strong>U<\/strong>nified <strong>P<\/strong>reprocessing <strong>H<\/strong>eterogeneous <strong>AR<\/strong>chitecture) is a TVM- and LLVM-based EP offering model acceleration by compiling nodes in subgraphs into optimized functions via JIT.<\/li>\n\n\n\n<li><a href=\"https:\/\/aka.ms\/build-ort-directml\">DirectML<\/a> is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows, providing GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers.<\/li>\n\n\n\n<li>Support for <a href=\"https:\/\/software.intel.com\/en-us\/iot\/hardware\/vision-accelerator-arria-10\">Intel\u00ae Vision Accelerator Design with Intel\u00ae Arria\u2122 10 FPGA<\/a> powered by OpenVINO\u2122.<\/li>\n\n\n\n<li>The <a href=\"https:\/\/aka.ms\/build-ort-acl\">ARM Compute Library (ACL)<\/a> Execution Provider targets ARM CPUs and GPUs for optimized execution of ONNX operators using the low-level libraries.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"performance-improvements\">Performance improvements<\/h3>\n\n\n\n<p>Beyond adding new Execution Providers for hardware acceleration, we\u2019ve also made a host of updates to minimize default CPU and GPU (CUDA) latency for inference computations.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"new-tooling\">New tooling<\/h3>\n\n\n\n<p>To facilitate production usage of ONNX Runtime, we\u2019ve released the complementary <a href=\"https:\/\/github.com\/microsoft\/OLive\">ONNX Go Live tool<\/a>, which automates the process of shipping ONNX models by combining model conversion, correctness tests, and performance tuning into a single pipeline as a series of Docker images. We\u2019ve also refreshed the <a href=\"https:\/\/github.com\/microsoft\/onnxruntime\/tree\/df472cbfbdf0ff47f1051639fe0216178638e6ab\/onnxruntime\/python\/tools\/quantization\">quantization tool<\/a> to support improved performance and accuracy for inferencing quantized models in ONNX Runtime, with updates for node fusions and bias quantization for convolutions.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"telemetry-collection\">Telemetry collection<\/h3>\n\n\n\n<p>We\u2019ve added component-level logging through <a href=\"https:\/\/docs.microsoft.com\/en-us\/windows\/win32\/tracelogging\/trace-logging-portal\">Trace Logging<\/a> to identify areas for improvement. You can read more about managing these settings and the data collected <a href=\"https:\/\/aka.ms\/ort-privacy\">here<\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"bug-fixes\">Bug fixes<\/h3>\n\n\n\n<p>This release contains many bug fixes identified during the past few months. As an actively growing project, we do expect bugs to be uncovered as the breadth of models expands. 
We continue striving for quality and are committed to actively resolving issues as they are uncovered. You can always report bugs on <a href=\"https:\/\/github.com\/microsoft\/onnxruntime\/issues\">GitHub<\/a>.<\/p>\n\n\n\n<p>For full release notes, please see <a href=\"https:\/\/aka.ms\/onnxruntime-release\">https:\/\/aka.ms\/onnxruntime-release<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-s-next\">What\u2019s next<\/h2>\n\n\n\n<p>ONNX Runtime 1.0 is a notable milestone, but this is just the beginning of our journey. We support the mission of open and interoperable AI and will continue working towards improving ONNX Runtime by making it even more performant, extensible, and easily deployable across a variety of architectures and devices between cloud and edge. You can find our detailed roadmap <a href=\"http:\/\/aka.ms\/onnxruntime-roadmap\">here<\/a>.<\/p>\n\n\n\n<p>We thank our community of contributors and look forward to even greater impact as we further the innovation and operationalization of ML in the field.<\/p>\n\n\n\n<p><a href=\"https:\/\/aka.ms\/onnxruntime\">Learn more<\/a> about ONNX Runtime, and join us on <a href=\"https:\/\/github.com\/microsoft\/onnxruntime\">GitHub<\/a>.<\/p>\n\n\n\n<p>Have feedback or questions about ONNX Runtime?&nbsp;<a href=\"https:\/\/github.com\/Microsoft\/onnxruntime\/issues\">File an issue<\/a> on GitHub and follow us on&nbsp;<a href=\"https:\/\/twitter.com\/onnxruntime\">Twitter<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>One year after ONNX Runtime\u2019s initial preview release, we\u2019re excited to announce v1.0 of the high-performance machine learning model inferencing 
engine.<\/p>\n","protected":false},"author":5562,"featured_media":95474,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"msxcm_post_with_no_image":false,"ep_exclude_from_search":false,"_classifai_error":"","_classifai_text_to_speech_error":"","footnotes":""},"post_tag":[2272,663],"content-type":[346,361],"topic":[2238],"programming-languages":[],"coauthors":[657],"class_list":["post-78324","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","tag-microsoft","tag-onnx","content-type-news","content-type-project-updates","topic-ai-machine-learning","review-flag-1593580428-734","review-flag-1-1593580432-963","review-flag-2-1593580437-411","review-flag-6-1593580457-852","review-flag-alway-1593580310-39","review-flag-gener-1593580751-533","review-flag-machi-1680214156-53","review-flag-ml-1680214110-748","review-flag-new-1593580248-669"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Announcing ONNX Runtime 1.0 | Microsoft Open Source Blog<\/title>\n<meta name=\"description\" content=\"One year after ONNX Runtime\u2019s initial preview release, we\u2019re excited to announce v1.0 of the high-performance ML model inferencing engine.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Announcing ONNX Runtime 1.0 | Microsoft Open Source Blog\" \/>\n<meta property=\"og:description\" content=\"One year after ONNX Runtime\u2019s initial preview release, we\u2019re excited to announce v1.0 of the high-performance ML model inferencing engine.\" \/>\n<meta property=\"og:url\" 
content=\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/\" \/>\n<meta property=\"og:site_name\" content=\"Microsoft Open Source Blog\" \/>\n<meta property=\"article:published_time\" content=\"2019-10-30T16:00:25+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-27T11:50:19+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO24-Azure-Manufacturing-008.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1170\" \/>\n\t<meta property=\"og:image:height\" content=\"640\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Faith Xu\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO24-Azure-Manufacturing-008.png\" \/>\n<meta name=\"twitter:creator\" content=\"@OpenAtMicrosoft\" \/>\n<meta name=\"twitter:site\" content=\"@OpenAtMicrosoft\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Faith Xu\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 min read\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/\"},\"author\":[{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/author\/faith-xu\/\",\"@type\":\"Person\",\"@name\":\"Faith Xu\"}],\"headline\":\"Announcing ONNX Runtime 1.0\",\"datePublished\":\"2019-10-30T16:00:25+00:00\",\"dateModified\":\"2025-06-27T11:50:19+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/\"},\"wordCount\":867,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO24-Azure-Manufacturing-008.webp\",\"keywords\":[\"Microsoft\",\"ONNX\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/\",\"name\":\"Announcing ONNX Runtime 1.0 | Microsoft Open Source 
Blog\",\"isPartOf\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO24-Azure-Manufacturing-008.webp\",\"datePublished\":\"2019-10-30T16:00:25+00:00\",\"dateModified\":\"2025-06-27T11:50:19+00:00\",\"description\":\"One year after ONNX Runtime\u2019s initial preview release, we\u2019re excited to announce v1.0 of the high-performance ML model inferencing engine.\",\"breadcrumb\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#primaryimage\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO24-Azure-Manufacturing-008.webp\",\"contentUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO24-Azure-Manufacturing-008.webp\",\"width\":1170,\"height\":640,\"caption\":\"Developer looking at code\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/opensource.microsoft.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Announcing ONNX Runtime 
1.0\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#website\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/\",\"name\":\"Microsoft Open Source Blog\",\"description\":\"Open dialogue about openness at Microsoft \u2013 open source, standards, interoperability\",\"publisher\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/opensource.microsoft.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\",\"name\":\"Microsoft Open Source Blog\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png\",\"contentUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png\",\"width\":259,\"height\":194,\"caption\":\"Microsoft Open Source Blog\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/OpenAtMicrosoft\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Announcing ONNX Runtime 1.0 | Microsoft Open Source Blog","description":"One year after ONNX Runtime\u2019s initial preview release, we\u2019re excited to announce v1.0 of the high-performance ML model inferencing engine.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/","og_locale":"en_US","og_type":"article","og_title":"Announcing ONNX Runtime 1.0 | Microsoft Open Source Blog","og_description":"One year after ONNX Runtime\u2019s initial preview release, we\u2019re excited to announce v1.0 of the high-performance ML model inferencing engine.","og_url":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/","og_site_name":"Microsoft Open Source Blog","article_published_time":"2019-10-30T16:00:25+00:00","article_modified_time":"2025-06-27T11:50:19+00:00","og_image":[{"width":1170,"height":640,"url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO24-Azure-Manufacturing-008.png","type":"image\/png"}],"author":"Faith Xu","twitter_card":"summary_large_image","twitter_image":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO24-Azure-Manufacturing-008.png","twitter_creator":"@OpenAtMicrosoft","twitter_site":"@OpenAtMicrosoft","twitter_misc":{"Written by":"Faith Xu","Est. 
reading time":"3 min read"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#article","isPartOf":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/"},"author":[{"@id":"https:\/\/opensource.microsoft.com\/blog\/author\/faith-xu\/","@type":"Person","@name":"Faith Xu"}],"headline":"Announcing ONNX Runtime 1.0","datePublished":"2019-10-30T16:00:25+00:00","dateModified":"2025-06-27T11:50:19+00:00","mainEntityOfPage":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/"},"wordCount":867,"commentCount":0,"publisher":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#organization"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#primaryimage"},"thumbnailUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO24-Azure-Manufacturing-008.webp","keywords":["Microsoft","ONNX"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/","url":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/","name":"Announcing ONNX Runtime 1.0 | Microsoft Open Source 
Blog","isPartOf":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#primaryimage"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#primaryimage"},"thumbnailUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO24-Azure-Manufacturing-008.webp","datePublished":"2019-10-30T16:00:25+00:00","dateModified":"2025-06-27T11:50:19+00:00","description":"One year after ONNX Runtime\u2019s initial preview release, we\u2019re excited to announce v1.0 of the high-performance ML model inferencing engine.","breadcrumb":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#primaryimage","url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO24-Azure-Manufacturing-008.webp","contentUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO24-Azure-Manufacturing-008.webp","width":1170,"height":640,"caption":"Developer looking at code"},{"@type":"BreadcrumbList","@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/10\/30\/announcing-onnx-runtime-1-0\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/opensource.microsoft.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Announcing ONNX Runtime 1.0"}]},{"@type":"WebSite","@id":"https:\/\/opensource.microsoft.com\/blog\/#website","url":"https:\/\/opensource.microsoft.com\/blog\/","name":"Microsoft Open Source Blog","description":"Open dialogue about openness at Microsoft \u2013 open 
source, standards, interoperability","publisher":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/opensource.microsoft.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/opensource.microsoft.com\/blog\/#organization","name":"Microsoft Open Source Blog","url":"https:\/\/opensource.microsoft.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png","contentUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png","width":259,"height":194,"caption":"Microsoft Open Source Blog"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/OpenAtMicrosoft"]}]}},"msxcm_display_generated_audio":false,"msxcm_animated_featured_image":null,"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"Microsoft Open Source 
Blog","distributor_original_site_url":"https:\/\/opensource.microsoft.com\/blog","push-errors":false,"_links":{"self":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/78324","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/users\/5562"}],"replies":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/comments?post=78324"}],"version-history":[{"count":1,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/78324\/revisions"}],"predecessor-version":[{"id":97720,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/78324\/revisions\/97720"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/media\/95474"}],"wp:attachment":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/media?parent=78324"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/post_tag?post=78324"},{"taxonomy":"content-type","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/content-type?post=78324"},{"taxonomy":"topic","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/topic?post=78324"},{"taxonomy":"programming-languages","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/programming-languages?post=78324"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/coauthors?post=78324"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}