{"id":77832,"date":"2019-08-26T09:00:30","date_gmt":"2019-08-26T16:00:30","guid":{"rendered":"https:\/\/cloudblogs.microsoft.com\/opensource\/?p=77832"},"modified":"2025-06-27T05:07:03","modified_gmt":"2025-06-27T12:07:03","slug":"announcing-onnx-runtime-0-5-edge-hardware-acceleration-support","status":"publish","type":"post","link":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/","title":{"rendered":"Now available: ONNX Runtime 0.5 with support for edge hardware acceleration"},"content":{"rendered":"\n<p>ONNX Runtime 0.5, the latest update to the open source high performance inference engine for ONNX models, is now available. This release improves the customer experience and supports inferencing optimizations across hardware platforms.<\/p>\n\n\n\n<p>Since the <a href=\"https:\/\/cloudblogs.microsoft.com\/opensource\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/\">last release in May<\/a>, Microsoft teams have deployed an additional 45+ models that leverage <a href=\"https:\/\/github.com\/microsoft\/onnxruntime\">ONNX Runtime<\/a> for inferencing. These models are used in key products and services that reach millions of customers.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"onnx-runtime-0-5-release-summary\">ONNX Runtime 0.5 Release Summary<\/h2>\n\n\n\n<p>Building on the momentum of our last release, new features in ONNX Runtime 0.5 are targeted towards improving ease of use for experimentation and deployment. 
This release includes:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A convenient C++ Inferencing API (in addition to existing C, C#, and Python APIs)<\/li>\n\n\n\n<li>A custom operator that supports running Python code even when official operators are missing (in preview)<\/li>\n\n\n\n<li>ONNX Runtime Server as a hosted application for serving ONNX models with HTTP and gRPC endpoints (in preview)<\/li>\n<\/ul>\n\n\n\n<p>With these additions, we advance the journey of making ONNX Runtime the preferred solution for operationalizing ML inferencing workflows.<\/p>\n\n\n\n<p>We are also excited by the community\u2019s continuous collaboration and enthusiasm, contributing multiple Execution Providers (EPs) to ONNX Runtime that complement the baseline CPU and NVIDIA CUDA-based GPU EPs. This furthers our mission to support choice and versatility in hardware compute targets.<\/p>\n\n\n\n<p>Whether your target is a PC, Mac, or Linux machine in the cloud, a local machine, or a lightweight or heavy-duty IoT device, ONNX Runtime strives to take advantage of available hardware capabilities to provide the best performance possible. Our continued collaboration allows ONNX Runtime to fully utilize available hardware acceleration on specialized devices and processors.<\/p>\n\n\n\n<p>The release of ONNX Runtime 0.5 introduces new support for the Intel\u00ae Distribution of OpenVINO\u2122 Toolkit, along with updates for MKL-DNN. It\u2019s further optimized and accelerated by NVIDIA CUDA and NVIDIA TensorRT GPU platforms from the cloud to the edge. Additional information can be found below.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"onnx-runtime-execution-providers\">ONNX Runtime Execution Providers<\/h2>\n\n\n\n<p>Hardware platforms use custom libraries to execute and accelerate computations used in neural network models. These libraries and interfaces are different for every platform. 
To accelerate a machine learning model, developers need to be aware of these differences and write endpoint-specific code.<\/p>\n\n\n\n<p>ONNX Runtime Execution Providers (EPs) allow you to run any ONNX model using a single set of inference APIs that provide access to the best hardware acceleration available. In simple terms, developers no longer need to worry about the nuances of hardware-specific custom libraries to accelerate their machine learning models.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"intel-distribution-of-the-openvino-toolkit\">Intel\u00ae Distribution of the OpenVINO\u2122 Toolkit<\/h3>\n\n\n\n<p>ONNX Runtime 0.5 includes the OpenVINO toolkit as an Execution Provider (EP), now in public preview. OpenVINO empowers developers to create applications and solutions that emulate human vision. Built to optimize execution of convolutional neural networks (CNNs), the toolkit enables developers to run ONNX model workloads across Intel\u00ae hardware, including accelerators such as VPUs, vision acceleration design cards, and the Neural Compute Stick 2 (NCS2). With the integration of the Intel OpenVINO EP and ONNX Runtime, developers can take advantage of the neural network execution capabilities across the breadth of Intel platforms using their ONNX models.<\/p>\n\n\n\n<p>This is a significant milestone for ONNX Runtime as it is the first public preview EP integration for inferencing on IoT edge devices such as the Intel-powered UP2 AI Vision Kit or IEI TANK AIoT platforms. Data scientists can use Azure Machine Learning service to train models using their framework of choice, export to ONNX, deploy via Azure IoT Edge, and run hardware-accelerated inferencing with ONNX Runtime. 
With the OpenVINO toolkit integration, we have now enabled inferencing across a variety of Intel edge devices.<\/p>\n\n\n\n<p><a href=\"http:\/\/aka.ms\/onnxruntime-openvino\">This notebook<\/a> provides a sample tutorial covering the end-to-end scenario for deploying models with ONNX Runtime and the OpenVINO EP, demonstrating how to train models in <a href=\"https:\/\/azure.microsoft.com\/services\/machine-learning-service\/\">Azure Machine Learning<\/a>, export to ONNX, and then deploy with <a href=\"https:\/\/azure.microsoft.com\/en-us\/services\/iot-edge\/\">Azure IoT Edge<\/a>. This tutorial has been validated on the Intel UP2 and IEI TANK reference platforms, which contain Intel\u2019s neural network accelerator chips.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"nvidia-jetson-nano\">NVIDIA Jetson Nano<\/h3>\n\n\n\n<p>As part of our continued partnership with NVIDIA, we worked with NVIDIA\u2019s Jetson team to build a reference solution for deploying trained models from Azure Machine Learning to the NVIDIA Jetson Nano with Azure IoT Edge.<\/p>\n\n\n\n<p>Today, we are releasing a <a href=\"http:\/\/aka.ms\/onnxruntime-arm64\">new tutorial<\/a> for developers to deploy ONNX models on the NVIDIA Jetson Nano. The Jetson Nano is the latest platform in the AI-at-the-edge family of Jetson products, offering low power consumption and high compute performance for IoT edge devices.<\/p>\n\n\n\n<p>This tutorial is a reference implementation for IoT solution developers looking to deploy AI workloads to the edge using the Azure cloud and NVIDIA\u2019s GPU acceleration capabilities. In the tutorial, the captured data is processed on-device and only the inference results are sent to <a href=\"https:\/\/azure.microsoft.com\/services\/storage\/blobs\/\">Azure Blob Storage<\/a> for visualization in Power BI. 
This approach expedites business outcomes by taking advantage of on-device compute capabilities to execute AI models.<\/p>\n\n\n\n<p>The tutorial splits the device\u2019s processing logic into three separate containers to enable modular customizations for user specific scenarios. To customize, users can train models in Azure Machine Learning service, export to ONNX, and change the reference code in the tutorial to include the trained model. ONNX Runtime executes the model in the inference container by taking advantage of the TensorRT libraries and provides significant inference capabilities at the edge.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"additional-resources\">Additional Resources<\/h2>\n\n\n\n<p>To learn more about ONNX Runtime Execution Providers, watch <a href=\"https:\/\/aka.ms\/iotshow\/160\">this video<\/a>.<\/p>\n\n\n\n<p>Get started on <a href=\"https:\/\/github.com\/microsoft\/onnxruntime\">GitHub<\/a>.<\/p>\n\n\n\n<p>Have feedback or questions about ONNX Runtime? <a href=\"https:\/\/github.com\/Microsoft\/onnxruntime\/issues\">File an issue<\/a> on GitHub, and follow us on <a href=\"https:\/\/twitter.com\/onnxruntime\">Twitter<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>ONNX Runtime 0.5, the latest update to the open source high performance inference engine for ONNX models, is now available. This release improves the customer experience and supports inferencing optimizations across hardware platforms. 
Since the last release in May, Microsoft teams have deployed an additional 45+ models that leverage ONNX Runtime for inferencing.<\/p>\n","protected":false},"author":5562,"featured_media":95478,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"msxcm_post_with_no_image":false,"ep_exclude_from_search":false,"_classifai_error":"","_classifai_text_to_speech_error":"","footnotes":""},"post_tag":[2272,663],"content-type":[346],"topic":[2238,2246],"programming-languages":[],"coauthors":[654,657],"class_list":["post-77832","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","tag-microsoft","tag-onnx","content-type-news","topic-ai-machine-learning","topic-iot","review-flag-1593580419-521","review-flag-5-1593580453-725","review-flag-iot-1680213327-385","review-flag-lever-1593580265-989","review-flag-machi-1680214156-53","review-flag-ml-1680214110-748","review-flag-new-1593580248-669","review-flag-partn-1593580279-545","review-flag-publi-1593580761-124"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Now available: ONNX Runtime 0.5 with support for edge hardware acceleration<\/title>\n<meta name=\"description\" content=\"ONNX Runtime is the open source high performance inference engine for ONNX models. 
This update supports inferencing optimizations across hardware platforms.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Now available: ONNX Runtime 0.5 with support for edge hardware acceleration\" \/>\n<meta property=\"og:description\" content=\"ONNX Runtime is the open source high performance inference engine for ONNX models. This update supports inferencing optimizations across hardware platforms.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/\" \/>\n<meta property=\"og:site_name\" content=\"Microsoft Open Source Blog\" \/>\n<meta property=\"article:published_time\" content=\"2019-08-26T16:00:30+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-27T12:07:03+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1170\" \/>\n\t<meta property=\"og:image:height\" content=\"640\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Manash Goswami, Faith Xu\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/ONNX_Runtime_logo_dark_TW.png\" \/>\n<meta name=\"twitter:creator\" content=\"@OpenAtMicrosoft\" \/>\n<meta name=\"twitter:site\" content=\"@OpenAtMicrosoft\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta 
name=\"twitter:data1\" content=\"Manash Goswami, Faith Xu\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 min read\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/\"},\"author\":[{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/author\/manash-goswami\/\",\"@type\":\"Person\",\"@name\":\"Manash Goswami\"},{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/author\/faith-xu\/\",\"@type\":\"Person\",\"@name\":\"Faith Xu\"}],\"headline\":\"Now available: ONNX Runtime 0.5 with support for edge hardware acceleration\",\"datePublished\":\"2019-08-26T16:00:30+00:00\",\"dateModified\":\"2025-06-27T12:07:03+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/\"},\"wordCount\":907,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp\",\"keywords\":[\"Microsoft\",\"ONNX\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/ann
ouncing-onnx-runtime-0-5-edge-hardware-acceleration-support\/\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/\",\"name\":\"Now available: ONNX Runtime 0.5 with support for edge hardware acceleration\",\"isPartOf\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp\",\"datePublished\":\"2019-08-26T16:00:30+00:00\",\"dateModified\":\"2025-06-27T12:07:03+00:00\",\"description\":\"ONNX Runtime is the open source high performance inference engine for ONNX models. 
This update supports inferencing optimizations across hardware platforms.\",\"breadcrumb\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#primaryimage\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp\",\"contentUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp\",\"width\":1170,\"height\":640},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/opensource.microsoft.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Now available: ONNX Runtime 0.5 with support for edge hardware acceleration\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#website\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/\",\"name\":\"Microsoft Open Source Blog\",\"description\":\"Open dialogue about openness at Microsoft \u2013 open source, standards, 
interoperability\",\"publisher\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/opensource.microsoft.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\",\"name\":\"Microsoft Open Source Blog\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png\",\"contentUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png\",\"width\":259,\"height\":194,\"caption\":\"Microsoft Open Source Blog\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/OpenAtMicrosoft\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Now available: ONNX Runtime 0.5 with support for edge hardware acceleration","description":"ONNX Runtime is the open source high performance inference engine for ONNX models. 
This update supports inferencing optimizations across hardware platforms.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/","og_locale":"en_US","og_type":"article","og_title":"Now available: ONNX Runtime 0.5 with support for edge hardware acceleration","og_description":"ONNX Runtime is the open source high performance inference engine for ONNX models. This update supports inferencing optimizations across hardware platforms.","og_url":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/","og_site_name":"Microsoft Open Source Blog","article_published_time":"2019-08-26T16:00:30+00:00","article_modified_time":"2025-06-27T12:07:03+00:00","og_image":[{"width":1170,"height":640,"url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.png","type":"image\/png"}],"author":"Manash Goswami, Faith Xu","twitter_card":"summary_large_image","twitter_image":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/ONNX_Runtime_logo_dark_TW.png","twitter_creator":"@OpenAtMicrosoft","twitter_site":"@OpenAtMicrosoft","twitter_misc":{"Written by":"Manash Goswami, Faith Xu","Est. 
reading time":"4 min read"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#article","isPartOf":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/"},"author":[{"@id":"https:\/\/opensource.microsoft.com\/blog\/author\/manash-goswami\/","@type":"Person","@name":"Manash Goswami"},{"@id":"https:\/\/opensource.microsoft.com\/blog\/author\/faith-xu\/","@type":"Person","@name":"Faith Xu"}],"headline":"Now available: ONNX Runtime 0.5 with support for edge hardware acceleration","datePublished":"2019-08-26T16:00:30+00:00","dateModified":"2025-06-27T12:07:03+00:00","mainEntityOfPage":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/"},"wordCount":907,"commentCount":0,"publisher":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#organization"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#primaryimage"},"thumbnailUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp","keywords":["Microsoft","ONNX"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/","url":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/","name":"Now available: ONNX Runtime 0.5 with support for edge hardware 
acceleration","isPartOf":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#primaryimage"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#primaryimage"},"thumbnailUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp","datePublished":"2019-08-26T16:00:30+00:00","dateModified":"2025-06-27T12:07:03+00:00","description":"ONNX Runtime is the open source high performance inference engine for ONNX models. This update supports inferencing optimizations across hardware platforms.","breadcrumb":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#primaryimage","url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp","contentUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/MSC17_catapult_009.webp","width":1170,"height":640},{"@type":"BreadcrumbList","@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/08\/26\/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/opensource.microsoft.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Now available: ONNX Runtime 0.5 with support for edge hardware 
acceleration"}]},{"@type":"WebSite","@id":"https:\/\/opensource.microsoft.com\/blog\/#website","url":"https:\/\/opensource.microsoft.com\/blog\/","name":"Microsoft Open Source Blog","description":"Open dialogue about openness at Microsoft \u2013 open source, standards, interoperability","publisher":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/opensource.microsoft.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/opensource.microsoft.com\/blog\/#organization","name":"Microsoft Open Source Blog","url":"https:\/\/opensource.microsoft.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png","contentUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png","width":259,"height":194,"caption":"Microsoft Open Source Blog"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/OpenAtMicrosoft"]}]}},"msxcm_display_generated_audio":false,"msxcm_animated_featured_image":null,"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"Microsoft Open Source 
Blog","distributor_original_site_url":"https:\/\/opensource.microsoft.com\/blog","push-errors":false,"_links":{"self":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/77832","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/users\/5562"}],"replies":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/comments?post=77832"}],"version-history":[{"count":1,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/77832\/revisions"}],"predecessor-version":[{"id":97732,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/77832\/revisions\/97732"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/media\/95478"}],"wp:attachment":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/media?parent=77832"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/post_tag?post=77832"},{"taxonomy":"content-type","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/content-type?post=77832"},{"taxonomy":"topic","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/topic?post=77832"},{"taxonomy":"programming-languages","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/programming-languages?post=77832"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/coauthors?post=77832"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}