{"id":77121,"date":"2019-05-22T14:00:08","date_gmt":"2019-05-22T21:00:08","guid":{"rendered":""},"modified":"2025-06-27T06:56:36","modified_gmt":"2025-06-27T13:56:36","slug":"onnx-runtime-machine-learning-inferencing-0-4-release","status":"publish","type":"post","link":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/","title":{"rendered":"ONNX Runtime: a one-stop shop for machine learning inferencing"},"content":{"rendered":"\n<p>Organizations that want to leverage AI at scale must overcome a number of challenges around model training and model inferencing. Today, there is a plethora of tools and frameworks that accelerate model training, but inferencing remains a tough nut to crack due to the variety of environments that models need to run in. For example, the same AI model might need to be inferenced on cloud GPUs as well as desktop CPUs and even edge devices. Optimizing a single model for so many different environments takes time, let alone hundreds or thousands of models.<\/p>\n\n\n\n<p>In this blog post, we\u2019ll show you how Microsoft tackled this challenge internally and how you can leverage the latest version of the same technology.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-is-onnx-runtime\">What is ONNX Runtime?<\/h2>\n\n\n\n<p>At Microsoft, we tackled our inferencing challenge by creating <a href=\"https:\/\/github.com\/microsoft\/onnxruntime\">ONNX Runtime<\/a>. Based on the <a href=\"https:\/\/onnx.ai\">ONNX<\/a> model format we co-developed with Facebook, ONNX Runtime is a single inference engine that\u2019s highly performant across multiple platforms and hardware. 
Using it is simple:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Train a model with any popular framework such as TensorFlow and PyTorch<\/li>\n\n\n\n<li>Export or convert the model to ONNX format<\/li>\n\n\n\n<li>Inference efficiently across multiple platforms and hardware (Windows, Linux, and Mac on both CPUs and GPUs) with ONNX Runtime<\/li>\n<\/ol>\n\n\n\n<p>Today, ONNX Runtime is used in millions of Windows devices and powers core models across Office, Bing, and Azure where an average of 2x performance gains have been seen. Here are a few examples:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>With ONNX Runtime, the Office team saw a <strong>14.6x reduction in latency <\/strong>for a grammar checking model that handles thousands of queries per minute<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/05\/new-picture-1024x494.png\" alt=\"ONNX performance chart\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure Cognitive Services saw a <strong>3.5x reduction in latency <\/strong>for an optical character recognition (OCR) model<\/li>\n\n\n\n<li>Bing QnA saw a <strong>2.8x reduction in latency<\/strong> for a model that generates answers to questions<\/li>\n\n\n\n<li>Bing Visual Search saw a <strong>2x reduction in latency<\/strong> for a model that helps identify similar images<\/li>\n<\/ul>\n\n\n\n<p>Having seen significant gains internally, we <a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/onnx-runtime-is-now-open-source\/\">open sourced ONNX Runtime<\/a> in December 2018. 
We also began working with leading hardware partners to integrate their technology into ONNX Runtime to achieve even greater performance.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"onnx-runtime-0-4-integration-with-intel-and-nvidia-accelerators\">ONNX Runtime 0.4 &#8211; integration with Intel and NVIDIA accelerators<\/h2>\n\n\n\n<p>Six months after open sourcing, we are excited to release <a href=\"https:\/\/github.com\/Microsoft\/onnxruntime\/releases\/tag\/v0.4.0\">ONNX Runtime 0.4<\/a>, which includes the general availability of the NVIDIA TensorRT execution provider and a public preview of the Intel nGraph execution provider. With this release, ONNX models can be executed on GPUs and CPUs while leveraging the respective neural network acceleration capabilities of these platforms. Developers can write their application once using the ONNX Runtime APIs and choose the specific ONNX Runtime base container image to build their deployment image for the targeted hardware. Microsoft will publish pre-built Docker base images for developers to integrate with their ONNX model and application code and deploy in Azure Kubernetes Service (AKS). This gives developers the freedom to choose among different hardware platforms for their inferencing environment.<\/p>\n\n\n\n<p>In addition, ONNX Runtime 0.4 is fully compatible with ONNX 1.5 and backwards compatible with previous versions, making it the most complete inference engine available for ONNX models. With the operators newly added in ONNX 1.5, ONNX Runtime can now run important object detection models such as YOLO v3 and SSD (available in the <a href=\"https:\/\/github.com\/onnx\/models\">ONNX Model Zoo<\/a>).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"get-started-now\">Get started now<\/h2>\n\n\n\n<p>To enable easy use of ONNX Runtime with these execution providers, we are releasing <a href=\"https:\/\/aka.ms\/onnxrt-aml\">Jupyter Notebook tutorials<\/a> to help developers get started. 
One of these notebooks uses the FER+ emotion detection model from the ONNX Model Zoo to build a container image using the ONNX Runtime base image for TensorRT. This image is then deployed in AKS using <a href=\"https:\/\/azure.microsoft.com\/en-us\/services\/machine-learning-service\/\">Azure Machine Learning service<\/a> to execute the inferencing within a container.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"looking-ahead\">Looking ahead<\/h2>\n\n\n\n<p>The release of ONNX Runtime 0.4 with execution providers for Intel and NVIDIA accelerators marks another milestone in our effort to create an open ecosystem for AI. We will continue to work with hardware partners to integrate their latest technology into ONNX Runtime to make it the most complete inference engine. Specifically, we are working on optimizations that target mobile and IoT edge devices, as well as support for new hardware categories such as FPGAs. For instance, we are releasing a private preview of the Intel OpenVINO execution provider, allowing ONNX models to be executed across Intel CPUs, integrated GPUs, FPGAs, and VPUs for edge scenarios. A public preview of this execution provider is coming soon. Ultimately, our goal is to make it easier for developers using any framework to achieve high-performance inferencing on any hardware.<\/p>\n\n\n\n<p>Have feedback or questions about ONNX Runtime?&nbsp;<a href=\"https:\/\/github.com\/Microsoft\/onnxruntime\/issues\">File an issue<\/a>&nbsp;on GitHub and follow us on&nbsp;<a href=\"https:\/\/twitter.com\/onnxruntime\">Twitter<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Organizations that want to leverage AI at scale must overcome a number of challenges around model training and model inferencing. 
Today, there are a plethora of tools and frameworks that accelerate model training but inferencing remains a tough nut due to the variety of environments that models need to run in.<\/p>\n","protected":false},"author":5562,"featured_media":95470,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"msxcm_post_with_no_image":false,"ep_exclude_from_search":false,"_classifai_error":"","_classifai_text_to_speech_error":"","footnotes":""},"post_tag":[2272,663],"content-type":[346,361],"topic":[2238],"programming-languages":[],"coauthors":[612],"class_list":["post-77121","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","tag-microsoft","tag-onnx","content-type-news","content-type-project-updates","topic-ai-machine-learning","review-flag-1593580419-521","review-flag-1-1593580432-963","review-flag-2-1593580437-411","review-flag-3-1593580442-169","review-flag-4-1593580448-609","review-flag-5-1593580453-725","review-flag-6-1593580457-852","review-flag-gener-1593580751-533","review-flag-integ-1593580288-449","review-flag-iot-1680213327-385","review-flag-lever-1593580265-989","review-flag-machi-1680214156-53","review-flag-new-1593580248-669","review-flag-priva-1593580766-136","review-flag-publi-1593580761-124"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>ONNX Runtime: a one-stop shop for machine learning inferencing<\/title>\n<meta name=\"description\" content=\"Today we&#039;re introducing the open source ONNX Runtime 0.4, including integration with Intel and NVIDIA accelerators.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" 
\/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"ONNX Runtime: a one-stop shop for machine learning inferencing\" \/>\n<meta property=\"og:description\" content=\"Today we&#039;re introducing the open source ONNX Runtime 0.4, including integration with Intel and NVIDIA accelerators.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/\" \/>\n<meta property=\"og:site_name\" content=\"Microsoft Open Source Blog\" \/>\n<meta property=\"article:published_time\" content=\"2019-05-22T21:00:08+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-27T13:56:36+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1170\" \/>\n\t<meta property=\"og:image:height\" content=\"640\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Prasanth Pulavarthi\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@OpenAtMicrosoft\" \/>\n<meta name=\"twitter:site\" content=\"@OpenAtMicrosoft\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Prasanth Pulavarthi\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 min read\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/\"},\"author\":[{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/author\/prasanth-pulavarthi\/\",\"@type\":\"Person\",\"@name\":\"Prasanth Pulavarthi\"}],\"headline\":\"ONNX Runtime: a one-stop shop for machine learning inferencing\",\"datePublished\":\"2019-05-22T21:00:08+00:00\",\"dateModified\":\"2025-06-27T13:56:36+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/\"},\"wordCount\":760,\"commentCount\":2,\"publisher\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp\",\"keywords\":[\"Microsoft\",\"ONNX\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/\",\"name\":\"ONNX Runtime: a one-stop shop for machine learning 
inferencing\",\"isPartOf\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp\",\"datePublished\":\"2019-05-22T21:00:08+00:00\",\"dateModified\":\"2025-06-27T13:56:36+00:00\",\"description\":\"Today we're introducing the open source ONNX Runtime 0.4, including integration with Intel and NVIDIA accelerators.\",\"breadcrumb\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#primaryimage\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp\",\"contentUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp\",\"width\":1170,\"height\":640},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/opensource.microsoft.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"ONNX Runtime: a one-stop shop for machine learning 
inferencing\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#website\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/\",\"name\":\"Microsoft Open Source Blog\",\"description\":\"Open dialogue about openness at Microsoft \u2013 open source, standards, interoperability\",\"publisher\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/opensource.microsoft.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\",\"name\":\"Microsoft Open Source Blog\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png\",\"contentUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png\",\"width\":259,\"height\":194,\"caption\":\"Microsoft Open Source Blog\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/OpenAtMicrosoft\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"ONNX Runtime: a one-stop shop for machine learning inferencing","description":"Today we're introducing the open source ONNX Runtime 0.4, including integration with Intel and NVIDIA accelerators.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/","og_locale":"en_US","og_type":"article","og_title":"ONNX Runtime: a one-stop shop for machine learning inferencing","og_description":"Today we're introducing the open source ONNX Runtime 0.4, including integration with Intel and NVIDIA accelerators.","og_url":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/","og_site_name":"Microsoft Open Source Blog","article_published_time":"2019-05-22T21:00:08+00:00","article_modified_time":"2025-06-27T13:56:36+00:00","og_image":[{"width":1170,"height":640,"url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.png","type":"image\/png"}],"author":"Prasanth Pulavarthi","twitter_card":"summary_large_image","twitter_creator":"@OpenAtMicrosoft","twitter_site":"@OpenAtMicrosoft","twitter_misc":{"Written by":"Prasanth Pulavarthi","Est. 
reading time":"3 min read"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#article","isPartOf":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/"},"author":[{"@id":"https:\/\/opensource.microsoft.com\/blog\/author\/prasanth-pulavarthi\/","@type":"Person","@name":"Prasanth Pulavarthi"}],"headline":"ONNX Runtime: a one-stop shop for machine learning inferencing","datePublished":"2019-05-22T21:00:08+00:00","dateModified":"2025-06-27T13:56:36+00:00","mainEntityOfPage":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/"},"wordCount":760,"commentCount":2,"publisher":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#organization"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#primaryimage"},"thumbnailUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp","keywords":["Microsoft","ONNX"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/","url":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/","name":"ONNX Runtime: a one-stop shop for machine learning 
inferencing","isPartOf":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#primaryimage"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#primaryimage"},"thumbnailUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp","datePublished":"2019-05-22T21:00:08+00:00","dateModified":"2025-06-27T13:56:36+00:00","description":"Today we're introducing the open source ONNX Runtime 0.4, including integration with Intel and NVIDIA accelerators.","breadcrumb":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#primaryimage","url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp","contentUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp","width":1170,"height":640},{"@type":"BreadcrumbList","@id":"https:\/\/opensource.microsoft.com\/blog\/2019\/05\/22\/onnx-runtime-machine-learning-inferencing-0-4-release\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/opensource.microsoft.com\/blog\/"},{"@type":"ListItem","position":2,"name":"ONNX Runtime: a one-stop shop for machine learning 
inferencing"}]},{"@type":"WebSite","@id":"https:\/\/opensource.microsoft.com\/blog\/#website","url":"https:\/\/opensource.microsoft.com\/blog\/","name":"Microsoft Open Source Blog","description":"Open dialogue about openness at Microsoft \u2013 open source, standards, interoperability","publisher":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/opensource.microsoft.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/opensource.microsoft.com\/blog\/#organization","name":"Microsoft Open Source Blog","url":"https:\/\/opensource.microsoft.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png","contentUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png","width":259,"height":194,"caption":"Microsoft Open Source Blog"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/OpenAtMicrosoft"]}]}},"msxcm_display_generated_audio":false,"msxcm_animated_featured_image":null,"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"Microsoft Open Source 
Blog","distributor_original_site_url":"https:\/\/opensource.microsoft.com\/blog","push-errors":false,"_links":{"self":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/77121","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/users\/5562"}],"replies":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/comments?post=77121"}],"version-history":[{"count":1,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/77121\/revisions"}],"predecessor-version":[{"id":97753,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/77121\/revisions\/97753"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/media\/95470"}],"wp:attachment":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/media?parent=77121"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/post_tag?post=77121"},{"taxonomy":"content-type","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/content-type?post=77121"},{"taxonomy":"topic","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/topic?post=77121"},{"taxonomy":"programming-languages","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/programming-languages?post=77121"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/coauthors?post=77121"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}