{"id":94513,"date":"2023-06-26T09:00:00","date_gmt":"2023-06-26T16:00:00","guid":{"rendered":""},"modified":"2023-09-01T11:37:51","modified_gmt":"2023-09-01T18:37:51","slug":"olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization","status":"publish","type":"post","link":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/","title":{"rendered":"Olive: A user-friendly toolchain for hardware-aware model optimization"},"content":{"rendered":"\n<p>Hardware-aware model optimization is the process of optimizing machine learning models to make the most efficient use of specific hardware architectures\u2014like CPUs, GPUs, and neural processing units (NPUs)\u2014to meet production requirements such as accuracy, latency, and throughput. However, it can be challenging. Firstly, it requires expertise in various independent hardware vendor (IHV) toolkits to handle the unique characteristics and optimizations needed for each hardware architecture. Secondly, aggressive optimizations can degrade model quality, so balancing accuracy and efficiency within hardware constraints must be managed carefully. Additionally, the rapidly evolving hardware landscape necessitates constant updates and adaptations. &nbsp;<\/p>\n\n\n\n<p>To alleviate this burden, <a href=\"https:\/\/github.com\/microsoft\/Olive\" target=\"_blank\" rel=\"noreferrer noopener\">we introduce Olive<\/a>, an easy-to-use toolchain for optimizing models with hardware awareness. With Olive, you don&#8217;t need to be an expert to explore diverse hardware optimization toolchains. 
It handles the complex optimization process for you, ensuring you achieve the best possible performance without the hassle.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Easing model optimization across hardware with Olive<\/h2>\n\n\n\n<p>As a hardware-aware model optimization solution, Olive composes effective techniques in model compression, optimization, and compilation. As shown in Figure 1, for a given model and target hardware, Olive intelligently tunes the most appropriate optimization techniques to generate highly efficient models for inference. Currently, a range of optimization techniques is supported in Olive, including model quantization tuning, transformer optimization, ONNX Runtime performance tuning, and more. Moreover, Olive considers various constraints such as accuracy and latency to ensure the optimized models meet your specific requirements. Whether you&#8217;re working on cloud-based applications or edge devices, Olive streamlines this optimization process, enabling you to optimize your models effortlessly and effectively. It works with <a href=\"https:\/\/onnxruntime.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">ONNX Runtime<\/a>, a high-performance inference engine, as an end-to-end inference optimization solution.&nbsp;&nbsp;<\/p>\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-1024x559.webp\" alt=\"Olive architecture: taking an input model and production requirements, Olive tunes optimization techniques to output deployment-ready model packages. 
\" class=\"wp-image-94552 webp-format\" srcset=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-1024x559.png 1024w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-300x164.png 300w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-768x419.png 768w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-1536x839.png 1536w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-2048x1119.png 2048w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-800x437.png 800w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-400x218.png 400w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-450x246.png 450w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-650x355.webp 650w\" data-orig-src=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-1024x559.png\" data-orig-srcset=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-1024x559.png 1024w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-300x164.png 300w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-768x419.png 768w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-1536x839.png 1536w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-2048x1119.png 2048w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-800x437.png 800w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-400x218.png 400w, 
https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-450x246.png 450w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-Architecture-650x355.png 650w\"><\/figure>\n\n\n\n<p><em>Figure 1: Olive architecture<\/em>&nbsp;<\/p>\n\n\n\n<p>By providing a configuration file specifying your model and scenario-specific information, Olive tunes optimization techniques to generate the optimal model(s) on the Pareto frontier based on the metrics goal you set. When working with the configuration file, you typically need to provide information about the input model\u2014including input names, shapes, and the location where the model is stored. Moreover, you specify your performance preferences, such as desired latency, accuracy, or other relevant factors. In addition to this information, you can choose from a <a href=\"https:\/\/microsoft.github.io\/Olive\/features\/model_transformations_and_optimizations.html\" target=\"_blank\" rel=\"noreferrer noopener\">range of optimizations<\/a> provided by Olive that you wish to apply to your specific hardware target. You also have the option to define the target hardware and utilize any additional features offered by Olive. By utilizing the configuration file, all you need to do is execute a simple command, eliminating the need for any Python code. \u00a0<\/p>\n\n\n\n<p><code><strong>python -m olive.workflows.run --config my_model_acceleration_description.json<\/strong>&nbsp;<\/code><\/p>\n\n\n\n<p>Here are <a href=\"https:\/\/microsoft.github.io\/Olive\/examples.html\" target=\"_blank\" rel=\"noreferrer noopener\">comprehensive examples<\/a> that demonstrate the process of optimizing models with Olive for various hardware targets. 
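For orientation, here is a minimal sketch of what such a configuration file might look like. The field names and pass names below are illustrative assumptions only, not the authoritative schema; consult the Olive documentation linked above for the actual options and available passes.

```json
{
  "input_model": {
    "type": "PyTorchModel",
    "config": { "model_path": "my_model.pt" }
  },
  "passes": {
    "transformer_optimization": { "type": "OrtTransformersOptimization" },
    "quantization": { "type": "OnnxQuantization" }
  },
  "engine": { "target": "cpu" }
}
```

With a file like this saved as `my_model_acceleration_description.json`, the single command shown above is all that is needed to run the workflow.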
During the <a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/0ea15726-1273-4a7c-a71a-efc635172a3b?source=\/speakers\/e2acbbd7-85a7-4c87-bbb8-5658cb17475d\" target=\"_blank\" rel=\"noreferrer noopener\">Microsoft Build 2023<\/a> conference, we showcased how Olive and the ONNX Runtime (ORT) optimize <a href=\"https:\/\/github.com\/microsoft\/Olive\/tree\/main\/examples\/whisper\" target=\"_blank\" rel=\"noreferrer noopener\">a Whisper model<\/a>, demonstrating a remarkable reduction in end-to-end latency by over two times on an Intel Xeon device and a decrease in model size by 2.25 times, as shown in Figure 2.&nbsp;<\/p>\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-1024x467.webp\" alt=\"The performance and model size benchmarks for Whisper models using Olive reveal significant improvements. Our solution showcases a reduction of over 2 times in end-to-end latency and a 2.25 times reduction in model size.\" class=\"wp-image-94554 webp-format\" srcset=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-1024x467.png 1024w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-300x137.png 300w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-768x351.png 768w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-1536x701.png 1536w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-2048x935.png 2048w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-800x365.png 800w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-400x183.png 400w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-450x205.png 450w, 
https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-650x297.webp 650w\" data-orig-src=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-1024x467.png\" data-orig-srcset=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-1024x467.png 1024w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-300x137.png 300w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-768x351.png 768w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-1536x701.png 1536w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-2048x935.png 2048w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-800x365.png 800w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-400x183.png 400w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-450x205.png 450w, https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2023\/06\/Olive-for-whisper-650x297.png 650w\"><\/figure>\n\n\n\n<p><em>Figure 2: Whisper model optimization with Olive and ORT<\/em>&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Intel and AMD optimization innovations in Olive&nbsp;<\/h2>\n\n\n\n<p>In addition to simplifying the model optimization experience for model developers, Olive also provides a unified framework that allows industry experts to plug in their own optimization innovations as optimization passes into Olive, resulting in a comprehensive and ready-to-use solution. Intel and AMD have integrated their optimization innovations in Olive. 
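As described earlier, Olive searches candidate optimized models and keeps those on the Pareto frontier of your metric goals (for example, latency versus accuracy). The toy, self-contained sketch below illustrates that selection step conceptually; it is not Olive's actual implementation, and the candidate names and numbers are made up for illustration.

```python
# Toy Pareto-frontier selection over candidate optimized models.
# Each candidate is (name, latency_ms, accuracy): lower latency and higher
# accuracy are better. A candidate survives if no other candidate is at
# least as good on both metrics and strictly better on at least one.

def pareto_frontier(candidates):
    frontier = []
    for name, lat, acc in candidates:
        dominated = any(
            l2 <= lat and a2 >= acc and (l2 < lat or a2 > acc)
            for _, l2, a2 in candidates
        )
        if not dominated:
            frontier.append((name, lat, acc))
    return frontier

# Hypothetical measurements for four optimization variants of one model.
candidates = [
    ("fp32-baseline", 120.0, 0.91),
    ("int8-dynamic",   55.0, 0.90),
    ("int8-static",    50.0, 0.88),
    ("fp16-fused",     70.0, 0.91),
]

# The fp32 baseline is dominated by fp16-fused (faster, same accuracy),
# so only the other three variants remain on the frontier.
print(pareto_frontier(candidates))
```

A real tuner would obtain the latency and accuracy numbers by actually running each candidate on the target hardware, then report the frontier so you can pick the trade-off that meets your requirements.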
Learn more about <a href=\"https:\/\/microsoft.github.io\/Olive\/extending_olive\/how_to_add_optimization_pass.html\" target=\"_blank\" rel=\"noreferrer noopener\">contributing your optimization techniques<\/a>.\u00a0<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Intel Neural Compressor (INC)<\/strong>. The Intel Neural Compressor framework provides several model compression techniques, such as quantization, pruning, and knowledge distillation, to better leverage Intel hardware. INC quantization, including both static and dynamic quantization, is now available in Olive. Learn more by <a href=\"https:\/\/github.com\/microsoft\/Olive\/tree\/main\/examples\/bert#bert-optimization-with-intel-neural-compressor-ptq-on-cpu\" target=\"_blank\" rel=\"noreferrer noopener\">reading this example<\/a> and our <a href=\"https:\/\/cloudblogs.microsoft.com\/opensource\/2023\/06\/26\/automate-optimization-techniques-for-transformer-models\/\" target=\"_blank\" rel=\"noreferrer noopener\">blog<\/a>. More compression techniques will be added to Olive in the future. &nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>AMD Vitis-AI quantizer<\/strong>. The Vitis-AI quantizer is a tool provided by AMD as part of the Vitis-AI development platform. It is designed to facilitate model quantization for efficient deployment of deep learning models on AMD hardware platforms. You can easily enable the Vitis-AI quantizer in Olive to quantize your model and accelerate performance on AMD hardware. 
<a href=\"https:\/\/github.com\/microsoft\/Olive\/tree\/main\/examples\/resnet#resnet-optimization-with-vitis-ai-ptq-on-amd-dpu\" target=\"_blank\" rel=\"noreferrer noopener\">Here is an example<\/a>.&nbsp;<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Streamlining user experience through Olive &nbsp;<\/h2>\n\n\n\n<p>Performance gains should not come at the expense of ease of use, so Olive is dedicated to enhancing the user experience across various scenarios. This commitment is reflected in a range of highlighted features that improve usability:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/microsoft.github.io\/Olive\/features\/packaging_output_models.html\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Model packaging<\/strong><\/a>: Olive can produce a comprehensive package that includes optimized models, the appropriate runtime, and sample code for executing the model. This empowers you to effortlessly deploy the optimized models within your application.\u00a0<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Easy access to <\/strong><a href=\"https:\/\/microsoft.github.io\/Olive\/overview\/options.html#azure-ml-client\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Microsoft Azure Machine Learning resources and assets<\/strong><\/a>: By utilizing your Azure Machine Learning authentication credentials, Olive can establish a connection with Azure Machine Learning. 
This connection enables Olive to access your registered model and to optimize it for cloud computing within the Azure environment.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Built-in support for <a href=\"https:\/\/microsoft.github.io\/Olive\/features\/huggingface_model_optimization.html\" target=\"_blank\" rel=\"noreferrer noopener\">HuggingFace model optimization<\/a><\/strong>: HuggingFace models have gained widespread popularity. Olive enhances this experience by seamlessly enabling the direct use of HuggingFace models, datasets, and metrics for optimizing and evaluating those models.\u00a0<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Looking ahead&nbsp;<\/h2>\n\n\n\n<p>Performance and ease of use are key priorities in<a href=\"https:\/\/github.com\/microsoft\/Olive\" target=\"_blank\" rel=\"noreferrer noopener\"> Olive<\/a>. Our ongoing efforts include collaborating with hardware partners to incorporate their latest technologies into Olive, making it the most comprehensive solution for model optimization. Simultaneously, we are committed to enhancing usability\u2014ensuring a smoother and more accessible model optimization experience for all users.&nbsp;<\/p>\n\n\n\n<p>If you have any feedback or questions regarding Olive, please don&#8217;t hesitate to <a href=\"https:\/\/github.com\/microsoft\/Olive\" target=\"_blank\" rel=\"noreferrer noopener\">file an issue on GitHub<\/a>. We highly encourage you to do so, and our team will promptly follow up to address your concerns and provide assistance.&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introducing Olive, an easy-to-use toolchain for optimizing models with hardware awareness. 
With Olive, you don&#8217;t need to be an expert to explore diverse hardware optimization toolchains.<\/p>\n","protected":false},"author":6191,"featured_media":95470,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msxcm_post_with_no_image":false,"ep_exclude_from_search":false,"_classifai_error":"","_classifai_text_to_speech_error":"","footnotes":""},"post_tag":[663,1824],"content-type":[346],"topic":[2252],"programming-languages":[2265],"coauthors":[699,2034,2035],"class_list":["post-94513","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","tag-onnx","tag-onnx-runtime","content-type-news","topic-tools","programming-languages-pytorch","review-flag-1-1593580432-963","review-flag-2-1593580437-411","review-flag-lever-1593580265-989","review-flag-machi-1680214156-53"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Olive: A user-friendly toolchain for hardware-aware model optimization | Microsoft Open Source Blog<\/title>\n<meta name=\"description\" content=\"We introduce Olive, an easy-to-use toolchain for optimizing models with hardware awareness.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Olive: A user-friendly toolchain for hardware-aware model optimization | Microsoft Open Source Blog\" \/>\n<meta property=\"og:description\" content=\"We introduce Olive, an easy-to-use toolchain for optimizing models with hardware awareness.\" \/>\n<meta property=\"og:url\" 
content=\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/\" \/>\n<meta property=\"og:site_name\" content=\"Microsoft Open Source Blog\" \/>\n<meta property=\"article:published_time\" content=\"2023-06-26T16:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-09-01T18:37:51+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1170\" \/>\n\t<meta property=\"og:image:height\" content=\"640\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Emma Ning, Devang Patel, Guoliang Hua\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@OpenAtMicrosoft\" \/>\n<meta name=\"twitter:site\" content=\"@OpenAtMicrosoft\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Emma Ning, Devang Patel, Guoliang Hua\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 min read\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/\"},\"author\":[{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/author\/emma-ning\/\",\"@type\":\"Person\",\"@name\":\"Emma Ning\"},{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/author\/devang-patel\/\",\"@type\":\"Person\",\"@name\":\"Devang Patel\"},{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/author\/guoliang-hua\/\",\"@type\":\"Person\",\"@name\":\"Guoliang Hua\"}],\"headline\":\"Olive: A user-friendly toolchain for hardware-aware model optimization\",\"datePublished\":\"2023-06-26T16:00:00+00:00\",\"dateModified\":\"2023-09-01T18:37:51+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/\"},\"wordCount\":984,\"publisher\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp\",\"keywords\":[\"ONNX\",\"ONNX 
Runtime\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/\",\"name\":\"Olive: A user-friendly toolchain for hardware-aware model optimization | Microsoft Open Source Blog\",\"isPartOf\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp\",\"datePublished\":\"2023-06-26T16:00:00+00:00\",\"dateModified\":\"2023-09-01T18:37:51+00:00\",\"description\":\"We introduce Olive, an easy-to-use toolchain for optimizing models with hardware 
awareness.\",\"breadcrumb\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#primaryimage\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp\",\"contentUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp\",\"width\":1170,\"height\":640},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/opensource.microsoft.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Olive: A user-friendly toolchain for hardware-aware model optimization\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#website\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/\",\"name\":\"Microsoft Open Source Blog\",\"description\":\"Open dialogue about openness at Microsoft \u2013 open source, standards, 
interoperability\",\"publisher\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/opensource.microsoft.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#organization\",\"name\":\"Microsoft Open Source Blog\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png\",\"contentUrl\":\"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png\",\"width\":259,\"height\":194,\"caption\":\"Microsoft Open Source Blog\"},\"image\":{\"@id\":\"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/OpenAtMicrosoft\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Olive: A user-friendly toolchain for hardware-aware model optimization | Microsoft Open Source Blog","description":"We introduce Olive, an easy-to-use toolchain for optimizing models with hardware awareness.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/","og_locale":"en_US","og_type":"article","og_title":"Olive: A user-friendly toolchain for hardware-aware model optimization | Microsoft Open Source Blog","og_description":"We introduce Olive, an easy-to-use toolchain for optimizing models with hardware awareness.","og_url":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/","og_site_name":"Microsoft Open Source Blog","article_published_time":"2023-06-26T16:00:00+00:00","article_modified_time":"2023-09-01T18:37:51+00:00","og_image":[{"width":1170,"height":640,"url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.png","type":"image\/png"}],"author":"Emma Ning, Devang Patel, Guoliang Hua","twitter_card":"summary_large_image","twitter_creator":"@OpenAtMicrosoft","twitter_site":"@OpenAtMicrosoft","twitter_misc":{"Written by":"Emma Ning, Devang Patel, Guoliang Hua","Est. 
reading time":"4 min read"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#article","isPartOf":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/"},"author":[{"@id":"https:\/\/opensource.microsoft.com\/blog\/author\/emma-ning\/","@type":"Person","@name":"Emma Ning"},{"@id":"https:\/\/opensource.microsoft.com\/blog\/author\/devang-patel\/","@type":"Person","@name":"Devang Patel"},{"@id":"https:\/\/opensource.microsoft.com\/blog\/author\/guoliang-hua\/","@type":"Person","@name":"Guoliang Hua"}],"headline":"Olive: A user-friendly toolchain for hardware-aware model optimization","datePublished":"2023-06-26T16:00:00+00:00","dateModified":"2023-09-01T18:37:51+00:00","mainEntityOfPage":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/"},"wordCount":984,"publisher":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#organization"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#primaryimage"},"thumbnailUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp","keywords":["ONNX","ONNX Runtime"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/","url":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/","name":"Olive: A user-friendly toolchain for hardware-aware model optimization | Microsoft Open Source 
Blog","isPartOf":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#primaryimage"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#primaryimage"},"thumbnailUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp","datePublished":"2023-06-26T16:00:00+00:00","dateModified":"2023-09-01T18:37:51+00:00","description":"We introduce Olive, an easy-to-use toolchain for optimizing models with hardware awareness.","breadcrumb":{"@id":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#primaryimage","url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp","contentUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2024\/06\/CLO22_Coworking_015.webp","width":1170,"height":640},{"@type":"BreadcrumbList","@id":"https:\/\/opensource.microsoft.com\/blog\/2023\/06\/26\/olive-a-user-friendly-toolchain-for-hardware-aware-model-optimization\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/opensource.microsoft.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Olive: A user-friendly toolchain for hardware-aware model 
optimization"}]},{"@type":"WebSite","@id":"https:\/\/opensource.microsoft.com\/blog\/#website","url":"https:\/\/opensource.microsoft.com\/blog\/","name":"Microsoft Open Source Blog","description":"Open dialogue about openness at Microsoft \u2013 open source, standards, interoperability","publisher":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/opensource.microsoft.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/opensource.microsoft.com\/blog\/#organization","name":"Microsoft Open Source Blog","url":"https:\/\/opensource.microsoft.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png","contentUrl":"https:\/\/opensource.microsoft.com\/blog\/wp-content\/uploads\/2019\/08\/Microsoft-Logo.png","width":259,"height":194,"caption":"Microsoft Open Source Blog"},"image":{"@id":"https:\/\/opensource.microsoft.com\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/OpenAtMicrosoft"]}]}},"msxcm_display_generated_audio":false,"msxcm_animated_featured_image":null,"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"Microsoft Open Source 
Blog","distributor_original_site_url":"https:\/\/opensource.microsoft.com\/blog","push-errors":false,"_links":{"self":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/94513","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/users\/6191"}],"replies":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/comments?post=94513"}],"version-history":[{"count":2,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/94513\/revisions"}],"predecessor-version":[{"id":94807,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/posts\/94513\/revisions\/94807"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/media\/95470"}],"wp:attachment":[{"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/media?parent=94513"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/post_tag?post=94513"},{"taxonomy":"content-type","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/content-type?post=94513"},{"taxonomy":"topic","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/topic?post=94513"},{"taxonomy":"programming-languages","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/programming-languages?post=94513"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/opensource.microsoft.com\/blog\/wp-json\/wp\/v2\/coauthors?post=94513"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}