Tutorials and demos
AI + Machine Learning • January 25, 2023 • 5 min read

Improve BERT inference speed by combining the power of Optimum, OpenVINO™, ONNX Runtime, and Azure

By Cassie Breviu, Senior Technical Program Manager, ONNX Runtime, AI Frameworks—Microsoft
Akhila Vidiyala, Cloud Software Development Engineer, OpenVINO™ AI Frameworks Architectures—Intel
Devang Aggarwal, Product Manager, OpenVINO™ AI Framework Integrations—Intel
Sachin Rastogi, Product Manager, OpenVINO™ AI Workflows—Intel

Make large models smaller and faster with the OpenVINO™ Execution Provider, NNCF, and ONNX Runtime, leveraging Azure Machine Learning.