Tutorials and demos • AI + Machine Learning • January 25, 2023 • 5 min read

Improve BERT inference speed by combining the power of Optimum, OpenVINO™, ONNX Runtime, and Azure

By Cassie Breviu, Senior Technical Program Manager, ONNX Runtime, AI Frameworks—Microsoft; Akhila Vidiyala, Cloud Software Development Engineer, OpenVINO™ AI Frameworks Architectures—Intel; Devang Aggarwal, Product Manager, OpenVINO™ AI Framework Integrations—Intel; Sachin Rastogi, Product Manager, OpenVINO™ AI Workflows—Intel

Make large models smaller and faster with the OpenVINO™ Execution Provider, NNCF, and ONNX Runtime, leveraging Azure Machine Learning.
Project updates • December 1, 2022 • 4 min read

Faster inference for PyTorch models with OpenVINO™ Integration with Torch-ORT

By Sai Jayanthi, AI Frameworks Engineer, OpenVINO™ AI Framework Integrations—Intel; Vishnudas Thaniel S., AI Frameworks Engineer, OpenVINO™ AI Framework Integrations—Intel; Devang Aggarwal, Product Manager, OpenVINO™ AI Framework Integrations—Intel; Natalie Kershaw, Senior Program Manager, AI Frameworks—Microsoft

Many developers opt to use popular AI frameworks like PyTorch, which simplifies the process of analyzing predictions, training…