Tutorials and demos · AI + Machine Learning · January 25, 2023 · 5 min read
Improve BERT inference speed by combining the power of Optimum, OpenVINO™, ONNX Runtime, and Azure
Make large models smaller and faster with the OpenVINO™ Execution Provider, NNCF, and ONNX Runtime, leveraging Azure Machine Learning.
Project updates · December 1, 2022 · 4 min read
Faster inference for PyTorch models with OpenVINO Integration with Torch-ORT
Many developers opt to use popular AI frameworks like PyTorch, which simplifies the process of analyzing predictions, training…