We’re excited to share the recent integration of ONNX Runtime into Apache OpenNLP! Apache OpenNLP is a Java machine learning library for natural language processing (NLP) tasks. Java is a popular production environment for deploying machine learning workloads, and this integration introduces the ability to run state-of-the-art transformer models directly through OpenNLP. This expands upon the classic machine learning algorithms that have powered Java NLP solutions in the past.
OpenNLP is used by prominent open source projects such as Apache Solr, Apache UIMA, and Apache Lucene, as well as in academia and research. ONNX Runtime is a cross-platform accelerator for machine learning models that provides high performance and compatibility with diverse hardware and development environments. This integration combines the best of both worlds to provide accelerated NLP inferencing to Java-based services and applications.
Hugging Face transformer models are among the most popular advanced solutions for natural language processing in the industry today. To showcase this new capability, Jeff Zemerick shares a thorough tutorial on accelerating these models through OpenNLP with ONNX Runtime. Jeff Zemerick is the chair of the Apache OpenNLP project and a search relevance engineer at OpenSource Connections, LLC.
Check out the tutorial to learn more about this exciting new feature made possible through our open source community engagement!
ONNX Runtime optimizes performance for machine learning models trained in PyTorch and TensorFlow, with a special focus on Hugging Face transformer model architectures. ONNX Runtime offers APIs for most development languages (Java, Python, C#, C++, C, JavaScript), and is compatible with a wide range of deployment targets, including Linux servers, Windows, macOS, ARM-based edge devices, Android and iOS mobile devices, and web browsers. This versatility makes ONNX Runtime a practical option for standardizing machine learning deployment workloads. ONNX Runtime is production-validated and serves over a trillion daily inferences through popular products and services at Microsoft including Office, Azure, Windows, Bing, and Dynamics.
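To give a feel for the Java API mentioned above, here is a minimal sketch of loading an exported transformer model and running a single inference with ONNX Runtime's `ai.onnxruntime` package. The model path (`model.onnx`), the input name (`input_ids`), the token ID values, and the output shape are all placeholder assumptions for illustration — real values depend on the specific model and its tokenizer.

```java
import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtSession;
import java.util.Map;

public class OnnxInferenceSketch {
    public static void main(String[] args) throws Exception {
        // One shared environment per process
        OrtEnvironment env = OrtEnvironment.getEnvironment();

        // "model.onnx" is a placeholder path to an exported transformer model
        try (OrtSession session =
                 env.createSession("model.onnx", new OrtSession.SessionOptions())) {

            // Token IDs for one input sequence; in practice these come from a tokenizer
            long[][] inputIds = {{101, 7592, 2088, 102}};

            try (OnnxTensor tensor = OnnxTensor.createTensor(env, inputIds);
                 // The input name "input_ids" is model-specific;
                 // inspect session.getInputNames() for the actual names
                 OrtSession.Result result = session.run(Map.of("input_ids", tensor))) {

                // The output type/shape is model-specific; a float[][] of
                // per-class scores is assumed here for illustration
                float[][] scores = (float[][]) result.get(0).getValue();
                System.out.println("Scores for first input: " + scores[0].length);
            }
        }
    }
}
```

The same session can be reused across requests; `OrtSession` is designed to be created once and shared, which matters for throughput in long-running Java services.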