
Live demos of machine learning models with ONNX and Hugging Face Spaces

Choosing which machine learning model to use, sharing a model with a colleague, and quickly trying out a model are all reasons why you may find yourself wanting to quickly run inference on a model. You can configure your environment and download Jupyter notebooks, but it would be nicer if there were a way to run a model with even less effort. This is the goal of Hugging Face Spaces, and recently Hugging Face enabled it for models in the Open Neural Network Exchange (ONNX) Model Zoo.

In this article, we’ll walk you through:

  • An introduction to the ONNX Model Zoo, ONNX Runtime, Gradio, and Hugging Face Spaces.
  • How we set up a Gradio demo for ONNX EfficientNet-Lite4 on Hugging Face Spaces.
  • How to contribute your own ONNX models and create live demos for them.

What is the ONNX Model Zoo?

ONNX is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools. For example, if you have trained a model in TensorFlow or PyTorch, you can convert it to ONNX easily, and from there run it on a variety of devices using an engine/compiler like ONNX Runtime.
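
As a sketch of that conversion path, assuming a PyTorch starting point, a model can be exported with torch.onnx.export; the model choice, dummy input shape, and file name below are illustrative, not part of this demo.

import torch
import torchvision

# a minimal sketch: export a pretrained torchvision model to ONNX
# (model choice, tensor names, and input shape are assumptions for illustration)
model = torchvision.models.mobilenet_v2(pretrained=True)
model.eval()

# the exporter traces the model with a dummy input of the expected shape
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",
    input_names=["images"],
    output_names=["logits"],
)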

The ONNX Model Zoo is a collection of pre-trained, state-of-the-art models in the ONNX format contributed by community members. Accompanying each model are Jupyter notebooks for model training and running inference with the trained model. The notebooks are written in Python and include links to the training dataset as well as references to the original paper that describes the model architecture.
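
The zoo can also be browsed and downloaded programmatically. Here is a small sketch, assuming the onnx.hub client that ships with recent versions of the onnx package:

from onnx import hub

# lists the models available in the zoo; each entry carries a model name
for model_info in hub.list_models():
    print(model_info.model)  # e.g. "efficientnet-lite4"

# downloads a model (cached locally) and returns it as an onnx.ModelProto
model = hub.load("efficientnet-lite4")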

What are Hugging Face Spaces and Gradio?

Gradio lets users demo their machine learning models as a web app, all in Python code. Gradio wraps a Python function into a user interface, and the demos can be launched inside Jupyter notebooks and Colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.
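
The pattern is small enough to show in full. A minimal sketch, with a placeholder function standing in for a model:

import gradio as gr

# Gradio wraps an ordinary Python function in a web UI:
# the input textbox feeds `name`, the return value fills the output textbox
def greet(name):
    return "Hello " + name + "!"

gr.Interface(fn=greet, inputs="text", outputs="text").launch()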

Hugging Face Spaces is a free hosting option for Gradio demos. Spaces offers three SDK options: Gradio, Streamlit, and static HTML demos. Spaces can be public or private, and the workflow is similar to the one in the section below, Setting up a Gradio Demo for EfficientNet-Lite4.

How did Hugging Face help the ONNX Model Zoo?

There are a lot of Jupyter notebooks in the ONNX Model Zoo for users to test models. Previously, users needed to download the models themselves and run those notebooks locally for testing. With Hugging Face Spaces, the testing process is much simpler and more user-friendly. Users can easily try an ONNX Model Zoo model on Hugging Face Spaces and run a quick demo powered by Gradio, all in the cloud without downloading anything locally. There are various runtimes for ONNX, but we chose ONNX Runtime since it has broad model coverage and strong performance.

What is the role of ONNX Runtime?

ONNX Runtime is a cross-platform inference and training machine-learning accelerator. It is what makes the live Gradio demos with ONNX Model Zoo models on Hugging Face possible.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.
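
As a sketch of the classical-ML path, assuming the skl2onnx converter (not used in this demo): train a scikit-learn model, convert it to ONNX, and serve it with ONNX Runtime. The input name and shapes below are illustrative.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

# trains a small classifier on the iris dataset (4 input features)
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier().fit(X, y)

# converts the fitted model to an ONNX graph with a declared input signature
onnx_model = convert_sklearn(clf, initial_types=[("input", FloatTensorType([None, 4]))])

# runs the converted model with ONNX Runtime; the first output holds the labels
sess = ort.InferenceSession(onnx_model.SerializeToString())
pred_labels = sess.run(None, {"input": X[:2].astype(np.float32)})[0]
print(pred_labels)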

The Hugging Face Spaces we created use ONNX Runtime with its Python APIs running on the server. ONNX Runtime also has Java, C, C++, C#, and JavaScript APIs and runs on mobile devices and even inside the browser.
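
As a small sketch of that Python API, this is one way to inspect a model's input and output tensors, which is how names like "images:0" and "Softmax:0" used in the demo below can be discovered; the file name here is an assumption.

import onnxruntime as ort

# opens a session on a local ONNX file (file name assumed for illustration)
sess = ort.InferenceSession("efficientnet-lite4.onnx")

# prints each input/output tensor's name, shape, and element type
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)   # e.g. images:0 [1, 224, 224, 3] tensor(float)
for out in sess.get_outputs():
    print(out.name, out.shape, out.type)   # e.g. Softmax:0 [1, 1000] tensor(float)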

Setting up a Gradio Demo for EfficientNet-Lite4

EfficientNet-Lite4 is the largest and most accurate variant of the set of EfficientNet-Lite models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy while still running in real time (e.g., 30 ms/image) on a Pixel 4 CPU. To learn more, read the model card.

Here we walk through setting up an example demo for EfficientNet-Lite4 using Gradio; you can also try out the live demo.

First, we import our dependencies, then download and load the efficientnet-lite4 model from the ONNX Model Zoo. Next we load the labels from the labels_map.txt file, set up our preprocessing functions, load the model for inference, and define the inference function. Finally, the inference function is wrapped into a Gradio Interface for a user to interact with. See the full code below.

import numpy as np
import cv2
import json
import gradio as gr
from onnx import hub
import onnxruntime as ort

# loads ONNX model from ONNX Model Zoo
model = hub.load("efficientnet-lite4")
# loads the labels text file
labels = json.load(open("labels_map.txt", "r"))

# sets image file dimensions to 224x224 by resizing and cropping image from center
def pre_process_edgetpu(img, dims):
    output_height, output_width, _ = dims
    img = resize_with_aspectratio(img, output_height, output_width, inter_pol=cv2.INTER_LINEAR)
    img = center_crop(img, output_height, output_width)
    img = np.asarray(img, dtype='float32')
    # converts jpg pixel values from [0, 255] to a float array in [-1.0, 1.0]
    img -= [127.0, 127.0, 127.0]
    img /= [128.0, 128.0, 128.0]
    return img

# resizes the image with a proportional scale
def resize_with_aspectratio(img, out_height, out_width, scale=87.5, inter_pol=cv2.INTER_LINEAR):
    height, width, _ = img.shape
    new_height = int(100. * out_height / scale)
    new_width = int(100. * out_width / scale)
    if height > width:
        w = new_width
        h = int(new_height * height / width)
    else:
        h = new_height
        w = int(new_width * width / height)
    img = cv2.resize(img, (w, h), interpolation=inter_pol)
    return img

# crops the image around the center based on given height and width
def center_crop(img, out_height, out_width):
    height, width, _ = img.shape
    left = int((width - out_width) / 2)
    right = int((width + out_width) / 2)
    top = int((height - out_height) / 2)
    bottom = int((height + out_height) / 2)
    img = img[top:bottom, left:right]
    return img

# onnx.hub returns an in-memory ModelProto, so serialize it to bytes for ONNX Runtime
sess = ort.InferenceSession(model.SerializeToString())

def inference(img):
    # reads the image from the file path Gradio provides and converts BGR -> RGB
    img = cv2.imread(img)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    # resizes, center-crops, and normalizes to the model's expected input
    img = pre_process_edgetpu(img, (224, 224, 3))

    # adds a batch dimension: (224, 224, 3) -> (1, 224, 224, 3)
    img_batch = np.expand_dims(img, axis=0)

    # runs the model and keeps the top 5 class indices, highest score first
    results = sess.run(["Softmax:0"], {"images:0": img_batch})[0]
    result = reversed(results[0].argsort()[-5:])
    resultdic = {}
    for r in result:
        resultdic[labels[str(r)]] = float(results[0][r])
    return resultdic

title = "EfficientNet-Lite4"
description = "EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU."
examples = [['catonnx.jpg']]
gr.Interface(inference, gr.inputs.Image(type="filepath"), "label", title=title, description=description, examples=examples).launch()
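
For this to run as a Gradio Space, the repository also needs a requirements.txt declaring the Python dependencies. A minimal sketch (the exact package set and any version pins are assumptions, and Gradio itself is provided by the Spaces SDK):

# requirements.txt (a sketch; pin versions as needed)
numpy
onnx                     # provides onnx.hub for downloading the model
onnxruntime              # inference engine
opencv-python-headless   # cv2 image preprocessing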

How to contribute Gradio demos on HF Spaces using ONNX models

  • Add your model to the ONNX Model Zoo.
  • Create an account on Hugging Face.
  • To see which models still need demos in the ONNX organization, refer to the table in the Models list.
  • Add a Gradio demo under your username; see this blog post for setting up a Gradio demo on Hugging Face.
  • Request to join the ONNX organization.
  • Once approved, transfer the model from your username to the ONNX organization.
  • Add a badge for the model in the models table; see examples in the Models list.

Conclusions

This article shows how easily we can create a Gradio demo on Hugging Face Spaces with a model from the ONNX Model Zoo, using ONNX Runtime for inference. The combination of Gradio, Hugging Face Spaces, the ONNX Model Zoo, and ONNX Runtime greatly simplifies a complex development process and presents accurate inference results behind a friendly web UI. We’re looking forward to more community collaborations and diverse scenarios using these tools.

Questions or feedback? Let us know by raising questions in the ONNX Model Zoo’s issues forum.