This blog is co-authored by Carlos Peña-Monferrer, Co-founder, SIMZERO, and Carmen Díaz-Marín, Co-founder, SIMZERO.
In this guest blog post, the SIMZERO team shares how they used ONNX Runtime to unlock the promise of recent developments in science for solving real-world problems like preventing hazards and optimizing energy use.
Machine learning (ML) is transforming every industry, from healthcare to finance, and from transportation to e-commerce. A promising use of machine learning is its combination with computational fluid dynamics (CFD) and other scientific applications. Different techniques, such as data-driven machine learning and physics-informed neural networks (PINNs), are transforming scientific simulations by bringing them into real-world, real-time processes. This development holds great promise for new discoveries, for optimizing the use of energy and materials, and for preventing potential hazards.
A step further is the generation of virtual worlds or digital twins to form the core of the industrial metaverse. The key part is to be able to interact with the virtual world seamlessly as though it were the real world. In this way, we can test many hypotheses at little to no cost and decide on an optimal solution before implementing it in reality. By thinking outside the box, we can envision creating a virtual multiverse. Within this innovative space, one can propose, evaluate, and decide on multiple hypotheses. Real-world examples of this approach include planning new product configurations, operating a plant, designing heating or cooling systems, or responding to catastrophes.
To bring CFD to virtual or augmented reality (VR/AR), two main challenges need to be addressed: the significant computational power required to perform CFD simulations in real time or near real time, and the specialized software needed to integrate CFD and VR systems. Next, we will explain how we solved this at SIMZERO.
One approach to accelerate CFD simulations is to use mathematical methods such as reduced-order models or ML algorithms to learn and approximate the results of CFD simulations. Another approach is to incorporate physical laws into the network architecture. Although there is still a long way to go in developing physical models for complex processes and learning from them, there are already many applications that can benefit from this progress. In this blog entry we will not focus on the methods themselves but on the deployment of pre-trained ML-CFD models. To ensure efficient deployment, we opted to use the Open Neural Network Exchange (ONNX) format, which allows for seamless interoperability between existing ML-CFD frameworks based on PyTorch, TensorFlow, Keras, and others. Another important factor in choosing ONNX was the ability to run inference directly from different programming languages thanks to ONNX Runtime. This allowed us to use ONNX Runtime to deploy the models to both the backend and the browser.
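For example, on the backend an exported model can be loaded and run with the onnxruntime Python package. The snippet below is a minimal sketch of such a check; the model path, input name, and input shape are placeholders for illustration and will differ for a real ML-CFD model.

import numpy as np
import onnxruntime as ort

# Load an exported ONNX model (placeholder path).
session = ort.InferenceSession("model.onnx")

# Build a dummy input matching the model's first input;
# the shape used here is an assumption for illustration only.
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 128, 128).astype(np.float32)

# Run inference; the outputs are returned as NumPy arrays.
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)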
To streamline the process, we have created a Python module, `cfdonnx`, that brings together the specific code needed to convert ML-CFD models to ONNX. The current module supports the U-Net model architecture, but we plan to extend this to other architectures in future releases. We have also created some reproducible examples with workflows that cover the full path from CFD runs to ONNX, using DeepCFD as the ML package. The example below shows how to export a PyTorch model to ONNX. Parameters such as network architecture, input shape, filters, and kernel sizes are read from the PyTorch model in this case, but they can also be passed as arguments for convenience.
python3 -m cfdonnx --input model.pt --output model.onnx
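Under the hood, this kind of conversion relies on PyTorch's built-in ONNX exporter. The sketch below is a hypothetical, minimal equivalent; the checkpoint path, the way the model is stored, and the input shape are assumptions for illustration rather than the actual cfdonnx implementation.

import torch

# Load the trained PyTorch model (assumes the checkpoint stores the full model).
model = torch.load("model.pt", map_location="cpu")
model.eval()

# Dummy input with an assumed (batch, channels, height, width) shape;
# adjust it to the input expected by your own network.
dummy_input = torch.randn(1, 3, 128, 128)

# Export to ONNX so the model can be run with ONNX Runtime.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=17,
)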
A missing piece was the interface between the ML-CFD and VR components. To address this, we developed jsfluids, a high-level JavaScript module for real-time CFD inference. This module handles reduced-order models created by ITHACA-FV as well as generic ML models. When using ML, we relied on ONNX Runtime to handle the ML aspects and provide a seamless interface between the virtual world and on-the-fly post-processing. Specifically, ONNX Runtime inference is performed to calculate the new fields, which are then passed to jsfluids to create visualization components. The visualization components are compatible with Babylon.js, the real-time 3D engine we use. Further analysis can also be performed beyond the visualization components. These tasks were achieved by porting specific parts of VTK/C++ to JavaScript.
In the end, this approach enables the inference and analysis of fluid dynamics at the edge. By executing models directly on devices, we can reduce infrastructure costs, reduce latency, and improve privacy. In addition, we can bring the models to low-consumption devices, either to connect them with sensors and the real world or to reduce power consumption in deployment.
The following pseudocode shows how to use an ONNX model to infer a velocity field, and how to create and visualize glTF components in Babylon.js:
const model = jsfluids.ML;
// Load the ONNX model from a URL or local data
ort.InferenceSession.create(onnxURL).then((onnx) => {
  // Load the CFD mesh (unstructured grid) from a URL or local data
  model.loadMesh(vtuURL).then(() => {
    // Create a visualization component
    // (e.g. outline surface, plane, streamlines, ...)
    var gltf = model.setComponent({ type: "surface" });
    var blob = new Blob([gltf]);
    var url = URL.createObjectURL(blob);
    BABYLON.SceneLoader.ImportMesh("", url, "", scene, (meshes) => {
      // Manipulate meshes (materials, positions, ...)
      // Create the feeds input for your ONNX model
      // var feeds = …
      onnx.run(feeds).then((results) => {
        // Manipulate results to get the fields if needed
        // var U = …
        // Set operations (if any) to be performed on the new field
        // (e.g. vorticity or gradients)
        model.setOperations({ operations: ["vorticity"] });
        // Assign the field to the model and update
        model.update({ field: "U", data: U });
        // Render the component and assign the new field colors
        var render = model.render({
          field: "U",
          // range: [0, 0.1]
        });
        meshes[1].setVerticesData(
          BABYLON.VertexBuffer.ColorKind,
          render.colors,
          true
        );
        // Other jsfluids features, such as probing or integrating a field
        var point = [0, 0, 0];
        var probe = model.probe({ field: "U", point: point });
        var integrate = model.integrate({
          field: "vorticity",
          target: "grid"
        });
      });
    }, null, null, ".gltf");
  });
});
Getting started with these packages can be complicated, especially as the scope extends to several different domains. For this reason, we have created a live editor playground powered by ChatGPT at SIMZERO Play for experimenting, building, and testing. The playground includes other libraries such as babyplots and JSCAD for data visualization and CAD modeling, respectively. JSCAD provides a programmer's approach to designing 3D models. Although Babylon.js also allows for different geometry operations such as Constructive Solid Geometry (CSG), a user can choose JSCAD for convenience or to reuse models already developed by the community.
At SIMZERO Play you can find different examples and templates to start building models and worlds, choosing the combination of tools that best suits your needs. For instance, one can use just Babylon.js together with ONNX Runtime to build virtual worlds with artificial intelligence, and add the CFD and CAD features if required. Additional features will be added so users can easily build and use their models and applications outside the playground, in both development and production.
With pre-trained ML-CFD models deployed using ONNX Runtime, we can infer fluid dynamics and seamlessly integrate the results with virtual and augmented reality systems using open-source tools. Together with our newly developed JavaScript library jsfluids and the playground, you can easily access this technology to create better worlds.
Get started today: