Model Explorer: Simplifying ML models for edge devices

We’re excited to share Model Explorer – a powerful graph visualization tool designed to help you understand and debug your ML models. With an intuitive, hierarchical visualization of even the largest graphs, Model Explorer enables developers to overcome the complexities of optimizing models for edge devices. This is the third blog post in our series covering Google AI Edge developer releases. If you missed the first two, be sure to check out the AI Edge Torch and Generative API blogs.

Developed initially as a utility for Google researchers and engineers, Model Explorer is now publicly available as part of our Google AI Edge family of products. The initial version of Model Explorer offers the following:

  • GPU-based rendering engine to visualize large model graphs
  • Popular ML framework support
  • Runs directly in Colab notebooks
  • Adapter extension system to visualize additional model formats
  • Overlay metadata (e.g., attributes, inputs/outputs, etc.) and custom data (e.g., performance) directly on nodes
  • Powerful UI feature suite designed to help you work faster

In this blog post we’ll walk through how to get started with Model Explorer and how to use its custom data overlay API to debug and optimize your models. Further documentation and examples are available here.


Getting started

Model Explorer prioritizes a seamless user experience. Its easy-to-install PyPI package runs locally on your machine, in Colab, and in a Python file, boosting the privacy and security of your model graphs.


Run locally on your machine

$ pip install ai-edge-model-explorer
$ model-explorer

Starting Model Explorer server at http://localhost:8080

These commands will start a server at localhost:8080 and open the Model Explorer web app in a browser tab. For more about Model Explorer command-line options, see the command line guide.

Once you have a localhost server running, add your model file from your computer (supported formats include those used by JAX, PyTorch, TensorFlow, and TensorFlow Lite) and select the best adapter for your model via the ‘Adapter’ dropdown menu on the home page. Visit here to learn how to use the Model Explorer adapter extension system to visualize unsupported model formats.
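You can also point the command line at a model file directly instead of uploading it through the UI. As a minimal sketch (the path is a placeholder; the exact flags are covered in the command line guide):

$ model-explorer /path/to/your/model.tflite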


Run in Colab notebooks

# Download your model (this example uses an EfficientDet TFLite model)
import os
import tempfile
import urllib.request

tmp_path = tempfile.mkdtemp()
model_path = os.path.join(tmp_path, 'model.tflite')
urllib.request.urlretrieve("https://storage.googleapis.com/tfweb/model-graph-vis-v2-test-models/efficientdet.tflite", model_path)

# Install Model Explorer
!pip install ai-edge-model-explorer

# Visualize the downloaded EfficientDet model
import model_explorer
model_explorer.visualize(model_path)

After running the cell, Model Explorer will be displayed in an iFrame embedded in a new cell. In Chrome, the UI will also show an “Open in new tab” button that you can click to show the UI in a separate tab. Visit here to learn more about running Model Explorer in Colab.

Visualize models via the Model Explorer API

The model_explorer package provides convenient APIs to let you visualize models from files or from a PyTorch module, and a lower-level API to visualize models from multiple sources. Make sure to install it first by following the installation guide. To learn more, check out the Model Explorer API guide.
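As a quick illustration of the lower-level config API (the file paths below are placeholders for your own models), you can add several model files to a single config object and visualize them together:

import model_explorer

# Build a config and add multiple model files to it (paths are placeholders).
config = model_explorer.config()
config.add_model_from_path('/path/to/model_a.tflite')
config.add_model_from_path('/path/to/model_b.tflite')

# Open all added models in one Model Explorer session.
model_explorer.visualize_from_config(config)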

Below is an example of visualizing a PyTorch model. Visualizing PyTorch models requires a slightly different approach due to their lack of a standard serialization format. Model Explorer offers a specialized API to visualize PyTorch models directly, using the ExportedProgram from torch.export.export.

import model_explorer
import torch
import torchvision

# Prepare a PyTorch model and its inputs
model = torchvision.models.mobilenet_v2().eval()
inputs = (torch.rand([1, 3, 224, 224]),)
ep = torch.export.export(model, inputs)

# Visualize
model_explorer.visualize_pytorch('mobilenet', exported_program=ep)

No matter which way you visualize your models, under the hood Model Explorer implements GPU-accelerated graph rendering with WebGL and three.js that achieves a smooth, 60 FPS visualization experience even with graphs containing tens of thousands of nodes.


Debug performance and numeric accuracy with node data overlay

A key Model Explorer feature is its ability to overlay per-node data on a graph, allowing you to sort, search, and stylize nodes using the values in that data. Combined with the hierarchical view, per-node data overlay lets you quickly narrow down performance or numeric bottlenecks. The example below shows the mean squared error of a quantized TFLite model versus its floating point counterpart. Using Model Explorer, you can quickly identify that the quality drop is near the bottom of the graph and adjust your quantization strategy as needed. Let’s walk through how to prepare and visualize custom node data.

This per-node data overlay allows users to quickly identify performance or numeric issues within a model.
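The post doesn’t spell out how such per-node MSE values are computed, but here is a minimal sketch of one possible approach using the TFLite interpreter. It assumes the quantized model still accepts float inputs, that intermediate tensor names line up between the float and quantized models, and that only per-tensor quantization is involved; the file paths and input shape are placeholders. The resulting values can be fed straight into the node_data_builder API described below.

import numpy as np
import tensorflow as tf
from model_explorer import node_data_builder as ndb

# Placeholder paths: a float TFLite model and its quantized counterpart.
FLOAT_MODEL = '/path/to/model_float.tflite'
QUANT_MODEL = '/path/to/model_quant.tflite'

def collect_tensors(model_path, input_data):
  """Runs a TFLite model and returns {tensor_name: dequantized value}."""
  interpreter = tf.lite.Interpreter(
      model_path=model_path, experimental_preserve_all_tensors=True)
  interpreter.allocate_tensors()
  interpreter.set_tensor(interpreter.get_input_details()[0]['index'], input_data)
  interpreter.invoke()
  tensors = {}
  for detail in interpreter.get_tensor_details():
    try:
      value = interpreter.get_tensor(detail['index']).astype(np.float32)
    except ValueError:
      continue  # Some tensors cannot be read back.
    quant = detail['quantization_parameters']
    if quant['scales'].size == 1:
      # Per-tensor quantization only; per-channel tensors are left as raw
      # integers in this sketch and would need extra handling.
      value = (value - quant['zero_points'][0]) * quant['scales'][0]
    tensors[detail['name']] = value
  return tensors

# Same input for both models (the shape is an assumption for this sketch).
input_data = np.random.rand(1, 224, 224, 3).astype(np.float32)
float_tensors = collect_tensors(FLOAT_MODEL, input_data)
quant_tensors = collect_tensors(QUANT_MODEL, input_data)

# Per-tensor MSE, keyed by output tensor name where the two models line up.
results = {
    name: ndb.NodeDataResult(value=float(np.mean((f - quant_tensors[name]) ** 2)))
    for name, f in float_tensors.items()
    if name in quant_tensors and f.shape == quant_tensors[name].shape
}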

Prepare custom node data

We provide a set of Python APIs to help you create custom node data and serialize it into a JSON file. At a high level, the custom node data has the following structure:

ModelNodeData: The top-level container storing all the data for a model. It consists of one or more GraphNodeData objects indexed by graph ids.

GraphNodeData: Holds the data for a specific graph within the model. It consists of:

  • results: Stores the custom node values, indexed by either node ids or output tensor names.
  • thresholds or gradient: color configurations that associate each node value with a corresponding node background color or label color, enabling visual representation of the data.

Below is a minimal example of preparing custom node data using the node_data_builder API. For in-depth documentation on preparing custom node data, visit node_data_builder.py in our GitHub repo.

from model_explorer import node_data_builder as ndb

# Populate values for the main graph in a model.
main_graph_results: dict[str, ndb.NodeDataResult] = {}
main_graph_results['node_id1'] = ndb.NodeDataResult(value=100)
main_graph_results['node_id2'] = ndb.NodeDataResult(value=200)
main_graph_results['any/output/tensor/name/'] = ndb.NodeDataResult(value=300)

# Create a gradient color mapping.
#
# The minimum value in `main_graph_results` maps to the color with stop=0.
# The maximum value in `main_graph_results` maps to the color with stop=1.
# Other values map to an interpolated color in between.
gradient: list[ndb.GradientItem] = [
    ndb.GradientItem(stop=0, bgColor='yellow'),
    ndb.GradientItem(stop=1, bgColor='red'),
]

# Construct the data for the main graph.
main_graph_data = ndb.GraphNodeData(
    results=main_graph_results, gradient=gradient)

# Construct the data for the model.
model_data = ndb.ModelNodeData(graphsData={'main': main_graph_data})

# You can save the data to a json file.
model_data.save_to_file('path/to/file.json')
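If you prefer discrete color buckets instead of a gradient, GraphNodeData also accepts thresholds, as mentioned above. The sketch below assumes a ThresholdItem class whose field names mirror GradientItem; confirm the exact API in node_data_builder.py before relying on it.

# Alternative to the gradient above: discrete color buckets via thresholds.
# ThresholdItem and its field names are assumptions in this sketch; check
# node_data_builder.py for the exact names.
thresholds: list[ndb.ThresholdItem] = [
    ndb.ThresholdItem(value=150, bgColor='green'),   # values <= 150
    ndb.ThresholdItem(value=250, bgColor='yellow'),  # values <= 250
    ndb.ThresholdItem(value=400, bgColor='red'),     # values <= 400
]
main_graph_data = ndb.GraphNodeData(
    results=main_graph_results, thresholds=thresholds)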

You can also visualize custom node data by creating a config object and passing it to the visualize_from_config API.

import model_explorer
from model_explorer import node_data_builder as ndb

# Create a `ModelNodeData` as shown in the previous section.
model_node_data = ...

# Create a config.
config = model_explorer.config()

# Add a model and custom node data to it.
(config
 .add_model_from_path('/path/to/a/model')
 # Add node data from a json file.
 # A node data json file can be generated by calling `ModelNodeData.save_to_file`
 .add_node_data_from_path('/path/to/node_data.json')
 # Add node data from a data class object
 .add_node_data('my data', model_node_data))

# Visualize
model_explorer.visualize_from_config(config)

Early Adoption

Over the past few months, we have worked closely with early adoption partners including Waymo and Google Silicon to improve our visualization tool. Notably, Model Explorer has played a crucial role in helping these teams debug and optimize on-device models like Gemini Nano currently deployed in production.

What’s next?

In the coming months we’ll focus on enhancing the core by refining key UI features like graph diffing and editing, empowering extensibility by allowing you to seamlessly integrate your own tools into Model Explorer, and open-sourcing the Model Explorer frontend. This is the third and final post of the AI Edge 3-part blog series. To stay up to date on the latest AI Edge updates, visit the AI Edge website.



Acknowledgements

This work is a collaboration across multiple functional teams at Google. We would like to extend our thanks to engineers Na Li, Jing Jin, Eric (Yijie) Yang, Akshat Sharma, Chi Zeng, Jacques Pienaar, Chun-nien Chan, Jun Jiang, Matthew Soulanille, Arian Arfaian, Majid Dadashi, Renjie Wu, Zichuan Wei, Advait Jain, Ram Iyengar, Matthias Grundmann, Cormac Brick, Ruofei Du, our Technical Program Manager, Kristen Wright, and our Product Manager, Aaron Karp. We’d also like to thank the UX team, including Zi Yuan, Anila Alexander, Elaine Thai, Joe Moran, and Amber Heinbockel.
