Qeexo Model Converter User Guide

Qeexo Model Converter converts a tree-ensemble classifier in the ONNX format into an object file suitable for use on Arm Cortex platforms.

This document describes the features of version 1.0 of the Qeexo Model Converter. We would love to hear your thoughts, feedback, and feature requests for future versions. Contact us at [email protected].

Quick Start Guide

This quick start guide provides a complete working example of how to use Qeexo Model Converter. For more details on each step, see the relevant sections further down in the document.

This example requires the following packages: requests, scikit-learn, numpy, onnx, skl2onnx, and (optionally) onnxruntime. All are installable via pip.

First, train a tree-ensemble classifier (this example uses a sklearn RandomForestClassifier) and then convert to ONNX format (full details on conversion are below):

# Step 1: Load some data

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

num_features = X.shape[1]

# Step 2: Train the model

from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier().fit(X_train, y_train)

# Step 3: Write the model to an ONNX file

import onnx
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

initial_type = [('float_input', FloatTensorType([None, num_features]))]

# Note that Qeexo Model Converter will work with or without the `options` argument to convert_sklearn().
# zipmap=False is used in this case to facilitate the optional use of onnxruntime in step 4 below.
onnx_model = convert_sklearn(clf, initial_types=initial_type, options={type(clf): {'zipmap': False}}, target_opset=10)

onnx_model.ir_version = 6
onnx_model_path = "random_forest.onnx"
onnx.save(onnx_model, onnx_model_path)

# Step 4 (optional): Run ONNX model and verify predictions (requires onnxruntime package)

# Predict using sklearn
proba_sklearn = clf.predict_proba(X_test)

# Predict using onnxruntime
import numpy as np
import onnxruntime

sess = onnxruntime.InferenceSession(onnx_model_path)
input_name = sess.get_inputs()[0].name
output_name = sess.get_outputs()[1].name  # output 0 is the predicted label; output 1 is the probabilities
proba_onnxrt = sess.run([output_name], {input_name: X_test.astype(np.float32)})[0]

# Check that the predictions match (allow for float32 rounding in the ONNX model)
assert np.allclose(proba_onnxrt, proba_sklearn, atol=1e-5)

This should produce a file random_forest.onnx.

If you have an account, you can now run Qeexo Model Converter on that file:

import requests
from requests.auth import HTTPBasicAuth

email = '[email protected]'
password = 'password'
url = 'https://api.qeexo.com/modelconverter/v1/convert'
onnx_model_path = 'random_forest.onnx'
download_path = 'random_forest.zip'

options = {'target_arch': 'm4f_hard',
           'function_name': 'random_forest_classify'}
with open(onnx_model_path, 'rb') as f:
    files = {'file': ('upload.onnx', f)}
    post_r = requests.post(url, auth=HTTPBasicAuth(email, password), data=options, files=files)

if post_r.status_code == 200 and post_r.json()['statusCode'] == 200:
    download_url = post_r.json()['url']

    get_r = requests.get(download_url)
    with open(download_path, 'wb') as f:
        f.write(get_r.content)
else:
    print("Conversion failed: request returned: {}".format(post_r.text))

This should download a file random_forest.zip. Inside the zip file are an object file and an include file, so you can call the classifier from your existing embedded C/C++ code:

#include "qx_predict_random_forest_classify.h"

// NUM_FEATURES and NUM_CLASSES must match the model (4 features and 3 classes
// for the iris example above)
static float features_array[NUM_FEATURES];
// compute your features...
static float output_probabilities[NUM_CLASSES];
random_forest_classify(features_array, output_probabilities);

Full details on these output files are provided below.

Converting a tree-ensemble model to ONNX format

The ONNX format provides the TreeEnsembleClassifier operator as a way of representing tree-based classifiers on disk. (Pickle files are not suitable because of security concerns and potential version incompatibilities.) Models can be exported to ONNX from popular frameworks like sklearn and xgboost.

Qeexo Model Converter currently supports only the TreeEnsembleClassifier operator.

You may find the Netron tool useful for visualizing and inspecting ONNX files.
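
If you prefer to inspect a model programmatically, the onnx package can list the operators in the graph. A minimal sketch, assuming the random_forest.onnx file from the Quick Start:

import onnx

model = onnx.load("random_forest.onnx")

# A supported model contains a TreeEnsembleClassifier operator
print([node.op_type for node in model.graph.node])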

scikit-learn models

Since version 0.24, scikit-learn has recommended the use of the ONNX format for model persistence. The sklearn-onnx package (installable via pip as skl2onnx) is used to convert sklearn models to ONNX.

Basic usage of sklearn-onnx is demonstrated in the Quick Start Guide above, and also in the sklearn-onnx docs.

Qeexo Model Converter has been tested with ONNX files converted from RandomForestClassifier and GradientBoostingClassifier classifiers. Any other sklearn object that converts to a single ONNX TreeEnsembleClassifier should also be supported.
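
For example, a GradientBoostingClassifier converts with the same convert_sklearn call used in the Quick Start. This sketch reuses X_train, y_train, and num_features from that example:

from sklearn.ensemble import GradientBoostingClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

clf = GradientBoostingClassifier().fit(X_train, y_train)

initial_type = [('float_input', FloatTensorType([None, num_features]))]
onnx_model = convert_sklearn(clf, initial_types=initial_type, target_opset=10)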

XGBoost models

XGBoost models can be converted to ONNX using the onnxmltools package.

The basic usage of onnxmltools is similar to sklearn-onnx:

import onnx
import onnxmltools
from onnxmltools.convert.common.data_types import FloatTensorType
from xgboost import XGBClassifier

clf = XGBClassifier()

# fit the classifier on your training data, e.g. clf.fit(X_train, y_train);
# num_features below is the number of input features, as in the Quick Start

onnx_model_path = "xgb_classifier.onnx"
initial_type = [('float_input', FloatTensorType([None, num_features]))]
onnx_model = onnxmltools.convert.convert_xgboost(clf, initial_types=initial_type, target_opset=10)
onnx.save(onnx_model, onnx_model_path)

Important notes about XGBoost to ONNX conversion:

  • The model must be trained using the scikit-learn API of xgboost
  • The training data passed to XGBClassifier().fit() must not have feature names associated with it. For example, if your training data is a DataFrame called df, which has column names, you will need to use a representation without column names (i.e. df.values) when training, as shown in the sketch after this list.
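
The following sketch illustrates the second point; the DataFrame df and labels y are hypothetical stand-ins for your own training data:

import numpy as np
import pandas as pd
from xgboost import XGBClassifier

# Hypothetical training data: a DataFrame with named feature columns
df = pd.DataFrame(np.random.rand(100, 4), columns=['f0', 'f1', 'f2', 'f3'])
y = np.random.randint(0, 2, size=100)

# df.values drops the column names, so none are attached to the trained model
clf = XGBClassifier().fit(df.values, y)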

We are working to add more robust support for XGBoost in future versions.

Other frameworks

onnxmltools also supports conversion from other frameworks such as lightgbm to ONNX.
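
A lightgbm sketch, assuming onnxmltools provides a convert_lightgbm entry point analogous to convert_xgboost (synthetic data is used here to keep the example self-contained):

import numpy as np
import onnx
import onnxmltools
from onnxmltools.convert.common.data_types import FloatTensorType
from lightgbm import LGBMClassifier

num_features = 4
X = np.random.rand(200, num_features)
y = np.random.randint(0, 2, size=200)
clf = LGBMClassifier().fit(X, y)

initial_type = [('float_input', FloatTensorType([None, num_features]))]
onnx_model = onnxmltools.convert.convert_lightgbm(clf, initial_types=initial_type, target_opset=10)
onnx.save(onnx_model, "lgbm_classifier.onnx")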

Limitations on supported ONNX files

Certain features of the ONNX TreeEnsembleClassifier operator are not yet supported by Qeexo Model Converter:

  • The input tensor to TreeEnsembleClassifier must be of type float. Integer support may be added in future versions.
  • For any given ensemble, we require that all non-'LEAF' values of nodes_modes be the same. For example, we would support a classifier that uses only 'BRANCH_LEQ' or only 'BRANCH_LT', but not a classifier that uses both in the same ensemble. All ONNX files converted from sklearn and XGBoost should meet this requirement.
  • Only 'NONE', 'SOFTMAX', and 'LOGISTIC' are supported as values of post_transform. 'PROBIT' and 'SOFTMAX_ZERO' are not supported; however, these modes are never used in any ONNX model output by onnxmltools or sklearn-onnx. You can verify both attributes with the sketch after this list.
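
A minimal sketch for checking the last two requirements on an existing model, again assuming the random_forest.onnx file from the Quick Start:

import onnx

model = onnx.load("random_forest.onnx")
node = next(n for n in model.graph.node if n.op_type == 'TreeEnsembleClassifier')
attrs = {a.name: a for a in node.attribute}

branch_modes = set(attrs['nodes_modes'].strings) - {b'LEAF'}
print("branch modes:", branch_modes)                  # should contain exactly one mode
print("post_transform:", attrs['post_transform'].s)   # b'NONE', b'SOFTMAX', or b'LOGISTIC'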

The output zip file

The zip file will have the following structure:

converter_output/
├── include
│   └── qx_predict_random_forest_classify.h
├── obj
│   └── qx_predict_random_forest_classify.o
├── qeexo_automl_model_conversion.txt
└── report.json

The files qeexo_automl_model_conversion.txt and report.json provide some useful metadata about the conversion process.

The include file gives the function signature and required shape of the inputs and outputs:

void random_forest_classify(const float *__restrict__ const val_float__input /* 1x4 */, float *__restrict__ const val_probabilities /* 1x3 */);

The object file is built with the GNU Arm Embedded Toolchain, and can be linked in with any existing project that uses the same toolchain.

The object file should not require linking with any external functions, except in the following cases:

  • If the model uses the 'LOGISTIC' or 'SOFTMAX' post_transform, it must be linked with an implementation of expf (usually provided by the math library, libm)
  • If the target architecture is M0+, or another architecture without a floating-point unit, implementations of standard floating-point operations such as __aeabi_fadd are required (usually supplied by libgcc)

Qeexo Model Converter options

The full API documentation is provided here. This section provides more context for a few of the options.

Quantization

If quantization is enabled, the leaves array is stored as a smaller integer type rather than as a full 32-bit floating-point array.

We recommend enabling quantization. It allows a significant decrease in model size, and typically has a minimal effect on precision. We have found that the final output of the classifier (i.e. the probabilities) typically differs by no more than 1e-3 when comparing the quantized and unquantized model, although worse results are possible in pathological cases.
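
To illustrate why the precision loss is small, here is a generic linear-quantization sketch in numpy. It is illustrative only and is not necessarily the exact scheme used by Qeexo Model Converter:

import numpy as np

leaves = np.random.randn(1000).astype(np.float32)     # stand-in for the leaf values

scale = np.abs(leaves).max() / 127.0                  # map the value range onto int8
quantized = np.round(leaves / scale).astype(np.int8)  # what would be stored on device
restored = quantized.astype(np.float32) * scale       # what inference would compute with

print(np.abs(restored - leaves).max())                # error is bounded by scale / 2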

If you wish to disable quantization, the no_quantize option is provided.

Model-size reduction

The max_flash_size option allows the user to request a limit on ROM size. ROM size is measured as the sum of the text and rodata sections of the object file.

If conversion of the original classifier results in a ROM size greater than max_flash_size, then model-size reduction is performed by removing trees from the end of the ensemble until the ROM size is below max_flash_size. The report.json file includes details of whether model-size reduction was activated (reduce_model), and if so, how many trees were removed (n_estimators and n_estimators_before_reduced). Of course, removing trees will change the output of the model, and could result in decreased performance. However, for random forest and gradient boosting classifiers, removing trees from the end will preserve desired properties of the ensemble and is one valid strategy for model-size reduction.

The max_flash_size argument is optional. If not provided, Qeexo Model Converter will not apply any model-size reduction techniques.

The flash size of the resulting object file is reported in report.json.
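
The sketch below shows how these options fit into the request from the Quick Start and how to read the resulting report. The value formats shown for max_flash_size and no_quantize are assumptions; consult the API documentation for the authoritative formats:

import json
import zipfile

options = {'target_arch': 'm4f_hard',
           'function_name': 'random_forest_classify',
           'max_flash_size': 16384,  # assumed to be in bytes; see the API docs
           'no_quantize': False}     # value format assumed; see the API docs

# ... POST the request and download random_forest.zip as in the Quick Start ...

with zipfile.ZipFile('random_forest.zip') as z:
    report = json.loads(z.read('converter_output/report.json'))

# Fields described above: reduce_model, n_estimators, n_estimators_before_reduced
print(report)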

Note that the generated code performs no dynamic or static RAM allocation, apart from a small amount of stack space.

Supported platforms

Currently we support the Cortex M4F and M0+ platforms. More platforms will be added in future versions. We use the GNU Arm Embedded Toolchain.

We support the following values for the target_arch option:

target_arch   platform flags
m4f_hard      -mthumb -mcpu=cortex-m4 -mfloat-abi=hard -mfpu=fpv4-sp-d16
m4f_softfp    -mthumb -mcpu=cortex-m4 -mfloat-abi=softfp -mfpu=fpv4-sp-d16
m0plus        -mthumb -mcpu=cortex-m0plus