This directory contains scripts and other helper utilities that illustrate an end-to-end workflow for running a Core ML delegated `torch.nn.Module` with the ExecuTorch runtime.
```
coreml
├── scripts            # Scripts to build the runner.
├── executor_runner    # The runner implementation.
└── README.md          # This file.
```
We will walk through an example model to generate a Core ML delegated binary file from a python `torch.nn.Module`, then use the `coreml/executor_runner` to run the exported binary file.
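For orientation, the `add` example exported below is essentially a `torch.nn.Module` whose `forward` sums its inputs. The class below is a plausible stand-in, not the actual definition from the examples tree, which may differ:

```python
import torch

# A minimal stand-in for the "add" example model: the exported graph
# amounts to a single elementwise add op.
class Add(torch.nn.Module):
    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return x + y

model = Add().eval()
example_inputs = (torch.ones(1), torch.ones(1))
print(model(*example_inputs))  # a 1-element tensor holding 2.0
```

The export script traces a module like this and lowers the resulting graph to the Core ML backend.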
Follow the setup guide in Setting Up ExecuTorch to get a basic ExecuTorch development environment working.
Run `install_requirements.sh` to install the dependencies required by the Core ML backend:

```bash
cd executorch

./backends/apple/coreml/scripts/install_requirements.sh
```
Export a Core ML delegated model:

```bash
cd executorch

# To get a list of example models
python3 -m examples.portable.scripts.export -h

# Generates ./add_coreml_all.pte file if successful.
python3 -m examples.apple.coreml.scripts.export_and_delegate --model_name add
```
Build the `coreml_executor_runner` and use it to run the exported model:

```bash
cd executorch

# Builds the Core ML executor runner. Generates ./coreml_executor_runner if successful.
./examples/apple/coreml/scripts/build_executor_runner.sh

# Run the Core ML delegated model.
./coreml_executor_runner --model_path add_coreml_all.pte
```
Exporting with `examples.apple.coreml.scripts.export_and_delegate` can fail if the model is not supported by the Core ML backend. The following models from the example models list (`python3 -m examples.portable.scripts.export -h`) are currently supported by the Core ML backend:

```
add
add_mul
ic4
linear
mul
mv2
mv3
resnet18
resnet50
softmax
vit
w2l
```
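Since the supported set is fixed, a small guard before shelling out to the export script gives a clearer error than a failed export. A hypothetical helper (the model list is copied from above; `export_command` is not part of ExecuTorch):

```python
# Models currently supported by the Core ML backend (from the list above).
SUPPORTED_MODELS = {
    "add", "add_mul", "ic4", "linear", "mul", "mv2",
    "mv3", "resnet18", "resnet50", "softmax", "vit", "w2l",
}

def export_command(model_name: str) -> list[str]:
    """Build the argv for the export script, rejecting unsupported models."""
    if model_name not in SUPPORTED_MODELS:
        raise ValueError(
            f"{model_name!r} is not supported by the Core ML backend; "
            f"pick one of: {', '.join(sorted(SUPPORTED_MODELS))}"
        )
    return [
        "python3", "-m",
        "examples.apple.coreml.scripts.export_and_delegate",
        "--model_name", model_name,
    ]

print(" ".join(export_command("add")))
```

The resulting argv can be passed to `subprocess.run` from the `executorch` checkout root.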