The Core ML delegate uses Core ML APIs to run neural networks with Apple's hardware acceleration. For more information about Core ML you can read here. In this tutorial, we will walk through the steps of lowering a PyTorch model to the Core ML delegate.
::::{grid} 2
:::{grid-item-card} What you will learn in this tutorial:
:class-card: card-prerequisites
How to lower a PyTorch model to the Core ML delegate, then deploy and run it on an Apple device.
:::
::::
To successfully build and run ExecuTorch's Core ML backend, you'll need the following hardware and software components.
Run `install_requirements.sh` to install the dependencies required by the Core ML backend.

```bash
cd executorch
./backends/apple/coreml/scripts/install_requirements.sh
```
Install the Xcode Command Line Tools:

```bash
xcode-select --install
```
## Exporting a Core ML delegated Program
```bash
cd executorch

# Generates the ./mv3_coreml_all.pte file.
python3 -m examples.apple.coreml.scripts.export --model_name mv3
```
The export script saves the lowered program to a `.pte` file.

## Running a Core ML delegated Program
```bash
cd executorch

# Builds `coreml_executor_runner`.
./examples/apple/coreml/scripts/build_executor_runner.sh
```
```bash
cd executorch

# Runs the exported mv3 model using the Core ML backend.
./coreml_executor_runner --model_path mv3_coreml_all.pte
```
## Profiling a Core ML delegated Program
Note that profiling is supported on macOS >= 14.4.
```bash
cd executorch

# Generates the `mv3_coreml_all.pte` and `mv3_coreml_etrecord.bin` files.
python3 -m examples.apple.coreml.scripts.export --model_name mv3 --generate_etrecord
```
```bash
# Builds `coreml_executor_runner`.
./examples/apple/coreml/scripts/build_executor_runner.sh
```
```bash
cd executorch

# Generates the ETDump file.
./coreml_executor_runner --model_path mv3_coreml_all.pte --profile_model --etdump_path etdump.etdp
```
Use the Inspector CLI to analyze the generated ETDump together with the ETRecord:

```bash
python examples/apple/coreml/scripts/inspector_cli.py --etdump_path etdump.etdp --etrecord_path mv3_coreml_etrecord.bin
```
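The Inspector pairs the runtime events recorded in the ETDump with the model metadata in the ETRecord and reports per-operator timings. Conceptually, the aggregation resembles the sketch below; the event records and names here are made-up illustrations, not the real ETDump schema.

```python
# Sketch of the kind of aggregation a profiler performs: group raw
# per-event latencies by operator name and report the average.
# The event records below are hypothetical, not the real ETDump format.
from collections import defaultdict

events = [
    {"name": "coreml_delegate::conv2d", "duration_us": 120.0},
    {"name": "coreml_delegate::conv2d", "duration_us": 118.0},
    {"name": "coreml_delegate::hardswish", "duration_us": 15.0},
]

# Accumulate (total duration, event count) per operator name.
totals = defaultdict(lambda: [0.0, 0])
for event in events:
    totals[event["name"]][0] += event["duration_us"]
    totals[event["name"]][1] += 1

# Average latency per operator, in microseconds.
averages = {name: total / count for name, (total, count) in totals.items()}
print(averages)
```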
## Running the Core ML delegated Program in the Demo iOS App
1. Please follow the Export Model step of the tutorial to bundle the exported MobileNet V3 program. You only need to do the Core ML part.
2. Complete the Build Runtime and Backends section of the tutorial. When building the frameworks, you only need the `coreml` option.
3. Complete the Final Steps section of the tutorial to build and run the demo app.
## Running the Core ML delegated Program in your App
Build the frameworks. Running the following command creates `executorch.xcframework` and `coreml_backend.xcframework` in the `cmake-out` directory.

```bash
cd executorch
./build/build_apple_frameworks.sh --Release --coreml
```
Create a new Xcode project or open an existing project.
Drag the executorch.xcframework and coreml_backend.xcframework generated from Step 2 to Frameworks.
Go to the project's Build Phases → Link Binary With Libraries, click the + sign, and add the following frameworks:

- `executorch.xcframework`
- `coreml_backend.xcframework`
- `Accelerate.framework`
- `CoreML.framework`
- `libsqlite3.tbd`
Add the exported program to the Copy Bundle Resources phase of your Xcode target.
Please follow the running a model tutorial to integrate the code for loading an ExecuTorch program.
Update the code to load the program from the Application's bundle.
```objectivec
using namespace torch::executor;

NSURL *model_url = [NSBundle.mainBundle URLForResource:@"mv3_coreml_all" withExtension:@"pte"];

Result<util::FileDataLoader> loader = util::FileDataLoader::from(model_url.path.UTF8String);
```
Use Xcode to deploy the application on the device.
The application can now run the MobileNet V3 model on the Core ML backend.
In this tutorial, you have learned how to lower the MobileNet V3 model to the Core ML backend, deploy, and run it on an Apple device.
If you encounter any bugs or issues while following this tutorial, please file a bug/issue here with tag #coreml.