In this example, we showcase how to export a model (phi-3-mini) with LoRA layers attached to ExecuTorch. The model is exported to ExecuTorch for both inference and training.
To see how the model exported for training can be used in a full finetuning loop, please see our example on LLM PTE Finetuning.
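As a reminder of what the attached LoRA layers compute, here is a minimal sketch in plain Python with hypothetical toy shapes (a real LoRA layer wraps a frozen `nn.Linear`): the layer adds a low-rank update `B @ A` to a frozen weight `W`, so the forward pass returns `W x + (alpha / r) * B (A x)`.

```python
# Minimal sketch of the LoRA computation; toy sizes, not phi-3-mini's.

def matvec(m, v):
    # Multiply a matrix (list of rows) by a vector.
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def lora_forward(w, a, b, x, alpha=16.0, r=2):
    # Frozen base projection: W @ x.
    base = matvec(w, x)
    # Low-rank trainable update: B @ (A @ x), scaled by alpha / r.
    update = matvec(b, matvec(a, x))
    scale = alpha / r
    return [h + scale * u for h, u in zip(base, update)]

# Toy example: 3x3 frozen weight, rank-2 adapters A (2x3) and B (3x2).
w = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
a = [[0.1, 0.0, 0.0], [0.0, 0.1, 0.0]]
b = [[0.0, 0.0], [0.0, 0.0], [0.1, 0.1]]
x = [1.0, 2.0, 3.0]
print(lora_forward(w, a, b, x))
```

During finetuning only `A` and `B` are updated, which is why exporting the LoRA-augmented model for training is so much cheaper than exporting the full set of weights as trainable.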
1. Run `./install_requirements.sh` in the ExecuTorch root directory.
2. Run `./examples/models/phi-3-mini-lora/install_requirements.sh`.
3. Run `python export_model.py` to export the model to a `.pte` file.
```bash
# Clean and configure the CMake build system. Compiled programs will appear
# in the executorch/cmake-out directory we create here.
(rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..)

# Build the executor_runner target.
cmake --build cmake-out --target executor_runner -j9

# Run the model for inference.
./cmake-out/executor_runner --model_path phi3_mini_lora.pte
```