| commit | d70db7d122a636319492b0bb179aed1304b75837 | |
|---|---|---|
| author | Huy Do <huydhn@gmail.com> | Thu Nov 30 19:27:56 2023 -0800 |
| committer | Facebook GitHub Bot <facebook-github-bot@users.noreply.github.com> | Thu Nov 30 19:27:56 2023 -0800 |
| tree | e4da1426d2191f7c04a371a02808064dd42a23ae | |
| parent | 3a4bb06b3a3a36863ff9d7fca3cfee9d8f7b6613 | |
Switch PyTorch nightly to commit-based pin (#1247)

Summary: This PR replaces the existing PyTorch nightly pin with a commit-based one, where the PyTorch commit can be:

* a commit from main or viable/strict
* a commit from a WIP pull request
* or even a branch name

The main benefit of this change is to give folks who are working on both PyTorch and ExecuTorch the flexibility to use whichever PyTorch commit they want, without having to wait for it to become available in a nightly build while leaving ExecuTorch CI in a potentially broken state.

The changes here include:

* Remove `nightly.txt`.
* The pinned PyTorch commit is checked out and built from source.
  * For Linux, the build happens during the Docker image build, so there is no impact on CI duration. From a dev point of view, a PyTorch snapshot is always installed at the specified commit.
  * For macOS, the build happens during the CI job. This incurs additional build time, but I have added sccache to mitigate it. The increase is less than 5 minutes in my spot check, for example [before](https://github.com/pytorch/executorch/actions/runs/6948971118/job/18906105216) vs. [after](https://github.com/pytorch/executorch/actions/runs/6951428944/job/18913383896).
* Audio and vision also need to be built from source (for compatibility with the PyTorch C++ layer). For simplicity, the audio and vision commit pins come from core.
* Install sccache and its openssl dependency.
* Tweak the codebase here and there to make it work with building PyTorch from source.

After this is ready, I will announce it to the team before landing the change.

Pull Request resolved: https://github.com/pytorch/executorch/pull/1247
Reviewed By: mergennachin, guangy10
Differential Revision: D51518043
Pulled By: huydhn
fbshipit-source-id: 39a36028dcb646f4d56821ea6e101ead12d02de9
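The commit-based pin workflow described above can be sketched as a small shell script. The pin file name, its location, and the follow-up build commands below are illustrative assumptions, not the PR's actual scripts:

```shell
#!/bin/sh
# Sketch of a commit-based pin (file name and layout are assumptions).
set -eu

mkdir -p ci_commit_pins
# The pin can be a commit SHA from main or viable/strict, a commit from a
# WIP pull request, or even a branch name.
printf '%s\n' "viable/strict" > ci_commit_pins/pytorch.txt

TORCH_PIN="$(cat ci_commit_pins/pytorch.txt)"
echo "Building pytorch/pytorch from source at: ${TORCH_PIN}"

# CI would then check out and build PyTorch at the pin (network access
# required, so left as comments here):
#   git clone https://github.com/pytorch/pytorch.git
#   git -C pytorch checkout "${TORCH_PIN}"
#   pip install ./pytorch   # with sccache on PATH to speed up rebuilds
```

Because the pin is just a git ref, moving CI to a different PyTorch commit is a one-line change to the pin file rather than a wait for the next nightly.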
ExecuTorch is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices including wearables, embedded devices and microcontrollers. It is part of the PyTorch Edge ecosystem and enables efficient deployment of PyTorch models to edge devices.
Key value propositions of ExecuTorch are portability, productivity, and performance.
For a comprehensive technical overview of ExecuTorch and step-by-step tutorials, please visit our documentation website.
This is a preview version of ExecuTorch and should be used for testing and evaluation purposes only. It is not recommended for use in production settings. We welcome any feedback, suggestions, and bug reports from the community to help us improve the technology. Please use the PyTorch Forums for discussion and feedback about ExecuTorch using the ExecuTorch category, and our GitHub repository for bug reporting.
The ExecuTorch code and APIs are still changing quickly, and there are not yet any guarantees about forward/backward source compatibility. We recommend using the latest v#.#.#
release tag from the Releases page when experimenting with this preview release.
```
executorch
├── backends              # Backend delegate implementations.
├── build                 # Utilities for managing the build system.
├── bundled_program       # Utilities for attaching reference inputs and outputs to models. TODO move to extension
├── codegen               # Tooling to autogenerate bindings between kernels and the runtime. TODO move to tool
├── configurations        # TODO delete this
├── docs                  # Static docs tooling.
├── examples              # Examples of various user flows, such as model export, delegates, and runtime execution.
├── exir                  # Ahead-of-time library: model capture and lowering APIs.
|   ├── _serialize        # Serialize final export artifact.
|   ├── backend           # Backend delegate ahead-of-time APIs.
|   ├── capture           # Program capture.
|   ├── dialects          # Op sets for various dialects in the export process.
|   ├── emit              # Conversion from ExportedProgram to ExecuTorch execution instructions.
|   ├── passes            # Built-in compiler passes.
|   ├── program           # Export artifacts.
|   ├── verification      # IR verification.
├── extension             # Extensions built on top of the runtime.
|   ├── aten_util
|   ├── data_loader       # 1st-party data loader implementations.
|   ├── memory_allocator  # 1st-party memory allocator implementations.
|   ├── pybindings        # Python API for the ExecuTorch runtime.
|   ├── pytree            # C++ and Python flattening and unflattening lib for pytrees.
|   ├── testing_util
├── kernels               # 1st-party kernel implementations.
|   ├── aten
|   ├── optimized
|   ├── portable          # Reference implementations of ATen operators.
|   ├── prim_ops          # Special ops used in the ExecuTorch runtime for control flow and symbolic primitives.
|   ├── quantized
├── profiler              # Utilities for profiling. TODO delete in favor of ETDump in sdk/
├── runtime               # Core C++ runtime of ExecuTorch.
|   ├── backend           # Backend delegate runtime APIs.
|   ├── core              # Core structures used across all levels of the runtime.
|   ├── executor          # Model loading, initialization, and execution.
|   ├── kernel            # Kernel registration and management.
|   ├── platform          # Layer between architecture-specific code and user calls.
├── schema                # ExecuTorch program definition. TODO move under serialization/
├── scripts               # Utility scripts for size management, dependency management, etc.
├── sdk                   # Model profiling, debugging, and introspection.
├── shim                  # Compatibility layer between OSS and internal builds.
├── test                  # Broad-scoped end-to-end tests.
├── third-party           # Third-party dependencies.
├── util                  # TODO delete this
```
ExecuTorch is BSD licensed, as found in the LICENSE file.