| commit | a463f0be2462d64b399e33229bacfcd723e593d9 |
|---|---|
| author | Jorge Pineda &lt;jorgep31415@meta.com&gt; — Fri May 31 13:46:14 2024 -0700 |
| committer | Facebook GitHub Bot &lt;facebook-github-bot@users.noreply.github.com&gt; — Fri May 31 13:46:14 2024 -0700 |
| tree | 6268f66bac8c6fbae5903512bcf2b974a9e30d22 |
| parent | 8c8d9652ab85b84f8b1b00cbdc9a9569fdd0f86c |
aten.avg_pool2d (#3770)

Summary: Pull Request resolved: https://github.com/pytorch/executorch/pull/3770

## The Operator

`nn.Module` invocations of [`torch.nn.AvgPool2d`](https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html) are compiled to `aten.avg_pool2d.default` in the Edge Dialect, which carries the following signature:

```
- func: avg_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=0, bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> Tensor
```

## Implementation

This is a full C-packing implementation, including dynamic shape support. We start with [LiteInterpreter's `avg_pool2d.glsl` logic](https://github.com/pytorch/pytorch/blob/9257a0698b57acc5607ee6fe31a16fdd93af1731/aten/src/ATen/native/vulkan/glsl/avg_pool2d.glsl), which is incomplete, and add the `ceil_mode=True`, `count_include_pad=True`, and `divisor_override` cases for full support. As a result, the divisor's computation is now a bit complex. If needed, we can simplify it into separate shaders in the future.

ghstack-source-id: 228476264

Reviewed By: copyrightly

Differential Revision: D57918523

fbshipit-source-id: 8069c4a2dcc5d46da7221d58661e57bf2055b521
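Since the commit calls out the divisor computation as the complex part, here is a plain-Python sketch of how the three cases interact. This is an illustrative reimplementation, not the shader code: `avg_pool2d` below is a hypothetical helper that assumes a 2D list input, a square kernel, and symmetric padding.

```python
import math

def pool_out_size(in_size, k, stride, pad, ceil_mode):
    # Output length of one spatial dim, per the aten.avg_pool2d shape rule.
    if ceil_mode:
        out = math.ceil((in_size + 2 * pad - k) / stride) + 1
        # The last window must start inside the (left-)padded input.
        if (out - 1) * stride >= in_size + pad:
            out -= 1
    else:
        out = math.floor((in_size + 2 * pad - k) / stride) + 1
    return out

def avg_pool2d(x, k, stride=None, pad=0, ceil_mode=False,
               count_include_pad=True, divisor_override=None):
    # x: 2D list (H x W); square kernel k, symmetric padding pad.
    stride = stride or k
    h, w = len(x), len(x[0])
    oh = pool_out_size(h, k, stride, pad, ceil_mode)
    ow = pool_out_size(w, k, stride, pad, ceil_mode)
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # Window in input coordinates, before clipping.
            h0, w0 = i * stride - pad, j * stride - pad
            h1, w1 = h0 + k, w0 + k
            # Clip against the real input for the summation.
            ch0, cw0 = max(h0, 0), max(w0, 0)
            ch1, cw1 = min(h1, h), min(w1, w)
            s = sum(x[r][c] for r in range(ch0, ch1)
                            for c in range(cw0, cw1))
            if divisor_override is not None:
                div = divisor_override
            elif count_include_pad:
                # Count padded cells too, but clip to the padded extent
                # (windows can overrun it when ceil_mode=True).
                div = (min(h1, h + pad) - h0) * (min(w1, w + pad) - w0)
            else:
                # Count only the real (non-pad) cells in the window.
                div = (ch1 - ch0) * (cw1 - cw0)
            out[i][j] = s / div
    return out
```

For example, with a 1x1 input `[[4]]`, `k=2`, `pad=1`, the single window covers one real cell and three pad cells, so `count_include_pad=True` yields `4/4 = 1.0` while `count_include_pad=False` yields `4/1 = 4.0`, which is why a single shader handling all cases ends up with branching divisor logic.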
ExecuTorch is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices, including wearables, embedded devices, and microcontrollers. It is part of the PyTorch Edge ecosystem and enables efficient deployment of PyTorch models to edge devices.
Key value propositions of ExecuTorch are:

- **Portability:** Compatibility with a wide variety of computing platforms, from high-end mobile phones to highly constrained embedded systems and microcontrollers.
- **Productivity:** Enabling developers to use the same toolchains and SDK from PyTorch model authoring and conversion, to debugging and deployment to a wide variety of platforms.
- **Performance:** Providing end users with a seamless and high-performance experience due to a lightweight runtime and utilizing full hardware capabilities such as CPUs, NPUs, and DSPs.
For a comprehensive technical overview of ExecuTorch and step-by-step tutorials, please visit our documentation website for the latest release (or the main branch).
We welcome any feedback, suggestions, and bug reports from the community to help us improve our technology. Please use the PyTorch Forums for discussion and feedback about ExecuTorch using the ExecuTorch category, and our GitHub repository for bug reporting.
We recommend using the latest release tag from the Releases page when developing.
executorch
├── backends # Backend delegate implementations.
├── build # Utilities for managing the build system.
├── bundled_program # Utilities for attaching reference inputs and outputs to models.
├── codegen # Tooling to autogenerate bindings between kernels and the runtime.
├── configurations
├── docs # Static docs tooling.
├── examples # Examples of various user flows, such as model export, delegates, and runtime execution.
├── exir # Ahead-of-time library: model capture and lowering APIs.
| ├── _serialize # Serialize final export artifact.
| ├── backend # Backend delegate ahead-of-time APIs.
| ├── capture # Program capture.
| ├── dialects # Op sets for various dialects in the export process.
| ├── emit # Conversion from ExportedProgram to ExecuTorch execution instructions.
| ├── passes # Built-in compiler passes.
| ├── program # Export artifacts.
| ├── verification # IR verification.
├── extension # Extensions built on top of the runtime.
| ├── aten_util
| ├── data_loader # 1st-party data loader implementations.
| ├── memory_allocator # 1st-party memory allocator implementations.
| ├── pybindings # Python API for the ExecuTorch runtime.
| ├── pytree # C++ and Python flattening and unflattening lib for pytrees.
| ├── testing_util
├── kernels # 1st-party kernel implementations.
| ├── aten
| ├── optimized
| ├── portable # Reference implementations of ATen operators.
| ├── prim_ops # Special ops used in the ExecuTorch runtime for control flow and symbolic primitives.
| ├── quantized
├── profiler # Utilities for profiling.
├── runtime # Core C++ runtime.
| ├── backend # Backend delegate runtime APIs.
| ├── core # Core structures used across all levels of the runtime.
| ├── executor # Model loading, initialization, and execution.
| ├── kernel # Kernel registration and management.
| ├── platform # Layer between architecture-specific code and user calls.
├── schema # ExecuTorch program definition.
├── scripts # Utility scripts for size management, dependency management, etc.
├── sdk # Model profiling, debugging, and introspection.
├── shim # Compatibility layer between OSS and internal builds.
├── test # Broad-scoped end-to-end tests.
├── third-party # Third-party dependencies.
├── util
ExecuTorch is BSD licensed, as found in the LICENSE file.