commit 523448a22ef8c61a618cc0f3d02700c995efcd1d
Author:    Marat Dukhan <maratek@google.com>  (Tue Oct 08 16:58:53 2019 -0700)
Committer: XNNPACK Team <xnnpack-github-robot@google.com>  (Tue Oct 08 16:59:15 2019 -0700)
Tree:      33039fe65b765722008978e0ce9cb9cfd7f8fa43
Parent:    2dbdc2fa7d5d13e9472b2e4b819975c0fbd55975

Add .gitignore file

PiperOrigin-RevId: 273637007
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 (SSE2 level) platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead, it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as MediaPipe, TensorFlow Lite, and TensorFlow.js.
XNNPACK implements the following neural network operators:
All operators in XNNPACK support the NHWC layout, and additionally allow a custom stride along the channel dimension. Thus, operators can consume a subset of the channels in the input tensor and produce a subset of the channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations, as sketched below.
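To make the zero-cost split concrete, here is a minimal C sketch of the NHWC addressing scheme described above. It uses plain pointer arithmetic rather than the XNNPACK operator API; the `nhwc_view` helper and the tensor shape are hypothetical and chosen only for illustration.

```c
#include <stddef.h>
#include <stdio.h>

/* Conceptual sketch (not the XNNPACK API): with NHWC layout plus an explicit
 * channel stride, element (n, h, w, c) lives at
 *   ((n * H + h) * W + w) * channel_stride + c.
 * An operator that honors channel_stride can read or write only a contiguous
 * subset of the channels of a larger tensor: point it at
 * base + channel_offset and keep channel_stride equal to the full tensor's
 * channel count. No data is copied, which is what makes Channel Split and
 * Channel Concatenation zero-cost. */
static float* nhwc_view(float* base, size_t H, size_t W, size_t channel_stride,
                        size_t n, size_t h, size_t w, size_t channel_offset) {
  return base + ((n * H + h) * W + w) * channel_stride + channel_offset;
}

int main(void) {
  enum { N = 1, H = 2, W = 2, C = 8 };
  float tensor[N * H * W * C];
  for (size_t i = 0; i < N * H * W * C; i++) tensor[i] = (float) i;

  /* "Split" the 8-channel tensor into two 4-channel views that alias the
   * same memory: channels [0, 4) and channels [4, 8) of pixel (0, 0, 0). */
  const size_t split_channels = 4;
  float* lo = nhwc_view(tensor, H, W, C, 0, 0, 0, /*channel_offset=*/0);
  float* hi = nhwc_view(tensor, H, W, C, 0, 0, 0, /*channel_offset=*/4);

  /* Both views are addressed with the full tensor's channel stride C. */
  for (size_t c = 0; c < split_channels; c++) {
    printf("pixel(0,0,0): lo[%zu]=%.0f hi[%zu]=%.0f\n", c, lo[c], c, hi[c]);
  }
  return 0;
}
```

Handing an operator such a view together with the full tensor's channel stride is what allows it to consume or produce only a subset of channels without copying data.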
XNNPACK is based on the QNNPACK library. However, unlike QNNPACK, XNNPACK focuses entirely on floating-point operators, and its API is no longer compatible with QNNPACK.