commit | 6ee6aedf20f6e8d89aff8fa9c37d4230cc03af6f | [log] [tgz] |
---|---|---|
author | Zeming Lin <misterabc@devgpu029.prn2.facebook.com> | Tue Feb 23 09:31:19 2016 -0800 |
committer | Zeming Lin <misterabc@devgpu029.prn2.facebook.com> | Tue Feb 23 09:31:19 2016 -0800 |
tree | ae0067233f6281fbb0e1c2a575d24862c9243a44 | |
parent | bae607590676d7eb0c0eec8be976caa4cb91f639 [diff] |
Adding SparseLinear with CUDA, requires buffer variable
THNN is a library that gathers nn’s C implementations of neural network modules. It is entirely free of Lua dependencies and can therefore be used in any application that has a C FFI. Note that it contains only fairly low-level functions; an object-oriented C/C++ wrapper will soon be created as a separate library.
There is also a CUDA counterpart of THNN (CUTHNN) in the cunn repository.
Torch’s nn module provided many optimized C implementations of modules, but the source files contained Lua-specific code and headers, so they could not easily be compiled and included anywhere else.
THNN is based on the same code, but is written in pure C, so it can be easily included in other code. Future C implementations should be committed to THNN.
THNN is a purely functional library. It provides two or three functions for each module that perform the most important operations: updateOutput, updateGradInput, and, for modules with parameters, accGradParameters.
For information on argument types please check the API reference.
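To illustrate the stateless, functional calling convention described above, here is a minimal sketch in plain C. It uses raw float arrays in place of THNN's tensor arguments and a hypothetical ReLU-style module; the function names and signatures are illustrative assumptions, not the actual THNN API (real THNN functions take a state handle and tensor structs).

```c
#include <stddef.h>

/* Hypothetical forward pass for a ReLU-like module:
 * output[i] = max(input[i], 0). No hidden state; everything
 * the function needs is passed in explicitly. */
static void ReLU_updateOutput(const float *input, float *output, size_t n)
{
    for (size_t i = 0; i < n; i++)
        output[i] = input[i] > 0.0f ? input[i] : 0.0f;
}

/* Hypothetical backward pass: the gradient flows through only
 * where the input was positive. */
static void ReLU_updateGradInput(const float *input, const float *gradOutput,
                                 float *gradInput, size_t n)
{
    for (size_t i = 0; i < n; i++)
        gradInput[i] = input[i] > 0.0f ? gradOutput[i] : 0.0f;
}
```

Because no module object carries state between calls, a caller simply allocates the buffers itself and invokes the forward and backward functions in sequence; this is the property that makes the kernels easy to reuse from any C FFI.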
This is all the THNN library provides. An object-oriented implementation similar to nn will be provided in a separate library; this one is just a set of CPU kernels.
This section will be expanded when the FFI refactoring is finished.