fix dynamic memory management for distributed execution

Summary: Dynamic memory management in Data Parallel Model was broken for distributed computation because the parameter gradients were also freed after they were used. This is a problem with Gloo, which expects tensors to keep the same address across multiple calls. Removing the parameter gradients from recycling is not a big loss, since they are relatively small for typical convnets.
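For illustration, here is a minimal sketch of the idea, not the actual change: collect the parameter-gradient blob names and exclude them from blob recycling so their addresses stay stable for Gloo collectives. The `optimize_memory` callable and its `excluded_blobs` argument below are hypothetical stand-ins, not the real Caffe2 API.

```python
# Sketch only: keep parameter gradients out of blob recycling so their
# tensor addresses stay stable across Gloo allreduce calls.
# `optimize_memory` / `excluded_blobs` are hypothetical placeholders.

def exclude_param_grads_from_recycling(model, optimize_memory):
    # Gradient blobs are conventionally named "<param>_grad" in Caffe2.
    param_grad_blobs = {str(param) + "_grad" for param in model.params}

    # Recycle activations and other intermediates, but never the
    # parameter gradients: Gloo expects the same address on every call,
    # and the gradients are small relative to activations anyway.
    optimize_memory(model.net, excluded_blobs=param_grad_blobs)
```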

Reviewed By: asaadaldien

Differential Revision: D6314095

fbshipit-source-id: 949161d8c592927ae2fa82b3262b5f9ee47bed6f
README.md

Caffe2


Caffe2 is a lightweight, modular, and scalable deep learning framework. Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind.

News and Events

Caffe2 research award competition request for proposals

Questions and Feedback

Please use GitHub issues (https://github.com/caffe2/caffe2/issues) to ask questions, report bugs, and request new features.

Please participate in our survey (https://www.surveymonkey.com/r/caffe2). We will send you information about new releases and special developer events/webinars.

License

Caffe2 is released under the Apache 2.0 license. See the NOTICE file for details.

Further Resources on Caffe2.ai