| commit | e9cc41885e4dc7754d61113c072fd3f2cf842b48 | |
|---|---|---|
| author | Aapo Kyrola <akyrola@fb.com> | Mon Nov 13 11:28:53 2017 -0800 |
| committer | Facebook Github Bot <facebook-github-bot@users.noreply.github.com> | Mon Nov 13 12:09:11 2017 -0800 |
| tree | dfa429a15ecf820a760833b3560d8b832b6be207 | |
| parent | 97e4743aafe949c4128c1c175390e547ec73a484 | |
fix dynamic memory management for distributed execution

Summary: Dynamic memory management in Data Parallel Model was broken for distributed execution because the parameter gradients were also freed after being used. That is a problem with Gloo, which expects tensors to keep the same address across multiple calls. Removing parameter gradients from recycling is not a big loss, since they are relatively small for typical convnets.

Reviewed By: asaadaldien

Differential Revision: D6314095

fbshipit-source-id: 949161d8c592927ae2fa82b3262b5f9ee47bed6f
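To illustrate the idea behind the fix: when choosing which blobs may have their memory recycled between iterations, parameter gradients must be skipped, because Gloo caches tensor addresses and requires them to stay stable across calls. The sketch below is a hypothetical Python helper, not the actual code from D6314095; the `recyclable_blobs` name and the `<param>_grad` naming convention are assumptions for illustration.

```python
def recyclable_blobs(model, activation_blobs):
    """Hypothetical sketch: filter out parameter gradients from the
    set of blobs whose memory may be freed and reused between
    iterations, since Gloo collectives expect their input tensors to
    keep the same address over multiple calls."""
    # Assumes Caffe2's convention of naming gradient blobs "<param>_grad".
    param_grads = {str(p) + "_grad" for p in model.params}
    # Intermediate activations can still be recycled; excluding the
    # gradients costs little memory for typical convnets.
    return [b for b in activation_blobs if str(b) not in param_grads]
```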
Caffe2 is a lightweight, modular, and scalable deep learning framework. Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind.
Caffe2 research award competition request for proposals
Please use Github issues (https://github.com/caffe2/caffe2/issues) to ask questions, report bugs, and request new features.
Please participate in our survey (https://www.surveymonkey.com/r/caffe2). We will send you information about new releases and special developer events/webinars.
Caffe2 is released under the Apache 2.0 license. See the NOTICE file for details.