commit    9a14c013c375c5c903e5e0dbe0e8e6dc72328f12
author    Henry Lu <henrylu@fb.com>    Tue Jun 27 19:32:37 2017 -0700
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>    Tue Jun 27 19:35:24 2017 -0700
tree      a091b361e8026fcc26fbca0d618619cf78153269
parent    c3b4d277bf2730302997658ff5de4d0e9fcc8548
Refactor data_parallel_model to take advantage of the Gloo broadcast op, broadcasting across machines and GPUs in one operation

Summary: Combine _AddDistributedParameterSync() and _SyncParams() into a single function that broadcasts across distributed machines and all local GPUs simultaneously. This mirrors how the Allreduce calls have already been optimized using Gloo's functionality. All the refactoring work is contained in data_parallel_model.py.

Reviewed By: akyrola, andrewwdye

Differential Revision: D5329277

fbshipit-source-id: 4407b88980cf396f2e0f994d796294fa79fd39ed
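The gain from the refactor is easiest to see by counting collective launches. Below is a minimal, hypothetical Python sketch (plain Python, not the Caffe2 or Gloo API; the function names sync_two_step/sync_one_step are invented for illustration): it models each broadcast collective as a single call and compares the old two-step scheme (cross-machine sync to each machine's root GPU, then a per-machine local broadcast) against a single broadcast spanning every (machine, GPU) rank.

```python
# Hypothetical sketch (not the Caffe2/Gloo API): model a broadcast collective
# as one call and count how many collectives each scheme launches.

def broadcast(params, ranks, root):
    """Copy the root rank's value to every rank in the group (one collective)."""
    root_val = params[root]
    for r in ranks:
        params[r] = root_val

def sync_two_step(num_machines, gpus_per_machine):
    """Old scheme: cross-machine sync, then a local broadcast on each machine."""
    ranks = [(m, g) for m in range(num_machines) for g in range(gpus_per_machine)]
    params = {r: None for r in ranks}
    params[(0, 0)] = 1.0  # the master rank holds the fresh parameter value
    launches = 0
    # Analogue of _AddDistributedParameterSync: broadcast across machines
    # (GPU 0 of each machine only).
    broadcast(params, [(m, 0) for m in range(num_machines)], (0, 0))
    launches += 1
    # Analogue of _SyncParams: each machine fans out from its GPU 0.
    for m in range(num_machines):
        broadcast(params, [(m, g) for g in range(gpus_per_machine)], (m, 0))
        launches += 1
    return params, launches

def sync_one_step(num_machines, gpus_per_machine):
    """New scheme: one broadcast spanning all machines and all GPUs at once."""
    ranks = [(m, g) for m in range(num_machines) for g in range(gpus_per_machine)]
    params = {r: None for r in ranks}
    params[(0, 0)] = 1.0
    broadcast(params, ranks, (0, 0))
    return params, 1
```

Both schemes leave every rank with the same value, but for M machines the two-step version launches 1 + M collectives while the merged version launches one, which is the same flavor of saving the commit cites for the Gloo-based Allreduce path.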
Caffe2 is a lightweight, modular, and scalable deep learning framework. Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind.
Please use GitHub issues (https://github.com/caffe2/caffe2/issues) to ask questions, report bugs, and request new features.
Please participate in our survey (https://www.surveymonkey.com/r/caffe2). We will send you information about new releases and special developer events/webinars.
Caffe2 is released under the BSD 2-Clause license.