commit    58bcf76ba3f52dc4f59516db4d8492b82580dfe0
author    Lei Chen <raychen@fb.com>  Mon Oct 16 15:56:35 2017 -0700
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>  Mon Oct 16 16:03:48 2017 -0700
tree      0e1a17416c63be2136851887ca740e77f1784332
parent    569bdb4b774111571272180bea6033720e9187cc
Have model downloading as a separate plan

Summary: For distributed offline training, downloading parameters from trainer_0 is part of the epoch plan. For distributed realtime training, however, we publish the model at a fixed time interval, so we need to run multiple iterations of the epoch plan before publishing the model. In this diff, I split parameter downloading out of the epoch plan into a separate plan, so we can explicitly execute it before model publishing for distributed online training.

Reviewed By: boryiingsu

Differential Revision: D5995122

fbshipit-source-id: 47d61d7b8c57cfae156e79b7ec32068ef579d7c3
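The scheduling change the commit describes can be sketched in plain Python. This is a hedged illustration only: the function and state names below (run_epoch_plan, run_download_parameters_plan, publish_model) are hypothetical stand-ins, not the actual Caffe2 plans from this diff, which would be core.Plan objects executed via workspace.RunPlan. The point is the control flow: the epoch plan runs many iterations, and the parameter download, now a standalone step, is executed only right before each periodic model publish.

```python
# Hypothetical stand-ins for the epoch plan, the standalone
# parameter-download plan, and model publishing. In Caffe2 these would
# be core.Plan objects run with workspace.RunPlan.
def run_epoch_plan(state):
    # One iteration of training work.
    state["iterations"] += 1

def run_download_parameters_plan(state):
    # Pull the latest parameters from trainer_0. For offline training
    # this used to happen inside the epoch plan itself.
    state["params_downloaded_at"] = state["iterations"]

def publish_model(state):
    # Record which iteration's parameters were published.
    state["published"].append(state["params_downloaded_at"])

def realtime_training_loop(state, publish_every=3, total_iters=7):
    """Run many epoch-plan iterations, downloading parameters only
    as an explicit step before each periodic model publish."""
    for _ in range(total_iters):
        run_epoch_plan(state)
        if state["iterations"] % publish_every == 0:
            run_download_parameters_plan(state)
            publish_model(state)

state = {"iterations": 0, "params_downloaded_at": None, "published": []}
realtime_training_loop(state)
# Parameters are fetched only at publish points (iterations 3 and 6),
# rather than on every epoch-plan iteration.
```

Keeping the download as its own plan lets the online-training driver decide when to pay the synchronization cost, instead of incurring it on every epoch-plan run.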
Caffe2 is a lightweight, modular, and scalable deep learning framework. Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind.
Please use GitHub issues (https://github.com/caffe2/caffe2/issues) to ask questions, report bugs, and request new features.
Please participate in our survey (https://www.surveymonkey.com/r/caffe2). We will send you information about new releases and special developer events/webinars.
Caffe2 is released under the Apache 2.0 license. See the NOTICE file for details.