commit | 5d0167c8e70c812c59882ebf3d26077f6a513740 | |
---|---|---|
author | Aapo Kyrola <akyrola@fb.com> | Mon Nov 28 12:57:40 2016 -0800 |
committer | Bram Wasti <bwasti@dev11999.prn1.facebook.com> | Tue Nov 29 15:18:38 2016 -0800 |
tree | 9b977558bc66bb31c2c683d06691cea87dbaf4f4 | |
parent | 6ebae91d247a8b68f2ba14c7853e055bb4810b3e | |
Example workflow for running distributed (sync SGD) ImageNet training in Flow

Summary: This diff introduces a simplified ImageNet trainer that uses data_parallel_model to parallelize training over GPUs and nodes in a synchronous manner. Flow's gang scheduling is used to launch the nodes, and data_parallel_model handles the synchronization among the gang members. This example also uses the operator-per-epoch model, where each epoch produces a checkpoint that is consumed by the follow-up epoch.

Reviewed By: salexspb

Differential Revision: D4223384

fbshipit-source-id: 8c2c73f4f6b2fdadb98511075ebbd8426c91eadb
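The two ideas in this commit — synchronous gradient averaging across workers, and an epoch-per-operator loop where each epoch consumes the previous epoch's checkpoint — can be illustrated with a minimal, self-contained sketch. This is not Caffe2's data_parallel_model API; all names (`average_gradients`, `run_epoch`, the toy loss) are hypothetical and exist only to show the control flow:

```python
# Hypothetical sketch of synchronous SGD with per-epoch checkpoints.
# Not Caffe2's actual API: names and the toy model are illustrative only.

def average_gradients(worker_grads):
    """Average gradients elementwise across workers (the sync-SGD step
    that data_parallel_model performs among gang members)."""
    n = len(worker_grads)
    return {k: sum(g[k] for g in worker_grads) / n
            for k in worker_grads[0]}

def run_epoch(params, data_shards, lr=0.1):
    """One synchronous epoch: each worker computes a gradient on its own
    data shard, the gradients are averaged, and every replica applies
    the same update, so all replicas stay identical."""
    # Toy per-example loss (w - x)^2, so the gradient is 2 * (w - x).
    worker_grads = []
    for shard in data_shards:
        grad = {k: sum(2 * (w - x) for x in shard) / len(shard)
                for k, w in params.items()}
        worker_grads.append(grad)
    avg = average_gradients(worker_grads)
    return {k: w - lr * avg[k] for k, w in params.items()}

# Operator-per-epoch style: each epoch reads the previous checkpoint
# and writes a new one that the follow-up epoch consumes.
checkpoint = {"w": 0.0}
shards = [[1.0, 2.0], [3.0, 4.0]]  # two "nodes", each with its own data
for epoch in range(50):
    checkpoint = run_epoch(checkpoint, shards)

# With this toy loss, w converges toward the mean of all data (2.5).
```

Because every worker applies the same averaged gradient, the model replicas never diverge, which is what makes it safe to checkpoint after each epoch and resume any node from that single checkpoint.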
Caffe2 is a deep learning framework made with expression, speed, and modularity in mind. It is an experimental refactoring of Caffe, and allows a more flexible way to organize computation.
Read the installation instructions for details on how to install Caffe2.
Caffe2 is released under the BSD 2-Clause license.