commit    a7df0e6724cf03042b5c6244739783138ef30413
author    Dmytro Dzhulgakov <dzhulgakov@fb.com>              Wed Nov 23 18:17:01 2016 -0800
committer Bram Wasti <bwasti@dev11999.prn1.facebook.com>     Tue Nov 29 15:18:38 2016 -0800
tree      0325c216a1e0fe4b3a89bfc9426567ac8f412435
parent    a597c7b167bea68da30c937a5b3e8a7ada6dedb9
Clone model net to avoid hard-coded inputs

Summary: Previously DPER was quite broken: we couldn't change loaders on the fly because the serialized model had blob names hard-coded, e.g. "nn_loader/dense". In fact, the tests worked only by accident, as both the trainer and the evaluator used the same loader type.

This diff does the following:
1) When writing out the model, remap input blobs to 'inputs/<field_name>'.
2) When loading the eval model, remap them back to the current loader.

This diff uses Net.input_schema() for convenience; in particular, the schema format is implicitly serialized in the input blob names. From our discussion with Andrey, this type of hardcoding is actually acceptable, since the schema of HiveReader on the Python side is inferred via the same string-parsing procedure.

It also modifies model saving a bit so that we don't pollute the global namespace with the shape_provider net. Overall, the code in mlp.py is pretty terrible, but I'd leave refactoring to xianjiec as part of the Layers migration.

Reviewed By: xianjiec

Differential Revision: D4218902

fbshipit-source-id: 6cd19f0343ec1be6ddaa3581512e61879957749e
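The remapping described in steps 1) and 2) above can be sketched roughly as follows. This is an illustrative stand-alone sketch, not the actual DPER/mlp.py code: the helper names `to_canonical` and `to_loader` are hypothetical, and only the `"nn_loader/dense"` example and the `inputs/<field_name>` naming scheme come from the commit message.

```python
# Hypothetical sketch of the blob-name remapping from this commit.
# Serialized models store loader-agnostic names ('inputs/<field_name>');
# at load time those are mapped back into the current loader's namespace.

def to_canonical(blob_names):
    """Map loader-specific blob names (e.g. 'nn_loader/dense') to
    loader-agnostic 'inputs/<field_name>' names for serialization."""
    return {name: "inputs/" + name.split("/", 1)[1] for name in blob_names}

def to_loader(canonical_names, loader_prefix):
    """Map canonical 'inputs/<field_name>' names back to the blob
    namespace of whichever loader the eval model currently uses."""
    return {name: loader_prefix + "/" + name.split("/", 1)[1]
            for name in canonical_names}
```

For example, `to_canonical(["nn_loader/dense"])` yields `{"nn_loader/dense": "inputs/dense"}`, and a different loader can then reclaim the blob via `to_loader(["inputs/dense"], "hive_loader")`, which yields `{"inputs/dense": "hive_loader/dense"}`. The point of the indirection is that the serialized model never mentions a concrete loader, so loaders can be swapped between training and evaluation.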
Caffe2 is a deep learning framework made with expression, speed, and modularity in mind. It is an experimental refactoring of Caffe, and allows a more flexible way to organize computation.
Read the installation instructions for details on how to install Caffe2.
Caffe2 is released under the BSD 2-Clause license.