Clone model net to avoid hard-coded inputs

Summary:
Previously DPER was quite broken - we couldn't change loaders on the fly because the serialized model had blob names hard-coded, e.g. "nn_loader/dense". In fact, the tests worked only by accident, as both the trainer and the evaluator used the same loader type.

This diff does the following:
1) when writing out the model, remap input blobs to 'inputs/<field_name>'
2) when loading the eval model, remap them back to the current loader's names
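
The two remapping steps above can be sketched as plain string manipulation on blob names. This is a hypothetical illustration, not the actual Caffe2 API - the helper names and the dict-based remap format are made up for clarity:

```python
def canonicalize_inputs(input_blobs):
    """On save: map loader-specific blob names (e.g. 'nn_loader/dense')
    to canonical 'inputs/<field_name>' names."""
    remap = {}
    for blob in input_blobs:
        field_name = blob.split('/')[-1]       # 'nn_loader/dense' -> 'dense'
        remap[blob] = 'inputs/' + field_name   # -> 'inputs/dense'
    return remap

def rebind_to_loader(canonical_blobs, loader_prefix):
    """On eval load: map canonical 'inputs/<field_name>' names back to
    whatever loader is in use at evaluation time."""
    return {
        blob: loader_prefix + '/' + blob.split('/', 1)[1]
        for blob in canonical_blobs
    }
```

For example, `canonicalize_inputs(['nn_loader/dense'])` yields `{'nn_loader/dense': 'inputs/dense'}`, and `rebind_to_loader(['inputs/dense'], 'hive_loader')` yields `{'inputs/dense': 'hive_loader/dense'}`, so the serialized model no longer bakes in the training-time loader.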

This diff uses Net.input_schema() for convenience; in particular, the schema format is implicitly serialized in the input blob names. Per our discussion with Andrey, this type of hardcoding is actually acceptable, since the schema of HiveReader on the Python side is inferred via the same string-parsing procedure.
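
To make the "schema is implicitly serialized in the blob names" point concrete, here is a hypothetical sketch (not the real HiveReader or schema code) of recovering a nested field structure from '/'-separated blob names:

```python
def infer_schema_from_blob_names(blob_names):
    """Rebuild a nested field structure from serialized blob names.
    The nesting is implicitly encoded in the '/'-separated names,
    so no separate schema blob needs to be serialized."""
    schema = {}
    for name in blob_names:
        parts = name.split('/')
        node = schema
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # descend, creating structs
        node[parts[-1]] = name                # leaf field -> blob name
    return schema
```

With inputs `['inputs/dense', 'inputs/sparse/ids']`, this returns `{'inputs': {'dense': 'inputs/dense', 'sparse': {'ids': 'inputs/sparse/ids'}}}` - the same string-parsing inference that makes the naming convention a workable contract between trainer and evaluator.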

It also modifies model saving a bit so that we don't pollute the global namespace with the shape_provider net.

Overall, the code in mlp.py is pretty terrible, but I'd leave refactoring to xianjiec as part of the Layers migration.

Reviewed By: xianjiec

Differential Revision: D4218902

fbshipit-source-id: 6cd19f0343ec1be6ddaa3581512e61879957749e
1 file changed