Fix O(n^2) behavior in HLO parser.

This issue was introduced by my previous change to allow nested instructions in
HLO text.

Previously, we first tried to parse each operand as a nested instruction.  If
that failed, it would generate an error, which we'd discard.  We'd then try
parsing it as a normal operand.

The issue is that generating an error calls HloLexer::GetLineAndColumn(), which
is O(n) in the length of the input text.  Therefore generating n "normal"
operands is O(n^2).  Oops.
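To see where the quadratic cost comes from, here is a minimal Python sketch of a line/column lookup that has to rescan the buffer from the beginning on every call.  The function name mirrors GetLineAndColumn, but the code is illustrative, not the actual lexer:

```python
# Illustrative sketch: the lexer only knows a byte offset, so computing
# a (line, column) pair means rescanning the buffer from the start,
# which is O(offset).  Doing this once per operand gives O(n^2) overall.
def get_line_and_column(text: str, offset: int) -> tuple[int, int]:
    line, col = 1, 1
    for ch in text[:offset]:  # O(offset) scan on every call
        if ch == '\n':
            line, col = line + 1, 1
        else:
            col += 1
    return line, col
```

Calling this once per failed parse attempt, across n operands, is what produced the O(n^2) behavior.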

We fix this by switching the order that we try things in.  First we try as a
normal instruction name, and only if that fails do we try as a nested
instruction.
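The reordered fallback can be sketched as follows; the names and the token checks are hypothetical (the real HloParser's API differs), but the shape of the control flow is the point:

```python
# Hypothetical sketch of the reordered operand parsing.
def parse_operand(token: str) -> str:
    def try_parse_name(tok):
        # Common case: a plain operand name like "%add.1".  This check is
        # cheap and never builds an error, so no line/column scan happens.
        return tok if tok.startswith('%') else None

    def parse_nested_instruction(tok):
        # Rare case: only here may an error (with its O(n) line/column
        # lookup) be generated on failure.
        if '=' in tok:
            return tok
        raise ValueError(f"expected operand or nested instruction: {tok}")

    name = try_parse_name(token)
    if name is not None:
        return name
    return parse_nested_instruction(token)
```

Because the common case succeeds without ever constructing an error, the expensive location lookup is pushed onto the rare nested-instruction path.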

Careful readers will observe that this doesn't actually fix the O(n^2)
behavior, it just changes it: Now we're O(n^2) in the number of nested
instructions we have, instead of the number of traditional operands.

But our HLO dumping doesn't generate nested instructions -- and probably never
will, because you can't while being strictly isomorphic to an HloModule's
contents.  In practice, you only get nested HLOs when writing text by hand.
Such inputs will be small enough that the O(n^2) behavior shouldn't be a
problem.

While we're here, we also fix a TODO and generate better error messages.  Now
if parsing fails, we tell the user why it failed both as a vanilla operand and
as a nested instruction.
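A sketch of what such a combined message could look like; the helper name and wording are hypothetical, not the actual parser output:

```python
# Illustrative only: report both failure reasons in one message instead
# of discarding the first, so the user sees why the operand failed to
# parse both ways.
def combined_operand_error(as_operand_err: str, as_nested_err: str) -> str:
    return ("expected operand; "
            f"parsing as a vanilla operand failed: {as_operand_err}; "
            f"parsing as a nested instruction failed: {as_nested_err}")
```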

PiperOrigin-RevId: 417891461
Change-Id: Icfe7fe510c4edbfd5ee35c688d94127f43c67b24
README.md


TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.

TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence Research organization to conduct machine learning and deep neural networks research. The system is general enough to be applicable in a wide variety of other domains, as well.

TensorFlow provides stable Python and C++ APIs, as well as non-guaranteed backward-compatible APIs for other languages.

Keep up-to-date with release announcements and security updates by subscribing to announce@tensorflow.org. See all the mailing lists.

Install

See the TensorFlow install guide for the pip package, for enabling GPU support, for using a Docker container, and for building from source.

To install the current release, which includes support for CUDA-enabled GPU cards (Ubuntu and Windows):

$ pip install tensorflow

A smaller CPU-only package is also available:

$ pip install tensorflow-cpu

To update TensorFlow to the latest version, add the --upgrade flag to the above commands.

Nightly binaries are available for testing using the tf-nightly and tf-nightly-cpu packages on PyPI.

Try your first TensorFlow program

$ python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'

For more examples, see the TensorFlow tutorials.

Contribution guidelines

If you want to contribute to TensorFlow, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

We use GitHub issues for tracking requests and bugs; please see TensorFlow Discuss for general questions and discussion, and direct specific questions to Stack Overflow.

The TensorFlow project strives to abide by generally accepted best practices in open-source software development.

Continuous build status

You can find more community-supported platforms and configurations in the TensorFlow SIG Build community builds table.

Official Builds

Build Type                | Status                         | Artifacts
Linux CPU                 | Status                         | PyPI
Linux GPU                 | Status                         | PyPI
Linux XLA                 | Status                         | TBA
macOS                     | Status                         | PyPI
Windows CPU               | Status                         | PyPI
Windows GPU               | Status                         | PyPI
Android                   | Status                         | Download
Raspberry Pi 0 and 1      | Status                         | Py3
Raspberry Pi 2 and 3      | Status                         | Py3
Libtensorflow MacOS CPU   | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Linux CPU   | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Linux GPU   | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Windows CPU | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Windows GPU | Status Temporarily Unavailable | Nightly Binary, Official GCS

Resources

Learn more about the TensorFlow community and how to contribute.

License

Apache License 2.0