commit f18697daa15744e7fc51caf5f0a2da40904dede2
author: Derek Murray <mrry@google.com> | Mon Mar 16 22:39:17 2020 -0700
committer: TensorFlower Gardener <gardener@tensorflow.org> | Mon Mar 16 22:42:49 2020 -0700
tree: f44335d7a441ebf018fbf83357067a939ff5fe75
parent: 489126360df1b48a44f99fda2397c803053aba35

[tf.data] Several optimizations for the graph hashing code.

1. Avoid copying the `GraphDef` each time a `GraphHasher` is created. The graph always outlives the hasher, so an unowned pointer is acceptable here. This should save O(#nodes) copies.
2. Use the same `FunctionLibraryDefinition` for all hashing. Previously we converted it to and from a submessage of `GraphDef`, which led to many copies and dynamic allocations. Instead, we either build it once for the root node or (ideally) the user passes in an already-constructed library, which is then used for all nodes. Since the function library typically has O(1) functions per node, this saves O(#nodes^2) copies.

PiperOrigin-RevId: 301307984
Change-Id: I6e28ffd1df908840e946e43d3be3dc2f5106eb55
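The copy-avoidance pattern described in the commit message can be sketched in pure Python. This is illustrative only: the real change lives in TensorFlow's C++ `GraphHasher`, and the class below is a hypothetical stand-in, not the actual implementation.

```python
class GraphHasher:
    """Toy illustration: keep unowned references instead of copying.

    The caller guarantees that `graph` outlives the hasher, so holding a
    reference avoids an O(#nodes) deep copy on every construction.
    """

    def __init__(self, graph, library):
        # Reference, not a copy (the expensive alternative would be
        # something like copy.deepcopy(graph)).
        self.graph = graph
        # One shared function library is reused for all nodes, rather than
        # being rebuilt per node, which would cost O(#nodes^2) overall.
        self.library = library


# Stand-ins for a GraphDef and a FunctionLibraryDefinition.
graph = {"nodes": ["a", "b", "c"]}
library = {"fn": "..."}

# Creating many hashers shares the same underlying objects; nothing is copied.
hashers = [GraphHasher(graph, library) for _ in range(3)]
assert all(h.graph is graph and h.library is library for h in hashers)
```

The design choice is the usual ownership trade-off: a borrowed pointer is cheap but only safe because the graph's lifetime strictly encloses the hasher's.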
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.
TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence Research organization to conduct machine learning and deep neural networks research. The system is general enough to be applicable in a wide variety of other domains, as well.
TensorFlow provides stable Python and C++ APIs, as well as a non-guaranteed backwards-compatible API for other languages.
Keep up to date with release announcements and security updates by subscribing to announce@tensorflow.org. See all the mailing lists.
See the TensorFlow install guide for the pip package, enabling GPU support, using a Docker container, and building from source.
To install the current release, which includes support for CUDA-enabled GPU cards (Ubuntu and Windows):
$ pip install tensorflow
A smaller CPU-only package is also available:
$ pip install tensorflow-cpu
To update TensorFlow to the latest version, add the `--upgrade` flag to the above commands (for example, `pip install --upgrade tensorflow`).
Nightly binaries are available for testing using the tf-nightly and tf-nightly-cpu packages on PyPI.
$ python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'
For more examples, see the TensorFlow tutorials.
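The session above can also be written as a small script. This is a minimal sketch assuming the `tensorflow` pip package is installed; all calls are standard TF 2.x eager ops.

```python
import tensorflow as tf

# Eager execution is the default in TF 2.x, so ops run immediately.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# Element-wise and matrix operations return tf.Tensor objects;
# .numpy() converts a tensor back to a NumPy array.
doubled = tf.add(x, x)      # value: [[2, 4], [6, 8]]
product = tf.matmul(x, x)   # value: [[7, 10], [15, 22]]

print(doubled.numpy())
print(product.numpy())
```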
If you want to contribute to TensorFlow, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.
We use GitHub issues for tracking requests and bugs. Please see TensorFlow Discuss for general questions and discussion, and direct specific questions to Stack Overflow.
The TensorFlow project strives to abide by generally accepted best practices in open-source software development:
Build Type | Status | Artifacts
---|---|---
Linux CPU | | PyPI
Linux GPU | | PyPI
Linux XLA | | TBA
macOS | | PyPI
Windows CPU | | PyPI
Windows GPU | | PyPI
Android | |
Raspberry Pi 0 and 1 | | Py2 Py3
Raspberry Pi 2 and 3 | | Py2 Py3
Build Type | Status | Artifacts
---|---|---
Linux AMD ROCm GPU Nightly | | Nightly
Linux AMD ROCm GPU Stable Release | | Release 1.15 / 2.x
Linux s390x Nightly | | Nightly
Linux s390x CPU Stable Release | | Release
Linux ppc64le CPU Nightly | | Nightly
Linux ppc64le CPU Stable Release | | Release 1.15 / 2.x
Linux ppc64le GPU Nightly | | Nightly
Linux ppc64le GPU Stable Release | | Release 1.15 / 2.x
Linux CPU with Intel® MKL-DNN Nightly | | Nightly
Linux CPU with Intel® MKL-DNN Stable Release | | Release 1.15 / 2.x
Red Hat® Enterprise Linux® 7.6 CPU & GPU Python 2.7, 3.6 | | 1.13.1 PyPI
Learn more about the TensorFlow community and how to contribute.