commit    a6a8d57f661ca7b62b9e291d3bdd48244f3e58a6
author    Chris Lattner <clattner@google.com>    Fri Mar 29 22:23:34 2019 -0700
committer Mehdi Amini <joker.eph@gmail.com>      Sat Mar 30 11:23:39 2019 -0700
tree      0156dfc77ff94dc1ed88e2a9081402c96982707a
parent    57b25427af762c753bee587ff65368346eee2459
Implement basic IR support for a builtin complex<> type. As with tuples, we have no standard ops for working with these yet; this is simply enough to represent and round-trip them in the printer and parser.

PiperOrigin-RevId: 241102728
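As a rough sketch of what the commit enables, the complex type can appear anywhere a type is expected and survives a print/parse round trip; the function name and signature below are illustrative, not taken from the commit:

```mlir
// Hypothetical declaration using the builtin complex type. There are no
// standard ops on complex values yet, so the type only appears in type
// positions such as signatures, which the printer and parser round-trip.
func @cadd(complex<f32>, complex<f32>) -> complex<f32>
```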
The MLIR project aims to define a common intermediate representation (IR) that will unify the infrastructure required to execute high-performance machine learning models in TensorFlow and similar ML frameworks. This work will include the application of HPC techniques, along with the integration of search algorithms like reinforcement learning. The project aims to reduce the cost of bringing up new hardware and to improve usability for existing TensorFlow users.
Whereas the MLIR draft specification discusses the details of the IR in a dry style intended to be a long-lived reference document, this document discusses higher-level issues.
For more information on MLIR, please see the MLIR draft specification, or join the MLIR mailing list.
MLIR is intended to be a hybrid IR that can support multiple different requirements in a unified infrastructure, from machine learning graphs down to hardware-specific operations; a sketch of this mixing appears below.
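A minimal sketch of the idea, with the caveat that dialect spellings vary across MLIR versions and the function below is illustrative only:

```mlir
// Hypothetical mix of abstraction levels in one function: an affine
// (polyhedral-style) loop nest operating on a memref buffer, all
// expressed and transformed within the same IR infrastructure.
func @fill(%buf: memref<16xf32>, %v: f32) {
  affine.for %i = 0 to 16 {
    affine.store %v, %buf[%i] : memref<16xf32>
  }
  return
}
```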
MLIR is a common IR that also supports hardware-specific operations. Thus, any investment in the infrastructure surrounding MLIR (e.g., the compiler passes that work on it) should yield good returns; many targets can use that infrastructure and will benefit from it.
MLIR is a powerful representation, but it also has non-goals. We do not try to support low-level machine code generation algorithms (like register allocation and instruction scheduling); they are a better fit for lower-level optimizers (such as LLVM). Nor do we intend MLIR to be a source language that end-users would themselves write kernels in (analogous to CUDA C++). While we'd love to see a kernel language happen someday, that will be an independent project that compiles down to MLIR.
We benefited from the experience gained building HLO, LLVM, and SIL when building MLIR. We will directly adopt existing best practices, e.g., writing and maintaining an IR spec, building an IR verifier, providing the ability to dump and parse MLIR files as text, writing extensive unit tests with the FileCheck tool, and building the infrastructure as a set of modular libraries that can be combined in new ways. We plan to use the infrastructure developed by the XLA team for performance analysis and benchmarking.
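As an illustration of the FileCheck style of testing, a lit test for the textual IR looks roughly like the following; the RUN line and file contents are a sketch of the common pattern, not taken from the repository:

```mlir
// RUN: mlir-opt %s | FileCheck %s

// FileCheck matches the CHECK lines below against mlir-opt's output, so
// the test fails if this function no longer round-trips to the same form.
// CHECK-LABEL: func @identity
func @identity(%arg0: f32) -> f32 {
  // CHECK: return %{{.*}} : f32
  return %arg0 : f32
}
```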
Other lessons have been incorporated into the design in subtle ways. For example, LLVM has non-obvious design mistakes that prevent a multithreaded compiler from working on multiple functions in an LLVM module at the same time. MLIR solves these problems by having per-function constant pools and by making references between functions explicit with function_ref.
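A hedged sketch of what "explicit references" looks like in the early textual IR (the functions are made up, and op spellings here follow the early standard dialect, which has changed over time): a function reference is an ordinary SSA value produced by a constant op, not a hidden use-list entry on the callee.

```mlir
// The reference to @inc is explicit: it is materialized as a constant
// carrying a function attribute and consumed by call_indirect, so there
// is no cross-function use-list to synchronize between threads. Note
// also that constants live inside the function that uses them.
func @inc(%x: i32) -> i32 {
  %c1 = constant 1 : i32
  %r = addi %x, %c1 : i32
  return %r : i32
}

func @apply_inc(%x: i32) -> i32 {
  %f = constant @inc : (i32) -> i32
  %r = call_indirect %f(%x) : (i32) -> i32
  return %r : i32
}
```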
To check out and build MLIR as an LLVM project:

```sh
git clone https://github.com/llvm/llvm-project.git
cd llvm-project/llvm/projects/
git clone https://github.com/tensorflow/mlir
cd ../../../
mkdir build
cd build
env CC=clang CXX=clang++ cmake -G Ninja -DLLVM_ENABLE_RTTI=1 ../llvm-project/llvm/
ninja check-mlir
```