Merge "[ATest] Support test result upload."
diff --git a/OWNERS b/OWNERS
index 7e1af30..925d49c 100644
--- a/OWNERS
+++ b/OWNERS
@@ -1,5 +1,6 @@
+albaltai@google.com
 dshi@google.com
 kevcheng@google.com
-albaltai@google.com
+morrislin@google.com
 patricktu@google.com
 yangbill@google.com
diff --git a/aidegen/README.md b/aidegen/README.md
index 738bca3..5da9edd 100755
--- a/aidegen/README.md
+++ b/aidegen/README.md
@@ -1,63 +1,92 @@
-AIDEgen aims to automate the project setup process for developers to work on
-Java project in popular IDE environment. Developers no longer need to
-manually configure an IntelliJ project, such as all the project dependencies.
-It's a **command line tool** that offers the following features:
+# AIDEGen
 
-* Configure Intellij or Android Studio project files with the relevant module
-  dependencies resolved.
+AIDEGen aims to automate the project setup process for developers to work on
+Java or C/C++ projects in popular IDE environments. Developers no longer need
+to manually configure an IntelliJ project, such as all the project
+dependencies. It's a **command line tool** that offers the following features:
 
-* Launch IDE for a specified sub-project or build target, i.e. frameworks/base
-  or Settings.
+*   Configure Android Studio or IntelliJ project files with the relevant module
+    dependencies resolved.
 
-* Launch IDE for a specified folder which contains build targets, i.e. cts.
+*   Launch IDE for a specified sub-project or build target, i.e. frameworks/base
+    or Settings.
 
-* Auto configure JDK and Android coding style for Intellij.
+*   Launch IDE for specified folder(s) which contain build targets, i.e. cts.
+
+*   Auto configure JDK and Android coding style for Intellij.
 
 ## 1. Prerequisites:
 
-    IDE installed and run $ '. build/envsetup.sh && lunch' under Android source
-    root folder.
+*   IDE installation; choose your preferred IDE among Android Studio,
+    IntelliJ IDEA, Eclipse, CLion and VS Code.
 
-## 2. Execution:
+*   Set up the Android development environment.
 
-    $ 'aidegen <module_name>... <module_path>...'
-      Example to generate and launch IntelliJ project for framework and
-      Settings:
-        $ aidegen Settings framework
-        $ aidegen packages/apps/Settings frameworks/base
-        $ aidegen packages/apps/Settings framework
+```
+$ source build/envsetup.sh && lunch <TARGET>
+```
 
-    $ 'aidegen <module> -i s'
-      Example to generate and launch Android Studio project for framework:
-        $ aidegen framework -i s
+## 2. Basic Usage:
 
-## 3. More argument:
+### Example 1: Launch IDE with module name
 
-    $ aidegen --help
+Example to generate and launch IntelliJ project for framework and Settings:
+
+```
+$ aidegen Settings framework
+```
+
+### Example 2: Launch IDE with module path
+
+Example to generate and launch IntelliJ project for framework and Settings:
+
+```
+$ aidegen packages/apps/Settings frameworks/base
+```
+
+### Example 3: Launch IDE and skip building
+
+Example to generate and launch IntelliJ project for framework and Settings,
+skipping the build:
+
+```
+$ aidegen Settings framework -s
+```
+
+## 3. Optional arguments:
+
+Developers can also use the following optional arguments with AIDEGen commands.
+
+| Option | Long option       | Description                                                              |
+| :----: | :---------------- | :----------------------------------------------------------------------- |
+| `-d`   | `--depth`         | The depth of module referenced by source.                                 |
+| `-i`   | `--ide`           | Launch IDE type: j=IntelliJ s=Android Studio e=Eclipse c=CLion v=VS Code  |
+| `-p`   | `--ide-path`      | Specify the user's IDE installation path.                                 |
+| `-n`   | `--no_launch`     | Do not launch IDE.                                                        |
+| `-r`   | `--config-reset`  | Reset all AIDEGen's saved configurations.                                 |
+| `-s`   | `--skip-build`    | Skip building jars or modules.                                            |
+| `-v`   | `--verbose`       | Display DEBUG level logging.                                              |
+| `-a`   | `--android-tree`  | Generate the whole Android source tree project file for IDE.              |
+| `-e`   | `--exclude-paths` | Exclude the directories in IDE.                                           |
+| `-l`   | `--language`      | Launch IDE with a specific language: j=Java c=C/C++ r=Rust                |
+| `-h`   | `--help`          | Show the help message and exit.                                           |
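+
+For example, to generate an Android Studio project for framework, skip the
+build, and not launch the IDE, the flags documented above can be combined:
+
+```
+$ aidegen framework -i s -s -n
+```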
 
 ## 4. FAQ:
 
-    1. Q: If I already have an IDE project file, and I run command AIDEGen to
-          generate the same project file again, what’ll happen?
-       A: The former IDEA project file will be overwritten by the newly
-          generated one from the aidegen command.
+Q1: If I already have an IDE project file, and I run the AIDEGen command to
+generate the same project file again, what'll happen?
 
-    2. Q: When do I need to re-run AIDEGen?
-       A: Re-run AIDEGen after repo sync.
+A1: The former IDEA project file will be overwritten by the newly generated one
+from the aidegen command.
 
-    3. Q: Does AIDEGen support debug log dump?
-       A: Use aidegen -v to get more debug information.
+Q2: When do I need to re-run AIDEGen?
 
-    4. Q: After the aidegen command run locally, if there’s no IDEA with
-          project shown up, what can I do ?
-       A: Basic steps to do troubleshooting:
-          - Make sure development environment is set up, please refer to
-            prerequisites section.
-          - Check error message in the aidegen command output.
+A2: Re-run AIDEGen after repo sync.
 
-# Hint
-    1. In Intellij, uses [File] > [Invalidate Caches / Restart…] to force
+## 5. Hint:
+
+1. In IntelliJ, use [File] > [Invalidate Caches / Restart...] to force the
       project panel to update when your IDE didn't sync.
 
-    2. If you run aidegen on a remote desktop, make sure there is no IntelliJ
+2. If you run aidegen on a remote desktop, make sure there is no IntelliJ
        running in a different desktop session.
diff --git a/aidegen/constant.py b/aidegen/constant.py
index 8348e92..801b42e 100644
--- a/aidegen/constant.py
+++ b/aidegen/constant.py
@@ -116,15 +116,18 @@
 VERSION_FILE = 'VERSION'
 INTERMEDIATES = '.intermediates'
 TARGET_R_SRCJAR = 'R.srcjar'
+NAME_AAPT2 = 'aapt2'
 
 # Constants for file paths.
 RELATIVE_NATIVE_PATH = 'development/ide/clion'
 RELATIVE_COMPDB_PATH = 'development/ide/compdb'
+UNZIP_SRCJAR_PATH_HEAD = 'aidegen_'
 
 # Constants for whole Android tree.
 WHOLE_ANDROID_TREE_TARGET = '#WHOLE_ANDROID_TREE#'
 
 # Constants for ProjectInfo or ModuleData classes.
+SRCJAR_EXT = '.srcjar'
 JAR_EXT = '.jar'
 TARGET_LIBS = [JAR_EXT]
 
diff --git a/aidegen/idea/iml.py b/aidegen/idea/iml.py
index b3af80d..d533805 100644
--- a/aidegen/idea/iml.py
+++ b/aidegen/idea/iml.py
@@ -173,18 +173,33 @@
     def _generate_srcs(self):
         """Generates the source urls of the project's iml file."""
         srcs = []
+        framework_srcs = []
         for src in self._mod_info.get(constant.KEY_SRCS, []):
+            if constant.FRAMEWORK_PATH in src:
+                framework_srcs.append(templates.SOURCE.format(
+                    SRC=os.path.join(self._android_root, src),
+                    IS_TEST='false'))
+                continue
             srcs.append(templates.SOURCE.format(
                 SRC=os.path.join(self._android_root, src),
                 IS_TEST='false'))
         for test in self._mod_info.get(constant.KEY_TESTS, []):
+            if constant.FRAMEWORK_PATH in test:
+                framework_srcs.append(templates.SOURCE.format(
+                    SRC=os.path.join(self._android_root, test),
+                    IS_TEST='true'))
+                continue
             srcs.append(templates.SOURCE.format(
                 SRC=os.path.join(self._android_root, test),
                 IS_TEST='true'))
         self._excludes = self._mod_info.get(constant.KEY_EXCLUDES, '')
+
+        # For solving duplicate package names, sources under frameworks/base
+        # take higher priority.
+        srcs = sorted(framework_srcs) + sorted(srcs)
         self._srcs = templates.CONTENT.format(MODULE_PATH=self._mod_path,
                                               EXCLUDES=self._excludes,
-                                              SOURCES=''.join(sorted(srcs)))
+                                              SOURCES=''.join(srcs))
 
     def _generate_dep_srcs(self):
         """Generates the source urls of the dependencies.iml."""
diff --git a/aidegen/lib/clion_project_file_gen.py b/aidegen/lib/clion_project_file_gen.py
index 1e6a9e9..3900c8b 100644
--- a/aidegen/lib/clion_project_file_gen.py
+++ b/aidegen/lib/clion_project_file_gen.py
@@ -285,10 +285,13 @@
             logging.warning("No source files in %s's module info.",
                             self.mod_name)
             return
+        root = common_util.get_android_root_dir()
         source_files = self.mod_info[constant.KEY_SRCS]
         hfile.write(_LIST_APPEND_HEADER)
         hfile.write(_SOURCE_FILES_LINE)
         for src in source_files:
+            if not os.path.exists(os.path.join(root, src)):
+                continue
             hfile.write(''.join([_build_cmake_path(src, '    '), '\n']))
         hfile.write(_END_WITH_ONE_BLANK_LINE)
 
diff --git a/aidegen/lib/common_util.py b/aidegen/lib/common_util.py
index e371c04..e7a527c 100644
--- a/aidegen/lib/common_util.py
+++ b/aidegen/lib/common_util.py
@@ -29,6 +29,7 @@
 import sys
 import time
 import xml.dom.minidom
+import zipfile
 
 from functools import partial
 from functools import wraps
@@ -775,3 +776,15 @@
             if fnmatch.filter(filenames, extension):
                 return True
     return False
+
+
+@io_error_handle
+def unzip_file(src, dest):
+    """Unzips the source zip file and extract it to the destination directory.
+
+    Args:
+        src: A string of the file to be unzipped.
+        dest: A string of the destination directory to be extracted to.
+    """
+    with zipfile.ZipFile(src, 'r') as zip_ref:
+        zip_ref.extractall(dest)
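A minimal usage sketch of the new helper (illustrative paths; the real call
sites are in project_splitter.py below):

```python
from aidegen.lib import common_util

# Extracts every entry of the srcjar into a plain directory that an IDE can
# index; the paths here are illustrative only.
common_util.unzip_file('a/b/R.srcjar', 'a/b/aidegen_r.srcjar')
```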
diff --git a/aidegen/lib/common_util_unittest.py b/aidegen/lib/common_util_unittest.py
index f73f012..29e2679 100644
--- a/aidegen/lib/common_util_unittest.py
+++ b/aidegen/lib/common_util_unittest.py
@@ -419,6 +419,16 @@
         self.assertEqual((constant.JAVA, constant.IDE_INTELLIJ),
                          common_util.determine_language_ide(lang, ide))
 
+    @mock.patch('zipfile.ZipFile.extractall')
+    @mock.patch('zipfile.ZipFile')
+    def test_unzip_file(self, mock_zipfile, mock_extract):
+        """Test unzip_file function."""
+        src = 'a/b/c.zip'
+        dest = 'a/b/d'
+        common_util.unzip_file(src, dest)
+        mock_zipfile.assert_called_with(src, 'r')
+        self.assertFalse(mock_extract.called)
+
     @mock.patch('os.walk')
     def test_check_java_or_kotlin_file_exists(self, mock_walk):
         """Test check_java_or_kotlin_file_exists with conditions."""
diff --git a/aidegen/lib/eclipse_project_file_gen.py b/aidegen/lib/eclipse_project_file_gen.py
index 5bc4b7e..3340c45 100644
--- a/aidegen/lib/eclipse_project_file_gen.py
+++ b/aidegen/lib/eclipse_project_file_gen.py
@@ -176,7 +176,7 @@
         links.update(self._gen_r_link())
         links.update(self._gen_bin_link())
         self.project_content = templates.ECLIPSE_PROJECT_XML.format(
-            PROJECTNAME=self.module_name,
+            PROJECTNAME=self.module_name.replace('/', '_'),
             LINKEDRESOURCES=''.join(sorted(list(links))))
 
     def _gen_r_path_entries(self):
diff --git a/aidegen/lib/module_info.py b/aidegen/lib/module_info.py
index a635d40..1b3a512 100644
--- a/aidegen/lib/module_info.py
+++ b/aidegen/lib/module_info.py
@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-"""Module Info class used to hold cached merged_module_info.json.json."""
+"""Module Info class used to hold cached merged_module_info.json."""
 
 import logging
 import os
diff --git a/aidegen/lib/project_file_gen.py b/aidegen/lib/project_file_gen.py
index 62ad140..1115f9a 100644
--- a/aidegen/lib/project_file_gen.py
+++ b/aidegen/lib/project_file_gen.py
@@ -32,7 +32,7 @@
 from aidegen.lib import common_util
 from aidegen.lib import config
 from aidegen.lib import project_config
-from aidegen.project import source_splitter
+from aidegen.project import project_splitter
 
 # FACET_SECTION is a part of iml, which defines the framework of the project.
 _MODULE_SECTION = ('            <module fileurl="file:///$PROJECT_DIR$/%s.iml"'
@@ -109,7 +109,7 @@
         """
         # Initialization
         iml.IMLGenerator.USED_NAME_CACHE.clear()
-        proj_splitter = source_splitter.ProjectSplitter(projects)
+        proj_splitter = project_splitter.ProjectSplitter(projects)
         proj_splitter.get_dependencies()
         proj_splitter.revise_source_folders()
         iml_paths = [proj_splitter.gen_framework_srcjars_iml()]
@@ -143,13 +143,12 @@
             os.path.join(code_style_dir, _CODE_STYLE_CONFIG_XML),
             templates.XML_CODE_STYLE_CONFIG)
         code_style_target_path = os.path.join(code_style_dir, _PROJECT_XML)
-        if os.path.exists(code_style_target_path):
-            os.remove(code_style_target_path)
-        try:
-            shutil.copy2(_CODE_STYLE_SRC_PATH, code_style_target_path)
-        except (OSError, SystemError) as err:
-            logging.warning('%s can\'t copy the project files\n %s',
-                            code_style_target_path, err)
+        if not os.path.exists(code_style_target_path):
+            try:
+                shutil.copy2(_CODE_STYLE_SRC_PATH, code_style_target_path)
+            except (OSError, SystemError) as err:
+                logging.warning('%s can\'t copy the project files\n %s',
+                                code_style_target_path, err)
         # Create .gitignore if it doesn't exist.
         _generate_git_ignore(target_path)
         # Create jsonSchemas.xml for TEST_MAPPING.
diff --git a/aidegen/lib/project_file_gen_unittest.py b/aidegen/lib/project_file_gen_unittest.py
index 10bedf7..5fb64d3 100644
--- a/aidegen/lib/project_file_gen_unittest.py
+++ b/aidegen/lib/project_file_gen_unittest.py
@@ -29,7 +29,7 @@
 from aidegen.lib import project_config
 from aidegen.lib import project_file_gen
 from aidegen.lib import project_info
-from aidegen.project import source_splitter
+from aidegen.project import project_splitter
 
 
 # pylint: disable=protected-access
@@ -257,11 +257,12 @@
     @mock.patch.object(project_file_gen, '_merge_project_vcs_xmls')
     @mock.patch.object(project_file_gen.ProjectFileGenerator,
                        'generate_intellij_project_file')
-    @mock.patch.object(source_splitter.ProjectSplitter, 'gen_projects_iml')
-    @mock.patch.object(source_splitter.ProjectSplitter,
+    @mock.patch.object(project_splitter.ProjectSplitter, 'gen_projects_iml')
+    @mock.patch.object(project_splitter.ProjectSplitter,
                        'gen_framework_srcjars_iml')
-    @mock.patch.object(source_splitter.ProjectSplitter, 'revise_source_folders')
-    @mock.patch.object(source_splitter.ProjectSplitter, 'get_dependencies')
+    @mock.patch.object(project_splitter.ProjectSplitter,
+                       'revise_source_folders')
+    @mock.patch.object(project_splitter.ProjectSplitter, 'get_dependencies')
     @mock.patch.object(common_util, 'get_android_root_dir')
     @mock.patch.object(project_info, 'ProjectInfo')
     def test_generate_ide_project_files(self, mock_project, mock_get_root,
diff --git a/aidegen/lib/project_info.py b/aidegen/lib/project_info.py
index 8df9350..06a7e97 100644
--- a/aidegen/lib/project_info.py
+++ b/aidegen/lib/project_info.py
@@ -33,11 +33,6 @@
 
 _CONVERT_MK_URL = ('https://android.googlesource.com/platform/build/soong/'
                    '#convert-android_mk-files')
-_ANDROID_MK_WARN = (
-    '{} contains Android.mk file(s) in its dependencies:\n{}\nPlease help '
-    'convert these files into blueprint format in the future, otherwise '
-    'AIDEGen may not be able to include all module dependencies.\nPlease visit '
-    '%s for reference on how to convert makefile.' % _CONVERT_MK_URL)
 _ROBOLECTRIC_MODULE = 'Robolectric_all'
 _NOT_TARGET = ('The module %s does not contain any Java or Kotlin file, '
                'therefore we skip this module in the project.')
@@ -130,7 +125,6 @@
         else:
             self.dep_modules = self.get_dep_modules()
         self._filter_out_modules()
-        self._display_convert_make_files_message()
         self.dependencies = []
         self.iml_name = iml.IMLGenerator.get_unique_iml_name(abs_path)
         self.rel_out_soong_jar_path = self._get_rel_project_out_soong_jar_path()
@@ -160,14 +154,6 @@
             'srcjar_path': set()
         }
 
-    def _display_convert_make_files_message(self):
-        """Show message info users convert their Android.mk to Android.bp."""
-        mk_set = set(self._search_android_make_files())
-        if mk_set:
-            print('\n{} {}\n'.format(
-                common_util.COLORED_INFO('Warning:'),
-                _ANDROID_MK_WARN.format(self.module_name, '\n'.join(mk_set))))
-
     def _search_android_make_files(self):
         """Search project and dependency modules contain Android.mk files.
 
@@ -193,6 +179,7 @@
         """Find qualified modules under the rel_path.
 
         Find modules which contain any Java or Kotlin file as a target module.
+        If it's the whole source tree project, add all modules into it.
 
         Args:
             rel_path: A string, the project's relative path.
@@ -202,6 +189,8 @@
         """
         logging.info('Find modules contain any Java or Kotlin file under %s.',
                      rel_path)
+        if rel_path == '':
+            return self.modules_info.name_to_module_info.keys()
         modules = set()
         root_dir = common_util.get_android_root_dir()
         for name, data in self.modules_info.name_to_module_info.items():
@@ -639,7 +628,7 @@
 
     The jar files which have the same source codes as cls.projects' source files
     should be removed from the dependencies.iml file's jar paths. The codes are
-    written in aidegen.project.source_splitter.py.
+    written in aidegen.project.project_splitter.py.
     We should also add the jar project's unique iml name into self.dependencies
     which later will be written into its own iml project file. If we don't
     remove these files in dependencies.iml, it will cause the duplicated codes
diff --git a/aidegen/lib/project_info_unittest.py b/aidegen/lib/project_info_unittest.py
index b35172a..884cf8c 100644
--- a/aidegen/lib/project_info_unittest.py
+++ b/aidegen/lib/project_info_unittest.py
@@ -302,25 +302,6 @@
         mock_format.reset_mock()
         mock_build.reset_mock()
 
-    @mock.patch('builtins.print')
-    @mock.patch.object(project_info.ProjectInfo, '_search_android_make_files')
-    @mock.patch('atest.module_info.ModuleInfo')
-    def test_display_convert_make_files_message(
-            self, mock_module_info, mock_search, mock_print):
-        """Test _display_convert_make_files_message with conditions."""
-        mock_search.return_value = []
-        mock_module_info.get_paths.return_value = ['m1']
-        project_info.ProjectInfo.modules_info = mock_module_info
-        proj_info = project_info.ProjectInfo(self.args.module_name)
-        proj_info._display_convert_make_files_message()
-        self.assertFalse(mock_print.called)
-
-        mock_print.mock_reset()
-        mock_search.return_value = ['a/b/path/to/target.mk']
-        proj_info = project_info.ProjectInfo(self.args.module_name)
-        proj_info._display_convert_make_files_message()
-        self.assertTrue(mock_print.called)
-
     @mock.patch.object(project_info, '_build_target')
     @mock.patch.object(project_info, '_separate_build_targets')
     @mock.patch.object(logging, 'info')
diff --git a/aidegen/lib/source_locator.py b/aidegen/lib/source_locator.py
index db7b49c..9c13ae3 100644
--- a/aidegen/lib/source_locator.py
+++ b/aidegen/lib/source_locator.py
@@ -36,12 +36,10 @@
 # File extensions
 _JAVA_EXT = '.java'
 _KOTLIN_EXT = '.kt'
-_SRCJAR_EXT = '.srcjar'
 _TARGET_FILES = [_JAVA_EXT, _KOTLIN_EXT]
 _JARJAR_RULES_FILE = 'jarjar-rules.txt'
 _KEY_JARJAR_RULES = 'jarjar_rules'
-_NAME_AAPT2 = 'aapt2'
-_TARGET_AAPT2_SRCJAR = _NAME_AAPT2 + _SRCJAR_EXT
+_TARGET_AAPT2_SRCJAR = constant.NAME_AAPT2 + constant.SRCJAR_EXT
 _TARGET_BUILD_FILES = [_TARGET_AAPT2_SRCJAR, constant.TARGET_R_SRCJAR]
 _IGNORE_DIRS = [
     # The java files under this directory have to be ignored because it will
@@ -220,10 +218,10 @@
         target_folder, target_file = os.path.split(srcjar)
         base_dirname = os.path.basename(target_folder)
         if target_file == _TARGET_AAPT2_SRCJAR:
-            return os.path.join(target_folder, _NAME_AAPT2)
+            return os.path.join(target_folder, constant.NAME_AAPT2)
         if target_file == constant.TARGET_R_SRCJAR and base_dirname == _ANDROID:
             return os.path.join(os.path.dirname(target_folder),
-                                _NAME_AAPT2, 'R')
+                                constant.NAME_AAPT2, 'R')
         return None
 
     def _init_module_path(self):
diff --git a/aidegen/project/source_splitter.py b/aidegen/project/project_splitter.py
similarity index 74%
rename from aidegen/project/source_splitter.py
rename to aidegen/project/project_splitter.py
index 6b783f1..13ea1ed 100644
--- a/aidegen/project/source_splitter.py
+++ b/aidegen/project/project_splitter.py
@@ -16,7 +16,9 @@
 
 """Separate the sources from multiple projects."""
 
+import logging
 import os
+import shutil
 
 from aidegen import constant
 from aidegen.idea import iml
@@ -39,8 +41,10 @@
                     'toolchain', 'tools', 'vendor', 'out',
                     'art/tools/ahat/src/test-dump',
                     'cts/common/device-side/device-info/src_stub']
-_PERMISSION_DEFINED_JAR = ('frameworks/base/core/res/framework-res/'
-                           'android_common/gen/android')
+_PERMISSION_DEFINED_PATH = ('frameworks/base/core/res/framework-res/'
+                            'android_common/gen/')
+_ANDROID = 'android'
+_R = 'R'
 
 
 class ProjectSplitter:
@@ -78,9 +82,10 @@
         _framework_iml: A string, the name of the framework's iml.
         _full_repo: A boolean, True if loading with full Android sources.
         _full_repo_iml: A string, the name of the Android folder's iml.
-        _permission_definition_r_srcjar: A string, the absolute path of R.srcjar
-                                         of the permission and resource content
-                                         of framework-res.
+        _permission_r_srcjar: A string, the relative path of the R.srcjar file
+                              where the permission-related constants are
+                              defined.
+        _permission_aapt2: A string, the relative path of the aapt2/R directory
+                           where the permission-related constants are defined.
     """
     def __init__(self, projects):
         """ProjectSplitter initialize.
@@ -102,7 +107,8 @@
         if self._full_repo:
             self._full_repo_iml = os.path.basename(
                 common_util.get_android_root_dir())
-        self._permission_definition_r_srcjar = ''
+        self._permission_r_srcjar = _get_permission_r_srcjar_rel_path()
+        self._permission_aapt2 = _get_permission_aapt2_rel_path()
 
     def revise_source_folders(self):
         """Resets the source folders of each project.
@@ -211,9 +217,7 @@
         """Generates the framework_srcjars.iml.
 
         Create the iml file with only the srcjars of module framework-all. These
-        srcjars will be separated from the modules under frameworks/base. If the
-        framework-res/android_common/gen/android/R.srcjar file exists, add it
-        into framework_srcjars.iml too.
+        srcjars will be separated from the modules under frameworks/base.
 
         Returns:
             A string of the framework_srcjars.iml's absolute path.
@@ -227,18 +231,54 @@
         if self._full_repo:
             mod[constant.KEY_DEPENDENCIES].append(self._full_repo_iml)
         mod[constant.KEY_DEPENDENCIES].append(constant.KEY_DEPENDENCIES)
-        if self._permission_definition_r_srcjar:
-            mod[constant.KEY_SRCJARS] = [self._permission_definition_r_srcjar]
+        srcjar_dict = dict()
+        permission_src = self._get_permission_defined_source_path()
+        if permission_src:
+            mod[constant.KEY_SRCS] = [permission_src]
+            srcjar_dict = {constant.KEY_DEP_SRCS: True,
+                           constant.KEY_SRCJARS: True,
+                           constant.KEY_DEPENDENCIES: True}
+        else:
+            logging.warning('The permission definition relative paths are '
+                            'missing.')
+            srcjar_dict = {constant.KEY_SRCJARS: True,
+                           constant.KEY_DEPENDENCIES: True}
         framework_srcjars_iml = iml.IMLGenerator(mod)
-        framework_srcjars_iml.create({constant.KEY_SRCJARS: True,
-                                      constant.KEY_DEPENDENCIES: True})
+        framework_srcjars_iml.create(srcjar_dict)
         self._all_srcs[_KEY_SRCJAR_PATH] -= set(mod.get(constant.KEY_SRCJARS,
                                                         []))
         return framework_srcjars_iml.iml_path
 
+    def _get_permission_defined_source_path(self):
+        """Gets the source path where permission relative constants are defined.
+
+        For the definition permission constants, the priority is,
+        1) If framework-res/android_common/gen/aapt2/R directory exists, return
+           it.
+        2) If the framework-res/android_common/gen/android/R.srcjar file exists,
+           unzip it to 'aidegen_r.srcjar' folder and return the path.
+
+        Returns:
+            A string of the path of aapt2/R or android/aidegen_r.srcjar folder,
+            else None.
+        """
+        if os.path.isdir(self._permission_aapt2):
+            return self._permission_aapt2
+        if os.path.isfile(self._permission_r_srcjar):
+            dest = os.path.join(
+                os.path.dirname(self._permission_r_srcjar),
+                ''.join([constant.UNZIP_SRCJAR_PATH_HEAD,
+                         os.path.basename(self._permission_r_srcjar).lower()]))
+            if os.path.isdir(dest):
+                shutil.rmtree(dest)
+            common_util.unzip_file(self._permission_r_srcjar, dest)
+            return dest
+        return None
+
     def _gen_dependencies_iml(self):
         """Generates the dependencies.iml."""
         rel_project_soong_paths = self._get_rel_project_soong_paths()
+        self._unzip_all_srcjars()
         mod = {
             constant.KEY_SRCS: _get_real_dependencies_jars(
                 rel_project_soong_paths, self._all_srcs[_KEY_SOURCE_PATH]),
@@ -265,6 +305,43 @@
                         constant.KEY_JARS: True,
                         constant.KEY_DEPENDENCIES: True})
 
+    def _unzip_all_srcjars(self):
+        """Unzips all srcjar files into 'aidegen_r.srcjar' directories.
+
+        Some versions of IntelliJ no longer support unzipping srcjar files
+        automatically, so we have to unzip them into an 'aidegen_r.srcjar'
+        directory. The rules of the unzip process are:
+        1) If it's an aapt2/R directory or another directory type source, add
+           it into self._all_srcs[_KEY_SOURCE_PATH].
+        2) If it's an R.srcjar file, check if the corresponding aapt2/R
+           directory exists; if so, add the aapt2/R path into
+           self._all_srcs[_KEY_SOURCE_PATH], otherwise unzip the R.srcjar into
+           an 'aidegen_r.srcjar' directory and add the unzipped path into
+           self._all_srcs[_KEY_SOURCE_PATH].
+        """
+        sjars = self._all_srcs[_KEY_R_PATH] | self._all_srcs[_KEY_SRCJAR_PATH]
+        self._all_srcs[_KEY_R_PATH] = set()
+        self._all_srcs[_KEY_SRCJAR_PATH] = set()
+        for sjar in sjars:
+            if not os.path.exists(sjar):
+                continue
+            if os.path.isdir(sjar):
+                self._all_srcs[_KEY_SOURCE_PATH].add(sjar)
+                continue
+            sjar_dir = os.path.dirname(sjar)
+            sjar_name = os.path.basename(sjar).lower()
+            aapt2 = os.path.join(
+                os.path.dirname(sjar_dir), constant.NAME_AAPT2, _R)
+            if os.path.isdir(aapt2):
+                self._all_srcs[_KEY_SOURCE_PATH].add(aapt2)
+                continue
+            dest = os.path.join(
+                sjar_dir, ''.join([constant.UNZIP_SRCJAR_PATH_HEAD, sjar_name]))
+            if os.path.isdir(dest):
+                shutil.rmtree(dest)
+            common_util.unzip_file(sjar, dest)
+            self._all_srcs[_KEY_SOURCE_PATH].add(dest)
+
     def gen_projects_iml(self):
         """Generates the projects' iml file."""
         root_path = common_util.get_android_root_dir()
@@ -314,27 +391,23 @@
     def _remove_permission_definition_srcjar_path(self):
         """Removes android.Manifest.permission definition srcjar path.
 
-        If the framework-res/android_common/gen/android/R.srcjar file exists in
-        self._all_srcs[_KEY_SRCJAR_PATH], remove it and later add it into the
-        framework_srcjars.iml file to fix the unresolved symbols of the
-        constants of android.Manifest.permission.
+        If framework-res/android_common/gen/aapt2/R directory or
+        framework-res/android_common/gen/android/R.srcjar file exists in
+        self._all_srcs[_KEY_SRCJAR_PATH], remove them.
         """
-        rel_path = _get_permission_definition_srcjar_path()
-        for rjar_path in self._all_srcs[_KEY_SRCJAR_PATH]:
-            if rjar_path.endswith(rel_path):
-                self._permission_definition_r_srcjar = rjar_path
-                break
-        if self._permission_definition_r_srcjar:
-            self._all_srcs[_KEY_SRCJAR_PATH].remove(
-                self._permission_definition_r_srcjar)
+        if self._permission_aapt2 in self._all_srcs[_KEY_SRCJAR_PATH]:
+            self._all_srcs[_KEY_SRCJAR_PATH].remove(self._permission_aapt2)
+        if self._permission_r_srcjar in self._all_srcs[_KEY_SRCJAR_PATH]:
+            self._all_srcs[_KEY_SRCJAR_PATH].remove(self._permission_r_srcjar)
 
 
 def _get_real_dependencies_jars(list_to_check, list_to_be_checked):
-    """Gets real dependencies' jar from the input list.
+    """Gets real dependencies' jar and srcjar from the input list.
 
-    There are jar files which have the same source codes as the self.projects
-    should be removed from dependencies. Otherwise these files will cause the
-    duplicated codes in IDE and lead to issues. The example: b/158583214.
+    Jar or srcjar files which have the same source code as self.projects
+    should be removed from the dependencies. Otherwise these files will cause
+    duplicated code in the IDE and lead to issues; b/158583214 is an
+    example.
 
     Args:
         list_to_check: A list of relative projects' paths in the folder
@@ -345,10 +418,12 @@
     Returns:
         A list of dependency jar paths after duplicated ones removed.
     """
+    file_exts = [constant.JAR_EXT, constant.SRCJAR_EXT]
     real_jars = list_to_be_checked.copy()
     for jar in list_to_be_checked:
+        ext = os.path.splitext(jar)[-1]
         for check_path in list_to_check:
-            if check_path in jar and jar.endswith(constant.JAR_EXT):
+            if check_path in jar and ext in file_exts:
                 real_jars.remove(jar)
                 break
     return real_jars
@@ -402,10 +477,18 @@
     return rm_paths
 
 
-def _get_permission_definition_srcjar_path():
+def _get_permission_aapt2_rel_path():
     """Gets android.Manifest.permission definition srcjar path."""
     out_soong_dir = os.path.relpath(common_util.get_soong_out_path(),
                                     common_util.get_android_root_dir())
-    return os.sep.join(
-        [out_soong_dir, constant.INTERMEDIATES, _PERMISSION_DEFINED_JAR,
-         constant.TARGET_R_SRCJAR])
+    return os.path.join(out_soong_dir, constant.INTERMEDIATES,
+                        _PERMISSION_DEFINED_PATH, constant.NAME_AAPT2, _R)
+
+
+def _get_permission_r_srcjar_rel_path():
+    """Gets android.Manifest.permission definition srcjar path."""
+    out_soong_dir = os.path.relpath(common_util.get_soong_out_path(),
+                                    common_util.get_android_root_dir())
+    return os.path.join(out_soong_dir, constant.INTERMEDIATES,
+                        _PERMISSION_DEFINED_PATH, _ANDROID,
+                        constant.TARGET_R_SRCJAR)
diff --git a/aidegen/project/source_splitter_unittest.py b/aidegen/project/project_splitter_unittest.py
similarity index 71%
rename from aidegen/project/source_splitter_unittest.py
rename to aidegen/project/project_splitter_unittest.py
index fc2e0c8..3109142 100644
--- a/aidegen/project/source_splitter_unittest.py
+++ b/aidegen/project/project_splitter_unittest.py
@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-"""Unittests for source_splitter."""
+"""Unittests for project_splitter."""
 
 import os
 import shutil
@@ -22,12 +22,13 @@
 import unittest
 from unittest import mock
 
+from aidegen import constant
 from aidegen import unittest_constants
 from aidegen.idea import iml
 from aidegen.lib import common_util
 from aidegen.lib import project_config
 from aidegen.lib import project_info
-from aidegen.project import source_splitter
+from aidegen.project import project_splitter
 
 
 # pylint: disable=protected-access
@@ -90,7 +91,7 @@
             config = mock.Mock()
             config.full_repo = False
             proj_cfg.return_value = config
-            self.split_projs = source_splitter.ProjectSplitter(projects)
+            self.split_projs = project_splitter.ProjectSplitter(projects)
 
     def tearDown(self):
         """Clear the testdata related path."""
@@ -110,19 +111,19 @@
             config = mock.Mock()
             config.full_repo = False
             mock_project.return_value = config
-            project = source_splitter.ProjectSplitter(proj_info(['a'], True))
+            project = project_splitter.ProjectSplitter(proj_info(['a'], True))
             self.assertFalse(project._framework_exist)
             config.full_repo = True
-            project = source_splitter.ProjectSplitter(proj_info(['a'], True))
+            project = project_splitter.ProjectSplitter(proj_info(['a'], True))
             self.assertEqual(project._full_repo_iml,
                              os.path.basename(
                                  ProjectSplitterUnittest._TEST_DIR))
 
-    @mock.patch.object(source_splitter.ProjectSplitter,
+    @mock.patch.object(project_splitter.ProjectSplitter,
                        '_remove_duplicate_sources')
-    @mock.patch.object(source_splitter.ProjectSplitter,
+    @mock.patch.object(project_splitter.ProjectSplitter,
                        '_keep_local_sources')
-    @mock.patch.object(source_splitter.ProjectSplitter,
+    @mock.patch.object(project_splitter.ProjectSplitter,
                        '_collect_all_srcs')
     def test_revise_source_folders(self, mock_copy_srcs, mock_keep_srcs,
                                    mock_remove_srcs):
@@ -162,7 +163,7 @@
         self.assertEqual(all_srcs['test_folder_path'], expected_all_tests)
 
     @mock.patch.object(
-        source_splitter, '_remove_child_duplicate_sources_from_parent')
+        project_splitter, '_remove_child_duplicate_sources_from_parent')
     def test_remove_duplicate_sources(self, mock_remove):
         """Test _remove_duplicate_sources."""
         self.split_projs._collect_all_srcs()
@@ -188,12 +189,17 @@
         self.assertEqual(self.split_projs._projects[1].dependencies, dep2)
         self.assertEqual(self.split_projs._projects[2].dependencies, dep3)
 
-    @mock.patch.object(source_splitter.ProjectSplitter,
+    @mock.patch.object(iml.IMLGenerator, 'create')
+    @mock.patch.object(project_splitter.ProjectSplitter,
+                       '_get_permission_defined_source_path')
+    @mock.patch.object(project_splitter.ProjectSplitter,
                        '_remove_permission_definition_srcjar_path')
     @mock.patch.object(common_util, 'get_android_root_dir')
-    def test_gen_framework_srcjars_iml(self, mock_root, mock_remove):
+    def test_gen_framework_srcjars_iml(
+        self, mock_root, mock_remove, mock_get, mock_create_iml):
         """Test gen_framework_srcjars_iml."""
         mock_root.return_value = self._TEST_DIR
+        mock_get.return_value = 'aapt2/R'
         self.split_projs._projects[0].dep_modules = {
             'framework-all': {
                 'module_name': 'framework-all',
@@ -204,6 +210,9 @@
         }
         self.split_projs._framework_exist = False
         self.split_projs.gen_framework_srcjars_iml()
+        srcjar_dict = {constant.KEY_DEP_SRCS: True, constant.KEY_SRCJARS: True,
+                       constant.KEY_DEPENDENCIES: True}
+        mock_create_iml.assert_called_with(srcjar_dict)
         expected_srcjars = [
             'other.srcjar',
             'srcjar1.srcjar',
@@ -214,30 +223,39 @@
                                      'frameworks/base/framework_srcjars.iml')
         self.split_projs._framework_exist = True
         self.split_projs.revise_source_folders()
+        mock_get.return_value = None
         iml_path = self.split_projs.gen_framework_srcjars_iml()
         srcjars = self.split_projs._all_srcs['srcjar_path']
         self.assertEqual(sorted(list(srcjars)), expected_srcjars)
         self.assertEqual(iml_path, expected_path)
         self.assertTrue(mock_remove.called)
+        srcjar_dict = {constant.KEY_SRCJARS: True,
+                       constant.KEY_DEPENDENCIES: True}
+        mock_create_iml.assert_called_with(srcjar_dict)
 
+    @mock.patch.object(project_splitter.ProjectSplitter, '_unzip_all_srcjars')
     @mock.patch.object(iml.IMLGenerator, 'create')
     @mock.patch.object(common_util, 'get_android_root_dir')
-    def test_gen_dependencies_iml(self, mock_root, mock_create_iml):
+    def test_gen_dependencies_iml(self, mock_root, mock_create_iml, mock_unzip):
         """Test _gen_dependencies_iml."""
         mock_root.return_value = self._TEST_DIR
         self.split_projs.revise_source_folders()
         self.split_projs._framework_exist = False
         self.split_projs._gen_dependencies_iml()
+        self.assertTrue(mock_unzip.called)
+        mock_unzip.reset_mock()
         self.split_projs._framework_exist = True
         self.split_projs._gen_dependencies_iml()
         self.assertTrue(mock_create_iml.called)
+        self.assertTrue(mock_unzip.called)
 
-    @mock.patch.object(source_splitter, 'get_exclude_content')
+    @mock.patch.object(project_splitter.ProjectSplitter, '_unzip_all_srcjars')
+    @mock.patch.object(project_splitter, 'get_exclude_content')
     @mock.patch.object(project_config.ProjectConfig, 'get_instance')
     @mock.patch.object(iml.IMLGenerator, 'create')
     @mock.patch.object(common_util, 'get_android_root_dir')
     def test_gen_projects_iml(self, mock_root, mock_create_iml, mock_project,
-                              mock_get_excludes):
+                              mock_get_excludes, mock_unzip):
         """Test gen_projects_iml."""
         mock_root.return_value = self._TEST_DIR
         config = mock.Mock()
@@ -246,14 +264,17 @@
         self.split_projs.revise_source_folders()
         self.split_projs.gen_projects_iml()
         self.assertTrue(mock_create_iml.called)
+        self.assertTrue(mock_unzip.called)
+        mock_unzip.reset_mock()
         self.assertFalse(mock_get_excludes.called)
         config.exclude_paths = ['a']
         self.split_projs.gen_projects_iml()
         self.assertTrue(mock_get_excludes.called)
+        self.assertTrue(mock_unzip.called)
 
     def test_get_exclude_content(self):
         """Test get_exclude_content."""
-        exclude_folders = source_splitter.get_exclude_content(self._TEST_PATH)
+        exclude_folders = project_splitter.get_exclude_content(self._TEST_PATH)
         self.assertEqual(self._SAMPLE_EXCLUDE_FOLDERS, exclude_folders)
 
     def test_remove_child_duplicate_sources_from_parent(self):
@@ -262,11 +283,11 @@
         child.project_relative_path = 'c/d'
         root = 'a/b'
         parent_sources = ['a/b/d/e', 'a/b/e/f']
-        result = source_splitter._remove_child_duplicate_sources_from_parent(
+        result = project_splitter._remove_child_duplicate_sources_from_parent(
             child, parent_sources, root)
         self.assertEqual(set(), result)
         parent_sources = ['a/b/c/d/e', 'a/b/e/f']
-        result = source_splitter._remove_child_duplicate_sources_from_parent(
+        result = project_splitter._remove_child_duplicate_sources_from_parent(
             child, parent_sources, root)
         self.assertEqual(set(['a/b/c/d/e']), result)
 
@@ -286,35 +307,47 @@
     def test_get_real_dependencies_jars(self):
         """Test _get_real_dependencies_jars with conditions."""
         expected = ['a/b/c/d']
-        self.assertEqual(expected, source_splitter._get_real_dependencies_jars(
+        self.assertEqual(expected, project_splitter._get_real_dependencies_jars(
             [], expected))
         expected = ['a/b/c/d.jar']
-        self.assertEqual(expected, source_splitter._get_real_dependencies_jars(
+        self.assertEqual(expected, project_splitter._get_real_dependencies_jars(
             ['a/e'], expected))
         expected = ['a/b/c/d.jar']
-        self.assertEqual([], source_splitter._get_real_dependencies_jars(
+        self.assertEqual([], project_splitter._get_real_dependencies_jars(
             ['a/b'], expected))
-        expected = ['a/b/c/d.srcjar']
-        self.assertEqual(expected, source_splitter._get_real_dependencies_jars(
-            ['a/b'], expected))
+        expected = ['a/b/c/R']
+        self.assertEqual(expected, project_splitter._get_real_dependencies_jars(
+            ['a/b'], ['a/b/c/d.srcjar', 'a/b/c/R']))
         expected = ['a/b/c/gen']
-        self.assertEqual(expected, source_splitter._get_real_dependencies_jars(
+        self.assertEqual(expected, project_splitter._get_real_dependencies_jars(
             ['a/b'], expected))
 
     @mock.patch.object(common_util, 'get_android_root_dir')
     @mock.patch.object(common_util, 'get_soong_out_path')
-    def test_get_permission_definition_srcjar_path(self, mock_soong, mock_root):
-        """Test _get_permission_definition_srcjar_path."""
+    def test_get_permission_aapt2_rel_path(self, mock_soong, mock_root):
+        """Test _get_permission_aapt2_rel_path."""
+        mock_soong.return_value = 'a/b/out/soong'
+        mock_root.return_value = 'a/b'
+        expected = ('out/soong/.intermediates/frameworks/base/core/res/'
+                    'framework-res/android_common/gen/aapt2/R')
+        self.assertEqual(
+            expected, project_splitter._get_permission_aapt2_rel_path())
+
+    @mock.patch.object(common_util, 'get_android_root_dir')
+    @mock.patch.object(common_util, 'get_soong_out_path')
+    def test_get_permission_r_srcjar_rel_path(self, mock_soong, mock_root):
+        """Test _get_permission_r_srcjar_rel_path."""
         mock_soong.return_value = 'a/b/out/soong'
         mock_root.return_value = 'a/b'
         expected = ('out/soong/.intermediates/frameworks/base/core/res/'
                     'framework-res/android_common/gen/android/R.srcjar')
         self.assertEqual(
-            expected, source_splitter._get_permission_definition_srcjar_path())
+            expected, project_splitter._get_permission_r_srcjar_rel_path())
 
-    @mock.patch.object(
-        source_splitter, '_get_permission_definition_srcjar_path')
-    def test_remove_permission_definition_srcjar_path(self, mock_get):
+    @mock.patch.object(project_splitter, '_get_permission_r_srcjar_rel_path')
+    @mock.patch.object(project_splitter, '_get_permission_aapt2_rel_path')
+    def test_remove_permission_definition_srcjar_path(
+        self, mock_get_aapt2, mock_get_r_srcjar):
         """Test _remove_permission_definition_srcjar_path with conditions."""
         expected_srcjars = [
             'other.srcjar',
@@ -322,7 +355,8 @@
             'srcjar2.srcjar',
             'srcjar3.srcjar',
         ]
-        mock_get.return_value = 'none.srcjar'
+        mock_get_aapt2.return_value = 'none/aapt2/R'
+        mock_get_r_srcjar.return_value = 'none.srcjar'
         self.split_projs._all_srcs['srcjar_path'] = expected_srcjars
         self.split_projs._remove_permission_definition_srcjar_path()
         srcjars = self.split_projs._all_srcs['srcjar_path']
@@ -333,12 +367,48 @@
             'srcjar2.srcjar',
             'srcjar3.srcjar',
         ]
-        mock_get.return_value = 'srcjar1.srcjar'
+        mock_get_r_srcjar.return_value = 'srcjar1.srcjar'
         self.split_projs._all_srcs['srcjar_path'] = expected_srcjars
         self.split_projs._remove_permission_definition_srcjar_path()
         srcjars = self.split_projs._all_srcs['srcjar_path']
         self.assertEqual(sorted(list(srcjars)), expected_srcjars)
 
+    @mock.patch('os.path.join')
+    @mock.patch.object(common_util, 'unzip_file')
+    @mock.patch('shutil.rmtree')
+    @mock.patch('os.path.isfile')
+    @mock.patch('os.path.isdir')
+    def test_get_permission_defined_source_path(
+        self, mock_is_dir, mock_is_file, mock_rmtree, mock_unzip, mock_join):
+        """Test _get_permission_defined_source_path function."""
+        mock_is_dir.return_value = True
+        self.split_projs._get_permission_defined_source_path()
+        self.assertFalse(mock_is_file.called)
+        self.assertFalse(mock_join.called)
+        self.assertFalse(mock_rmtree.called)
+        self.assertFalse(mock_unzip.called)
+        mock_is_dir.return_value = False
+        self.split_projs._get_permission_defined_source_path()
+        self.assertTrue(mock_is_file.called)
+        self.assertTrue(mock_join.called)
+        self.assertFalse(mock_rmtree.called)
+        self.assertTrue(mock_unzip.called)
+
+    @mock.patch.object(common_util, 'unzip_file')
+    @mock.patch('shutil.rmtree')
+    @mock.patch('os.path.join')
+    @mock.patch('os.path.dirname')
+    @mock.patch('os.path.isdir')
+    def test_unzip_all_srcjars(
+        self, mock_is_dir, mock_dirname, mock_join, mock_rmtree, mock_unzip):
+        """Test _unzip_all_srcjars function."""
+        mock_is_dir.return_value = True
+        self.split_projs._unzip_all_srcjars()
+        self.assertFalse(mock_dirname.called)
+        self.assertFalse(mock_join.called)
+        self.assertFalse(mock_rmtree.called)
+        self.assertFalse(mock_unzip.called)
+
 
 if __name__ == '__main__':
     unittest.main()
diff --git a/aidegen_functional_test/aidegen_functional_test_main.py b/aidegen_functional_test/aidegen_functional_test_main.py
index 84989c2..62b2b0e 100644
--- a/aidegen_functional_test/aidegen_functional_test_main.py
+++ b/aidegen_functional_test/aidegen_functional_test_main.py
@@ -17,6 +17,7 @@
 """Functional test for aidegen project files."""
 
 from __future__ import absolute_import
+from __future__ import print_function
 
 import argparse
 import functools
@@ -32,6 +33,7 @@
 from aidegen import aidegen_main
 from aidegen import constant
 from aidegen.lib import clion_project_file_gen
+# pylint: disable=no-name-in-module
 from aidegen.lib import common_util
 from aidegen.lib import errors
 from aidegen.lib import module_info_util
@@ -47,6 +49,7 @@
 _VERIFY_COMMANDS_JSON = os.path.join(_TEST_DATA_PATH, 'verify_commands.json')
 _GOLDEN_SAMPLES_JSON = os.path.join(_TEST_DATA_PATH, 'golden_samples.json')
 _VERIFY_BINARY_JSON = os.path.join(_TEST_DATA_PATH, 'verify_binary_upload.json')
+_VERIFY_PRESUBMIT_JSON = os.path.join(_TEST_DATA_PATH, 'verify_presubmit.json')
 _ANDROID_COMMON = 'android_common'
 _LINUX_GLIBC_COMMON = 'linux_glibc_common'
 _SRCS = 'srcs'
@@ -114,6 +117,12 @@
         '--remove_bp_json',
         action='store_true',
         help='Remove module_bp_java_deps.json for each use case test.')
+    parser.add_argument(
+        '-m',
+        '--make_clean',
+        action='store_true',
+        help=('Make clean before testing to create a clean environment; the '
+              'aidegen_functional_test runs it only once when users request '
+              'it.'))
     group.add_argument(
         '-u',
         '--use_cases',
@@ -127,6 +136,12 @@
         help=('Verify aidegen\'s use cases by executing different aidegen '
               'commands.'))
     group.add_argument(
+        '-p',
+        action='store_true',
+        dest='binary_presubmit_verified',
+        help=('Verify aidegen\'s tool in presubmit test by executing '
+              'different aidegen commands.'))
+    group.add_argument(
         '-a',
         '--test-all',
         action='store_true',
@@ -198,6 +213,7 @@
         dep_name: a string of the merged project and dependencies file's name,
                   e.g., frameworks-dependencies.iml.
     """
+    # pylint: disable=maybe-no-member
     code_name = project_file_gen.ProjectFileGenerator.get_unique_iml_name(
         abs_path)
     file_name = ''.join([code_name, '.iml'])
@@ -366,6 +382,7 @@
 
     Args:
         test_list: a list of module name and module path.
+
     Returns:
         data: a dictionary contains dependent files' data of project file's
               contents.
@@ -384,7 +401,6 @@
                 ]
             }
     """
-    _make_clean()
     data = {}
     spec_and_cur_commit_id_dict = _checkout_baseline_code_to_spec_commit_id()
     for target in test_list:
@@ -410,6 +426,7 @@
     with open(_GOLDEN_SAMPLES_JSON, 'r') as infile:
         try:
             data_sample = json.load(infile)
+        # pylint: disable=maybe-no-member
         except json.JSONDecodeError as err:
             print("Json decode error: {}".format(err))
             data_sample = {}
@@ -559,7 +576,8 @@
 # pylint: disable=eval-used
 @common_util.back_to_cwd
 @common_util.time_logged
-def _verify_aidegen(verified_file_path, forced_remove_bp_json):
+def _verify_aidegen(verified_file_path, forced_remove_bp_json,
+                    is_presubmit=False):
     """Verify various use cases of executing aidegen.
 
     There are two types of running commands:
@@ -596,9 +614,9 @@
         raise errors.JsonFileNotExistError(
             '%s does not exist, error: %s.' % (verified_file_path, err))
 
-    _make_clean()
+    if not is_presubmit:
+        _compare_sample_native_content()
 
-    _compare_sample_native_content()
     os.chdir(common_util.get_android_root_dir())
     for use_case in data:
         print('Use case "{}" is running.'.format(use_case))
@@ -740,12 +758,18 @@
     args = _parse_args(argv)
     common_util.configure_logging(args.verbose)
     os.environ[constant.AIDEGEN_TEST_MODE] = 'true'
+
+    if args.make_clean:
+        _make_clean()
+
     if args.create_sample:
         _create_some_sample_json_file(args.targets)
     elif args.use_cases_verified:
         _verify_aidegen(_VERIFY_COMMANDS_JSON, args.remove_bp_json)
     elif args.binary_upload_verified:
         _verify_aidegen(_VERIFY_BINARY_JSON, args.remove_bp_json)
+    elif args.binary_presubmit_verified:
+        _verify_aidegen(_VERIFY_PRESUBMIT_JSON, args.remove_bp_json, True)
     elif args.test_all_samples:
         _test_all_samples_iml()
     elif args.compare_sample_native:
@@ -755,6 +779,7 @@
             _test_some_sample_iml()
         else:
             _test_some_sample_iml(args.targets)
+
     del os.environ[constant.AIDEGEN_TEST_MODE]
 
 
diff --git a/aidegen_functional_test/test_data/verify_presubmit.json b/aidegen_functional_test/test_data/verify_presubmit.json
new file mode 100644
index 0000000..7df3455
--- /dev/null
+++ b/aidegen_functional_test/test_data/verify_presubmit.json
@@ -0,0 +1,4 @@
+{
+    "test whole android tree with frameworks/base -a": ["aidegen frameworks/base -a -n -s"],
+    "test help": ["aidegen -h"]
+}
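A minimal sketch of how a verify_*.json file like the one above is consumed
(assumed shape, mirroring the loading code in the functional test: each key
names a use case, each value lists the aidegen commands to run for it):

```python
import json

# Hypothetical local copy of the new test data file.
with open('verify_presubmit.json') as infile:
    data = json.load(infile)
for use_case, commands in data.items():
    print(use_case, '->', commands)
```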
diff --git a/asuite_run_unittests.py b/asuite_run_unittests.py
index af5da17..4f66b4d 100755
--- a/asuite_run_unittests.py
+++ b/asuite_run_unittests.py
@@ -27,17 +27,16 @@
 import subprocess
 import sys
 
-
-EXIT_ALL_CLEAN = 0
-EXIT_TEST_FAIL = 1
-ASUITE_PLUGIN_PATH = "tools/asuite/asuite_plugin"
-# TODO: remove echo when atest migration has done.
-ATEST_CMD = "echo {}/tools/asuite/atest/atest_run_unittests.py".format(
-    os.getenv('ANDROID_BUILD_TOP'))
+ASUITE_HOME = os.path.dirname(os.path.realpath(__file__))
+ASUITE_PLUGIN_PATH = os.path.join(ASUITE_HOME, "asuite_plugin")
+ATEST_CMD = os.path.join(ASUITE_HOME, "atest", "atest_run_unittests.py")
+ATEST2_CMD = os.path.join(ASUITE_HOME, "atest-py2", "atest_run_unittests.py")
 AIDEGEN_CMD = "atest aidegen_unittests --host"
 PLUGIN_LIB_CMD = "atest plugin_lib_unittests --host"
 GRADLE_TEST = "/gradlew test"
-
+# Definition of exit codes.
+EXIT_ALL_CLEAN = 0
+EXIT_TEST_FAIL = 1
 
 def run_unittests(files):
     """Parse modified files and tell if they belong to aidegen, atest or both.
@@ -52,15 +51,15 @@
     for f in files:
         if 'atest' in f:
             cmd_dict.update({ATEST_CMD: None})
+        if 'atest-py2' in f:
+            cmd_dict.update({ATEST2_CMD: None})
         if 'aidegen' in f:
             cmd_dict.update({AIDEGEN_CMD: None})
         if 'plugin_lib' in f:
             cmd_dict.update({PLUGIN_LIB_CMD: None})
         if 'asuite_plugin' in f:
-            full_path = os.path.join(
-                os.getenv('ANDROID_BUILD_TOP'), ASUITE_PLUGIN_PATH)
-            cmd = full_path + GRADLE_TEST
-            cmd_dict.update({cmd : full_path})
+            cmd = ASUITE_PLUGIN_PATH + GRADLE_TEST
+            cmd_dict.update({cmd : ASUITE_PLUGIN_PATH})
     try:
         for cmd, path in cmd_dict.items():
             subprocess.check_call(shlex.split(cmd), cwd=path)
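A small sketch (hypothetical file list) of how the substring checks above play
out; note that 'atest' is a substring of 'atest-py2', so a change under
atest-py2/ schedules both ATEST_CMD and ATEST2_CMD:

```python
# Illustrative only; mirrors the membership checks in run_unittests().
files = ['atest-py2/atest_utils.py', 'aidegen/constant.py']
suites = set()
for f in files:
    if 'atest' in f:
        suites.add('ATEST_CMD')
    if 'atest-py2' in f:
        suites.add('ATEST2_CMD')
    if 'aidegen' in f:
        suites.add('AIDEGEN_CMD')
print(sorted(suites))  # ['AIDEGEN_CMD', 'ATEST2_CMD', 'ATEST_CMD']
```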
diff --git a/atest-py2/Android.bp b/atest-py2/Android.bp
new file mode 100644
index 0000000..a403173
--- /dev/null
+++ b/atest-py2/Android.bp
@@ -0,0 +1,202 @@
+// Copyright (C) 2018 The Android Open Source Project
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+python_defaults {
+    name: "atest_lib_default",
+    pkg_path: "atest",
+    version: {
+        py2: {
+            enabled: false,
+            embedded_launcher: false,
+        },
+        py3: {
+            enabled: true,
+            embedded_launcher: false,
+        },
+    },
+}
+
+// Remove this python_defaults module after the python3 migration is finished.
+python_defaults {
+    name: "atest_py2_default",
+    pkg_path: "atest",
+    version: {
+        py2: {
+            enabled: true,
+            embedded_launcher: false,
+        },
+        py3: {
+            enabled: false,
+            embedded_launcher: false,
+        },
+    },
+}
+
+python_binary_host {
+    name: "atest-py2",
+    main: "atest.py",
+    srcs: [
+        "**/*.py",
+    ],
+    exclude_srcs: [
+        "*_unittest.py",
+        "*/*_unittest.py",
+        "asuite_lib_test/*.py",
+        "proto/*_pb2.py",
+        "proto/__init__.py",
+    ],
+    libs: [
+        "atest_proto",
+    ],
+    data: [
+        "tools/updatedb_darwin.sh",
+        ":asuite_version",
+    ],
+    // Set the name of atest's built artifact to atest-py2-dev.
+    stem: "atest-py2-dev",
+    defaults: ["atest_py2_default"],
+    dist: {
+        targets: ["droidcore"],
+    },
+}
+
+python_library_host {
+    name: "atest_module_info",
+    defaults: ["atest_lib_default"],
+    srcs: [
+        "atest_error.py",
+        "atest_decorator.py",
+        "atest_utils.py",
+        "constants.py",
+        "constants_default.py",
+        "module_info.py",
+    ],
+}
+
+// Move asuite_default and asuite_metrics to //tools/asuite when atest is
+// running as a prebuilt.
+python_defaults {
+    name: "asuite_default",
+    pkg_path: "asuite",
+    version: {
+        py2: {
+            enabled: true,
+            embedded_launcher: false,
+        },
+        py3: {
+            enabled: true,
+            embedded_launcher: false,
+        },
+    },
+}
+
+python_library_host {
+    name: "asuite_metrics",
+    defaults: ["asuite_default"],
+    srcs: [
+        "asuite_metrics.py",
+    ],
+}
+
+// Exclude atest_updatedb_unittest because it tests ATest's wrapper of updatedb, and there is
+// no updatedb binary on the test server.
+python_test_host {
+    name: "atest-py2_unittests",
+    main: "atest_run_unittests.py",
+    pkg_path: "atest",
+    srcs: [
+        "**/*.py",
+    ],
+    data: [
+        "tools/updatedb_darwin.sh",
+        "unittest_data/**/*",
+        "unittest_data/**/.*",
+    ],
+    exclude_srcs: [
+        "asuite_lib_test/*.py",
+        "proto/*_pb2.py",
+        "proto/__init__.py",
+        "tools/atest_updatedb_unittest.py",
+    ],
+    libs: [
+        "py-mock",
+        "atest_proto",
+    ],
+    test_config: "atest_unittests.xml",
+    defaults: ["atest_py2_default"],
+}
+
+python_library_host {
+    name: "atest_proto",
+    defaults: ["atest_py2_default"],
+    srcs: [
+        "proto/*.proto",
+    ],
+    proto: {
+        canonical_path_from_root: false,
+    },
+}
+
+java_library_host {
+    name: "asuite_proto_java",
+    srcs: [
+        "proto/*.proto",
+    ],
+    proto: {
+        type: "full",
+        canonical_path_from_root: false,
+        include_dirs: ["external/protobuf/src"],
+    },
+}
+
+python_library_host {
+    name: "asuite_proto",
+    defaults: ["asuite_default"],
+    srcs: [
+        "proto/*.proto",
+    ],
+    proto: {
+        canonical_path_from_root: false,
+    },
+}
+
+python_library_host {
+    name: "asuite_cc_client",
+    defaults: ["asuite_default"],
+    srcs: [
+        "atest_error.py",
+        "atest_decorator.py",
+        "atest_utils.py",
+        "constants.py",
+        "constants_default.py",
+        "metrics/*.py",
+    ],
+    libs: [
+        "asuite_proto",
+        "asuite_metrics",
+    ],
+}
+
+genrule {
+    name: "asuite_version",
+    cmd: "DATETIME=$$(TZ='America/Log_Angelos' date +'%F');" +
+         "if [[ -n $$BUILD_NUMBER ]]; then" +
+         "  echo $${DATETIME}_$${BUILD_NUMBER} > $(out);" +
+         "else" +
+         "  echo $$(date +'%F_%R') > $(out);" +
+         "fi",
+    out: [
+        "VERSION",
+    ],
+}
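The shell logic in the asuite_version genrule is easy to misread through the Blueprint string concatenation. A rough Python equivalent of the version-string computation (assumptions: BUILD_NUMBER is exported by the build system, and the genrule's pinned timezone is ignored here):

```python
import os
import time

def asuite_version():
    """Date-stamped version string, suffixed with BUILD_NUMBER when set."""
    build_number = os.environ.get('BUILD_NUMBER')
    if build_number:
        # %Y-%m-%d matches `date +'%F'` in the genrule.
        return '%s_%s' % (time.strftime('%Y-%m-%d'), build_number)
    # Fallback matches `date +'%F_%R'`.
    return time.strftime('%Y-%m-%d_%H:%M')
```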
diff --git a/atest-py2/INTEGRATION_TESTS b/atest-py2/INTEGRATION_TESTS
new file mode 100644
index 0000000..2bf986e
--- /dev/null
+++ b/atest-py2/INTEGRATION_TESTS
@@ -0,0 +1,86 @@
+# TODO (b/121362882): Add deviceless tests when dry-run is ready.
+###[Test Finder: MODULE, Test Runner:AtestTradefedTestRunner]###
+###Purpose: Test with finder: MODULE and runner: AtestTradefedTestRunner###
+HelloWorldTests
+hello_world_test
+
+
+###[Test Finder: MODULE_FILE_PATH, Test Runner:AtestTradefedTestRunner]###
+###Purpose: Test with finder: MODULE_FILE_PATH and runner: AtestTradefedTestRunner###
+# frameworks/base/services/tests/servicestests/src/com/android/server/wm/ScreenDecorWindowTests.java#testFlagChange
+# packages/apps/Bluetooth/tests/unit/Android.mk
+platform_testing/tests/example/native
+# platform_testing/tests/example/native/
+platform_testing/tests/example/native/Android.bp
+
+
+###[Test Finder: INTEGRATION_FILE_PATH, Test Runner:AtestTradefedTestRunner]###
+###Purpose: Test with finder: INTEGRATION_FILE_PATH and runner: AtestTradefedTestRunner###
+tools/tradefederation/core/res/config/native-benchmark.xml
+
+
+###[Test Finder: MODULE_CLASS, Test Runner:AtestTradefedTestRunner]###
+###Purpose: Test with finder: MODULE_CLASS and runner: AtestTradefedTestRunner###
+CtsAnimationTestCases:AnimatorTest
+CtsSampleDeviceTestCases:SampleDeviceTest#testSharedPreferences
+CtsSampleDeviceTestCases:android.sample.cts.SampleDeviceReportLogTest
+
+
+###[Test Finder: QUALIFIED_CLASS, Test Runner:AtestTradefedTestRunner]###
+###Purpose: Test with finder: QUALIFIED_CLASS and runner: AtestTradefedTestRunner###
+# com.android.server.display.DisplayManagerServiceTest
+# com.android.server.wm.ScreenDecorWindowTests#testMultipleDecors
+
+
+###[Test Finder: MODULE_PACKAGE, Test Runner:AtestTradefedTestRunner]###
+###Purpose: Test with finder: MODULE_PACKAGE and runner: AtestTradefedTestRunner###
+CtsSampleDeviceTestCases:android.sample.cts
+
+
+###[Test Finder: PACKAGE, Test Runner:AtestTradefedTestRunner]###
+###Purpose: Test with finder: PACKAGE and runner: AtestTradefedTestRunner###
+android.animation.cts
+
+
+###[Test Finder: CLASS, Test Runner:AtestTradefedTestRunner]###
+###Purpose: Test with finder: CLASS and runner: AtestTradefedTestRunner###
+AnimatorTest
+
+
+###[Test Finder: CC_CLASS, Test Runner:AtestTradefedTestRunner]###
+###Purpose: Test with finder: CC_CLASS and runner: AtestTradefedTestRunner###
+PacketFragmenterTest
+# PacketFragmenterTest#test_no_fragment_necessary
+PacketFragmenterTest#test_no_fragment_necessary,test_ble_fragment_necessary
+
+
+###[Test Finder: INTEGRATION, Test Runner:AtestTradefedTestRunner]###
+###Purpose: Test with finder: INTEGRATION and runner: AtestTradefedTestRunner###
+native-benchmark
+
+
+###[Test Finder: MODULE, Test Runner: VtsTradefedTestRunner]####
+###Purpose: Test with finder: MODULE and runner: VtsTradefedTestRunner###
+VtsCodelabHelloWorldTest
+
+
+###[Test Finder: MODULE, Test Runner: RobolectricTestRunner]#####
+###Purpose: Test with finder: MODULE and runner: RobolectricTestRunner###
+CarMessengerRoboTests
+###Purpose: Test with input path for RobolectricTest###
+packages/apps/Car/Messenger/tests/robotests/src/com/android/car/messenger/MessengerDelegateTest.java
+
+
+###[Test Finder: SUITE_PLAN, Test Runner: SuitePlanTestRunner]###
+###Purpose: Test with finder: SUITE_PLAN and runner: SuitePlanTestRunner###
+# cts-common
+
+
+###[Test Finder: SUITE_PLAN_FILE_PATH, Test Runner: SuitePlanTestRunner]###
+###Purpose: Test with finder: SUITE_PLAN_FILE_PATH and runner: SuitePlanTestRunner###
+# test/suite_harness/tools/cts-tradefed/res/config/cts.xml
+
+
+###[MULTIPLE-TESTS + AtestTradefedTestRunner]###
+###Purpose: Test with mixed testcases###
+CtsSampleDeviceTestCases CtsAnimationTestCases
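The INTEGRATION_TESTS format above is line-oriented: `#` starts a comment and every remaining non-blank line is a test reference. A minimal sketch of a reader for it (the function is illustrative; the real consumer of this file is not part of this change):

```python
def read_integration_tests(path):
    """Return the runnable test references from an INTEGRATION_TESTS file."""
    tests = []
    with open(path) as test_file:
        for line in test_file:
            line = line.strip()
            # Skip blank lines and commented-out entries such as '# cts-common'.
            if not line or line.startswith('#'):
                continue
            tests.append(line)
    return tests
```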
diff --git a/atest-py2/OWNERS b/atest-py2/OWNERS
new file mode 100644
index 0000000..4d3541a
--- /dev/null
+++ b/atest-py2/OWNERS
@@ -0,0 +1,4 @@
+dshi@google.com
+easoncylee@google.com
+kevcheng@google.com
+yangbill@google.com
diff --git a/atest-py2/README.md b/atest-py2/README.md
new file mode 100644
index 0000000..3824245
--- /dev/null
+++ b/atest-py2/README.md
@@ -0,0 +1,6 @@
+# Atest
+
+The contents of this page have been moved to source.android.com.
+
+See:
+[Atest](https://source.android.com/compatibility/tests/development/atest)
diff --git a/atest/TEST_MAPPING b/atest-py2/TEST_MAPPING
similarity index 84%
rename from atest/TEST_MAPPING
rename to atest-py2/TEST_MAPPING
index cebe409..6cbf5e7 100644
--- a/atest/TEST_MAPPING
+++ b/atest-py2/TEST_MAPPING
@@ -2,11 +2,11 @@
 // the expectation of ASuite are still good.
 {
   "presubmit": [
-    {
-      // Host side ATest unittests.
-      "name": "atest_unittests",
-      "host": true
-    },
+//    {
+//      // Host side ATest unittests.
+//      "name": "atest_unittests",
+//      "host": true
+//    },
     {
       // Host side metrics tests.
       "name": "asuite_metrics_lib_tests",
diff --git a/atest/tools/tradefederation/core/proto/__init__.py b/atest-py2/__init__.py
old mode 100755
new mode 100644
similarity index 100%
copy from atest/tools/tradefederation/core/proto/__init__.py
copy to atest-py2/__init__.py
diff --git a/atest-py2/asuite_lib_test/Android.bp b/atest-py2/asuite_lib_test/Android.bp
new file mode 100644
index 0000000..46c0516
--- /dev/null
+++ b/atest-py2/asuite_lib_test/Android.bp
@@ -0,0 +1,95 @@
+// Copyright (C) 2019 The Android Open Source Project
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Split asuite_metrics and asuite_cc_client into separate test modules: asuite_cc_client
+// also includes asuite_metrics and other required python files, so testing them together
+// would make the asuite_metrics test results inaccurate.
+
+// For testing asuite_metrics python2 libs
+python_test_host {
+    name: "asuite_metrics_lib_tests",
+    main: "asuite_lib_run_test.py",
+    // These tests primarily check that the metric libs can be imported properly (see b/132086641).
+    // Specify a different pkg_path so that we can properly test them in isolation.
+    pkg_path: "asuite_test",
+    srcs: [
+        "asuite_lib_run_test.py",
+        "asuite_metrics_test.py",
+    ],
+    test_options: {
+        unit_test: true,
+    },
+    libs: [
+        "asuite_metrics",
+    ],
+    test_suites: ["general-tests"],
+    defaults: ["atest_py2_default"],
+}
+
+// For testing asuite_metrics python3 libs
+python_test_host {
+    name: "asuite_metrics_lib_py3_tests",
+    main: "asuite_lib_run_test.py",
+    pkg_path: "asuite_test",
+    srcs: [
+        "asuite_lib_run_test.py",
+        "asuite_metrics_test.py",
+    ],
+    libs: [
+        "asuite_metrics",
+    ],
+    test_options: {
+        unit_test: true,
+    },
+    test_suites: ["general-tests"],
+    defaults: ["atest_lib_default"],
+}
+
+// For testing asuite_cc_client python2 libs
+python_test_host {
+    name: "asuite_cc_lib_tests",
+    main: "asuite_lib_run_test.py",
+    pkg_path: "asuite_test",
+    srcs: [
+        "asuite_lib_run_test.py",
+        "asuite_cc_client_test.py",
+    ],
+    libs: [
+        "asuite_cc_client",
+    ],
+    test_options: {
+        unit_test: true,
+    },
+    test_suites: ["general-tests"],
+    defaults: ["atest_py2_default"],
+}
+
+// For testing asuite_cc_client python3 libs
+python_test_host {
+    name: "asuite_cc_lib_py3_tests",
+    main: "asuite_lib_run_test.py",
+    pkg_path: "asuite_test",
+    srcs: [
+        "asuite_lib_run_test.py",
+        "asuite_cc_client_test.py",
+    ],
+    libs: [
+        "asuite_cc_client",
+    ],
+    test_options: {
+        unit_test: true,
+    },
+    test_suites: ["general-tests"],
+    defaults: ["atest_lib_default"],
+}
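The isolation concern described in the comments above is, concretely, about import resolution: if one of atest's own source directories is still on sys.path, `from asuite import ...` could succeed for the wrong reason. A toy illustration of the scrubbing the test runner (asuite_lib_run_test.py, added below) performs before importing the test modules:

```python
import sys

# Drop any path entries that would let atest's own sources shadow the
# packaged asuite libraries under test.
sys.path = [p for p in sys.path if not p.endswith('/atest')]
```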
diff --git a/atest-py2/asuite_lib_test/asuite_cc_client_test.py b/atest-py2/asuite_lib_test/asuite_cc_client_test.py
new file mode 100755
index 0000000..8ae1068
--- /dev/null
+++ b/atest-py2/asuite_lib_test/asuite_cc_client_test.py
@@ -0,0 +1,35 @@
+#!/usr/bin/env python
+#
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittest for atest_execution_info."""
+
+import unittest
+
+
+class AsuiteCCLibTest(unittest.TestCase):
+    """Tests for verify asuite_metrics libs"""
+
+    def test_import_asuite_cc_lib(self):
+        """Test asuite_cc_lib."""
+        # pylint: disable=import-error, unused-variable
+        from asuite.metrics import metrics
+        from asuite.metrics import metrics_base
+        from asuite.metrics import metrics_utils
+
+        # TODO (b/132602907): Add the real usage for checking if metrics pass or fail.
+
+if __name__ == "__main__":
+    unittest.main()
diff --git a/atest-py2/asuite_lib_test/asuite_lib_run_test.py b/atest-py2/asuite_lib_test/asuite_lib_run_test.py
new file mode 100644
index 0000000..ad98ae8
--- /dev/null
+++ b/atest-py2/asuite_lib_test/asuite_lib_run_test.py
@@ -0,0 +1,77 @@
+#!/usr/bin/env python
+#
+# Copyright 2019 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Main entrypoint for all of atest's unittest."""
+
+import os
+import sys
+import unittest
+from importlib import import_module
+
+
+def get_test_modules():
+    """Returns a list of test modules.
+
+    Finds all the test files (*_test.py), gets their relative
+    paths (internal/lib/utils_test.py), translates them to import paths, and
+    strips the py ext (internal.lib.utils_test).
+
+    Returns:
+        List of strings (the testable module import path).
+    """
+    testable_modules = []
+    base_path = os.path.dirname(os.path.realpath(__file__))
+
+    for dirpath, _, files in os.walk(base_path):
+        for f in files:
+            if f.endswith("_test.py"):
+                # Now transform it into a relative import path.
+                full_file_path = os.path.join(dirpath, f)
+                rel_file_path = os.path.relpath(full_file_path, base_path)
+                rel_file_path, _ = os.path.splitext(rel_file_path)
+                rel_file_path = rel_file_path.replace(os.sep, ".")
+                testable_modules.append(rel_file_path)
+
+    return testable_modules
+
+
+def main(_):
+    """Main unittest entry.
+
+    Args:
+        argv: A list of system arguments. (unused)
+
+    Returns:
+        0 on success, non-zero on failure.
+    """
+    # Force remove syspath related to atest to make sure the env is clean.
+    # These tests need to run in isolation (to find bugs like b/132086641)
+    # so we scrub out all atest modules.
+    # Iterate over a copy: removing entries while iterating sys.path
+    # directly would skip the element after each removal.
+    for path in list(sys.path):
+        if path.endswith('/atest'):
+            sys.path.remove(path)
+    test_modules = get_test_modules()
+    for mod in test_modules:
+        import_module(mod)
+
+    loader = unittest.defaultTestLoader
+    test_suite = loader.loadTestsFromNames(test_modules)
+    runner = unittest.TextTestRunner(verbosity=2)
+    result = runner.run(test_suite)
+    sys.exit(not result.wasSuccessful())
+
+
+if __name__ == '__main__':
+    main(sys.argv[1:])
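As a quick illustration of the path-to-import translation that get_test_modules() performs, independent of the directory walk:

```python
import os

rel_file_path = os.path.join('internal', 'lib', 'utils_test.py')
rel_file_path, _ = os.path.splitext(rel_file_path)  # drop the '.py' extension
module_path = rel_file_path.replace(os.sep, '.')
print(module_path)  # internal.lib.utils_test
```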
diff --git a/atest-py2/asuite_lib_test/asuite_metrics_test.py b/atest-py2/asuite_lib_test/asuite_metrics_test.py
new file mode 100755
index 0000000..b150f7f
--- /dev/null
+++ b/atest-py2/asuite_lib_test/asuite_metrics_test.py
@@ -0,0 +1,33 @@
+#!/usr/bin/env python
+#
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittest for atest_execution_info."""
+
+import unittest
+
+
+class AsuiteMetricsTest(unittest.TestCase):
+    """Tests for verify asuite_metrics libs"""
+
+    def test_import_asuite_metrics_lib(self):
+        """Test asuite_metrics_lib."""
+        # pylint: disable=import-error, unused-variable
+        from asuite import asuite_metrics
+
+        # TODO (b/132602907): Add the real usage for checking if metrics pass or fail.
+
+if __name__ == "__main__":
+    unittest.main()
diff --git a/atest-py2/asuite_metrics.py b/atest-py2/asuite_metrics.py
new file mode 100644
index 0000000..88fca0a
--- /dev/null
+++ b/atest-py2/asuite_metrics.py
@@ -0,0 +1,111 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Asuite simple Metrics Functions"""
+
+import json
+import logging
+import os
+import uuid
+
+try:
+    # PYTHON2
+    from urllib2 import Request
+    from urllib2 import urlopen
+except ImportError:
+    # PYTHON3
+    from urllib.request import Request
+    from urllib.request import urlopen
+
+
+_JSON_HEADERS = {'Content-Type': 'application/json'}
+_METRICS_RESPONSE = 'done'
+_METRICS_TIMEOUT = 2  # seconds
+_META_FILE = os.path.join(os.path.expanduser('~'),
+                          '.config', 'asuite', '.metadata')
+_ANDROID_BUILD_TOP = 'ANDROID_BUILD_TOP'
+
+UNUSED_UUID = '00000000-0000-4000-8000-000000000000'
+
+
+#pylint: disable=broad-except
+def log_event(metrics_url, unused_key_fallback=True, **kwargs):
+    """Base log event function for asuite backend.
+
+    Args:
+        metrics_url: String, URL to report metrics to.
+        unused_key_fallback: Boolean, if True and the grouping key cannot be
+                            obtained, fall back to an unused key; otherwise
+                            return without logging. Sometimes we don't want to
+                            report metrics for users we cannot identify.
+                            Default True.
+        kwargs: Dict, additional fields to include in the reported metrics.
+    """
+    try:
+        try:
+            key = str(_get_grouping_key())
+        except Exception:
+            if not unused_key_fallback:
+                return
+            key = UNUSED_UUID
+        data = {'grouping_key': key,
+                'run_id': str(uuid.uuid4())}
+        if kwargs:
+            data.update(kwargs)
+        data = json.dumps(data)
+        request = Request(metrics_url, data=data,
+                          headers=_JSON_HEADERS)
+        response = urlopen(request, timeout=_METRICS_TIMEOUT)
+        content = response.read()
+        if content != _METRICS_RESPONSE:
+            raise Exception('Unexpected metrics response: %s' % content)
+    except Exception as e:
+        logging.debug('Exception sending metrics: %s', e)
+
+
+def _get_grouping_key():
+    """Get grouping key. Returns UUID.uuid4."""
+    if os.path.isfile(_META_FILE):
+        with open(_META_FILE) as f:
+            try:
+                return uuid.UUID(f.read(), version=4)
+            except ValueError:
+                logging.debug('malformed group_key in file, rewriting')
+    # TODO: Delete get_old_key() on 11/17/2018
+    key = _get_old_key() or uuid.uuid4()
+    dir_path = os.path.dirname(_META_FILE)
+    if os.path.isfile(dir_path):
+        os.remove(dir_path)
+    try:
+        os.makedirs(dir_path)
+    except OSError as e:
+        if not os.path.isdir(dir_path):
+            raise e
+    with open(_META_FILE, 'w+') as f:
+        f.write(str(key))
+    return key
+
+
+def _get_old_key():
+    """Get key from old meta data file if exists, else return None."""
+    old_file = os.path.join(os.environ[_ANDROID_BUILD_TOP],
+                            'tools/tradefederation/core/atest', '.metadata')
+    key = None
+    if os.path.isfile(old_file):
+        with open(old_file) as f:
+            try:
+                key = uuid.UUID(f.read(), version=4)
+            except ValueError:
+                logging.debug('error reading old key')
+        os.remove(old_file)
+    return key
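A hedged usage sketch for the library above: the URL is a placeholder, the import assumes the module is packaged under pkg_path "asuite" as in the Android.bp, and any extra keyword arguments become fields of the reported JSON payload. log_event() swallows its own exceptions, so a bad URL only produces a debug log.

```python
from asuite import asuite_metrics

# Hypothetical endpoint; nothing is raised if it is unreachable.
asuite_metrics.log_event('https://metrics.example.com/log',
                         tool_name='atest', exit_code=0)
```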
diff --git a/atest-py2/atest.py b/atest-py2/atest.py
new file mode 100755
index 0000000..fe9b240
--- /dev/null
+++ b/atest-py2/atest.py
@@ -0,0 +1,721 @@
+#!/usr/bin/env python
+#
+# Copyright 2017, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Command line utility for running Android tests through TradeFederation.
+
+atest helps automate the flow of building test modules across the Android
+code base and executing the tests via the TradeFederation test harness.
+
+atest is designed to support any test types that can be run by TradeFederation.
+"""
+
+from __future__ import print_function
+
+import logging
+import os
+import sys
+import tempfile
+import time
+import platform
+
+from multiprocessing import Process
+
+import atest_arg_parser
+import atest_error
+import atest_execution_info
+import atest_utils
+import bug_detector
+import cli_translator
+# pylint: disable=import-error
+import constants
+import module_info
+import result_reporter
+import test_runner_handler
+
+from metrics import metrics
+from metrics import metrics_base
+from metrics import metrics_utils
+from test_runners import regression_test_runner
+from tools import atest_tools
+
+EXPECTED_VARS = frozenset([
+    constants.ANDROID_BUILD_TOP,
+    'ANDROID_TARGET_OUT_TESTCASES',
+    constants.ANDROID_OUT])
+TEST_RUN_DIR_PREFIX = "%Y%m%d_%H%M%S"
+CUSTOM_ARG_FLAG = '--'
+OPTION_NOT_FOR_TEST_MAPPING = (
+    'Option `%s` does not work for running tests in TEST_MAPPING files')
+
+DEVICE_TESTS = 'tests that require device'
+HOST_TESTS = 'tests that do NOT require device'
+RESULT_HEADER_FMT = '\nResults from %(test_type)s:'
+RUN_HEADER_FMT = '\nRunning %(test_count)d %(test_type)s.'
+TEST_COUNT = 'test_count'
+TEST_TYPE = 'test_type'
+# Tasks that must run at build time but cannot be performed by soong
+# (e.g. subprocesses that invoke host commands).
+EXTRA_TASKS = {
+    'index-targets': atest_tools.index_targets
+}
+
+
+def _run_extra_tasks(join=False):
+    """Execute EXTRA_TASKS with multiprocessing.
+
+    Args:
+        join: A boolean that controls the lifetime of the subprocesses. If
+        True, the main process waits for all subprocesses to finish; if
+        False, the subprocesses are killed when the main process exits.
+    """
+    _running_procs = []
+    for task in EXTRA_TASKS.values():
+        proc = Process(target=task)
+        proc.daemon = not join
+        proc.start()
+        _running_procs.append(proc)
+    if join:
+        for proc in _running_procs:
+            proc.join()
+
+
+def _parse_args(argv):
+    """Parse command line arguments.
+
+    Args:
+        argv: A list of arguments.
+
+    Returns:
+        An argparse.Namespace class instance holding parsed args.
+    """
+    # Store everything after '--' in custom_args.
+    pruned_argv = argv
+    custom_args_index = None
+    if CUSTOM_ARG_FLAG in argv:
+        custom_args_index = argv.index(CUSTOM_ARG_FLAG)
+        pruned_argv = argv[:custom_args_index]
+    parser = atest_arg_parser.AtestArgParser()
+    parser.add_atest_args()
+    args = parser.parse_args(pruned_argv)
+    args.custom_args = []
+    if custom_args_index is not None:
+        args.custom_args = argv[custom_args_index+1:]
+    return args
+
+
+def _configure_logging(verbose):
+    """Configure the logger.
+
+    Args:
+        verbose: A boolean. If True, display DEBUG level logs.
+    """
+    log_format = '%(asctime)s %(filename)s:%(lineno)s:%(levelname)s: %(message)s'
+    datefmt = '%Y-%m-%d %H:%M:%S'
+    if verbose:
+        logging.basicConfig(level=logging.DEBUG, format=log_format, datefmt=datefmt)
+    else:
+        logging.basicConfig(level=logging.INFO, format=log_format, datefmt=datefmt)
+
+
+def _missing_environment_variables():
+    """Verify the local environment has been set up to run atest.
+
+    Returns:
+        List of strings of any missing environment variables.
+    """
+    missing = [x for x in EXPECTED_VARS if not os.environ.get(x)]
+    if missing:
+        logging.error('Local environment doesn\'t appear to have been '
+                      'initialized. Did you remember to run lunch? Expected '
+                      'Environment Variables: %s.', missing)
+    return missing
+
+
+def make_test_run_dir():
+    """Make the test run dir in ATEST_RESULT_ROOT.
+
+    Returns:
+        A string of the dir path.
+    """
+    if not os.path.exists(constants.ATEST_RESULT_ROOT):
+        os.makedirs(constants.ATEST_RESULT_ROOT)
+    ctime = time.strftime(TEST_RUN_DIR_PREFIX, time.localtime())
+    test_result_dir = tempfile.mkdtemp(prefix='%s_' % ctime,
+                                       dir=constants.ATEST_RESULT_ROOT)
+    return test_result_dir
+
+
+def get_extra_args(args):
+    """Get extra args for test runners.
+
+    Args:
+        args: arg parsed object.
+
+    Returns:
+        Dict of extra args for test runners to utilize.
+    """
+    extra_args = {}
+    if args.wait_for_debugger:
+        extra_args[constants.WAIT_FOR_DEBUGGER] = None
+    steps = args.steps or constants.ALL_STEPS
+    if constants.INSTALL_STEP not in steps:
+        extra_args[constants.DISABLE_INSTALL] = None
+    # The key and its value of the dict can be called via:
+    # if args.aaaa:
+    #     extra_args[constants.AAAA] = args.aaaa
+    arg_maps = {'all_abi': constants.ALL_ABI,
+                'collect_tests_only': constants.COLLECT_TESTS_ONLY,
+                'custom_args': constants.CUSTOM_ARGS,
+                'disable_teardown': constants.DISABLE_TEARDOWN,
+                'dry_run': constants.DRY_RUN,
+                'generate_baseline': constants.PRE_PATCH_ITERATIONS,
+                'generate_new_metrics': constants.POST_PATCH_ITERATIONS,
+                'host': constants.HOST,
+                'instant': constants.INSTANT,
+                'iterations': constants.ITERATIONS,
+                'rerun_until_failure': constants.RERUN_UNTIL_FAILURE,
+                'retry_any_failure': constants.RETRY_ANY_FAILURE,
+                'serial': constants.SERIAL,
+                'sharding': constants.SHARDING,
+                'tf_debug': constants.TF_DEBUG,
+                'tf_template': constants.TF_TEMPLATE,
+                'user_type': constants.USER_TYPE}
+    not_match = [k for k in arg_maps if k not in vars(args)]
+    if not_match:
+        raise AttributeError('%s object has no attribute %s'
+                             %(type(args).__name__, not_match))
+    extra_args.update({arg_maps.get(k): v for k, v in vars(args).items()
+                       if arg_maps.get(k) and v})
+    return extra_args
+
+
+def _get_regression_detection_args(args, results_dir):
+    """Get args for regression detection test runners.
+
+    Args:
+        args: parsed args object.
+        results_dir: string directory to store atest results.
+
+    Returns:
+        Dict of args for regression detection test runner to utilize.
+    """
+    regression_args = {}
+    pre_patch_folder = (os.path.join(results_dir, 'baseline-metrics') if args.generate_baseline
+                        else args.detect_regression.pop(0))
+    post_patch_folder = (os.path.join(results_dir, 'new-metrics') if args.generate_new_metrics
+                         else args.detect_regression.pop(0))
+    regression_args[constants.PRE_PATCH_FOLDER] = pre_patch_folder
+    regression_args[constants.POST_PATCH_FOLDER] = post_patch_folder
+    return regression_args
+
+
+def _validate_exec_mode(args, test_infos, host_tests=None):
+    """Validate all test execution modes are not in conflict.
+
+    Exit the program with an error code if both device-only and host-only
+    tests are present. If there is no conflict and the tests are host-side,
+    set args.host=True.
+
+    Args:
+        args: parsed args object.
+        test_infos: A list of TestInfos.
+        host_tests: True if all tests should be deviceless, False if all tests
+            should be device tests. Default is set to None, which means
+            tests can be either deviceless or device tests.
+    """
+    all_device_modes = [x.get_supported_exec_mode() for x in test_infos]
+    err_msg = None
+    # In the case of '$atest <device-only> --host', exit.
+    if (host_tests or args.host) and constants.DEVICE_TEST in all_device_modes:
+        err_msg = ('Test side and option(--host) conflict. Please remove '
+                   '--host if the test run on device side.')
+    # In the case of '$atest <host-only> <device-only> --host' or
+    # '$atest <host-only> <device-only>', exit.
+    if (constants.DEVICELESS_TEST in all_device_modes and
+            constants.DEVICE_TEST in all_device_modes):
+        err_msg = 'There are host-only and device-only tests in command.'
+    if host_tests is False and constants.DEVICELESS_TEST in all_device_modes:
+        err_msg = 'There are host-only tests in command.'
+    if err_msg:
+        logging.error(err_msg)
+        metrics_utils.send_exit_event(constants.EXIT_CODE_ERROR, logs=err_msg)
+        sys.exit(constants.EXIT_CODE_ERROR)
+    # In the case of '$atest <host-only>', we add --host to run on host-side.
+    # The option should only be overridden if `host_tests` is not set.
+    if not args.host and host_tests is None:
+        args.host = bool(constants.DEVICELESS_TEST in all_device_modes)
+
+
+def _validate_tm_tests_exec_mode(args, test_infos):
+    """Validate all test execution modes are not in conflict.
+
+    Split the tests in Test Mapping files into two groups, device tests and
+    deviceless tests running on host. Validate the tests' host setting.
+    For device tests, exit the program if any test is found for host-only.
+    For deviceless tests, exit the program if any test is found for device-only.
+
+    Args:
+        args: parsed args object.
+        test_info: TestInfo object.
+    """
+    device_test_infos, host_test_infos = _split_test_mapping_tests(
+        test_infos)
+    # No need to verify device tests if atest command is set to only run host
+    # tests.
+    if device_test_infos and not args.host:
+        _validate_exec_mode(args, device_test_infos, host_tests=False)
+    if host_test_infos:
+        _validate_exec_mode(args, host_test_infos, host_tests=True)
+
+
+def _will_run_tests(args):
+    """Determine if there are tests to run.
+
+    Currently only used to skip running tests when the command is only doing regression detection.
+
+    Args:
+        args: parsed args object.
+
+    Returns:
+        True if there are tests to run, false otherwise.
+    """
+    return not (args.detect_regression and len(args.detect_regression) == 2)
+
+
+def _has_valid_regression_detection_args(args):
+    """Validate regression detection args.
+
+    Args:
+        args: parsed args object.
+
+    Returns:
+        True if args are valid.
+    """
+    if args.generate_baseline and args.generate_new_metrics:
+        logging.error('Cannot collect both baseline and new metrics at the same time.')
+        return False
+    if args.detect_regression is not None:
+        if not args.detect_regression:
+            logging.error('Need to specify at least 1 arg for regression detection.')
+            return False
+        elif len(args.detect_regression) == 1:
+            if args.generate_baseline or args.generate_new_metrics:
+                return True
+            logging.error('Need to specify --generate-baseline or --generate-new-metrics.')
+            return False
+        elif len(args.detect_regression) == 2:
+            if args.generate_baseline:
+                logging.error('Specified 2 metric paths and --generate-baseline, '
+                              'either drop --generate-baseline or drop a path')
+                return False
+            if args.generate_new_metrics:
+                logging.error('Specified 2 metric paths and --generate-new-metrics, '
+                              'either drop --generate-new-metrics or drop a path')
+                return False
+            return True
+        else:
+            logging.error('Specified more than 2 metric paths.')
+            return False
+    return True
+
+
+def _has_valid_test_mapping_args(args):
+    """Validate test mapping args.
+
+    Not all args work when running tests in TEST_MAPPING files. Validate the
+    args before running the tests.
+
+    Args:
+        args: parsed args object.
+
+    Returns:
+        True if args are valid.
+    """
+    is_test_mapping = atest_utils.is_test_mapping(args)
+    if not is_test_mapping:
+        return True
+    options_to_validate = [
+        (args.generate_baseline, '--generate-baseline'),
+        (args.detect_regression, '--detect-regression'),
+        (args.generate_new_metrics, '--generate-new-metrics'),
+    ]
+    for arg_value, arg in options_to_validate:
+        if arg_value:
+            logging.error(OPTION_NOT_FOR_TEST_MAPPING, arg)
+            return False
+    return True
+
+
+def _validate_args(args):
+    """Validate setups and args.
+
+    Exit the program with error code if any setup or arg is invalid.
+
+    Args:
+        args: parsed args object.
+    """
+    if _missing_environment_variables():
+        sys.exit(constants.EXIT_CODE_ENV_NOT_SETUP)
+    if args.generate_baseline and args.generate_new_metrics:
+        logging.error(
+            'Cannot collect both baseline and new metrics at the same time.')
+        sys.exit(constants.EXIT_CODE_ERROR)
+    if not _has_valid_regression_detection_args(args):
+        sys.exit(constants.EXIT_CODE_ERROR)
+    if not _has_valid_test_mapping_args(args):
+        sys.exit(constants.EXIT_CODE_ERROR)
+
+
+def _print_module_info_from_module_name(mod_info, module_name):
+    """print out the related module_info for a module_name.
+
+    Args:
+        mod_info: ModuleInfo object.
+        module_name: A string of module.
+
+    Returns:
+        True if the module_info is found.
+    """
+    title_mapping = {
+        constants.MODULE_PATH: "Source code path",
+        constants.MODULE_INSTALLED: "Installed path",
+        constants.MODULE_COMPATIBILITY_SUITES: "Compatibility suite"}
+    target_module_info = mod_info.get_module_info(module_name)
+    is_module_found = False
+    if target_module_info:
+        atest_utils.colorful_print(module_name, constants.GREEN)
+        for title_key in title_mapping.iterkeys():
+            atest_utils.colorful_print("\t%s" % title_mapping[title_key],
+                                       constants.CYAN)
+            for info_value in target_module_info[title_key]:
+                print("\t\t{}".format(info_value))
+        is_module_found = True
+    return is_module_found
+
+
+def _print_test_info(mod_info, test_infos):
+    """Print the module information from TestInfos.
+
+    Args:
+        mod_info: ModuleInfo object.
+        test_infos: A list of TestInfos.
+
+    Returns:
+        Always return EXIT_CODE_SUCCESS
+    """
+    for test_info in test_infos:
+        _print_module_info_from_module_name(mod_info, test_info.test_name)
+        atest_utils.colorful_print("\tRelated build targets", constants.MAGENTA)
+        print("\t\t{}".format(", ".join(test_info.build_targets)))
+        for build_target in test_info.build_targets:
+            if build_target != test_info.test_name:
+                _print_module_info_from_module_name(mod_info, build_target)
+        atest_utils.colorful_print("", constants.WHITE)
+    return constants.EXIT_CODE_SUCCESS
+
+
+def is_from_test_mapping(test_infos):
+    """Check that the test_infos came from TEST_MAPPING files.
+
+    Args:
+        test_infos: A set of TestInfos.
+
+    Returns:
+        True if the test infos are from TEST_MAPPING files.
+    """
+    return list(test_infos)[0].from_test_mapping
+
+
+def _split_test_mapping_tests(test_infos):
+    """Split Test Mapping tests into 2 groups: device tests and host tests.
+
+    Args:
+        test_infos: A set of TestInfos.
+
+    Returns:
+        A tuple of (device_test_infos, host_test_infos), where
+        device_test_infos: A set of TestInfos for tests that require device.
+        host_test_infos: A set of TestInfos for tests that do NOT require
+            device.
+    """
+    assert is_from_test_mapping(test_infos)
+    host_test_infos = set([info for info in test_infos if info.host])
+    device_test_infos = set([info for info in test_infos if not info.host])
+    return device_test_infos, host_test_infos
+
+
+# pylint: disable=too-many-locals
+def _run_test_mapping_tests(results_dir, test_infos, extra_args):
+    """Run all tests in TEST_MAPPING files.
+
+    Args:
+        results_dir: String directory to store atest results.
+        test_infos: A set of TestInfos.
+        extra_args: Dict of extra args to add to test run.
+
+    Returns:
+        Exit code.
+    """
+    device_test_infos, host_test_infos = _split_test_mapping_tests(test_infos)
+    # `host` option needs to be set to True to run host side tests.
+    host_extra_args = extra_args.copy()
+    host_extra_args[constants.HOST] = True
+    test_runs = [(host_test_infos, host_extra_args, HOST_TESTS)]
+    if extra_args.get(constants.HOST):
+        atest_utils.colorful_print(
+            'Option `--host` specified. Skip running device tests.',
+            constants.MAGENTA)
+    else:
+        test_runs.append((device_test_infos, extra_args, DEVICE_TESTS))
+
+    test_results = []
+    for tests, args, test_type in test_runs:
+        if not tests:
+            continue
+        header = RUN_HEADER_FMT % {TEST_COUNT: len(tests), TEST_TYPE: test_type}
+        atest_utils.colorful_print(header, constants.MAGENTA)
+        logging.debug('\n'.join([str(info) for info in tests]))
+        tests_exit_code, reporter = test_runner_handler.run_all_tests(
+            results_dir, tests, args, delay_print_summary=True)
+        atest_execution_info.AtestExecutionInfo.result_reporters.append(reporter)
+        test_results.append((tests_exit_code, reporter, test_type))
+
+    all_tests_exit_code = constants.EXIT_CODE_SUCCESS
+    failed_tests = []
+    for tests_exit_code, reporter, test_type in test_results:
+        atest_utils.colorful_print(
+            RESULT_HEADER_FMT % {TEST_TYPE: test_type}, constants.MAGENTA)
+        result = tests_exit_code | reporter.print_summary()
+        if result:
+            failed_tests.append(test_type)
+        all_tests_exit_code |= result
+
+    # List failed tests at the end as a reminder.
+    if failed_tests:
+        atest_utils.colorful_print(
+            '\n==============================', constants.YELLOW)
+        atest_utils.colorful_print(
+            '\nFollowing tests failed:', constants.MAGENTA)
+        for failure in failed_tests:
+            atest_utils.colorful_print(failure, constants.RED)
+
+    return all_tests_exit_code
+
+
+def _dry_run(results_dir, extra_args, test_infos):
+    """Only print the commands of the target tests rather than running them in actual.
+
+    Args:
+        results_dir: Path for saving atest logs.
+        extra_args: Dict of extra args for test runners to utilize.
+        test_infos: A list of TestInfos.
+
+    Returns:
+        A list of test commands.
+    """
+    all_run_cmds = []
+    for test_runner, tests in test_runner_handler.group_tests_by_test_runners(test_infos):
+        runner = test_runner(results_dir)
+        run_cmds = runner.generate_run_commands(tests, extra_args)
+        for run_cmd in run_cmds:
+            all_run_cmds.append(run_cmd)
+            print('Would run test via command: %s'
+                  % (atest_utils.colorize(run_cmd, constants.GREEN)))
+    return all_run_cmds
+
+def _print_testable_modules(mod_info, suite):
+    """Print the testable modules for a given suite.
+
+    Args:
+        mod_info: ModuleInfo object.
+        suite: A string of suite name.
+    """
+    testable_modules = mod_info.get_testable_modules(suite)
+    print('\n%s' % atest_utils.colorize('%s Testable %s modules' % (
+        len(testable_modules), suite), constants.CYAN))
+    print('-------')
+    for module in sorted(testable_modules):
+        print('\t%s' % module)
+
+def _is_inside_android_root():
+    """Identify whether the cwd is inside of Android source tree.
+
+    Returns:
+        False if the cwd is outside of the source tree, True otherwise.
+    """
+    build_top = os.getenv(constants.ANDROID_BUILD_TOP, ' ')
+    return build_top in os.getcwd()
+
+# pylint: disable=too-many-statements
+# pylint: disable=too-many-branches
+# pylint: disable=too-many-return-statements
+def main(argv, results_dir, args):
+    """Entry point of atest script.
+
+    Args:
+        argv: A list of arguments.
+        results_dir: A directory which stores the ATest execution information.
+        args: An argparse.Namespace class instance holding parsed args.
+
+    Returns:
+        Exit code.
+    """
+    _configure_logging(args.verbose)
+    _validate_args(args)
+    metrics_utils.get_start_time()
+    os_pyver = '{}:{}'.format(platform.platform(), platform.python_version())
+    metrics.AtestStartEvent(
+        command_line=' '.join(argv),
+        test_references=args.tests,
+        cwd=os.getcwd(),
+        os=os_pyver)
+    if args.version:
+        if os.path.isfile(constants.VERSION_FILE):
+            with open(constants.VERSION_FILE) as version_file:
+                print(version_file.read())
+        return constants.EXIT_CODE_SUCCESS
+    if not _is_inside_android_root():
+        atest_utils.colorful_print(
+            "\nAtest must always work under ${}!".format(
+                constants.ANDROID_BUILD_TOP), constants.RED)
+        return constants.EXIT_CODE_OUTSIDE_ROOT
+    if args.help:
+        atest_arg_parser.print_epilog_text()
+        return constants.EXIT_CODE_SUCCESS
+    if args.history:
+        atest_execution_info.print_test_result(constants.ATEST_RESULT_ROOT,
+                                               args.history)
+        return constants.EXIT_CODE_SUCCESS
+    if args.latest_result:
+        atest_execution_info.print_test_result_by_path(
+            constants.LATEST_RESULT_FILE)
+        return constants.EXIT_CODE_SUCCESS
+    mod_info = module_info.ModuleInfo(force_build=args.rebuild_module_info)
+    if args.rebuild_module_info:
+        _run_extra_tasks(join=True)
+    translator = cli_translator.CLITranslator(module_info=mod_info,
+                                              print_cache_msg=not args.clear_cache)
+    if args.list_modules:
+        _print_testable_modules(mod_info, args.list_modules)
+        return constants.EXIT_CODE_SUCCESS
+    build_targets = set()
+    test_infos = set()
+    # Clear the cache if the user passes the -c option.
+    if args.clear_cache:
+        atest_utils.clean_test_info_caches(args.tests)
+    if _will_run_tests(args):
+        build_targets, test_infos = translator.translate(args)
+        if not test_infos:
+            return constants.EXIT_CODE_TEST_NOT_FOUND
+        if not is_from_test_mapping(test_infos):
+            _validate_exec_mode(args, test_infos)
+        else:
+            _validate_tm_tests_exec_mode(args, test_infos)
+    if args.info:
+        return _print_test_info(mod_info, test_infos)
+    build_targets |= test_runner_handler.get_test_runner_reqs(mod_info,
+                                                              test_infos)
+    extra_args = get_extra_args(args)
+    if args.update_cmd_mapping or args.verify_cmd_mapping:
+        args.dry_run = True
+    if args.dry_run:
+        args.tests.sort()
+        dry_run_cmds = _dry_run(results_dir, extra_args, test_infos)
+        if args.verify_cmd_mapping:
+            try:
+                atest_utils.handle_test_runner_cmd(' '.join(args.tests),
+                                                   dry_run_cmds,
+                                                   do_verification=True)
+            except atest_error.DryRunVerificationError as e:
+                atest_utils.colorful_print(str(e), constants.RED)
+                return constants.EXIT_CODE_VERIFY_FAILURE
+        if args.update_cmd_mapping:
+            atest_utils.handle_test_runner_cmd(' '.join(args.tests),
+                                               dry_run_cmds)
+        return constants.EXIT_CODE_SUCCESS
+    if args.detect_regression:
+        build_targets |= (regression_test_runner.RegressionTestRunner('')
+                          .get_test_runner_build_reqs())
+    # args.steps will be None if none of -b/-i/-t is set; otherwise it is the list of steps set.
+    steps = args.steps if args.steps else constants.ALL_STEPS
+    if build_targets and constants.BUILD_STEP in steps:
+        if constants.TEST_STEP in steps and not args.rebuild_module_info:
+            # Run extra tasks along with build step concurrently. Note that
+            # Atest won't index targets when only "-b" is given (without -t).
+            _run_extra_tasks(join=False)
+        # Add module-info.json target to the list of build targets to keep the
+        # file up to date.
+        build_targets.add(mod_info.module_info_target)
+        build_start = time.time()
+        success = atest_utils.build(build_targets, verbose=args.verbose)
+        metrics.BuildFinishEvent(
+            duration=metrics_utils.convert_duration(time.time() - build_start),
+            success=success,
+            targets=build_targets)
+        if not success:
+            return constants.EXIT_CODE_BUILD_FAILURE
+    elif constants.TEST_STEP not in steps:
+        logging.warning('Install step without test step currently not '
+                        'supported, installing AND testing instead.')
+        steps.append(constants.TEST_STEP)
+    tests_exit_code = constants.EXIT_CODE_SUCCESS
+    test_start = time.time()
+    if constants.TEST_STEP in steps:
+        if not is_from_test_mapping(test_infos):
+            tests_exit_code, reporter = test_runner_handler.run_all_tests(
+                results_dir, test_infos, extra_args)
+            atest_execution_info.AtestExecutionInfo.result_reporters.append(reporter)
+        else:
+            tests_exit_code = _run_test_mapping_tests(
+                results_dir, test_infos, extra_args)
+    if args.detect_regression:
+        regression_args = _get_regression_detection_args(args, results_dir)
+        # TODO(b/110485713): Should not call run_tests here.
+        reporter = result_reporter.ResultReporter()
+        atest_execution_info.AtestExecutionInfo.result_reporters.append(reporter)
+        tests_exit_code |= regression_test_runner.RegressionTestRunner(
+            '').run_tests(
+                None, regression_args, reporter)
+    metrics.RunTestsFinishEvent(
+        duration=metrics_utils.convert_duration(time.time() - test_start))
+    preparation_time = atest_execution_info.preparation_time(test_start)
+    if preparation_time:
+        # Send the preparation time only if it's set.
+        metrics.RunnerFinishEvent(
+            duration=metrics_utils.convert_duration(preparation_time),
+            success=True,
+            runner_name=constants.TF_PREPARATION,
+            test=[])
+    if tests_exit_code != constants.EXIT_CODE_SUCCESS:
+        tests_exit_code = constants.EXIT_CODE_TEST_FAILURE
+    return tests_exit_code
+
+if __name__ == '__main__':
+    RESULTS_DIR = make_test_run_dir()
+    ARGS = _parse_args(sys.argv[1:])
+    with atest_execution_info.AtestExecutionInfo(sys.argv[1:],
+                                                 RESULTS_DIR,
+                                                 ARGS) as result_file:
+        metrics_base.MetricsBase.tool_name = constants.TOOL_NAME
+        EXIT_CODE = main(sys.argv[1:], RESULTS_DIR, ARGS)
+        DETECTOR = bug_detector.BugDetector(sys.argv[1:], EXIT_CODE)
+        metrics.LocalDetectEvent(
+            detect_type=constants.DETECT_TYPE_BUG_DETECTED,
+            result=DETECTOR.caught_result)
+        if result_file:
+            print("Run 'atest --history' to review test result history.")
+    sys.exit(EXIT_CODE)
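The get_extra_args() helper in atest.py above maps parsed-argument attributes to constants with a dict comprehension over vars(args), silently dropping unset or falsy options. A standalone sketch of the same pattern with toy names:

```python
import argparse

# Toy stand-ins for the keys in the constants module.
arg_maps = {'host': 'HOST', 'serial': 'SERIAL'}

args = argparse.Namespace(host=True, serial=None, verbose=True)
extra_args = {arg_maps.get(k): v for k, v in vars(args).items()
              if arg_maps.get(k) and v}
print(extra_args)  # {'HOST': True} -- unmapped and falsy values are dropped
```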
diff --git a/atest-py2/atest_arg_parser.py b/atest-py2/atest_arg_parser.py
new file mode 100644
index 0000000..6fb2205
--- /dev/null
+++ b/atest-py2/atest_arg_parser.py
@@ -0,0 +1,682 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Atest Argument Parser class for atest.
+"""
+
+# pylint: disable=line-too-long
+
+import argparse
+import pydoc
+
+import atest_utils
+import constants
+
+# Constants used for AtestArgParser and EPILOG_TEMPLATE
+HELP_DESC = ('A command line tool that allows users to build, install, and run '
+             'Android tests locally, greatly speeding test re-runs without '
+             'requiring knowledge of Trade Federation test harness command line'
+             ' options.')
+
+# Constants used for arg help messages (sorted alphabetically)
+ALL_ABI = 'Set to run tests for all abis.'
+BUILD = 'Run a build.'
+CLEAR_CACHE = 'Wipe out the test_infos cache of the test.'
+COLLECT_TESTS_ONLY = ('Collect a list of test cases of the instrumentation '
+                      'tests without actually running them.')
+DISABLE_TEARDOWN = 'Disable test teardown and cleanup.'
+DRY_RUN = 'Dry run atest without actually building, installing or running tests.'
+ENABLE_FILE_PATTERNS = 'Enable FILE_PATTERNS in TEST_MAPPING.'
+HISTORY = ('Show test results in chronological order (with the specified '
+           'number of entries, or all by default).')
+HOST = ('Run the test completely on the host without a device. '
+        '(Note: running a host test that requires a device without '
+        '--host will fail.)')
+INCLUDE_SUBDIRS = 'Search TEST_MAPPING files in subdirs as well.'
+INFO = 'Show module information.'
+INSTALL = 'Install an APK.'
+INSTANT = ('Run the instant_app version of the module if the module supports it. '
+           'Note: Nothing\'s going to run if it\'s not an Instant App test and '
+           '"--instant" is passed.')
+ITERATION = 'Loop-run tests until the max iteration is reached. (10 by default)'
+LATEST_RESULT = 'Print latest test result.'
+LIST_MODULES = 'List testable modules for the given suite.'
+REBUILD_MODULE_INFO = ('Forces a rebuild of the module-info.json file. '
+                       'This may be necessary following a repo sync or '
+                       'when writing a new test.')
+RERUN_UNTIL_FAILURE = ('Rerun all tests until a failure occurs or the max '
+                       'iteration is reached. (10 by default)')
+RETRY_ANY_FAILURE = ('Rerun failed tests until passed or the max iteration '
+                     'is reached. (10 by default)')
+SERIAL = 'The device to run the test on.'
+SHARDING = 'Option to specify sharding count. The default value is 2.'
+TEST = ('Run the tests. WARNING: Many test configs force cleanup of device '
+        'after test run. In this case, "-d" must be used in previous test run to '
+        'disable cleanup for "-t" to work. Otherwise, device will need to be '
+        'setup again with "-i".')
+TEST_MAPPING = 'Run tests defined in TEST_MAPPING files.'
+TF_TEMPLATE = ('Add extra tradefed template for ATest suite, '
+               'e.g. atest <test> --tf-template <template_key>=<template_path>')
+TF_DEBUG = 'Enable tradefed debug mode on a specified port. Default value is 10888.'
+UPDATE_CMD_MAPPING = ('Update the test command of input tests. Warning: result '
+                      'will be saved under tools/tradefederation/core/atest/test_data.')
+USER_TYPE = 'Run test with specific user type, e.g. atest <test> --user-type secondary_user'
+VERBOSE = 'Display DEBUG level logging.'
+VERIFY_CMD_MAPPING = 'Verify the test command of input tests.'
+VERSION = 'Display version string.'
+WAIT_FOR_DEBUGGER = 'Wait for debugger prior to execution (Instrumentation tests only).'
+
+def _positive_int(value):
+    """Verify value by whether or not a positive integer.
+
+    Args:
+        value: A string of a command-line argument.
+
+    Returns:
+        int of value, if it is a positive integer.
+        Otherwise, raise argparse.ArgumentTypeError.
+    """
+    err_msg = "invalid positive int value: '%s'" % value
+    try:
+        converted_value = int(value)
+        if converted_value < 1:
+            raise argparse.ArgumentTypeError(err_msg)
+        return converted_value
+    except ValueError:
+        raise argparse.ArgumentTypeError(err_msg)
+
+class AtestArgParser(argparse.ArgumentParser):
+    """Atest wrapper of ArgumentParser."""
+
+    def __init__(self):
+        """Initialise an ArgumentParser instance."""
+        atest_utils.print_data_collection_notice()
+        super(AtestArgParser, self).__init__(
+            description=HELP_DESC, add_help=False)
+
+    def add_atest_args(self):
+        """A function that does ArgumentParser.add_argument()"""
+        self.add_argument('tests', nargs='*', help='Tests to build and/or run.')
+        # Options that have to do with testing.
+        self.add_argument('-a', '--all-abi', action='store_true', help=ALL_ABI)
+        self.add_argument('-b', '--build', action='append_const', dest='steps',
+                          const=constants.BUILD_STEP, help=BUILD)
+        self.add_argument('-d', '--disable-teardown', action='store_true',
+                          help=DISABLE_TEARDOWN)
+        self.add_argument('--host', action='store_true', help=HOST)
+        self.add_argument('-i', '--install', action='append_const',
+                          dest='steps', const=constants.INSTALL_STEP,
+                          help=INSTALL)
+        self.add_argument('-m', constants.REBUILD_MODULE_INFO_FLAG,
+                          action='store_true', help=REBUILD_MODULE_INFO)
+        self.add_argument('-s', '--serial', help=SERIAL)
+        self.add_argument('--sharding', nargs='?', const=2,
+                          type=_positive_int, default=0,
+                          help=SHARDING)
+        self.add_argument('-t', '--test', action='append_const', dest='steps',
+                          const=constants.TEST_STEP, help=TEST)
+        self.add_argument('-w', '--wait-for-debugger', action='store_true',
+                          help=WAIT_FOR_DEBUGGER)
+
+        # Options related to Test Mapping
+        self.add_argument('-p', '--test-mapping', action='store_true',
+                          help=TEST_MAPPING)
+        self.add_argument('--include-subdirs', action='store_true',
+                          help=INCLUDE_SUBDIRS)
+        # TODO(146980564): Remove enable-file-patterns when support
+        # file-patterns in TEST_MAPPING by default.
+        self.add_argument('--enable-file-patterns', action='store_true',
+                          help=ENABLE_FILE_PATTERNS)
+
+        # Options for information queries and dry-runs:
+        # A group of options for dry-runs. They are mutually exclusive
+        # in a command line.
+        group = self.add_mutually_exclusive_group()
+        group.add_argument('--collect-tests-only', action='store_true',
+                           help=COLLECT_TESTS_ONLY)
+        group.add_argument('--dry-run', action='store_true', help=DRY_RUN)
+        self.add_argument('-h', '--help', action='store_true',
+                          help='Print this help message.')
+        self.add_argument('--info', action='store_true', help=INFO)
+        self.add_argument('-L', '--list-modules', help=LIST_MODULES)
+        self.add_argument('-v', '--verbose', action='store_true', help=VERBOSE)
+        self.add_argument('-V', '--version', action='store_true', help=VERSION)
+
+        # Obsolete options that will be removed soon.
+        self.add_argument('--generate-baseline', nargs='?',
+                          type=int, const=5, default=0,
+                          help='Generate baseline metrics, run 5 iterations '
+                               'by default. Provide an int argument to '
+                               'specify # iterations.')
+        self.add_argument('--generate-new-metrics', nargs='?',
+                          type=int, const=5, default=0,
+                          help='Generate new metrics, run 5 iterations by '
+                               'default. Provide an int argument to specify '
+                               '# iterations.')
+        self.add_argument('--detect-regression', nargs='*',
+                          help='Run regression detection algorithm. Supply '
+                               'path to baseline and/or new metrics folders.')
+
+        # Options related to module parameterization
+        self.add_argument('--instant', action='store_true', help=INSTANT)
+        self.add_argument('--user-type', help=USER_TYPE)
+
+        # Option for dry-run command mapping result and cleaning cache.
+        self.add_argument('-c', '--clear-cache', action='store_true',
+                          help=CLEAR_CACHE)
+        self.add_argument('-u', '--update-cmd-mapping', action='store_true',
+                          help=UPDATE_CMD_MAPPING)
+        self.add_argument('-y', '--verify-cmd-mapping', action='store_true',
+                          help=VERIFY_CMD_MAPPING)
+
+        # Options for Tradefed debug mode.
+        self.add_argument('-D', '--tf-debug', nargs='?', const=10888,
+                          type=_positive_int, default=0,
+                          help=TF_DEBUG)
+
+        # Options for Tradefed customization related.
+        self.add_argument('--tf-template', action='append',
+                          help=TF_TEMPLATE)
+
+        # A group of options for rerun strategy. They are mutually exclusive
+        # in a command line.
+        group = self.add_mutually_exclusive_group()
+        # Option to rerun tests for the specified number of iterations.
+        group.add_argument('--iterations', nargs='?',
+                           type=_positive_int, const=10, default=0,
+                           metavar='MAX_ITERATIONS', help=ITERATION)
+        group.add_argument('--rerun-until-failure', nargs='?',
+                           type=_positive_int, const=10, default=0,
+                           metavar='MAX_ITERATIONS', help=RERUN_UNTIL_FAILURE)
+        group.add_argument('--retry-any-failure', nargs='?',
+                           type=_positive_int, const=10, default=0,
+                           metavar='MAX_ITERATIONS', help=RETRY_ANY_FAILURE)
+
+        # A group of options for history. They are mutually exclusive
+        # in a command line.
+        history_group = self.add_mutually_exclusive_group()
+        # History related options.
+        history_group.add_argument('--latest-result', action='store_true',
+                                   help=LATEST_RESULT)
+        history_group.add_argument('--history', nargs='?', const='99999',
+                                   help=HISTORY)
+
+        # This arg doesn't actually consume anything; it's primarily used for
+        # the help description and for creating custom_args in the Namespace
+        # object.
+        self.add_argument('--', dest='custom_args', nargs='*',
+                          help='Specify custom args for the test runners. '
+                               'Everything after -- will be consumed as '
+                               'custom args.')
+
+    def get_args(self):
+        """This method is to get args from actions and return optional args.
+
+        Returns:
+            A list of optional arguments.
+        """
+        argument_list = []
+        # The output of _get_optional_actions(), e.g.
+        # [['-t', '--test'], ['--info']], is flattened into
+        # ['-t', '--test', '--info'].
+        for arg in self._get_optional_actions():
+            argument_list.extend(arg.option_strings)
+        return argument_list
+
+
+def print_epilog_text():
+    """Pagination print EPILOG_TEXT.
+
+    Returns:
+        STDOUT from pydoc.pager().
+    """
+    epilog_text = EPILOG_TEMPLATE.format(ALL_ABI=ALL_ABI,
+                                         BUILD=BUILD,
+                                         CLEAR_CACHE=CLEAR_CACHE,
+                                         COLLECT_TESTS_ONLY=COLLECT_TESTS_ONLY,
+                                         DISABLE_TEARDOWN=DISABLE_TEARDOWN,
+                                         DRY_RUN=DRY_RUN,
+                                         ENABLE_FILE_PATTERNS=ENABLE_FILE_PATTERNS,
+                                         HELP_DESC=HELP_DESC,
+                                         HISTORY=HISTORY,
+                                         HOST=HOST,
+                                         INCLUDE_SUBDIRS=INCLUDE_SUBDIRS,
+                                         INFO=INFO,
+                                         INSTALL=INSTALL,
+                                         INSTANT=INSTANT,
+                                         ITERATION=ITERATION,
+                                         LATEST_RESULT=LATEST_RESULT,
+                                         LIST_MODULES=LIST_MODULES,
+                                         REBUILD_MODULE_INFO=REBUILD_MODULE_INFO,
+                                         RERUN_UNTIL_FAILURE=RERUN_UNTIL_FAILURE,
+                                         RETRY_ANY_FAILURE=RETRY_ANY_FAILURE,
+                                         SERIAL=SERIAL,
+                                         SHARDING=SHARDING,
+                                         TEST=TEST,
+                                         TEST_MAPPING=TEST_MAPPING,
+                                         TF_DEBUG=TF_DEBUG,
+                                         TF_TEMPLATE=TF_TEMPLATE,
+                                         USER_TYPE=USER_TYPE,
+                                         UPDATE_CMD_MAPPING=UPDATE_CMD_MAPPING,
+                                         VERBOSE=VERBOSE,
+                                         VERSION=VERSION,
+                                         VERIFY_CMD_MAPPING=VERIFY_CMD_MAPPING,
+                                         WAIT_FOR_DEBUGGER=WAIT_FOR_DEBUGGER)
+    return pydoc.pager(epilog_text)
+
+
+EPILOG_TEMPLATE = r'''ATEST(1)                       ASuite/ATest
+
+NAME
+        atest - {HELP_DESC}
+
+
+SYNOPSIS
+        atest [OPTION]... [TEST_TARGET]... -- [CUSTOM_ARGS]...
+
+
+OPTIONS
+        The arguments below are categorised by feature and purpose. Arguments marked with (default) apply even if the user does not pass them explicitly.
+
+        [ Testing ]
+        -a, --all-abi
+            {ALL_ABI}
+
+        -b, --build
+            {BUILD} (default)
+
+        -d, --disable-teardown
+            {DISABLE_TEARDOWN}
+
+        -D, --tf-debug
+            {TF_DEBUG}
+
+        --history
+            {HISTORY}
+
+        --host
+            {HOST}
+
+        -i, --install
+            {INSTALL} (default)
+
+        -m, --rebuild-module-info
+            {REBUILD_MODULE_INFO} (default)
+
+        -s, --serial
+            {SERIAL}
+
+        --sharding
+            {SHARDING}
+
+        -t, --test
+            {TEST} (default)
+
+        --tf-template
+            {TF_TEMPLATE}
+
+        -w, --wait-for-debugger
+            {WAIT_FOR_DEBUGGER}
+
+
+        [ Test Mapping ]
+        -p, --test-mapping
+            {TEST_MAPPING}
+
+        --include-subdirs
+            {INCLUDE_SUBDIRS}
+
+        --enable-file-patterns
+            {ENABLE_FILE_PATTERNS}
+
+
+        [ Information/Queries ]
+        --collect-tests-only
+            {COLLECT_TESTS_ONLY}
+
+        --info
+            {INFO}
+
+        -L, --list-modules
+            {LIST_MODULES}
+
+        --latest-result
+            {LATEST_RESULT}
+
+        -v, --verbose
+            {VERBOSE}
+
+        -V, --version
+            {VERSION}
+
+
+        [ Dry-Run and Caching ]
+        --dry-run
+            {DRY_RUN}
+
+        -c, --clear-cache
+            {CLEAR_CACHE}
+
+        -u, --update-cmd-mapping
+            {UPDATE_CMD_MAPPING}
+
+        -y, --verify-cmd-mapping
+            {VERIFY_CMD_MAPPING}
+
+
+        [ Module Parameterization ]
+        --instant
+            {INSTANT}
+
+        --user-type
+            {USER_TYPE}
+
+
+        [ Iteration Testing ]
+        --iterations
+            {ITERATION}
+
+        --rerun-until-failure
+            {RERUN_UNTIL_FAILURE}
+
+        --retry-any-failure
+            {RETRY_ANY_FAILURE}
+
+
+EXAMPLES
+    - - - - - - - - -
+    IDENTIFYING TESTS
+    - - - - - - - - -
+
+    The positional argument <tests> should be a reference to one or more of the tests you'd like to run. Multiple tests can be run in one command by separating test references with spaces.
+
+    Usage template: atest <reference_to_test_1> <reference_to_test_2>
+
+    A <reference_to_test> can be satisfied by the test's MODULE NAME, MODULE:CLASS, CLASS NAME, TF INTEGRATION TEST, FILE PATH or PACKAGE NAME. Explanations and examples of each follow.
+
+
+    < MODULE NAME >
+
+        Identifying a test by its module name will run the entire module. Input the name as it appears in the LOCAL_MODULE or LOCAL_PACKAGE_NAME variables in that test's Android.mk or Android.bp file.
+
+        Note: Use < TF INTEGRATION TEST > to run non-module tests integrated directly into TradeFed.
+
+        Examples:
+            atest FrameworksServicesTests
+            atest CtsJankDeviceTestCases
+
+
+    < MODULE:CLASS >
+
+        Identifying a test by its class name will run just the tests in that class and not the whole module. MODULE:CLASS is the preferred way to run a single class. MODULE is the same as described above. CLASS is the name of the test class in the .java file. It can either be the fully qualified class name or just the basic name.
+
+        Examples:
+            atest FrameworksServicesTests:ScreenDecorWindowTests
+            atest FrameworksServicesTests:com.android.server.wm.ScreenDecorWindowTests
+            atest CtsJankDeviceTestCases:CtsDeviceJankUi
+
+
+    < CLASS NAME >
+
+        A single class can also be run by referencing the class name without the module name.
+
+        Examples:
+            atest ScreenDecorWindowTests
+            atest CtsDeviceJankUi
+
+        However, this will take more time than the equivalent MODULE:CLASS reference, so we suggest using a MODULE:CLASS reference whenever possible. Examples below are ordered by performance from the fastest to the slowest:
+
+        Examples:
+            atest FrameworksServicesTests:com.android.server.wm.ScreenDecorWindowTests
+            atest FrameworksServicesTests:ScreenDecorWindowTests
+            atest ScreenDecorWindowTests
+
+    < TF INTEGRATION TEST >
+
+        To run tests that are integrated directly into TradeFed (non-modules), input the name as it appears in the output of the "tradefed.sh list configs" command.
+
+        Examples:
+           atest example/reboot
+           atest native-benchmark
+
+
+    < FILE PATH >
+
+        Both module-based tests and integration-based tests can be run by inputting the path to their test file or dir as appropriate. A single class can also be run by inputting the path to the class's java file.
+
+        Both relative and absolute paths are supported.
+
+        Example - 2 ways to run the `CtsJankDeviceTestCases` module via path:
+        1. Run the module from the android <repo root>:
+            atest cts/tests/jank/jank
+
+        2. Run the module from <repo root>/cts/tests/jank:
+            atest .
+
+        Example - run a specific class within the CtsJankDeviceTestCases module from the <repo root> via path:
+           atest cts/tests/jank/src/android/jank/cts/ui/CtsDeviceJankUi.java
+
+        Example - run an integration test from the <repo root> via path:
+           atest tools/tradefederation/contrib/res/config/example/reboot.xml
+
+
+    < PACKAGE NAME >
+
+        Atest also supports searching for tests by package name.
+
+        Examples:
+           atest com.android.server.wm
+           atest android.jank.cts
+
+
+    - - - - - - - - - - - - - - - - - - - - - - - - - -
+    SPECIFYING INDIVIDUAL STEPS: BUILD, INSTALL OR RUN
+    - - - - - - - - - - - - - - - - - - - - - - - - - -
+
+    The -b, -i and -t options allow you to specify which steps you want to run. If none of those options are given, then all steps are run. If any of these options are provided then only the listed steps are run.
+
+    Note: -i alone is not currently supported and can only be included with -t.
+    Both -b and -t can be run alone.
+
+    Examples:
+        atest -b <test>    (just build targets)
+        atest -t <test>    (run tests only)
+        atest -it <test>   (install apk and run tests)
+        atest -bt <test>   (build targets, run tests, but skip installing apk)
+
+
+    Atest now has the ability to force a test to skip its cleanup/teardown step. Many tests, e.g. CTS, clean up the device after the test is run, so trying to rerun your test with -t will fail without the --disable-teardown parameter. Use -d before -t to skip the test cleanup step and test iteratively.
+
+        atest -d <test>    (disable installing apk and cleaning up device)
+        atest -t <test>
+
+    Note that -t disables both setup/install and teardown/cleanup of the device. So you can continue to rerun your test with just
+
+        atest -t <test>
+
+    as many times as you want.
+
+
+    - - - - - - - - - - - - -
+    RUNNING SPECIFIC METHODS
+    - - - - - - - - - - - - -
+
+    It is possible to run only specific methods within a test class. To run only specific methods, identify the class in any of the ways supported for identifying a class (MODULE:CLASS, FILE PATH, etc.) and then append the name of the method or methods using the following template:
+
+      <reference_to_class>#<method1>
+
+    Multiple methods can be specified with commas:
+
+      <reference_to_class>#<method1>,<method2>,<method3>...
+
+    Examples:
+      atest com.android.server.wm.ScreenDecorWindowTests#testMultipleDecors
+
+      atest FrameworksServicesTests:ScreenDecorWindowTests#testFlagChange,testRemoval
+
+
+    - - - - - - - - - - - - -
+    RUNNING MULTIPLE CLASSES
+    - - - - - - - - - - - - -
+
+    To run multiple classes, separate them with spaces just like you would when running multiple tests. Atest will handle building and running classes in the most efficient way possible, so specifying a subset of classes in a module will improve performance over running the whole module.
+
+
+    Examples:
+    - two classes in same module:
+      atest FrameworksServicesTests:ScreenDecorWindowTests FrameworksServicesTests:DimmerTests
+
+    - two classes, different modules:
+      atest FrameworksServicesTests:ScreenDecorWindowTests CtsJankDeviceTestCases:CtsDeviceJankUi
+
+
+    - - - - - - - - - - -
+    RUNNING NATIVE TESTS
+    - - - - - - - - - - -
+
+    Atest can run native tests.
+
+    Example:
+    - Input tests:
+      atest -a libinput_tests inputflinger_tests
+
+    Use -a|--all-abi to run the tests for all available device architectures, which in this example are armeabi-v7a (ARM 32-bit) and arm64-v8a (ARM 64-bit).
+
+    To select a specific native test to run, use a colon (:) to specify the test name and a hash (#) to further specify an individual method. For example, for the following test definition:
+
+        TEST_F(InputDispatcherTest, InjectInputEvent_ValidatesKeyEvents)
+
+    You can run the entire test using:
+
+        atest inputflinger_tests:InputDispatcherTest
+
+    or an individual test method using:
+
+        atest inputflinger_tests:InputDispatcherTest#InjectInputEvent_ValidatesKeyEvents
+
+
+    - - - - - - - - - - - - - -
+    RUNNING TESTS IN ITERATION
+    - - - - - - - - - - - - - -
+
+    To run tests in iterations, simply pass the --iterations argument. Whether the tests pass or fail, atest won't stop testing until the max iterations are reached.
+
+    Example:
+        atest <test> --iterations    # 10 iterations (by default).
+        atest <test> --iterations 5  # Run <test> 5 times.
+
+    Two approaches assist users in detecting flaky tests:
+
+    1) Run all tests until a failure occurs or the max iteration is reached.
+
+    Example:
+        - 10 iterations (by default).
+        atest <test> --rerun-until-failure
+        - Stop when a failure occurs or the 20th run is reached.
+        atest <test> --rerun-until-failure 20
+
+    2) Run failed tests until passed or the max iteration is reached.
+
+    Example:
+        - 10 iterations (by default).
+        atest <test> --retry-any-failure
+        - Stop when the tests pass or the 20th run is reached.
+        atest <test> --retry-any-failure 20
+
+
+    - - - - - - - - - - - - - - - -
+    REGRESSION DETECTION (obsolete)
+    - - - - - - - - - - - - - - - -
+
+    ********************** Warning **********************
+    Please STOP using the arguments below -- they are obsolete and will be removed in the near future:
+        --detect-regression
+        --generate-baseline
+        --generate-new-metrics
+
+    Please check out RUNNING TESTS IN ITERATION for alternatives.
+    ******************************************************
+
+    Generate pre-patch or post-patch metrics without running regression detection:
+
+    Example:
+        atest <test> --generate-baseline <optional iter>
+        atest <test> --generate-new-metrics <optional iter>
+
+    Local regression detection can be run in three options:
+
+    1) Provide a folder containing baseline (pre-patch) metrics (generated previously). Atest will run the tests for n (default 5) iterations, generate a new set of post-patch metrics, and compare those against the existing metrics.
+
+    Example:
+        atest <test> --detect-regression </path/to/baseline> --generate-new-metrics <optional iter>
+
+    2) Provide a folder containing post-patch metrics (generated previously). Atest will run the tests for n (default 5) iterations, generate a new set of pre-patch metrics, and compare them against those provided. Note: the developer needs to revert the device/tests to the pre-patch state to generate baseline metrics.
+
+    Example:
+        atest <test> --detect-regression </path/to/new> --generate-baseline <optional iter>
+
+    3) Provide 2 folders containing both pre-patch and post-patch metrics. Atest will run no tests but the regression detection algorithm.
+
+    Example:
+        atest --detect-regression </path/to/baseline> </path/to/new>
+
+
+    - - - - - - - - - - - -
+    TESTS IN TEST MAPPING
+    - - - - - - - - - - - -
+
+    Atest can run tests in TEST_MAPPING files:
+
+    1) Run presubmit tests in TEST_MAPPING files in current and parent
+       directories. You can also specify a target directory.
+
+    Example:
+        atest  (run presubmit tests in TEST_MAPPING files in current and parent directories)
+        atest --test-mapping </path/to/project>
+               (run presubmit tests in TEST_MAPPING files in </path/to/project> and its parent directories)
+
+    2) Run a specified test group in TEST_MAPPING files.
+
+    Example:
+        atest :postsubmit
+              (run postsubmit tests in TEST_MAPPING files in current and parent directories)
+        atest :all
+              (Run tests from all groups in TEST_MAPPING files)
+        atest --test-mapping </path/to/project>:postsubmit
+              (run postsubmit tests in TEST_MAPPING files in </path/to/project> and its parent directories)
+
+    3) Run tests in TEST_MAPPING files including sub-directories
+
+    By default, atest will only search for tests in TEST_MAPPING files in the current (or given) directory and its parent directories. If you want to run tests in TEST_MAPPING files in the sub-directories, use the --include-subdirs option to force atest to include those tests too.
+
+    Example:
+        atest --include-subdirs [optional </path/to/project>:<test_group_name>]
+              (run presubmit tests in TEST_MAPPING files in current, sub and parent directories)
+    A path can optionally be provided if you want to search for tests in a given directory, along with an optional test group name. By default, the test group is presubmit.
+
+
+    - - - - - - - - - - - - - -
+    ADDITIONAL ARGS TO TRADEFED
+    - - - - - - - - - - - - - -
+
+    When trying to pass custom arguments for the test runners, everything after '--'
+    will be consumed as custom args.
+
+    Example:
+        atest -v <test> -- <custom_args1> <custom_args2>
+
+
+                                                     2019-12-19
+'''
diff --git a/atest-py2/atest_arg_parser_unittest.py b/atest-py2/atest_arg_parser_unittest.py
new file mode 100755
index 0000000..fd4c321
--- /dev/null
+++ b/atest-py2/atest_arg_parser_unittest.py
@@ -0,0 +1,41 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for atest_arg_parser."""
+
+import unittest
+
+import atest_arg_parser
+
+
+class AtestArgParserUnittests(unittest.TestCase):
+    """Unit tests for atest_arg_parser.py"""
+
+    def test_get_args(self):
+        """Test get_args(): flatten a nested list. """
+        parser = atest_arg_parser.AtestArgParser()
+        parser.add_argument('-t', '--test', help='Run the tests.')
+        parser.add_argument('-b', '--build', help='Run a build.')
+        parser.add_argument('--generate-baseline', help='Generate a baseline.')
+        # list.sort() returns None, so use sorted() for the comparison.
+        # -h/--help is not expected here: the parser is created with
+        # add_help=False and add_atest_args() (which registers --help) is
+        # not called in this test.
+        test_args = sorted(['-t', '--test',
+                            '-b', '--build',
+                            '--generate-baseline'])
+        self.assertEqual(test_args, sorted(parser.get_args()))
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/atest_completion.sh b/atest-py2/atest_completion.sh
new file mode 100644
index 0000000..3ac8e0d
--- /dev/null
+++ b/atest-py2/atest_completion.sh
@@ -0,0 +1,159 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+ATEST_REL_DIR="tools/tradefederation/core/atest"
+
+_fetch_testable_modules() {
+    [[ -z $ANDROID_BUILD_TOP ]] && return 0
+    export ATEST_DIR="$ANDROID_BUILD_TOP/$ATEST_REL_DIR"
+    $PYTHON - << END
+import os
+import pickle
+import sys
+
+sys.path.append(os.getenv('ATEST_DIR'))
+import constants
+
+if os.path.isfile(constants.MODULE_INDEX):
+    with open(constants.MODULE_INDEX, 'rb') as cache:
+        try:
+            print("\n".join(pickle.load(cache, encoding="utf-8")))
+        except TypeError:
+            # Py2's pickle.load() does not accept an encoding argument.
+            print("\n".join(pickle.load(cache)))
+else:
+    print("")
+END
+    unset ATEST_DIR
+}
+
+# This function invokes get_args() and returns each item
+# of the list as tab completion candidates.
+_fetch_atest_args() {
+    [[ -z $ANDROID_BUILD_TOP ]] && return 0
+    export ATEST_DIR="$ANDROID_BUILD_TOP/$ATEST_REL_DIR"
+    $PYTHON - << END
+import os
+import sys
+
+atest_dir = os.path.join(os.getenv('ATEST_DIR'))
+sys.path.append(atest_dir)
+
+import atest_arg_parser
+
+parser = atest_arg_parser.AtestArgParser()
+parser.add_atest_args()
+print("\n".join(parser.get_args()))
+END
+    unset ATEST_DIR
+}
+
+# This function returns devices recognised by adb.
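+# For example, an "emulator-5554  device" line from `adb devices` yields
+# "emulator-5554".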
+_fetch_adb_devices() {
+    while read dev; do echo $dev | awk '{print $1}'; done < <(adb devices | egrep -v "^List|^$"||true)
+}
+
+# This function returns all paths that contain a TEST_MAPPING file.
+_fetch_test_mapping_files() {
+    [[ -z $ANDROID_BUILD_TOP ]] && return 0
+    find -maxdepth 5 -type f -name TEST_MAPPING |sed 's/^.\///g'| xargs dirname 2>/dev/null
+}
+
+# The main tab completion function.
+_atest() {
+    # Completion is not supported on Darwin since its bash version
+    # is too old to fully support the required built-in commands/functions
+    # such as compopt, _get_comp_words_by_ref and __ltrim_colon_completions.
+    [[ "$(uname -s)" == "Darwin" ]] && return 0
+
+    local cur prev
+    COMPREPLY=()
+    cur="${COMP_WORDS[COMP_CWORD]}"
+    prev="${COMP_WORDS[COMP_CWORD-1]}"
+    _get_comp_words_by_ref -n : cur prev || true
+
+    case "$cur" in
+        -*)
+            COMPREPLY=($(compgen -W "$(_fetch_atest_args)" -- $cur))
+            ;;
+        */*)
+            ;;
+        *)
+            local candidate_args=$(ls; _fetch_testable_modules)
+            COMPREPLY=($(compgen -W "$candidate_args" -- $cur))
+            ;;
+    esac
+
+    case "$prev" in
+        --iterations|--rerun-until-failure|--retry-any-failure)
+            COMPREPLY=(10) ;;
+        --list-modules|-L)
+            # TODO: Generate the list automatically when the API is available.
+            COMPREPLY=($(compgen -W "cts vts" -- $cur)) ;;
+        --serial|-s)
+            local adb_devices="$(_fetch_adb_devices)"
+            if [ -n "$adb_devices" ]; then
+                COMPREPLY=($(compgen -W "$(_fetch_adb_devices)" -- $cur))
+            else
+                # Don't complete files/dirs when there are no devices.
+                compopt -o nospace
+                COMPREPLY=("")
+            fi ;;
+        --test-mapping|-p)
+            local mapping_files="$(_fetch_test_mapping_files)"
+            if [ -n "$mapping_files" ]; then
+                COMPREPLY=($(compgen -W "$mapping_files" -- $cur))
+            else
+                # Don't complete files/dirs when no TEST_MAPPING was found.
+                compopt -o nospace
+                COMPREPLY=("")
+            fi ;;
+    esac
+    __ltrim_colon_completions "$cur" "$prev" || true
+    return 0
+}
+
+function _atest_main() {
+    # Only use this in interactive mode.
+    # Warning: the check below must use "return", not "exit". "exit" won't
+    # break the build in an interactive shell (e.g. a VM), but will result in
+    # build breakage in a non-interactive shell (e.g. a docker container);
+    # "return" works in both conditions.
+    [[ ! $- =~ 'i' ]] && return 0
+
+    # Use Py2 as the default interpreter. This script aims to be
+    # compatible with both Py2 and Py3.
+    if [ -x "$(which python)" ]; then
+        PYTHON=$(which python)
+    elif [ -x "$(which python3)" ]; then
+        PYTHON=$(which python3)
+    else
+        PYTHON="/usr/bin/env python"
+    fi
+
+    # Complete file/dir names first by using the "nosort" option.
+    # BASH versions <= 4.3 don't have the nosort option.
+    # Note that nosort has no effect in zsh.
+    local _atest_comp_options="-o default -o nosort"
+    local _atest_executables=(atest atest-dev atest-src)
+    for exec in "${_atest_executables[*]}"; do
+        complete -F _atest $_atest_comp_options $exec 2>/dev/null || \
+        complete -F _atest -o default $exec
+    done
+
+    # Alias atest-src for the convenience of debugging.
+    local atest_src="$(gettop)/$ATEST_REL_DIR/atest.py"
+    [[ -f "$atest_src" ]] && alias atest-src="$atest_src"
+}
+
+_atest_main
diff --git a/atest-py2/atest_decorator.py b/atest-py2/atest_decorator.py
new file mode 100644
index 0000000..6f171df
--- /dev/null
+++ b/atest-py2/atest_decorator.py
@@ -0,0 +1,33 @@
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+ATest decorator.
+"""
+
+def static_var(varname, value):
+    """Decorator to cache static variable.
+
+    Args:
+        varname: Variable name you want to use.
+        value: Variable value.
+
+    Returns: decorator function.
+    """
+
+    def fun_var_decorate(func):
+        """Set the static variable in a function."""
+        setattr(func, varname, value)
+        return func
+    return fun_var_decorate
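+
+# Illustrative usage (an assumed example, not part of the original change):
+# the attribute set on the function persists across calls, acting as a cache.
+#
+#   @static_var('cached_value', None)
+#   def get_value():
+#       if get_value.cached_value is None:
+#           get_value.cached_value = expensive_lookup()  # hypothetical helper
+#       return get_value.cached_value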
diff --git a/atest-py2/atest_enum.py b/atest-py2/atest_enum.py
new file mode 100644
index 0000000..f4fb656
--- /dev/null
+++ b/atest-py2/atest_enum.py
@@ -0,0 +1,21 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Atest custom enum class.
+"""
+
+class AtestEnum(tuple):
+    """enum library isn't a Python 2.7 built-in, so roll our own."""
+    __getattr__ = tuple.index
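+
+# Illustrative usage (an assumed example, not part of the original change):
+#   STEPS = AtestEnum(['BUILD', 'INSTALL', 'TEST'])
+#   STEPS.BUILD == 0, STEPS.INSTALL == 1, STEPS.TEST == 2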
diff --git a/atest-py2/atest_error.py b/atest-py2/atest_error.py
new file mode 100644
index 0000000..7ab8b5f
--- /dev/null
+++ b/atest-py2/atest_error.py
@@ -0,0 +1,66 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+atest exceptions.
+"""
+
+
+class UnsupportedModuleTestError(Exception):
+    """Error raised when we find a module that we don't support."""
+
+class TestDiscoveryException(Exception):
+    """Base Exception for issues with test discovery."""
+
+class NoTestFoundError(TestDiscoveryException):
+    """Raised when no tests are found."""
+
+class TestWithNoModuleError(TestDiscoveryException):
+    """Raised when test files have no parent module directory."""
+
+class MissingPackageNameError(TestDiscoveryException):
+    """Raised when the test class java file does not contain a package name."""
+
+class TooManyMethodsError(TestDiscoveryException):
+    """Raised when input string contains more than one # character."""
+
+class MethodWithoutClassError(TestDiscoveryException):
+    """Raised when method is appended via # but no class file specified."""
+
+class UnknownTestRunnerError(Exception):
+    """Raised when an unknown test runner is specified."""
+
+class NoTestRunnerName(Exception):
+    """Raised when Test Runner class var NAME isn't defined."""
+
+class NoTestRunnerExecutable(Exception):
+    """Raised when Test Runner class var EXECUTABLE isn't defined."""
+
+class HostEnvCheckFailed(Exception):
+    """Raised when Test Runner's host env check fails."""
+
+class ShouldNeverBeCalledError(Exception):
+    """Raised when something is called when it shouldn't, used for testing."""
+
+class FatalIncludeError(TestDiscoveryException):
+    """Raised if expanding include tag fails."""
+
+class MissingCCTestCaseError(TestDiscoveryException):
+    """Raised when the cc file does not contain a test case class."""
+
+class XmlNotExistError(TestDiscoveryException):
+    """Raised when the xml file does not exist."""
+
+class DryRunVerificationError(Exception):
+    """Base Exception if verification fail."""
diff --git a/atest-py2/atest_execution_info.py b/atest-py2/atest_execution_info.py
new file mode 100644
index 0000000..0c67e19
--- /dev/null
+++ b/atest-py2/atest_execution_info.py
@@ -0,0 +1,329 @@
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+ATest execution info generator.
+"""
+
+from __future__ import print_function
+
+import glob
+import json
+import logging
+import os
+import sys
+
+import atest_utils as au
+import constants
+
+from metrics import metrics_utils
+
+_ARGS_KEY = 'args'
+_STATUS_PASSED_KEY = 'PASSED'
+_STATUS_FAILED_KEY = 'FAILED'
+_STATUS_IGNORED_KEY = 'IGNORED'
+_SUMMARY_KEY = 'summary'
+_TOTAL_SUMMARY_KEY = 'total_summary'
+_TEST_RUNNER_KEY = 'test_runner'
+_TEST_NAME_KEY = 'test_name'
+_TEST_TIME_KEY = 'test_time'
+_TEST_DETAILS_KEY = 'details'
+_TEST_RESULT_NAME = 'test_result'
+_EXIT_CODE_ATTR = 'EXIT_CODE'
+_MAIN_MODULE_KEY = '__main__'
+_UUID_LEN = 30
+_RESULT_LEN = 35
+_COMMAND_LEN = 50
+_LOGCAT_FMT = '{}/log/invocation_*/{}*logcat-on-failure*'
+
+_SUMMARY_MAP_TEMPLATE = {_STATUS_PASSED_KEY : 0,
+                         _STATUS_FAILED_KEY : 0,
+                         _STATUS_IGNORED_KEY : 0,}
+
+PREPARE_END_TIME = None
+
+
+def preparation_time(start_time):
+    """Return the preparation time.
+
+    Args:
+        start_time: The start time of the test run.
+
+    Returns:
+        The preparation time if PREPARE_END_TIME is set, None otherwise.
+    """
+    return PREPARE_END_TIME - start_time if PREPARE_END_TIME else None
+
+
+def symlink_latest_result(test_result_dir):
+    """Make the symbolic link to latest result.
+
+    Args:
+        test_result_dir: A string of the dir path.
+    """
+    symlink = os.path.join(constants.ATEST_RESULT_ROOT, 'LATEST')
+    if os.path.exists(symlink) or os.path.islink(symlink):
+        os.remove(symlink)
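+    # Re-link so <ATEST_RESULT_ROOT>/LATEST always points at the newest run.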
+    os.symlink(test_result_dir, symlink)
+
+
+def print_test_result(root, history_arg):
+    """Make a list of latest n test result.
+
+    Args:
+        root: A string of the test result root path.
+        history_arg: A string of an integer or uuid. If it's an integer string,
+                     the number of lines of test result will be given; else it
+                     will be treated a uuid and print test result accordingly
+                     in detail.
+    """
+    if not history_arg.isdigit():
+        path = os.path.join(constants.ATEST_RESULT_ROOT, history_arg,
+                            'test_result')
+        print_test_result_by_path(path)
+        return
+    target = '%s/20*_*_*' % root
+    paths = glob.glob(target)
+    paths.sort(reverse=True)
+    print('{:-^{uuid_len}} {:-^{result_len}} {:-^{command_len}}'
+          .format('uuid', 'result', 'command',
+                  uuid_len=_UUID_LEN,
+                  result_len=_RESULT_LEN,
+                  command_len=_COMMAND_LEN))
+    for path in paths[0: int(history_arg)+1]:
+        result_path = os.path.join(path, 'test_result')
+        if os.path.isfile(result_path):
+            try:
+                with open(result_path) as json_file:
+                    result = json.load(json_file)
+                    total_summary = result.get(_TOTAL_SUMMARY_KEY, {})
+                    summary_str = ', '.join([k+':'+str(v)
+                                             for k, v in total_summary.items()])
+                    print('{:<{uuid_len}} {:<{result_len}} {:<{command_len}}'
+                          .format(os.path.basename(path),
+                                  summary_str,
+                                  'atest '+result.get(_ARGS_KEY, ''),
+                                  uuid_len=_UUID_LEN,
+                                  result_len=_RESULT_LEN,
+                                  command_len=_COMMAND_LEN))
+            except ValueError:
+                pass
+
+
+def print_test_result_by_path(path):
+    """Print latest test result.
+
+    Args:
+        path: A string of test result path.
+    """
+    if os.path.isfile(path):
+        with open(path) as json_file:
+            result = json.load(json_file)
+            print("\natest {}".format(result.get(_ARGS_KEY, '')))
+            print('\nTotal Summary:\n--------------')
+            total_summary = result.get(_TOTAL_SUMMARY_KEY, {})
+            print(', '.join([(k+':'+str(v))
+                             for k, v in total_summary.items()]))
+            fail_num = total_summary.get(_STATUS_FAILED_KEY, 0)
+            if fail_num > 0:
+                message = '%d test failed' % fail_num
+                print('\n')
+                print(au.colorize(message, constants.RED))
+                print('-' * len(message))
+                test_runner = result.get(_TEST_RUNNER_KEY, {})
+                for runner_name in test_runner.keys():
+                    test_dict = test_runner.get(runner_name, {})
+                    for test_name in test_dict:
+                        test_details = test_dict.get(test_name, {})
+                        for fail in test_details.get(_STATUS_FAILED_KEY):
+                            print(au.colorize('{}'.format(
+                                fail.get(_TEST_NAME_KEY)), constants.RED))
+                            failure_files = glob.glob(_LOGCAT_FMT.format(
+                                os.path.dirname(path), fail.get(_TEST_NAME_KEY)
+                                ))
+                            if failure_files:
+                                print('{} {}'.format(
+                                    au.colorize('LOGCAT-ON-FAILURES:',
+                                                constants.CYAN),
+                                    failure_files[0]))
+                            print('{} {}'.format(
+                                au.colorize('STACKTRACE:\n', constants.CYAN),
+                                fail.get(_TEST_DETAILS_KEY)))
+
+
+def has_non_test_options(args):
+    """
+    Check whether there are non-test options in the args.
+
+    Args:
+        args: An argparse.Namespace class instance holding parsed args.
+
+    Returns:
+        True, if args has at least one non-test option.
+        False, otherwise.
+    """
+    return (args.collect_tests_only
+            or args.dry_run
+            or args.help
+            or args.history
+            or args.info
+            or args.version
+            or args.latest_result)
+
+
+class AtestExecutionInfo(object):
+    """Class that stores the whole test progress information in JSON format.
+
+    ----
+    For example, running command
+        atest hello_world_test HelloWorldTest
+
+    will result in storing the execution detail in JSON:
+    {
+      "args": "hello_world_test HelloWorldTest",
+      "test_runner": {
+          "AtestTradefedTestRunner": {
+              "hello_world_test": {
+                  "FAILED": [
+                      {"test_time": "(5ms)",
+                       "details": "Hello, Wor...",
+                       "test_name": "HelloWorldTest#PrintHelloWorld"}
+                      ],
+                  "summary": {"FAILED": 1, "PASSED": 0, "IGNORED": 0}
+              },
+              "HelloWorldTests": {
+                  "PASSED": [
+                      {"test_time": "(27ms)",
+                       "details": null,
+                       "test_name": "...HelloWorldTest#testHalloWelt"},
+                      {"test_time": "(1ms)",
+                       "details": null,
+                       "test_name": "....HelloWorldTest#testHelloWorld"}
+                      ],
+                  "summary": {"FAILED": 0, "PASSED": 2, "IGNORED": 0}
+              }
+          }
+      },
+      "total_summary": {"FAILED": 1, "PASSED": 2, "IGNORED": 0}
+    }
+    """
+
+    result_reporters = []
+
+    def __init__(self, args, work_dir, args_ns):
+        """Initialise an AtestExecutionInfo instance.
+
+        Args:
+            args: Command line parameters.
+            work_dir: The directory for saving information.
+            args_ns: An argparse.Namespace class instance holding parsed args.
+        """
+        self.args = args
+        self.work_dir = work_dir
+        self.result_file = None
+        self.args_ns = args_ns
+
+    def __enter__(self):
+        """Create and return information file object."""
+        full_file_name = os.path.join(self.work_dir, _TEST_RESULT_NAME)
+        try:
+            self.result_file = open(full_file_name, 'w')
+        except IOError:
+            logging.error('Cannot open file %s', full_file_name)
+        return self.result_file
+
+    def __exit__(self, exit_type, value, traceback):
+        """Write execution information and close information file."""
+        if self.result_file:
+            self.result_file.write(AtestExecutionInfo.
+                                   _generate_execution_detail(self.args))
+            self.result_file.close()
+            if not has_non_test_options(self.args_ns):
+                symlink_latest_result(self.work_dir)
+        main_module = sys.modules.get(_MAIN_MODULE_KEY)
+        main_exit_code = getattr(main_module, _EXIT_CODE_ATTR,
+                                 constants.EXIT_CODE_ERROR)
+        if main_exit_code == constants.EXIT_CODE_SUCCESS:
+            metrics_utils.send_exit_event(main_exit_code)
+        else:
+            metrics_utils.handle_exc_and_send_exit_event(main_exit_code)
+
+    @staticmethod
+    def _generate_execution_detail(args):
+        """Generate execution detail.
+
+        Args:
+            args: Command line parameters that you want to save.
+
+        Returns:
+            A json format string.
+        """
+        info_dict = {_ARGS_KEY: ' '.join(args)}
+        try:
+            AtestExecutionInfo._arrange_test_result(
+                info_dict,
+                AtestExecutionInfo.result_reporters)
+            return json.dumps(info_dict)
+        except ValueError as err:
+            logging.warning('Parsing test result failed due to: %s', err)
+            # Fall back to whatever has been collected so that the result
+            # file still contains valid JSON.
+            return json.dumps(info_dict)
+
+    @staticmethod
+    def _arrange_test_result(info_dict, reporters):
+        """Append test result information in given dict.
+
+        Arrange test information to below
+        "test_runner": {
+            "test runner name": {
+                "test name": {
+                    "FAILED": [
+                        {"test time": "",
+                         "details": "",
+                         "test name": ""}
+                    ],
+                "summary": {"FAILED": 0, "PASSED": 0, "IGNORED": 0}
+                },
+            },
+        "total_summary": {"FAILED": 0, "PASSED": 0, "IGNORED": 0}
+
+        Args:
+            info_dict: A dict you want to add result information in.
+            reporters: A list of result_reporter.
+
+        Returns:
+            A dict contains test result information data.
+        """
+        info_dict[_TEST_RUNNER_KEY] = {}
+        for reporter in reporters:
+            for test in reporter.all_test_results:
+                runner = info_dict[_TEST_RUNNER_KEY].setdefault(test.runner_name, {})
+                group = runner.setdefault(test.group_name, {})
+                result_dict = {_TEST_NAME_KEY : test.test_name,
+                               _TEST_TIME_KEY : test.test_time,
+                               _TEST_DETAILS_KEY : test.details}
+                group.setdefault(test.status, []).append(result_dict)
+
+        total_test_group_summary = _SUMMARY_MAP_TEMPLATE.copy()
+        for runner in info_dict[_TEST_RUNNER_KEY]:
+            for group in info_dict[_TEST_RUNNER_KEY][runner]:
+                group_summary = _SUMMARY_MAP_TEMPLATE.copy()
+                for status in info_dict[_TEST_RUNNER_KEY][runner][group]:
+                    count = len(info_dict[_TEST_RUNNER_KEY][runner][group][status])
+                    if status in _SUMMARY_MAP_TEMPLATE:
+                        group_summary[status] = count
+                        total_test_group_summary[status] += count
+                info_dict[_TEST_RUNNER_KEY][runner][group][_SUMMARY_KEY] = group_summary
+        info_dict[_TOTAL_SUMMARY_KEY] = total_test_group_summary
+        return info_dict
diff --git a/atest-py2/atest_execution_info_unittest.py b/atest-py2/atest_execution_info_unittest.py
new file mode 100755
index 0000000..f638f82
--- /dev/null
+++ b/atest-py2/atest_execution_info_unittest.py
@@ -0,0 +1,164 @@
+#!/usr/bin/env python
+#
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittest for atest_execution_info."""
+
+import time
+import unittest
+
+from test_runners import test_runner_base
+import atest_execution_info as aei
+import result_reporter
+
+RESULT_TEST_TEMPLATE = test_runner_base.TestResult(
+    runner_name='someRunner',
+    group_name='someModule',
+    test_name='someClassName#someTestName',
+    status=test_runner_base.PASSED_STATUS,
+    details=None,
+    test_count=1,
+    test_time='(10ms)',
+    runner_total=None,
+    group_total=2,
+    additional_info={},
+    test_run_name='com.android.UnitTests'
+)
+
+# pylint: disable=protected-access
+class AtestRunInfoUnittests(unittest.TestCase):
+    """Unit tests for atest_execution_info.py"""
+
+    def test_arrange_test_result_one_module(self):
+        """Test _arrange_test_result method with only one module."""
+        pass_1 = self._create_test_result(status=test_runner_base.PASSED_STATUS)
+        pass_2 = self._create_test_result(status=test_runner_base.PASSED_STATUS)
+        pass_3 = self._create_test_result(status=test_runner_base.PASSED_STATUS)
+        fail_1 = self._create_test_result(status=test_runner_base.FAILED_STATUS)
+        fail_2 = self._create_test_result(status=test_runner_base.FAILED_STATUS)
+        ignore_1 = self._create_test_result(status=test_runner_base.IGNORED_STATUS)
+        reporter_1 = result_reporter.ResultReporter()
+        reporter_1.all_test_results.extend([pass_1, pass_2, pass_3])
+        reporter_2 = result_reporter.ResultReporter()
+        reporter_2.all_test_results.extend([fail_1, fail_2, ignore_1])
+        info_dict = {}
+        aei.AtestExecutionInfo._arrange_test_result(info_dict, [reporter_1, reporter_2])
+        expect_summary = {aei._STATUS_IGNORED_KEY : 1,
+                          aei._STATUS_FAILED_KEY : 2,
+                          aei._STATUS_PASSED_KEY : 3}
+        self.assertEqual(expect_summary, info_dict[aei._TOTAL_SUMMARY_KEY])
+
+    def test_arrange_test_result_multi_module(self):
+        """Test _arrange_test_result method with multi module."""
+        group_a_pass_1 = self._create_test_result(group_name='group_a',
+                                                  status=test_runner_base.PASSED_STATUS)
+        group_b_pass_1 = self._create_test_result(group_name='group_b',
+                                                  status=test_runner_base.PASSED_STATUS)
+        group_c_pass_1 = self._create_test_result(group_name='group_c',
+                                                  status=test_runner_base.PASSED_STATUS)
+        group_b_fail_1 = self._create_test_result(group_name='group_b',
+                                                  status=test_runner_base.FAILED_STATUS)
+        group_c_fail_1 = self._create_test_result(group_name='group_c',
+                                                  status=test_runner_base.FAILED_STATUS)
+        group_c_ignore_1 = self._create_test_result(group_name='group_c',
+                                                    status=test_runner_base.IGNORED_STATUS)
+        reporter_1 = result_reporter.ResultReporter()
+        reporter_1.all_test_results.extend([group_a_pass_1, group_b_pass_1, group_c_pass_1])
+        reporter_2 = result_reporter.ResultReporter()
+        reporter_2.all_test_results.extend([group_b_fail_1, group_c_fail_1, group_c_ignore_1])
+
+        info_dict = {}
+        aei.AtestExecutionInfo._arrange_test_result(info_dict, [reporter_1, reporter_2])
+        expect_group_a_summary = {aei._STATUS_IGNORED_KEY : 0,
+                                  aei._STATUS_FAILED_KEY : 0,
+                                  aei._STATUS_PASSED_KEY : 1}
+        self.assertEqual(
+            expect_group_a_summary,
+            info_dict[aei._TEST_RUNNER_KEY]['someRunner']['group_a'][aei._SUMMARY_KEY])
+
+        expect_group_b_summary = {aei._STATUS_IGNORED_KEY : 0,
+                                  aei._STATUS_FAILED_KEY : 1,
+                                  aei._STATUS_PASSED_KEY : 1}
+        self.assertEqual(
+            expect_group_b_summary,
+            info_dict[aei._TEST_RUNNER_KEY]['someRunner']['group_b'][aei._SUMMARY_KEY])
+
+        expect_group_c_summary = {aei._STATUS_IGNORED_KEY : 1,
+                                  aei._STATUS_FAILED_KEY : 1,
+                                  aei._STATUS_PASSED_KEY : 1}
+        self.assertEqual(
+            expect_group_c_summary,
+            info_dict[aei._TEST_RUNNER_KEY]['someRunner']['group_c'][aei._SUMMARY_KEY])
+
+        expect_total_summary = {aei._STATUS_IGNORED_KEY : 1,
+                                aei._STATUS_FAILED_KEY : 2,
+                                aei._STATUS_PASSED_KEY : 3}
+        self.assertEqual(expect_total_summary, info_dict[aei._TOTAL_SUMMARY_KEY])
+
+    def test_preparation_time(self):
+        """Test preparation_time method."""
+        start_time = time.time()
+        aei.PREPARE_END_TIME = None
+        self.assertTrue(aei.preparation_time(start_time) is None)
+        aei.PREPARE_END_TIME = time.time()
+        self.assertFalse(aei.preparation_time(start_time) is None)
+
+    def test_arrange_test_result_multi_runner(self):
+        """Test _arrange_test_result method with multi runner."""
+        runner_a_pass_1 = self._create_test_result(runner_name='runner_a',
+                                                   status=test_runner_base.PASSED_STATUS)
+        runner_a_pass_2 = self._create_test_result(runner_name='runner_a',
+                                                   status=test_runner_base.PASSED_STATUS)
+        runner_a_pass_3 = self._create_test_result(runner_name='runner_a',
+                                                   status=test_runner_base.PASSED_STATUS)
+        runner_b_fail_1 = self._create_test_result(runner_name='runner_b',
+                                                   status=test_runner_base.FAILED_STATUS)
+        runner_b_fail_2 = self._create_test_result(runner_name='runner_b',
+                                                   status=test_runner_base.FAILED_STATUS)
+        runner_b_ignore_1 = self._create_test_result(runner_name='runner_b',
+                                                     status=test_runner_base.IGNORED_STATUS)
+
+        reporter_1 = result_reporter.ResultReporter()
+        reporter_1.all_test_results.extend([runner_a_pass_1, runner_a_pass_2, runner_a_pass_3])
+        reporter_2 = result_reporter.ResultReporter()
+        reporter_2.all_test_results.extend([runner_b_fail_1, runner_b_fail_2, runner_b_ignore_1])
+        info_dict = {}
+        aei.AtestExecutionInfo._arrange_test_result(info_dict, [reporter_1, reporter_2])
+        expect_group_a_summary = {aei._STATUS_IGNORED_KEY : 0,
+                                  aei._STATUS_FAILED_KEY : 0,
+                                  aei._STATUS_PASSED_KEY : 3}
+        self.assertEqual(
+            expect_group_a_summary,
+            info_dict[aei._TEST_RUNNER_KEY]['runner_a']['someModule'][aei._SUMMARY_KEY])
+
+        expect_group_b_summary = {aei._STATUS_IGNORED_KEY : 1,
+                                  aei._STATUS_FAILED_KEY : 2,
+                                  aei._STATUS_PASSED_KEY : 0}
+        self.assertEqual(
+            expect_group_b_summary,
+            info_dict[aei._TEST_RUNNER_KEY]['runner_b']['someModule'][aei._SUMMARY_KEY])
+
+        expect_total_summary = {aei._STATUS_IGNORED_KEY : 1,
+                                aei._STATUS_FAILED_KEY : 2,
+                                aei._STATUS_PASSED_KEY : 3}
+        self.assertEqual(expect_total_summary, info_dict[aei._TOTAL_SUMMARY_KEY])
+
+    def _create_test_result(self, **kwargs):
+        """A Helper to create TestResult"""
+        test_info = test_runner_base.TestResult(**RESULT_TEST_TEMPLATE._asdict())
+        return test_info._replace(**kwargs)
+
+if __name__ == "__main__":
+    unittest.main()
diff --git a/atest-py2/atest_integration_tests.py b/atest-py2/atest_integration_tests.py
new file mode 100755
index 0000000..3287c1b
--- /dev/null
+++ b/atest-py2/atest_integration_tests.py
@@ -0,0 +1,153 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+ATest Integration Test Class.
+
+The purpose is to prevent CLs with potential side-effects from breaking ATest
+at an early stage.
+
+It forks a subprocess that runs ATest commands to validate that all the test
+finding and running logic of the python code passes, and waits for TF to exit
+properly.
+    - When running ROBOLECTRIC tests, it runs without TF and exits the
+    subprocess with the message "All tests passed".
+    - If it FAILs, it means something breaks ATest unexpectedly!
+"""
+
+from __future__ import print_function
+
+import os
+import subprocess
+import sys
+import tempfile
+import time
+import unittest
+
+_TEST_RUN_DIR_PREFIX = 'atest_integration_tests_%s_'
+_LOG_FILE = 'integration_tests.log'
+_FAILED_LINE_LIMIT = 50
+_INTEGRATION_TESTS = 'INTEGRATION_TESTS'
+_EXIT_TEST_FAILED = 1
+
+
+class ATestIntegrationTest(unittest.TestCase):
+    """ATest Integration Test Class."""
+    NAME = 'ATestIntegrationTest'
+    EXECUTABLE = 'atest'
+    OPTIONS = ''
+    _RUN_CMD = '{exe} {options} {test}'
+    _PASSED_CRITERIA = ['will be rescheduled', 'All tests passed']
+
+    def setUp(self):
+        """Set up stuff for testing."""
+        self.full_env_vars = os.environ.copy()
+        self.test_passed = False
+        self.log = []
+
+    def run_test(self, testcase):
+        """Create a subprocess to execute the test command.
+
+        Strategy:
+            Fork a subprocess to wait for TF exit properly, and log the error
+            if the exit code isn't 0.
+
+        Args:
+            testcase: A string of testcase name.
+        """
+        run_cmd_dict = {'exe': self.EXECUTABLE, 'options': self.OPTIONS,
+                        'test': testcase}
+        run_command = self._RUN_CMD.format(**run_cmd_dict)
+        try:
+            subprocess.check_output(run_command,
+                                    stderr=subprocess.PIPE,
+                                    env=self.full_env_vars,
+                                    shell=True)
+        except subprocess.CalledProcessError as e:
+            self.log.append(e.output)
+            return False
+        return True
+
+    def get_failed_log(self):
+        """Get a trimmed failed log.
+
+        Strategy:
+            To avoid showing unnecessary logs, such as the build log, trim
+            the failure log down to the lines that carry the most important
+            information.
+
+        Returns:
+            A trimmed failed log.
+        """
+        failed_log = '\n'.join(filter(None, self.log[-_FAILED_LINE_LIMIT:]))
+        return failed_log
+
+
+def create_test_method(testcase, log_path):
+    """Create a test method according to the testcase.
+
+    Args:
+        testcase: A testcase name.
+        log_path: A file path for storing the test result.
+
+    Returns:
+        A test function name and the created test method.
+    """
+    test_function_name = 'test_%s' % testcase.replace(' ', '_')
+    # pylint: disable=missing-docstring
+    def template_test_method(self):
+        self.test_passed = self.run_test(testcase)
+        with open(log_path, 'a') as log_file:
+            log_file.write('\n'.join(self.log))
+        failed_message = 'Running command: %s failed.\n' % testcase
+        failed_message += '' if self.test_passed else self.get_failed_log()
+        self.assertTrue(self.test_passed, failed_message)
+    return test_function_name, template_test_method
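+
+
+def _example_attach_test():
+    """Illustrative sketch only (hypothetical testcase and log path):
+    generated methods are attached to the TestCase class so the standard
+    unittest loader discovers them like hand-written tests."""
+    name, method = create_test_method('hello_world_test', '/tmp/it.log')
+    setattr(ATestIntegrationTest, name, method)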
+
+
+def create_test_run_dir():
+    """Create the test run directory in tmp.
+
+    Returns:
+        A string of the directory path.
+    """
+    utc_epoch_time = int(time.time())
+    prefix = _TEST_RUN_DIR_PREFIX % utc_epoch_time
+    return tempfile.mkdtemp(prefix=prefix)
+
+
+if __name__ == '__main__':
+    # TODO(b/129029189) Implement detail comparison check for dry-run mode.
+    ARGS = ' '.join(sys.argv[1:])
+    if ARGS:
+        ATestIntegrationTest.OPTIONS = ARGS
+    TEST_PLANS = os.path.join(os.path.dirname(__file__), _INTEGRATION_TESTS)
+    try:
+        LOG_PATH = os.path.join(create_test_run_dir(), _LOG_FILE)
+        with open(TEST_PLANS) as test_plans:
+            for test in test_plans:
+                # Skip blank lines and lines starting with '#'.
+                if not test.strip() or test.strip().startswith('#'):
+                    continue
+                test_func_name, test_func = create_test_method(
+                    test.strip(), LOG_PATH)
+                setattr(ATestIntegrationTest, test_func_name, test_func)
+        SUITE = unittest.TestLoader().loadTestsFromTestCase(ATestIntegrationTest)
+        RESULTS = unittest.TextTestRunner(verbosity=2).run(SUITE)
+    finally:
+        if RESULTS.failures:
+            print('Full test log is saved to %s' % LOG_PATH)
+            sys.exit(_EXIT_TEST_FAILED)
+        else:
+            os.remove(LOG_PATH)
diff --git a/atest-py2/atest_integration_tests.xml b/atest-py2/atest_integration_tests.xml
new file mode 100644
index 0000000..dd8ee82
--- /dev/null
+++ b/atest-py2/atest_integration_tests.xml
@@ -0,0 +1,20 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Copyright (C) 2018 The Android Open Source Project
+     Licensed under the Apache License, Version 2.0 (the "License");
+     you may not use this file except in compliance with the License.
+     You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+     Unless required by applicable law or agreed to in writing, software
+     distributed under the License is distributed on an "AS IS" BASIS,
+     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+     See the License for the specific language governing permissions and
+     limitations under the License.
+-->
+<configuration description="Config to run atest integration tests">
+    <option name="test-suite-tag" value="atest_integration_tests" />
+
+    <test class="com.android.tradefed.testtype.python.PythonBinaryHostTest" >
+        <option name="par-file-name" value="atest_integration_tests" />
+        <option name="test-timeout" value="120m" />
+    </test>
+</configuration>
diff --git a/atest-py2/atest_metrics.py b/atest-py2/atest_metrics.py
new file mode 100755
index 0000000..d2ac3ad
--- /dev/null
+++ b/atest-py2/atest_metrics.py
@@ -0,0 +1,26 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Simple Metrics Functions"""
+
+import constants
+import asuite_metrics
+
+
+#pylint: disable=broad-except
+def log_start_event():
+    """Log that atest started."""
+    asuite_metrics.log_event(constants.METRICS_URL)
diff --git a/atest-py2/atest_run_unittests.py b/atest-py2/atest_run_unittests.py
new file mode 100755
index 0000000..f23c59d
--- /dev/null
+++ b/atest-py2/atest_run_unittests.py
@@ -0,0 +1,73 @@
+#!/usr/bin/env python
+#
+# Copyright 2018 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Main entrypoint for all of atest's unittest."""
+
+import logging
+import os
+import sys
+import unittest
+from importlib import import_module
+
+# Setup logging to be silent so unittests can pass through TF.
+logging.disable(logging.ERROR)
+
+def get_test_modules():
+    """Returns a list of testable modules.
+
+    Finds all the test files (*_unittest.py), gets their relative paths
+    (internal/lib/utils_test.py), translates them to import paths, and strips
+    the py extension (internal.lib.utils_test).
+
+    Returns:
+        List of strings (the testable module import path).
+    """
+    testable_modules = []
+    base_path = os.path.dirname(os.path.realpath(__file__))
+
+    for dirpath, _, files in os.walk(base_path):
+        for f in files:
+            if f.endswith("_unittest.py"):
+                # Now transform it into a relative import path.
+                full_file_path = os.path.join(dirpath, f)
+                rel_file_path = os.path.relpath(full_file_path, base_path)
+                rel_file_path, _ = os.path.splitext(rel_file_path)
+                rel_file_path = rel_file_path.replace(os.sep, ".")
+                testable_modules.append(rel_file_path)
+
+    return testable_modules
+
+def main(_):
+    """Main unittest entry.
+
+    Args:
+        _: A list of system arguments (unused).
+
+    Returns:
+        0 if successful, non-zero otherwise.
+    """
+    test_modules = get_test_modules()
+    for mod in test_modules:
+        import_module(mod)
+
+    loader = unittest.defaultTestLoader
+    test_suite = loader.loadTestsFromNames(test_modules)
+    runner = unittest.TextTestRunner(verbosity=2)
+    result = runner.run(test_suite)
+    sys.exit(not result.wasSuccessful())
+
+
+if __name__ == '__main__':
+    main(sys.argv[1:])
diff --git a/atest-py2/atest_unittest.py b/atest-py2/atest_unittest.py
new file mode 100755
index 0000000..84f640c
--- /dev/null
+++ b/atest-py2/atest_unittest.py
@@ -0,0 +1,298 @@
+#!/usr/bin/env python
+#
+# Copyright 2017, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for atest."""
+
+import datetime
+import os
+import sys
+import tempfile
+import unittest
+import mock
+
+import atest
+import constants
+import module_info
+
+from metrics import metrics_utils
+from test_finders import test_info
+
+if sys.version_info[0] == 2:
+    from StringIO import StringIO
+else:
+    from io import StringIO
+
+#pylint: disable=protected-access
+class AtestUnittests(unittest.TestCase):
+    """Unit tests for atest.py"""
+
+    @mock.patch('os.environ.get', return_value=None)
+    def test_missing_environment_variables_uninitialized(self, _):
+        """Test _has_environment_variables when no env vars."""
+        self.assertTrue(atest._missing_environment_variables())
+
+    @mock.patch('os.environ.get', return_value='out/testcases/')
+    def test_missing_environment_variables_initialized(self, _):
+        """Test _has_environment_variables when env vars."""
+        self.assertFalse(atest._missing_environment_variables())
+
+    def test_parse_args(self):
+        """Test _parse_args parses command line args."""
+        test_one = 'test_name_one'
+        test_two = 'test_name_two'
+        custom_arg = '--custom_arg'
+        custom_arg_val = 'custom_arg_val'
+        pos_custom_arg = 'pos_custom_arg'
+
+        # Test out test and custom args are properly retrieved.
+        args = [test_one, test_two, '--', custom_arg, custom_arg_val]
+        parsed_args = atest._parse_args(args)
+        self.assertEqual(parsed_args.tests, [test_one, test_two])
+        self.assertEqual(parsed_args.custom_args, [custom_arg, custom_arg_val])
+
+        # Test out custom positional args with no test args.
+        args = ['--', pos_custom_arg, custom_arg_val]
+        parsed_args = atest._parse_args(args)
+        self.assertEqual(parsed_args.tests, [])
+        self.assertEqual(parsed_args.custom_args, [pos_custom_arg,
+                                                   custom_arg_val])
+
+    def test_has_valid_test_mapping_args(self):
+        """Test _has_valid_test_mapping_args mehod."""
+        # Test test mapping related args are not mixed with incompatible args.
+        options_no_tm_support = [
+            ('--generate-baseline', '5'),
+            ('--detect-regression', 'path'),
+            ('--generate-new-metrics', '5')
+        ]
+        tm_options = [
+            '--test-mapping',
+            '--include-subdirs'
+        ]
+
+        for tm_option in tm_options:
+            for no_tm_option, no_tm_option_value in options_no_tm_support:
+                args = [tm_option, no_tm_option]
+                if no_tm_option_value is not None:
+                    args.append(no_tm_option_value)
+                parsed_args = atest._parse_args(args)
+                self.assertFalse(
+                    atest._has_valid_test_mapping_args(parsed_args),
+                    'Failed to validate: %s' % args)
+
+    @mock.patch('json.load', return_value={})
+    @mock.patch('__builtin__.open', new_callable=mock.mock_open)
+    @mock.patch('os.path.isfile', return_value=True)
+    @mock.patch('atest_utils._has_colors', return_value=True)
+    @mock.patch.object(module_info.ModuleInfo, 'get_module_info',)
+    def test_print_module_info_from_module_name(self, mock_get_module_info,
+                                                _mock_has_colors, _isfile,
+                                                _open, _json):
+        """Test _print_module_info_from_module_name mehod."""
+        mod_one_name = 'mod1'
+        mod_one_path = ['src/path/mod1']
+        mod_one_installed = ['installed/path/mod1']
+        mod_one_suites = ['device_test_mod1', 'native_test_mod1']
+        mod_one = {constants.MODULE_NAME: mod_one_name,
+                   constants.MODULE_PATH: mod_one_path,
+                   constants.MODULE_INSTALLED: mod_one_installed,
+                   constants.MODULE_COMPATIBILITY_SUITES: mod_one_suites}
+
+        # Case 1: The testing_module('mod_one') can be found in module_info.
+        mock_get_module_info.return_value = mod_one
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        mod_info = module_info.ModuleInfo()
+        # Check return value = True, since 'mod_one' can be found.
+        self.assertTrue(
+            atest._print_module_info_from_module_name(mod_info, mod_one_name))
+        # Assign sys.stdout back to default.
+        sys.stdout = sys.__stdout__
+        correct_output = ('\x1b[1;32mmod1\x1b[0m\n'
+                          '\x1b[1;36m\tCompatibility suite\x1b[0m\n'
+                          '\t\tdevice_test_mod1\n'
+                          '\t\tnative_test_mod1\n'
+                          '\x1b[1;36m\tSource code path\x1b[0m\n'
+                          '\t\tsrc/path/mod1\n'
+                          '\x1b[1;36m\tInstalled path\x1b[0m\n'
+                          '\t\tinstalled/path/mod1\n')
+        # Check the function correctly printed module_info in color to stdout
+        self.assertEqual(capture_output.getvalue(), correct_output)
+
+        # Case 2: The testing_module('mod_one') can NOT be found in module_info.
+        mock_get_module_info.return_value = None
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        # Check return value = False, since 'mod_one' can NOT be found.
+        self.assertFalse(
+            atest._print_module_info_from_module_name(mod_info, mod_one_name))
+        # Assign sys.stdout back to default.
+        sys.stdout = sys.__stdout__
+        null_output = ''
+        # Check if no module_info, then nothing printed to screen.
+        self.assertEqual(capture_output.getvalue(), null_output)
+
+    @mock.patch('json.load', return_value={})
+    @mock.patch('__builtin__.open', new_callable=mock.mock_open)
+    @mock.patch('os.path.isfile', return_value=True)
+    @mock.patch('atest_utils._has_colors', return_value=True)
+    @mock.patch.object(module_info.ModuleInfo, 'get_module_info',)
+    def test_print_test_info(self, mock_get_module_info, _mock_has_colors,
+                             _isfile, _open, _json):
+        """Test _print_test_info mehod."""
+        mod_one_name = 'mod1'
+        mod_one = {constants.MODULE_NAME: mod_one_name,
+                   constants.MODULE_PATH: ['path/mod1'],
+                   constants.MODULE_INSTALLED: ['installed/mod1'],
+                   constants.MODULE_COMPATIBILITY_SUITES: ['suite_mod1']}
+        mod_two_name = 'mod2'
+        mod_two = {constants.MODULE_NAME: mod_two_name,
+                   constants.MODULE_PATH: ['path/mod2'],
+                   constants.MODULE_INSTALLED: ['installed/mod2'],
+                   constants.MODULE_COMPATIBILITY_SUITES: ['suite_mod2']}
+        mod_three_name = 'mod3'
+        mod_three = {constants.MODULE_NAME: mod_three_name,
+                     constants.MODULE_PATH: ['path/mod3'],
+                     constants.MODULE_INSTALLED: ['installed/mod3'],
+                     constants.MODULE_COMPATIBILITY_SUITES: ['suite_mod3']}
+        test_name = mod_one_name
+        build_targets = set([mod_one_name, mod_two_name, mod_three_name])
+        t_info = test_info.TestInfo(test_name, 'mock_runner', build_targets)
+        test_infos = set([t_info])
+
+        # _print_test_info() first prints the module_info of the test_info's
+        # test_name, then prints its related build targets. If a build target
+        # has already been printed (e.g. build_target == test_info's
+        # test_name), it is skipped and the next build target is printed.
+        # Since the build_targets of test_info are mod_one, mod_two, and
+        # mod_three, mod_one is printed first, then mod_two, then mod_three.
+        #
+        # _print_test_info() calls _print_module_info_from_module_name() to
+        # print the module_info. And _print_module_info_from_module_name()
+        # calls get_module_info() to get the module_info. So we can mock
+        # get_module_info() to achieve that.
+        mock_get_module_info.side_effect = [mod_one, mod_two, mod_three]
+
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        mod_info = module_info.ModuleInfo()
+        atest._print_test_info(mod_info, test_infos)
+        # Assign sys.stdout back to default.
+        sys.stdout = sys.__stdout__
+        correct_output = ('\x1b[1;32mmod1\x1b[0m\n'
+                          '\x1b[1;36m\tCompatibility suite\x1b[0m\n'
+                          '\t\tsuite_mod1\n'
+                          '\x1b[1;36m\tSource code path\x1b[0m\n'
+                          '\t\tpath/mod1\n'
+                          '\x1b[1;36m\tInstalled path\x1b[0m\n'
+                          '\t\tinstalled/mod1\n'
+                          '\x1b[1;35m\tRelated build targets\x1b[0m\n'
+                          '\t\tmod1, mod2, mod3\n'
+                          '\x1b[1;32mmod2\x1b[0m\n'
+                          '\x1b[1;36m\tCompatibility suite\x1b[0m\n'
+                          '\t\tsuite_mod2\n'
+                          '\x1b[1;36m\tSource code path\x1b[0m\n'
+                          '\t\tpath/mod2\n'
+                          '\x1b[1;36m\tInstalled path\x1b[0m\n'
+                          '\t\tinstalled/mod2\n'
+                          '\x1b[1;32mmod3\x1b[0m\n'
+                          '\x1b[1;36m\tCompatibility suite\x1b[0m\n'
+                          '\t\tsuite_mod3\n'
+                          '\x1b[1;36m\tSource code path\x1b[0m\n'
+                          '\t\tpath/mod3\n'
+                          '\x1b[1;36m\tInstalled path\x1b[0m\n'
+                          '\t\tinstalled/mod3\n'
+                          '\x1b[1;37m\x1b[0m\n')
+        self.assertEqual(capture_output.getvalue(), correct_output)
+
+    @mock.patch.object(metrics_utils, 'send_exit_event')
+    def test_validate_exec_mode(self, _send_exit):
+        """Test _validate_exec_mode."""
+        args = []
+        parsed_args = atest._parse_args(args)
+        no_install_test_info = test_info.TestInfo(
+            'mod', '', set(), data={}, module_class=["JAVA_LIBRARIES"],
+            install_locations=set(['device']))
+        host_test_info = test_info.TestInfo(
+            'mod', '', set(), data={}, module_class=["NATIVE_TESTS"],
+            install_locations=set(['host']))
+        device_test_info = test_info.TestInfo(
+            'mod', '', set(), data={}, module_class=["NATIVE_TESTS"],
+            install_locations=set(['device']))
+        both_test_info = test_info.TestInfo(
+            'mod', '', set(), data={}, module_class=["NATIVE_TESTS"],
+            install_locations=set(['host', 'device']))
+
+        # $atest <Both-support>
+        test_infos = [host_test_info]
+        atest._validate_exec_mode(parsed_args, test_infos)
+        self.assertFalse(parsed_args.host)
+
+        # $atest <Both-support> with host_tests set to True
+        parsed_args = atest._parse_args([])
+        test_infos = [host_test_info]
+        atest._validate_exec_mode(parsed_args, test_infos, host_tests=True)
+        # Make sure the host option is not set.
+        self.assertFalse(parsed_args.host)
+
+        # $atest <Both-support> with host_tests set to False
+        parsed_args = atest._parse_args([])
+        test_infos = [host_test_info]
+        atest._validate_exec_mode(parsed_args, test_infos, host_tests=False)
+        self.assertFalse(parsed_args.host)
+
+        # $atest <device-only> with host_tests set to False
+        parsed_args = atest._parse_args([])
+        test_infos = [device_test_info]
+        atest._validate_exec_mode(parsed_args, test_infos, host_tests=False)
+        # Make sure the host option is not set.
+        self.assertFalse(parsed_args.host)
+
+        # $atest <device-only> with host_tests set to True
+        parsed_args = atest._parse_args([])
+        test_infos = [device_test_info]
+        self.assertRaises(SystemExit, atest._validate_exec_mode,
+                          parsed_args, test_infos, host_tests=True)
+
+        # $atest <Both-support>
+        parsed_args = atest._parse_args([])
+        test_infos = [both_test_info]
+        atest._validate_exec_mode(parsed_args, test_infos)
+        self.assertFalse(parsed_args.host)
+
+        # $atest <no_install_test_info>
+        parsed_args = atest._parse_args([])
+        test_infos = [no_install_test_info]
+        atest._validate_exec_mode(parsed_args, test_infos)
+        self.assertFalse(parsed_args.host)
+
+    def test_make_test_run_dir(self):
+        """Test make_test_run_dir."""
+        tmp_dir = tempfile.mkdtemp()
+        constants.ATEST_RESULT_ROOT = tmp_dir
+        date_time = None
+
+        work_dir = atest.make_test_run_dir()
+        folder_name = os.path.basename(work_dir)
+        date_time = datetime.datetime.strptime('_'.join(folder_name.split('_')[0:2]),
+                                               atest.TEST_RUN_DIR_PREFIX)
+
+        reload(constants)
+        self.assertIsNotNone(date_time)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/atest_unittests.xml b/atest-py2/atest_unittests.xml
new file mode 100644
index 0000000..6649026
--- /dev/null
+++ b/atest-py2/atest_unittests.xml
@@ -0,0 +1,20 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Copyright (C) 2018 The Android Open Source Project
+     Licensed under the Apache License, Version 2.0 (the "License");
+     you may not use this file except in compliance with the License.
+     You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+     Unless required by applicable law or agreed to in writing, software
+     distributed under the License is distributed on an "AS IS" BASIS,
+     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+     See the License for the specific language governing permissions and
+     limitations under the License.
+-->
+<configuration description="Config to run atest unittests">
+    <option name="test-suite-tag" value="atest_unittests" />
+
+    <test class="com.android.tradefed.testtype.python.PythonBinaryHostTest" >
+        <option name="par-file-name" value="atest-py2_unittests" />
+        <option name="test-timeout" value="2m" />
+    </test>
+</configuration>
diff --git a/atest-py2/atest_utils.py b/atest-py2/atest_utils.py
new file mode 100644
index 0000000..f1be007
--- /dev/null
+++ b/atest-py2/atest_utils.py
@@ -0,0 +1,629 @@
+# Copyright 2017, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Utility functions for atest.
+"""
+
+
+from __future__ import print_function
+
+import hashlib
+import itertools
+import json
+import logging
+import os
+import pickle
+import re
+import shutil
+import subprocess
+import sys
+
+import atest_decorator
+import atest_error
+import constants
+
+from metrics import metrics_base
+from metrics import metrics_utils
+
+
+_BASH_RESET_CODE = '\033[0m\n'
+# Arbitrary number to limit stdout for failed runs in _run_limited_output.
+# The make command has its own carriage-return output mechanism that, when
+# collected line by line, makes the streaming full_output list extremely
+# large.
+_FAILED_OUTPUT_LINE_LIMIT = 100
+# Regular expression to match the start of a ninja compile:
+# ex: [ 99% 39710/39711]
+_BUILD_COMPILE_STATUS = re.compile(r'\[\s*(\d{1,3}%\s+)?\d+/\d+\]')
+_BUILD_FAILURE = 'FAILED: '
+CMD_RESULT_PATH = os.path.join(os.environ.get(constants.ANDROID_BUILD_TOP,
+                                              os.getcwd()),
+                               'tools/tradefederation/core/atest/test_data',
+                               'test_commands.json')
+BUILD_TOP_HASH = hashlib.md5(os.environ.get(constants.ANDROID_BUILD_TOP, '').
+                             encode()).hexdigest()
+TEST_INFO_CACHE_ROOT = os.path.join(os.path.expanduser('~'), '.atest',
+                                    'info_cache', BUILD_TOP_HASH[:8])
+_DEFAULT_TERMINAL_WIDTH = 80
+_DEFAULT_TERMINAL_HEIGHT = 25
+_BUILD_CMD = 'build/soong/soong_ui.bash'
+_FIND_MODIFIED_FILES_CMDS = (
+    "cd {};"
+    "local_branch=$(git rev-parse --abbrev-ref HEAD);"
+    "remote_branch=$(git branch -r | grep '\\->' | awk '{{print $1}}');"
+    # Get the number of commits from local branch to remote branch.
+    "ahead=$(git rev-list --left-right --count $local_branch...$remote_branch "
+    "| awk '{{print $1}}');"
+    # Get the list of modified files from HEAD to previous $ahead generation.
+    "git diff HEAD~$ahead --name-only")
+
+
+def get_build_cmd():
+    """Compose build command with relative path and flag "--make-mode".
+
+    Returns:
+        A list representing the soong build command.
+    """
+    make_cmd = ('%s/%s' %
+                (os.path.relpath(os.environ.get(
+                    constants.ANDROID_BUILD_TOP, os.getcwd()), os.getcwd()),
+                 _BUILD_CMD))
+    return [make_cmd, '--make-mode']
+
+
+def _capture_fail_section(full_log):
+    """Return the error message from the build output.
+
+    Args:
+        full_log: List of strings representing full output of build.
+
+    Returns:
+        capture_output: List of strings that are build errors.
+    """
+    am_capturing = False
+    capture_output = []
+    for line in full_log:
+        if am_capturing and _BUILD_COMPILE_STATUS.match(line):
+            break
+        if am_capturing or line.startswith(_BUILD_FAILURE):
+            capture_output.append(line)
+            am_capturing = True
+            continue
+    return capture_output
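+
+
+def _example_capture_fail_section():
+    """Illustrative sketch only (hypothetical log): capturing starts at the
+    'FAILED: ' line and stops at the next ninja status line."""
+    log = ['[ 10% 1/10] compiling a\n',
+           'FAILED: out/obj/a.o\n',
+           'error: use of undeclared identifier\n',
+           '[ 20% 2/10] compiling b\n']
+    # Returns ['FAILED: out/obj/a.o\n', 'error: use of undeclared identifier\n'].
+    return _capture_fail_section(log)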
+
+
+def _run_limited_output(cmd, env_vars=None):
+    """Runs a given command and streams the output on a single line in stdout.
+
+    Args:
+        cmd: A list of strings representing the command to run.
+        env_vars: Optional arg. Dict of env vars to set during build.
+
+    Raises:
+        subprocess.CalledProcessError: When the command exits with a non-0
+            exitcode.
+    """
+    # Send stderr to stdout so we only have to deal with a single pipe.
+    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
+                            stderr=subprocess.STDOUT, env=env_vars)
+    sys.stdout.write('\n')
+    term_width, _ = get_terminal_size()
+    white_space = " " * int(term_width)
+    full_output = []
+    while proc.poll() is None:
+        line = proc.stdout.readline()
+        # Readline will often return empty strings.
+        if not line:
+            continue
+        full_output.append(line.decode('utf-8'))
+        # Trim the line to the width of the terminal.
+        # Note: Does not handle terminal resizing; it is probably not worth
+        #       checking the width on every loop iteration.
+        if len(line) >= term_width:
+            line = line[:term_width - 1]
+        # Clear the last line we outputted.
+        sys.stdout.write('\r%s\r' % white_space)
+        sys.stdout.write('%s' % line.strip())
+        sys.stdout.flush()
+    # Reset stdout (on bash) to remove any custom formatting and newline.
+    sys.stdout.write(_BASH_RESET_CODE)
+    sys.stdout.flush()
+    # Wait for the Popen to finish completely before checking the returncode.
+    proc.wait()
+    if proc.returncode != 0:
+        # Parse out the build error to output.
+        output = _capture_fail_section(full_output)
+        if not output:
+            output = full_output
+        if len(output) >= _FAILED_OUTPUT_LINE_LIMIT:
+            output = output[-_FAILED_OUTPUT_LINE_LIMIT:]
+        output = 'Output (may be trimmed):\n%s' % ''.join(output)
+        raise subprocess.CalledProcessError(proc.returncode, cmd, output)
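+
+
+def _example_run_limited_output():
+    """Illustrative sketch only: a harmless command that exercises the
+    single-line streaming; a failing command raises CalledProcessError
+    whose output carries the trimmed failure section."""
+    _run_limited_output(['echo', 'hello atest'])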
+
+
+def build(build_targets, verbose=False, env_vars=None):
+    """Shell out and make build_targets.
+
+    Args:
+        build_targets: A set of strings of build targets to make.
+        verbose: Optional arg. If True output is streamed to the console.
+                 If False, only the last line of the build output is outputted.
+        env_vars: Optional arg. Dict of env vars to set during build.
+
+    Returns:
+        Boolean of whether the build command succeeded; True if there was
+        nothing to build.
+    """
+    if not build_targets:
+        logging.debug('No build targets, skipping build.')
+        return True
+    full_env_vars = os.environ.copy()
+    if env_vars:
+        full_env_vars.update(env_vars)
+    print('\n%s\n%s' % (colorize("Building Dependencies...", constants.CYAN),
+                        ', '.join(build_targets)))
+    logging.debug('Building Dependencies: %s', ' '.join(build_targets))
+    cmd = get_build_cmd() + list(build_targets)
+    logging.debug('Executing command: %s', cmd)
+    try:
+        if verbose:
+            subprocess.check_call(cmd, stderr=subprocess.STDOUT,
+                                  env=full_env_vars)
+        else:
+            # TODO: Save output to a log file.
+            _run_limited_output(cmd, env_vars=full_env_vars)
+        logging.info('Build successful')
+        return True
+    except subprocess.CalledProcessError as err:
+        logging.error('Error building: %s', build_targets)
+        if err.output:
+            logging.error(err.output)
+        return False
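+
+
+def _example_build():
+    """Illustrative sketch only (hypothetical target name): returns True on
+    success and also when the target set is empty."""
+    return build(set(['atest']), verbose=False)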
+
+
+def _can_upload_to_result_server():
+    """Return True if we can talk to result server."""
+    # TODO: Also check if we have a slow connection to result server.
+    if constants.RESULT_SERVER:
+        try:
+            try:
+                # If PYTHON2
+                from urllib2 import urlopen
+            except ImportError:
+                metrics_utils.handle_exc_and_send_exit_event(
+                    constants.IMPORT_FAILURE)
+                from urllib.request import urlopen
+            urlopen(constants.RESULT_SERVER,
+                    timeout=constants.RESULT_SERVER_TIMEOUT).close()
+            return True
+        # pylint: disable=broad-except
+        except Exception as err:
+            logging.debug('Talking to result server raised exception: %s', err)
+    return False
+
+
+def get_result_server_args(for_test_mapping=False):
+    """Return list of args for communication with result server.
+
+    Args:
+        for_test_mapping: True if the test run is for Test Mapping to include
+            additional reporting args. Default is False.
+    """
+    # TODO (b/147644460) Temporarily disable Sponge V1 since it will be turned
+    # down.
+    if _can_upload_to_result_server():
+        if for_test_mapping:
+            return (constants.RESULT_SERVER_ARGS +
+                    constants.TEST_MAPPING_RESULT_SERVER_ARGS)
+        return constants.RESULT_SERVER_ARGS
+    return []
+
+
+def sort_and_group(iterable, key):
+    """Sort and group helper function."""
+    return itertools.groupby(sorted(iterable, key=key), key=key)
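+
+
+def _example_sort_and_group():
+    """Illustrative sketch only (hypothetical data): groupby only groups
+    consecutive items, which is why sort_and_group sorts first."""
+    pairs = [('b', 2), ('a', 1), ('b', 3)]
+    # Yields {'a': [('a', 1)], 'b': [('b', 2), ('b', 3)]}.
+    return {key: list(group)
+            for key, group in sort_and_group(pairs, key=lambda pair: pair[0])}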
+
+
+def is_test_mapping(args):
+    """Check if the atest command intends to run tests in test mapping.
+
+    When atest runs tests in test mapping, at most one test can be specified.
+    If a test is specified, it must start with `:`, which means the test
+    value is a test group name in a TEST_MAPPING file, e.g., `:postsubmit`.
+
+    If any test mapping option is specified, the atest command must also be
+    set to run tests in test mapping files.
+
+    Args:
+        args: arg parsed object.
+
+    Returns:
+        True if the args indicates atest shall run tests in test mapping. False
+        otherwise.
+    """
+    return (
+        args.test_mapping or
+        args.include_subdirs or
+        not args.tests or
+        (len(args.tests) == 1 and args.tests[0][0] == ':'))
+
+@atest_decorator.static_var("cached_has_colors", {})
+def _has_colors(stream):
+    """Check the output stream is colorful.
+
+    Args:
+        stream: The standard file stream.
+
+    Returns:
+        True if the file stream can interpret ANSI color codes.
+    """
+    cached_has_colors = _has_colors.cached_has_colors
+    if stream in cached_has_colors:
+        return cached_has_colors[stream]
+    else:
+        cached_has_colors[stream] = True
+    # Following from Python cookbook, #475186
+    if not hasattr(stream, "isatty"):
+        cached_has_colors[stream] = False
+        return False
+    if not stream.isatty():
+        # Auto color only on TTYs
+        cached_has_colors[stream] = False
+        return False
+    try:
+        import curses
+        curses.setupterm()
+        cached_has_colors[stream] = curses.tigetnum("colors") > 2
+    # pylint: disable=broad-except
+    except Exception as err:
+        logging.debug('Checking colorful raised exception: %s', err)
+        cached_has_colors[stream] = False
+    return cached_has_colors[stream]
+
+
+def colorize(text, color, highlight=False):
+    """ Convert to colorful string with ANSI escape code.
+
+    Args:
+        text: A string to print.
+        color: ANSI code shift for colorful print. They are defined
+               in constants_default.py.
+        highlight: True to print with highlight.
+
+    Returns:
+        Colorful string with ANSI escape code.
+    """
+    clr_pref = '\033[1;'
+    clr_suff = '\033[0m'
+    has_colors = _has_colors(sys.stdout)
+    if has_colors:
+        if highlight:
+            ansi_shift = 40 + color
+        else:
+            ansi_shift = 30 + color
+        clr_str = "%s%dm%s%s" % (clr_pref, ansi_shift, text, clr_suff)
+    else:
+        clr_str = text
+    return clr_str
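+
+
+def _example_colorize():
+    """Illustrative sketch only; assumes constants.GREEN == 2, as in
+    constants_default.py."""
+    # Foreground color: 30 + 2 -> '\033[1;32mPASS\033[0m'.
+    foreground = colorize('PASS', constants.GREEN)
+    # Background highlight: 40 + 2 -> '\033[1;42mPASS\033[0m'.
+    highlighted = colorize('PASS', constants.GREEN, highlight=True)
+    return foreground, highlighted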
+
+
+def colorful_print(text, color, highlight=False, auto_wrap=True):
+    """Print out the text with color.
+
+    Args:
+        text: A string to print.
+        color: ANSI code shift for colorful print. They are defined
+               in constants_default.py.
+        highlight: True to print with highlight.
+        auto_wrap: If True, a newline is appended so the text wraps.
+    """
+    output = colorize(text, color, highlight)
+    if auto_wrap:
+        print(output)
+    else:
+        print(output, end="")
+
+
+# pylint: disable=no-member
+# TODO: remove the above disable when migrating to python3.
+def get_terminal_size():
+    """Get terminal size and return a tuple.
+
+    Returns:
+        2 integers: the size of X(columns) and Y(lines/rows).
+    """
+    # Determine the width of the terminal. We'll need to clear this many
+    # characters when carriage returning. Set default value as 80.
+    try:
+        if sys.version_info[0] == 2:
+            _y, _x = subprocess.check_output(['stty', 'size']).decode().split()
+            return int(_x), int(_y)
+        return (shutil.get_terminal_size().columns,
+                shutil.get_terminal_size().lines)
+    # b/137521782: stty size could fail for various reasons.
+    except subprocess.CalledProcessError:
+        return _DEFAULT_TERMINAL_WIDTH, _DEFAULT_TERMINAL_HEIGHT
+
+
+def is_external_run():
+    # TODO(b/133905312): remove this function after aidegen calling
+    #       metrics_base.get_user_type directly.
+    """Check is external run or not.
+
+    Determine the internal user by passing at least one check:
+      - whose git mail domain is from google
+      - whose hostname is from google
+    Otherwise is external user.
+
+    Returns:
+        True if this is an external run, False otherwise.
+    """
+    return metrics_base.get_user_type() == metrics_base.EXTERNAL_USER
+
+
+def print_data_collection_notice():
+    """Print the data collection notice."""
+    anonymous = ''
+    user_type = 'INTERNAL'
+    if metrics_base.get_user_type() == metrics_base.EXTERNAL_USER:
+        anonymous = ' anonymous'
+        user_type = 'EXTERNAL'
+    notice = ('  We collect%s usage statistics in accordance with our Content '
+              'Licenses (%s), Contributor License Agreement (%s), Privacy '
+              'Policy (%s) and Terms of Service (%s).'
+             ) % (anonymous,
+                  constants.CONTENT_LICENSES_URL,
+                  constants.CONTRIBUTOR_AGREEMENT_URL[user_type],
+                  constants.PRIVACY_POLICY_URL,
+                  constants.TERMS_SERVICE_URL
+                 )
+    print('\n==================')
+    colorful_print("Notice:", constants.RED)
+    colorful_print("%s" % notice, constants.GREEN)
+    print('==================\n')
+
+
+def handle_test_runner_cmd(input_test, test_cmds, do_verification=False,
+                           result_path=CMD_RESULT_PATH):
+    """Handle the runner command of input tests.
+
+    Args:
+        input_test: A string of input tests pass to atest.
+        test_cmds: A list of strings for running input tests.
+        do_verification: A boolean to indicate the action of this method.
+                         True: Do verification without updating result map and
+                               raise DryRunVerificationError if verifying fails.
+                         False: Update result map, if the former command is
+                                different with current command, it will confirm
+                                with user if they want to update or not.
+        result_path: The file path for saving result.
+    """
+    full_result_content = {}
+    if os.path.isfile(result_path):
+        with open(result_path) as json_file:
+            full_result_content = json.load(json_file)
+    former_test_cmds = full_result_content.get(input_test, [])
+    if not _are_identical_cmds(test_cmds, former_test_cmds):
+        if do_verification:
+            raise atest_error.DryRunVerificationError('Dry run verification failed,'
+                                                      ' former commands: %s' %
+                                                      former_test_cmds)
+        if former_test_cmds:
+            # If former_test_cmds is different from test_cmds, ask users if they
+            # are willing to update the result.
+            print('Former cmds = %s' % former_test_cmds)
+            print('Current cmds = %s' % test_cmds)
+            try:
+                # TODO(b/137156054):
+                # Keep the import statement inside this method because
+                # distutils is not a built-in lib in older python3
+                # (b/137017806). Move it back out once embedded_launcher
+                # fully supports Python3.
+                from distutils.util import strtobool
+                if not strtobool(raw_input('Do you want to update the former '
+                                           'result with the latest one? (Y/n)')):
+                    print('SKIP updating result!!!')
+                    return
+            except ValueError:
+                # Default action is updating the command result of the input_test.
+                # If the user input is unrecognizable telling yes or no,
+                # "Y" is implicitly applied.
+                pass
+    else:
+        # If current commands are the same as the formers, no need to update
+        # result.
+        return
+    full_result_content[input_test] = test_cmds
+    with open(result_path, 'w') as outfile:
+        json.dump(full_result_content, outfile, indent=0)
+        print('Save result mapping to %s' % result_path)
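+
+
+# Illustrative shape of the saved result mapping (hypothetical values): each
+# atest invocation maps to the list of runner commands it produced, e.g.
+#     {"atest_args": ["cmd1", "cmd2"]}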
+
+
+def _are_identical_cmds(current_cmds, former_cmds):
+    """Tell two commands are identical. Note that '--atest-log-file-path' is not
+    considered a critical argument, therefore, it will be removed during
+    the comparison. Also, atest can be ran in any place, so verifying relative
+    path is regardless as well.
+
+    Args:
+        current_cmds: A list of strings for running input tests.
+        former_cmds: A list of strings recorded from the previous run.
+
+    Returns:
+        True if both commands are identical, False otherwise.
+    """
+    def _normalize(cmd_list):
+        """Method that normalize commands.
+
+        Args:
+            cmd_list: A list with one element. E.g. ['cmd arg1 arg2 True']
+
+        Returns:
+            A list with elements. E.g. ['cmd', 'arg1', 'arg2', 'True']
+        """
+        _cmd = ''.join(cmd_list).encode('utf-8').split()
+        for cmd in _cmd:
+            if cmd.startswith('--atest-log-file-path'):
+                _cmd.remove(cmd)
+                continue
+            if _BUILD_CMD in cmd:
+                _cmd.remove(cmd)
+                _cmd.append(os.path.join('./', _BUILD_CMD))
+                continue
+        return _cmd
+
+    _current_cmds = _normalize(current_cmds)
+    _former_cmds = _normalize(former_cmds)
+    # Always sort cmd list to make it comparable.
+    _current_cmds.sort()
+    _former_cmds.sort()
+    return _current_cmds == _former_cmds
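+
+def _example_normalized_comparison():
+    """Illustrative sketch only (hypothetical commands): the log-file path
+    differs per run and the build command may be reached via different
+    relative paths, so both are normalized away before comparing."""
+    current = ['out/build/soong/soong_ui.bash --make-mode '
+               '--atest-log-file-path=/tmp/log1 hello_world_test']
+    former = ['./build/soong/soong_ui.bash --make-mode '
+              '--atest-log-file-path=/tmp/log2 hello_world_test']
+    # Returns True despite the differing paths.
+    return _are_identical_cmds(current, former)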
+
+def _get_hashed_file_name(main_file_name):
+    """Convert the input string to a md5-hashed string. If file_extension is
+       given, returns $(hashed_string).$(file_extension), otherwise
+       $(hashed_string).cache.
+
+    Args:
+        main_file_name: The input string need to be hashed.
+
+    Returns:
+        A string as hashed file name with .cache file extension.
+    """
+    hashed_fn = hashlib.md5(str(main_file_name).encode())
+    hashed_name = hashed_fn.hexdigest()
+    return hashed_name + '.cache'
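+
+def _example_hashed_file_name():
+    """Illustrative sketch only: the mapping is deterministic, so the same
+    test reference always resolves to the same cache file name."""
+    # -> '5d41402abc4b2a76b9719d911017c592.cache' (md5 of 'hello').
+    return _get_hashed_file_name('hello')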
+
+def get_test_info_cache_path(test_reference, cache_root=TEST_INFO_CACHE_ROOT):
+    """Get the cache path of the desired test_infos.
+
+    Args:
+        test_reference: A string of the test.
+        cache_root: Folder path where stores caches.
+
+    Returns:
+        A string of the path of test_info cache.
+    """
+    return os.path.join(cache_root,
+                        _get_hashed_file_name(test_reference))
+
+def update_test_info_cache(test_reference, test_infos,
+                           cache_root=TEST_INFO_CACHE_ROOT):
+    """Update cache content which stores a set of test_info objects through
+       pickle module, each test_reference will be saved as a cache file.
+
+    Args:
+        test_reference: A string referencing a test.
+        test_infos: A set of TestInfos.
+        cache_root: Folder path for saving caches.
+    """
+    if not os.path.isdir(cache_root):
+        os.makedirs(cache_root)
+    cache_path = get_test_info_cache_path(test_reference, cache_root)
+    # Save test_info to files.
+    try:
+        with open(cache_path, 'wb') as test_info_cache_file:
+            logging.debug('Saving cache %s.', cache_path)
+            pickle.dump(test_infos, test_info_cache_file, protocol=2)
+    except (pickle.PicklingError, TypeError, IOError) as err:
+        # Won't break anything, just log this error, and collect the exception
+        # by metrics.
+        logging.debug('Exception raised: %s', err)
+        metrics_utils.handle_exc_and_send_exit_event(
+            constants.ACCESS_CACHE_FAILURE)
+
+
+def load_test_info_cache(test_reference, cache_root=TEST_INFO_CACHE_ROOT):
+    """Load cache by test_reference to a set of test_infos object.
+
+    Args:
+        test_reference: A string referencing a test.
+        cache_root: Folder path for finding caches.
+
+    Returns:
+        A list of TestInfo namedtuple if cache found, else None.
+    """
+    cache_file = get_test_info_cache_path(test_reference, cache_root)
+    if os.path.isfile(cache_file):
+        logging.debug('Loading cache %s.', cache_file)
+        try:
+            with open(cache_file, 'rb') as config_dictionary_file:
+                return pickle.load(config_dictionary_file)
+        except (pickle.UnpicklingError, ValueError, EOFError, IOError) as err:
+            # Won't break anything, just remove the old cache, log this error, and
+            # collect the exception by metrics.
+            logging.debug('Exception raised: %s', err)
+            os.remove(cache_file)
+            metrics_utils.handle_exc_and_send_exit_event(
+                constants.ACCESS_CACHE_FAILURE)
+    return None
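+
+def _example_cache_round_trip(test_infos):
+    """Illustrative sketch only (hypothetical test reference): a save/load
+    round trip through the pickle cache, using a temporary directory so the
+    real cache root stays untouched."""
+    import tempfile
+    cache_root = tempfile.mkdtemp()
+    update_test_info_cache('hello_world_test', test_infos, cache_root)
+    return load_test_info_cache('hello_world_test', cache_root)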
+
+def clean_test_info_caches(tests, cache_root=TEST_INFO_CACHE_ROOT):
+    """Clean caches of input tests.
+
+    Args:
+        tests: A list of test references.
+        cache_root: Folder path for finding caches.
+    """
+    for test in tests:
+        cache_file = get_test_info_cache_path(test, cache_root)
+        if os.path.isfile(cache_file):
+            logging.debug('Removing cache: %s', cache_file)
+            try:
+                os.remove(cache_file)
+            except IOError as err:
+                logging.debug('Exception raised: %s', err)
+                metrics_utils.handle_exc_and_send_exit_event(
+                    constants.ACCESS_CACHE_FAILURE)
+
+def get_modified_files(root_dir):
+    """Get the git modified files. The git path here is git top level of
+    the root_dir. It's inevitable to utilise different commands to fulfill
+    2 scenario:
+        1. locate unstaged/staged files
+        2. locate committed files but not yet merged.
+    the 'git_status_cmd' fulfils the former while the 'find_modified_files'
+    fulfils the latter.
+
+    Args:
+        root_dir: the root where it starts finding.
+
+    Returns:
+        A set of files modified since the last commit.
+    """
+    modified_files = set()
+    try:
+        find_git_cmd = 'cd {}; git rev-parse --show-toplevel'.format(root_dir)
+        git_paths = subprocess.check_output(
+            find_git_cmd, shell=True).splitlines()
+        for git_path in git_paths:
+            # Find modified files from git working tree status.
+            git_status_cmd = ("repo forall {} -c git status --short | "
+                              "awk '{{print $NF}}'").format(git_path)
+            modified_wo_commit = subprocess.check_output(
+                git_status_cmd, shell=True).rstrip().splitlines()
+            for change in modified_wo_commit:
+                modified_files.add(
+                    os.path.normpath('{}/{}'.format(git_path, change)))
+            # Find modified files that are committed but not yet merged.
+            find_modified_files = _FIND_MODIFIED_FILES_CMDS.format(git_path)
+            commit_modified_files = subprocess.check_output(
+                find_modified_files, shell=True).splitlines()
+            for line in commit_modified_files:
+                modified_files.add(os.path.normpath('{}/{}'.format(
+                    git_path, line)))
+    except (OSError, subprocess.CalledProcessError) as err:
+        logging.debug('Exception raised: %s', err)
+    return modified_files
diff --git a/atest-py2/atest_utils_unittest.py b/atest-py2/atest_utils_unittest.py
new file mode 100755
index 0000000..eb89427
--- /dev/null
+++ b/atest-py2/atest_utils_unittest.py
@@ -0,0 +1,409 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for atest_utils."""
+
+import hashlib
+import os
+import subprocess
+import sys
+import tempfile
+import unittest
+import mock
+
+import atest_error
+import atest_utils
+import constants
+import unittest_utils
+from test_finders import test_info
+
+if sys.version_info[0] == 2:
+    from StringIO import StringIO
+else:
+    from io import StringIO
+
+TEST_MODULE_NAME_A = 'ModuleNameA'
+TEST_RUNNER_A = 'FakeTestRunnerA'
+TEST_BUILD_TARGET_A = set(['bt1', 'bt2'])
+TEST_DATA_A = {'test_data_a_1': 'a1',
+               'test_data_a_2': 'a2'}
+TEST_SUITE_A = 'FakeSuiteA'
+TEST_MODULE_CLASS_A = 'FAKE_MODULE_CLASS_A'
+TEST_INSTALL_LOC_A = set(['host', 'device'])
+TEST_FINDER_A = 'MODULE'
+TEST_INFO_A = test_info.TestInfo(TEST_MODULE_NAME_A, TEST_RUNNER_A,
+                                 TEST_BUILD_TARGET_A, TEST_DATA_A,
+                                 TEST_SUITE_A, TEST_MODULE_CLASS_A,
+                                 TEST_INSTALL_LOC_A)
+TEST_INFO_A.test_finder = TEST_FINDER_A
+
+#pylint: disable=protected-access
+class AtestUtilsUnittests(unittest.TestCase):
+    """Unit tests for atest_utils.py"""
+
+    def test_capture_fail_section_has_fail_section(self):
+        """Test capture_fail_section when has fail section."""
+        test_list = ['AAAAAA', 'FAILED: Error1', '^\n', 'Error2\n',
+                     '[  6% 191/2997] BBBBBB\n', 'CCCCC',
+                     '[  20% 322/2997] DDDDDD\n', 'EEEEE']
+        want_list = ['FAILED: Error1', '^\n', 'Error2\n']
+        self.assertEqual(want_list,
+                         atest_utils._capture_fail_section(test_list))
+
+    def test_capture_fail_section_no_fail_section(self):
+        """Test capture_fail_section when no fail section."""
+        test_list = ['[ 6% 191/2997] XXXXX', 'YYYYY: ZZZZZ']
+        want_list = []
+        self.assertEqual(want_list,
+                         atest_utils._capture_fail_section(test_list))
+
+    def test_is_test_mapping(self):
+        """Test method is_test_mapping."""
+        tm_option_attributes = [
+            'test_mapping',
+            'include_subdirs'
+        ]
+        for attr_to_test in tm_option_attributes:
+            args = mock.Mock()
+            for attr in tm_option_attributes:
+                setattr(args, attr, attr == attr_to_test)
+            args.tests = []
+            self.assertTrue(
+                atest_utils.is_test_mapping(args),
+                'Failed to validate option %s' % attr_to_test)
+
+        args = mock.Mock()
+        for attr in tm_option_attributes:
+            setattr(args, attr, False)
+        args.tests = [':group_name']
+        self.assertTrue(atest_utils.is_test_mapping(args))
+
+        args = mock.Mock()
+        for attr in tm_option_attributes:
+            setattr(args, attr, False)
+        args.tests = [':test1', 'test2']
+        self.assertFalse(atest_utils.is_test_mapping(args))
+
+        args = mock.Mock()
+        for attr in tm_option_attributes:
+            setattr(args, attr, False)
+        args.tests = ['test2']
+        self.assertFalse(atest_utils.is_test_mapping(args))
+
+    @mock.patch('curses.tigetnum')
+    def test_has_colors(self, mock_curses_tigetnum):
+        """Test method _has_colors."""
+        # stream is file I/O
+        stream = open('/tmp/test_has_colors.txt', 'wb')
+        self.assertFalse(atest_utils._has_colors(stream))
+        stream.close()
+
+        # stream is not a tty(terminal).
+        stream = mock.Mock()
+        stream.isatty.return_value = False
+        self.assertFalse(atest_utils._has_colors(stream))
+
+        # stream is a tty(terminal) and colors < 2.
+        stream = mock.Mock()
+        stream.isatty.return_value = True
+        mock_curses_tigetnum.return_value = 1
+        self.assertFalse(atest_utils._has_colors(stream))
+
+        # stream is a tty(terminal) and colors > 2.
+        stream = mock.Mock()
+        stream.isatty.return_value = True
+        mock_curses_tigetnum.return_value = 256
+        self.assertTrue(atest_utils._has_colors(stream))
+
+
+    @mock.patch('atest_utils._has_colors')
+    def test_colorize(self, mock_has_colors):
+        """Test method colorize."""
+        original_str = "test string"
+        green_no = 2
+
+        # _has_colors() return False.
+        mock_has_colors.return_value = False
+        converted_str = atest_utils.colorize(original_str, green_no,
+                                             highlight=True)
+        self.assertEqual(original_str, converted_str)
+
+        # Green with highlight.
+        mock_has_colors.return_value = True
+        converted_str = atest_utils.colorize(original_str, green_no,
+                                             highlight=True)
+        green_highlight_string = '\x1b[1;42m%s\x1b[0m' % original_str
+        self.assertEqual(green_highlight_string, converted_str)
+
+        # Green, no highlight.
+        mock_has_colors.return_value = True
+        converted_str = atest_utils.colorize(original_str, green_no,
+                                             highlight=False)
+        green_no_highlight_string = '\x1b[1;32m%s\x1b[0m' % original_str
+        self.assertEqual(green_no_highlight_string, converted_str)
+
+
+    @mock.patch('atest_utils._has_colors')
+    def test_colorful_print(self, mock_has_colors):
+        """Test method colorful_print."""
+        testing_str = "color_print_test"
+        green_no = 2
+
+        # _has_colors() return False.
+        mock_has_colors.return_value = False
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        atest_utils.colorful_print(testing_str, green_no, highlight=True,
+                                   auto_wrap=False)
+        sys.stdout = sys.__stdout__
+        uncolored_string = testing_str
+        self.assertEqual(capture_output.getvalue(), uncolored_string)
+
+        # Green with highlight, but no wrap.
+        mock_has_colors.return_value = True
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        atest_utils.colorful_print(testing_str, green_no, highlight=True,
+                                   auto_wrap=False)
+        sys.stdout = sys.__stdout__
+        green_highlight_no_wrap_string = '\x1b[1;42m%s\x1b[0m' % testing_str
+        self.assertEqual(capture_output.getvalue(),
+                         green_highlight_no_wrap_string)
+
+        # Green, no highlight, no wrap.
+        mock_has_colors.return_value = True
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        atest_utils.colorful_print(testing_str, green_no, highlight=False,
+                                   auto_wrap=False)
+        sys.stdout = sys.__stdout__
+        green_no_high_no_wrap_string = '\x1b[1;32m%s\x1b[0m' % testing_str
+        self.assertEqual(capture_output.getvalue(),
+                         green_no_high_no_wrap_string)
+
+        # Green with highlight and wrap.
+        mock_has_colors.return_value = True
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        atest_utils.colorful_print(testing_str, green_no, highlight=True,
+                                   auto_wrap=True)
+        sys.stdout = sys.__stdout__
+        green_highlight_wrap_string = '\x1b[1;42m%s\x1b[0m\n' % testing_str
+        self.assertEqual(capture_output.getvalue(), green_highlight_wrap_string)
+
+        # Green with wrap, but no highlight.
+        mock_has_colors.return_value = True
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        atest_utils.colorful_print(testing_str, green_no, highlight=False,
+                                   auto_wrap=True)
+        sys.stdout = sys.__stdout__
+        green_wrap_no_highlight_string = '\x1b[1;32m%s\x1b[0m\n' % testing_str
+        self.assertEqual(capture_output.getvalue(),
+                         green_wrap_no_highlight_string)
+
+    @mock.patch('socket.gethostname')
+    @mock.patch('subprocess.check_output')
+    def test_is_external_run(self, mock_output, mock_hostname):
+        """Test method is_external_run."""
+        mock_output.return_value = ''
+        mock_hostname.return_value = ''
+        self.assertTrue(atest_utils.is_external_run())
+
+        mock_output.return_value = 'test@other.com'
+        mock_hostname.return_value = 'abc.com'
+        self.assertTrue(atest_utils.is_external_run())
+
+        mock_output.return_value = 'test@other.com'
+        mock_hostname.return_value = 'abc.google.com'
+        self.assertFalse(atest_utils.is_external_run())
+
+        mock_output.return_value = 'test@other.com'
+        mock_hostname.return_value = 'abc.google.def.com'
+        self.assertTrue(atest_utils.is_external_run())
+
+        mock_output.return_value = 'test@google.com'
+        self.assertFalse(atest_utils.is_external_run())
+
+        mock_output.return_value = 'test@other.com'
+        mock_hostname.return_value = 'c.googlers.com'
+        self.assertFalse(atest_utils.is_external_run())
+
+        mock_output.return_value = 'test@other.com'
+        mock_hostname.return_value = 'a.googlers.com'
+        self.assertTrue(atest_utils.is_external_run())
+
+        mock_output.side_effect = OSError()
+        self.assertTrue(atest_utils.is_external_run())
+
+        mock_output.side_effect = subprocess.CalledProcessError(1, 'cmd')
+        self.assertTrue(atest_utils.is_external_run())
+
+    @mock.patch('metrics.metrics_base.get_user_type')
+    def test_print_data_collection_notice(self, mock_get_user_type):
+        """Test method print_data_collection_notice."""
+
+        # get_user_type returns 1 (external).
+        mock_get_user_type.return_value = 1
+        notice_str = ('\n==================\nNotice:\n'
+                      '  We collect anonymous usage statistics'
+                      ' in accordance with our'
+                      ' Content Licenses (https://source.android.com/setup/start/licenses),'
+                      ' Contributor License Agreement (https://opensource.google.com/docs/cla/),'
+                      ' Privacy Policy (https://policies.google.com/privacy) and'
+                      ' Terms of Service (https://policies.google.com/terms).'
+                      '\n==================\n\n')
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        atest_utils.print_data_collection_notice()
+        sys.stdout = sys.__stdout__
+        uncolored_string = notice_str
+        self.assertEqual(capture_output.getvalue(), uncolored_string)
+
+        # get_user_type returns 0 (internal).
+        mock_get_user_type.return_value = 0
+        notice_str = ('\n==================\nNotice:\n'
+                      '  We collect usage statistics'
+                      ' in accordance with our'
+                      ' Content Licenses (https://source.android.com/setup/start/licenses),'
+                      ' Contributor License Agreement (https://cla.developers.google.com/),'
+                      ' Privacy Policy (https://policies.google.com/privacy) and'
+                      ' Terms of Service (https://policies.google.com/terms).'
+                      '\n==================\n\n')
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        atest_utils.print_data_collection_notice()
+        sys.stdout = sys.__stdout__
+        uncolored_string = notice_str
+        self.assertEqual(capture_output.getvalue(), uncolored_string)
+
+    @mock.patch('__builtin__.raw_input')
+    @mock.patch('json.load')
+    def test_update_test_runner_cmd(self, mock_json_load_data, mock_raw_input):
+        """Test method handle_test_runner_cmd without enable do_verification."""
+        former_cmd_str = 'Former cmds ='
+        write_result_str = 'Save result mapping to test_result'
+        tmp_file = tempfile.NamedTemporaryFile()
+        input_cmd = 'atest_args'
+        runner_cmds = ['cmd1', 'cmd2']
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        # Previous data is empty, so strtobool should not be entered.
+        # If it were, an exception would be raised and fail the test.
+        mock_json_load_data.return_value = {}
+        atest_utils.handle_test_runner_cmd(input_cmd,
+                                           runner_cmds,
+                                           do_verification=False,
+                                           result_path=tmp_file.name)
+        sys.stdout = sys.__stdout__
+        self.assertEqual(capture_output.getvalue().find(former_cmd_str), -1)
+        # Previous data is the same as the new input, so strtobool should not
+        # be entered. If it were, an exception would be raised and fail the
+        # test.
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        mock_json_load_data.return_value = {input_cmd:runner_cmds}
+        atest_utils.handle_test_runner_cmd(input_cmd,
+                                           runner_cmds,
+                                           do_verification=False,
+                                           result_path=tmp_file.name)
+        sys.stdout = sys.__stdout__
+        self.assertEqual(capture_output.getvalue().find(former_cmd_str), -1)
+        self.assertEqual(capture_output.getvalue().find(write_result_str), -1)
+        # Previous data has different cmds, so strtobool is entered. The user
+        # declines the update, so write_result_str should not be found.
+        prev_cmds = ['cmd1']
+        mock_raw_input.return_value = 'n'
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        mock_json_load_data.return_value = {input_cmd:prev_cmds}
+        atest_utils.handle_test_runner_cmd(input_cmd,
+                                           runner_cmds,
+                                           do_verification=False,
+                                           result_path=tmp_file.name)
+        sys.stdout = sys.__stdout__
+        self.assertEqual(capture_output.getvalue().find(write_result_str), -1)
+
+    @mock.patch('json.load')
+    def test_verify_test_runner_cmd(self, mock_json_load_data):
+        """Test method handle_test_runner_cmd without enable update_result."""
+        tmp_file = tempfile.NamedTemporaryFile()
+        input_cmd = 'atest_args'
+        runner_cmds = ['cmd1', 'cmd2']
+        # Previous data is the same as the new input. Should not raise exception.
+        mock_json_load_data.return_value = {input_cmd:runner_cmds}
+        atest_utils.handle_test_runner_cmd(input_cmd,
+                                           runner_cmds,
+                                           do_verification=True,
+                                           result_path=tmp_file.name)
+        # Previous data has different cmds, so verification should raise an
+        # exception.
+        prev_cmds = ['cmd1']
+        mock_json_load_data.return_value = {input_cmd:prev_cmds}
+        self.assertRaises(atest_error.DryRunVerificationError,
+                          atest_utils.handle_test_runner_cmd,
+                          input_cmd,
+                          runner_cmds,
+                          do_verification=True,
+                          result_path=tmp_file.name)
+
+    def test_get_test_info_cache_path(self):
+        """Test method get_test_info_cache_path."""
+        input_file_name = 'mytest_name'
+        cache_root = '/a/b/c'
+        expect_hashed_name = ('%s.cache' % hashlib.md5(str(input_file_name).
+                                                       encode()).hexdigest())
+        self.assertEqual(os.path.join(cache_root, expect_hashed_name),
+                         atest_utils.get_test_info_cache_path(input_file_name,
+                                                              cache_root))
+
+    def test_get_and_load_cache(self):
+        """Test method update_test_info_cache and load_test_info_cache."""
+        test_reference = 'myTestRefA'
+        test_cache_dir = tempfile.mkdtemp()
+        atest_utils.update_test_info_cache(test_reference, [TEST_INFO_A],
+                                           test_cache_dir)
+        unittest_utils.assert_equal_testinfo_sets(
+            self, set([TEST_INFO_A]),
+            atest_utils.load_test_info_cache(test_reference, test_cache_dir))
+
+    @mock.patch('os.getcwd')
+    def test_get_build_cmd(self, mock_cwd):
+        """Test method get_build_cmd."""
+        build_top = '/home/a/b/c'
+        rel_path = 'd/e'
+        mock_cwd.return_value = os.path.join(build_top, rel_path)
+        os_environ_mock = {constants.ANDROID_BUILD_TOP: build_top}
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            expected_cmd = ['../../build/soong/soong_ui.bash', '--make-mode']
+            self.assertEqual(expected_cmd, atest_utils.get_build_cmd())
+
+    @mock.patch('subprocess.check_output')
+    def test_get_modified_files(self, mock_co):
+        """Test method get_modified_files"""
+        mock_co.side_effect = ['/a/b/',
+                               '\n',
+                               'test_fp1.java\nc/test_fp2.java']
+        self.assertEqual({'/a/b/test_fp1.java', '/a/b/c/test_fp2.java'},
+                         atest_utils.get_modified_files(''))
+        mock_co.side_effect = ['/a/b/',
+                               'test_fp4',
+                               '/test_fp3.java']
+        self.assertEqual({'/a/b/test_fp4', '/a/b/test_fp3.java'},
+                         atest_utils.get_modified_files(''))
+
+
+if __name__ == "__main__":
+    unittest.main()
diff --git a/atest-py2/bug_detector.py b/atest-py2/bug_detector.py
new file mode 100644
index 0000000..25438d2
--- /dev/null
+++ b/atest-py2/bug_detector.py
@@ -0,0 +1,140 @@
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Classes for bug events history
+"""
+
+import datetime
+import json
+import logging
+import os
+
+import constants
+
+from metrics import metrics_utils
+
+_META_FILE = os.path.join(os.path.expanduser('~'),
+                          '.config', 'asuite', 'atest_history.json')
+_DETECT_OPTION_FILTER = ['-v', '--verbose']
+_DETECTED_SUCCESS = 1
+_DETECTED_FAIL = 0
+# Constants for history keys.
+_LATEST_EXIT_CODE = 'latest_exit_code'
+_UPDATED_AT = 'updated_at'
+
+class BugDetector(object):
+    """Class for handling if a bug is detected by comparing test history."""
+
+    def __init__(self, argv, exit_code, history_file=None):
+        """BugDetector constructor
+
+        Args:
+            argv: A list of arguments.
+            exit_code: An integer of exit code.
+            history_file: A string of a given history file path.
+        """
+        self.detect_key = self.get_detect_key(argv)
+        self.exit_code = exit_code
+        self.file = history_file if history_file else _META_FILE
+        self.history = self.get_history()
+        self.caught_result = self.detect_bug_caught()
+        self.update_history()
+
+    def get_detect_key(self, argv):
+        """Get the key for history searching.
+
+        1. Remove '-v'/'--verbose' from argv.
+        2. Sort the remaining arguments.
+
+        Args:
+            argv: A list of arguments.
+
+        Returns:
+            A string of the ordered command line.
+        """
+        argv_without_option = [x for x in argv if x not in _DETECT_OPTION_FILTER]
+        argv_without_option.sort()
+        return ' '.join(argv_without_option)
+
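+    # Illustrative example of the key normalization above (hypothetical
+    # argv): get_detect_key(['-v', 'Settings', 'CtsOsTestCases']) returns
+    # 'CtsOsTestCases Settings'.
+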
+    def get_history(self):
+        """Get a history object from a history file.
+
+        e.g.
+        {
+            "SystemUITests:.ScrimControllerTest":{
+                "latest_exit_code": 5, "updated_at": "2019-01-26T15:33:08.305026"},
+            "--host hello_world_test ":{
+                "latest_exit_code": 0, "updated_at": "2019-02-26T15:33:08.305026"},
+        }
+
+        Returns:
+            A dict loaded from the history file, or an empty dict on failure.
+        """
+        history = {}
+        if os.path.exists(self.file):
+            with open(self.file) as json_file:
+                try:
+                    history = json.load(json_file)
+                except ValueError as e:
+                    logging.debug(e)
+                    metrics_utils.handle_exc_and_send_exit_event(
+                        constants.ACCESS_HISTORY_FAILURE)
+        return history
+
+    def detect_bug_caught(self):
+        """Detection of catching bugs.
+
+        When latest_exit_code and current exit_code are different, treat it
+        as a bug caught.
+
+        Returns:
+            A integer of detecting result, e.g.
+            1: success
+            0: fail
+        """
+        if not self.history:
+            return _DETECTED_FAIL
+        latest = self.history.get(self.detect_key, {})
+        if latest.get(_LATEST_EXIT_CODE, self.exit_code) == self.exit_code:
+            return _DETECTED_FAIL
+        return _DETECTED_SUCCESS
+
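+    # Illustrative example of the rule above: if the stored latest_exit_code
+    # for this detect_key is 5 and the current run exits with 0,
+    # detect_bug_caught() returns _DETECTED_SUCCESS (1); a repeated exit code
+    # of 5 returns _DETECTED_FAIL (0).
+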
+    def update_history(self):
+        """Update the history file.
+
+        1. Update the latest_bug result in the history cache.
+        2. Trim the history cache to size, dropping the oldest entries.
+        3. Write the cache to the file.
+        """
+        latest_bug = {
+            self.detect_key: {
+                _LATEST_EXIT_CODE: self.exit_code,
+                _UPDATED_AT: datetime.datetime.now().isoformat()
+            }
+        }
+        self.history.update(latest_bug)
+        num_history = len(self.history)
+        if num_history > constants.UPPER_LIMIT:
+            sorted_history = sorted(self.history.items(),
+                                    key=lambda kv: kv[1][_UPDATED_AT])
+            self.history = dict(
+                sorted_history[(num_history - constants.TRIM_TO_SIZE):])
+        with open(self.file, 'w') as outfile:
+            try:
+                json.dump(self.history, outfile, indent=0)
+            except ValueError as e:
+                logging.debug(e)
+                metrics_utils.handle_exc_and_send_exit_event(
+                    constants.ACCESS_HISTORY_FAILURE)
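+
+# Illustrative trimming behavior of update_history() above: with
+# constants.UPPER_LIMIT = 10 and constants.TRIM_TO_SIZE = 3, writing an 11th
+# entry keeps only the 3 entries with the newest 'updated_at' values.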
diff --git a/atest-py2/bug_detector_unittest.py b/atest-py2/bug_detector_unittest.py
new file mode 100644
index 0000000..a9356fc
--- /dev/null
+++ b/atest-py2/bug_detector_unittest.py
@@ -0,0 +1,137 @@
+#!/usr/bin/env python
+#
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for bug_detector."""
+
+import datetime
+import json
+import os
+import unittest
+import mock
+
+import bug_detector
+import constants
+import unittest_constants as uc
+
+TEST_DICT = {
+    'test1': {
+        'latest_exit_code': 5,
+        'updated_at': ''
+    },
+    'test2': {
+        'latest_exit_code': 0,
+        'updated_at': ''
+    }
+}
+
+class BugDetectorUnittest(unittest.TestCase):
+    """Unit test for bug_detector.py"""
+
+    def setUp(self):
+        """Set up stuff for testing."""
+        self.history_file = os.path.join(uc.TEST_DATA_DIR, 'bug_detector.json')
+        self.detector = bug_detector.BugDetector(['test1'], 5, self.history_file)
+        self._reset_history_file()
+        self.history_file2 = os.path.join(uc.TEST_DATA_DIR, 'bug_detector2.json')
+
+    def tearDown(self):
+        """Run after execution of every test"""
+        if os.path.isfile(self.history_file):
+            os.remove(self.history_file)
+        if os.path.isfile(self.history_file2):
+            os.remove(self.history_file2)
+
+    def _reset_history_file(self):
+        """Reset test history file."""
+        with open(self.history_file, 'w') as outfile:
+            json.dump(TEST_DICT, outfile)
+
+    def _make_test_file(self, file_size):
+        temp_history = {}
+        for i in range(file_size):
+            latest_bug = {
+                i: {
+                    'latest_exit_code': i,
+                    'updated_at': datetime.datetime.now().isoformat()
+                }
+            }
+            temp_history.update(latest_bug)
+        with open(self.history_file2, 'w') as outfile:
+            json.dump(temp_history, outfile, indent=0)
+
+    @mock.patch.object(bug_detector.BugDetector, 'update_history')
+    def test_get_detect_key(self, _):
+        """Test get_detect_key."""
+        # argv without -v
+        argv = ['test2', 'test1']
+        want_key = 'test1 test2'
+        dtr = bug_detector.BugDetector(argv, 0)
+        self.assertEqual(dtr.get_detect_key(argv), want_key)
+
+        # argv with -v
+        argv = ['-v', 'test2', 'test1']
+        want_key = 'test1 test2'
+        dtr = bug_detector.BugDetector(argv, 0)
+        self.assertEqual(dtr.get_detect_key(argv), want_key)
+
+        # argv with --verbose
+        argv = ['--verbose', 'test2', 'test3', 'test1']
+        want_key = 'test1 test2 test3'
+        dtr = bug_detector.BugDetector(argv, 0)
+        self.assertEqual(dtr.get_detect_key(argv), want_key)
+
+    def test_get_history(self):
+        """Test get_history."""
+        self.assertEqual(self.detector.get_history(), TEST_DICT)
+
+    @mock.patch.object(bug_detector.BugDetector, 'update_history')
+    def test_detect_bug_caught(self, _):
+        """Test detect_bug_caught."""
+        self._reset_history_file()
+        dtr = bug_detector.BugDetector(['test1'], 0, self.history_file)
+        success = 1
+        self.assertEqual(dtr.detect_bug_caught(), success)
+
+    def test_update_history(self):
+        """Test update_history."""
+        constants.UPPER_LIMIT = 10
+        constants.TRIM_TO_SIZE = 3
+
+        mock_file_size = 0
+        self._make_test_file(mock_file_size)
+        dtr = bug_detector.BugDetector(['test1'], 0, self.history_file2)
+        self.assertTrue(dtr.history.has_key('test1'))
+
+        # History is larger than constants.UPPER_LIMIT. Trim to size.
+        mock_file_size = 10
+        self._make_test_file(mock_file_size)
+        dtr = bug_detector.BugDetector(['test1'], 0, self.history_file2)
+        self.assertEqual(len(dtr.history), constants.TRIM_TO_SIZE)
+        keys = ['test1', '9', '8']
+        for key in keys:
+            self.assertTrue(dtr.history.has_key(key))
+
+        # History is not larger than constants.UPPER_LIMIT.
+        mock_file_size = 5
+        self._make_test_file(mock_file_size)
+        dtr = bug_detector.BugDetector(['test1'], 0, self.history_file2)
+        self.assertEqual(len(dtr.history), mock_file_size+1)
+        keys = ['test1', '4', '3', '2', '1', '0']
+        for key in keys:
+            self.assertTrue(dtr.history.has_key(key))
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/cli_translator.py b/atest-py2/cli_translator.py
new file mode 100644
index 0000000..f11b34b
--- /dev/null
+++ b/atest-py2/cli_translator.py
@@ -0,0 +1,516 @@
+# Copyright 2017, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#pylint: disable=too-many-lines
+"""
+Command Line Translator for atest.
+"""
+
+from __future__ import print_function
+
+import fnmatch
+import json
+import logging
+import os
+import re
+import sys
+import time
+
+import atest_error
+import atest_utils
+import constants
+import test_finder_handler
+import test_mapping
+
+from metrics import metrics
+from metrics import metrics_utils
+from test_finders import module_finder
+
+FUZZY_FINDER = 'FUZZY'
+CACHE_FINDER = 'CACHE'
+
+# Pattern used to identify comments starting with '//' or '#' in TEST_MAPPING.
+_COMMENTS_RE = re.compile(r'(?m)[\s\t]*(#|//).*|(\".*?\")')
+_COMMENTS = frozenset(['//', '#'])
+
+#pylint: disable=no-self-use
+class CLITranslator(object):
+    """
+    CLITranslator class contains public method translate() and some private
+    helper methods. The atest tool can call the translate() method with a list
+    of strings, each string referencing a test to run. Translate() will
+    "translate" this list of test strings into a list of build targets and a
+    list of TradeFederation run commands.
+
+    Translation steps for a test string reference:
+        1. Narrow down the type of reference the test string could be, i.e.
+           whether it could be referencing a Module, Class, Package, etc.
+        2. Try to find the test files assuming the test string is one of these
+           types of reference.
+        3. If test files found, generate Build Targets and the Run Command.
+    """
+
+    def __init__(self, module_info=None, print_cache_msg=True):
+        """CLITranslator constructor
+
+        Args:
+            module_info: ModuleInfo class that has cached module-info.json.
+            print_cache_msg: Boolean; True to print the message reminding
+                             users how to clear the test info cache, False
+                             to stay silent.
+        """
+        self.mod_info = module_info
+        self.enable_file_patterns = False
+        self.msg = ''
+        if print_cache_msg:
+            self.msg = ('(Test info has been cached to speed up the next '
+                        'run. If the test info needs to be updated, please '
+                        'add -c to clear the old cache.)')
+
+    # pylint: disable=too-many-locals
+    def _find_test_infos(self, test, tm_test_detail):
+        """Return set of TestInfos based on a given test.
+
+        Args:
+            test: A string representing test references.
+            tm_test_detail: The TestDetail of test configured in TEST_MAPPING
+                files.
+
+        Returns:
+            Set of TestInfos based on the given test.
+        """
+        test_infos = set()
+        test_find_starts = time.time()
+        test_found = False
+        test_finders = []
+        test_info_str = ''
+        find_test_err_msg = None
+        for finder in test_finder_handler.get_find_methods_for_test(
+                self.mod_info, test):
+            # For tests in TEST_MAPPING, find method is only related to
+            # test name, so the details can be set after test_info object
+            # is created.
+            # Reset per finder so a discovery exception does not reuse the
+            # previous finder's results.
+            found_test_infos = None
+            try:
+                found_test_infos = finder.find_method(
+                    finder.test_finder_instance, test)
+            except atest_error.TestDiscoveryException as e:
+                find_test_err_msg = e
+            if found_test_infos:
+                finder_info = finder.finder_info
+                for test_info in found_test_infos:
+                    if tm_test_detail:
+                        test_info.data[constants.TI_MODULE_ARG] = (
+                            tm_test_detail.options)
+                        test_info.from_test_mapping = True
+                        test_info.host = tm_test_detail.host
+                    if finder_info != CACHE_FINDER:
+                        test_info.test_finder = finder_info
+                    test_infos.add(test_info)
+                test_found = True
+                print("Found '%s' as %s" % (
+                    atest_utils.colorize(test, constants.GREEN),
+                    finder_info))
+                if finder_info == CACHE_FINDER and test_infos:
+                    test_finders.append(list(test_infos)[0].test_finder)
+                test_finders.append(finder_info)
+                test_info_str = ','.join([str(x) for x in found_test_infos])
+                break
+        if not test_found:
+            f_results = self._fuzzy_search_and_msg(test, find_test_err_msg)
+            if f_results:
+                test_infos.update(f_results)
+                test_found = True
+                test_finders.append(FUZZY_FINDER)
+        metrics.FindTestFinishEvent(
+            duration=metrics_utils.convert_duration(
+                time.time() - test_find_starts),
+            success=test_found,
+            test_reference=test,
+            test_finders=test_finders,
+            test_info=test_info_str)
+        # Cache test_infos by default, except when running with TEST_MAPPING:
+        # its customized flags are likely to mess up other, non-test_mapping
+        # tests.
+        if test_infos and not tm_test_detail:
+            atest_utils.update_test_info_cache(test, test_infos)
+            print(self.msg)
+        return test_infos
+
+    def _fuzzy_search_and_msg(self, test, find_test_err_msg):
+        """ Fuzzy search and print message.
+
+        Args:
+            test: A string representing test references.
+            find_test_err_msg: A string of the find-test error message.
+
+        Returns:
+            A list of TestInfos if found, otherwise None.
+        """
+        print('No test found for: %s' %
+              atest_utils.colorize(test, constants.RED))
+        # Currently we focus on guessing module names. Append more names to
+        # the results once other finders support fuzzy searching.
+        mod_finder = module_finder.ModuleFinder(self.mod_info)
+        results = mod_finder.get_fuzzy_searching_results(test)
+        if len(results) == 1 and self._confirm_running(results):
+            found_test_infos = mod_finder.find_test_by_module_name(results[0])
+            # found_test_infos is a list with at most 1 element.
+            if found_test_infos:
+                return found_test_infos
+        elif len(results) > 1:
+            self._print_fuzzy_searching_results(results)
+        else:
+            print('No matching result for {0}.'.format(test))
+        if find_test_err_msg:
+            print('%s\n' % (atest_utils.colorize(
+                find_test_err_msg, constants.MAGENTA)))
+        else:
+            print('(This can happen after a repo sync or if the test'
+                  ' is new. Running with "%s" may resolve the issue.)'
+                  '\n' % (atest_utils.colorize(
+                      constants.REBUILD_MODULE_INFO_FLAG,
+                      constants.RED)))
+        return None
+
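+    # Flow sketch for the fuzzy fallback above: exactly one candidate
+    # triggers the "Did you mean ...? [Y/n]" prompt; several candidates are
+    # listed (at most 10); none prints "No matching result". Unless a finder
+    # reported an error message, the REBUILD_MODULE_INFO_FLAG hint is printed
+    # last.
+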
+    def _get_test_infos(self, tests, test_mapping_test_details=None):
+        """Return set of TestInfos based on passed in tests.
+
+        Args:
+            tests: List of strings representing test references.
+            test_mapping_test_details: List of TestDetail for tests configured
+                in TEST_MAPPING files.
+
+        Returns:
+            Set of TestInfos based on the passed in tests.
+        """
+        test_infos = set()
+        if not test_mapping_test_details:
+            test_mapping_test_details = [None] * len(tests)
+        for test, tm_test_detail in zip(tests, test_mapping_test_details):
+            found_test_infos = self._find_test_infos(test, tm_test_detail)
+            test_infos.update(found_test_infos)
+        return test_infos
+
+    def _confirm_running(self, results):
+        """Listen to an answer from raw input.
+
+        Args:
+            results: A list of results.
+
+        Returns:
+            True if the answer is affirmative.
+        """
+        decision = raw_input('Did you mean {0}? [Y/n] '.format(
+            atest_utils.colorize(results[0], constants.GREEN)))
+        return decision in constants.AFFIRMATIVES
+
+    def _print_fuzzy_searching_results(self, results):
+        """Print modules when fuzzy searching gives multiple results.
+
+        If the result list is lengthy, print only the first 10 items; that is
+        usually accurate enough.
+
+        Args:
+            results: A list of guessed testable module names.
+        """
+        atest_utils.colorful_print('Did you mean the following modules?',
+                                   constants.WHITE)
+        for mod in results[:10]:
+            atest_utils.colorful_print(mod, constants.GREEN)
+
+    def filter_comments(self, test_mapping_file):
+        """Remove comments in TEST_MAPPING file to valid format. Only '//' and
+        '#' are regarded as comments.
+
+        Args:
+            test_mapping_file: Path to a TEST_MAPPING file.
+
+        Returns:
+            Valid json string without comments.
+        """
+        def _replace(match):
+            """Replace comments if found matching the defined regular expression.
+
+            Args:
+                match: The matched regex pattern
+
+            Returns:
+                "" if it matches _COMMENTS, otherwise original string.
+            """
+            line = match.group(0).strip()
+            return "" if any(map(line.startswith, _COMMENTS)) else line
+        with open(test_mapping_file) as json_file:
+            return re.sub(_COMMENTS_RE, _replace, json_file.read())
+
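+    # Illustrative behavior of filter_comments above (hypothetical content):
+    #   '// header comment'            -> removed
+    #   '{"name": "FooTests"} # note'  -> '{"name": "FooTests"}'
+    #   '{"name": "a#b"}'              -> unchanged; '#' inside a quoted
+    #                                     string is preserved.
+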
+    def _read_tests_in_test_mapping(self, test_mapping_file):
+        """Read tests from a TEST_MAPPING file.
+
+        Args:
+            test_mapping_file: Path to a TEST_MAPPING file.
+
+        Returns:
+            A tuple of (all_tests, imports), where
+            all_tests is a dictionary of all tests in the TEST_MAPPING file,
+                grouped by test group.
+            imports is a list of test_mapping.Import to include other test
+                mapping files.
+        """
+        all_tests = {}
+        imports = []
+        test_mapping_dict = json.loads(self.filter_comments(test_mapping_file))
+        for test_group_name, test_list in test_mapping_dict.items():
+            if test_group_name == constants.TEST_MAPPING_IMPORTS:
+                for import_detail in test_list:
+                    imports.append(
+                        test_mapping.Import(test_mapping_file, import_detail))
+            else:
+                grouped_tests = all_tests.setdefault(test_group_name, set())
+                tests = []
+                for test in test_list:
+                    if (self.enable_file_patterns and
+                            not test_mapping.is_match_file_patterns(
+                                test_mapping_file, test)):
+                        continue
+                    test_mod_info = self.mod_info.name_to_module_info.get(
+                        test['name'])
+                    if not test_mod_info:
+                        print('WARNING: %s is not a valid build target and '
+                              'may not be discoverable by TreeHugger. If you '
+                              'want to specify a class or test-package, '
+                              'please set \'name\' to the test module and use '
+                              '\'options\' to specify the right tests via '
+                              '\'include-filter\'.\nNote: this can also occur '
+                              'if the test module is not built for your '
+                              'current lunch target.\n' %
+                              atest_utils.colorize(test['name'], constants.RED))
+                    elif not any(x in test_mod_info['compatibility_suites'] for
+                                 x in constants.TEST_MAPPING_SUITES):
+                        print('WARNING: Please add %s to either suite: %s for '
+                              'this TEST_MAPPING file to work with TreeHugger.' %
+                              (atest_utils.colorize(test['name'],
+                                                    constants.RED),
+                               atest_utils.colorize(constants.TEST_MAPPING_SUITES,
+                                                    constants.GREEN)))
+                    tests.append(test_mapping.TestDetail(test))
+                grouped_tests.update(tests)
+        return all_tests, imports
+
+    def _find_files(self, path, file_name=constants.TEST_MAPPING):
+        """Find all files with given name under the given path.
+
+        Args:
+            path: A string of path in source.
+            file_name: A string of the file name to match. Default is
+                TEST_MAPPING.
+
+        Returns:
+            A list of paths of the files with the matching name under the given
+            path.
+        """
+        test_mapping_files = []
+        for root, _, filenames in os.walk(path):
+            for filename in fnmatch.filter(filenames, file_name):
+                test_mapping_files.append(os.path.join(root, filename))
+        return test_mapping_files
+
+    def _get_tests_from_test_mapping_files(
+            self, test_group, test_mapping_files):
+        """Get tests in the given test mapping files with the match group.
+
+        Args:
+            test_group: Group of tests to run. Default is set to `presubmit`.
+            test_mapping_files: A list of path of TEST_MAPPING files.
+
+        Returns:
+            A tuple of (tests, all_tests, imports), where,
+            tests is a set of tests (test_mapping.TestDetail) defined in
+            TEST_MAPPING file of the given path, and its parent directories,
+            with matching test_group.
+            all_tests is a dictionary of all tests in TEST_MAPPING files,
+            grouped by test group.
+            imports is a list of test_mapping.Import objects that contains the
+            details of where to import a TEST_MAPPING file.
+        """
+        all_imports = []
+        # Read and merge the tests in all TEST_MAPPING files.
+        merged_all_tests = {}
+        for test_mapping_file in test_mapping_files:
+            all_tests, imports = self._read_tests_in_test_mapping(
+                test_mapping_file)
+            all_imports.extend(imports)
+            for test_group_name, test_list in all_tests.items():
+                grouped_tests = merged_all_tests.setdefault(
+                    test_group_name, set())
+                grouped_tests.update(test_list)
+
+        tests = set(merged_all_tests.get(test_group, []))
+        if test_group == constants.TEST_GROUP_ALL:
+            for grouped_tests in merged_all_tests.values():
+                tests.update(grouped_tests)
+        return tests, merged_all_tests, all_imports
+
+    # pylint: disable=too-many-arguments
+    # pylint: disable=too-many-locals
+    def _find_tests_by_test_mapping(
+            self, path='', test_group=constants.TEST_GROUP_PRESUBMIT,
+            file_name=constants.TEST_MAPPING, include_subdirs=False,
+            checked_files=None):
+        """Find tests defined in TEST_MAPPING in the given path.
+
+        Args:
+            path: A string of path in source. Default is set to '', i.e., CWD.
+            test_group: Group of tests to run. Default is set to `presubmit`.
+            file_name: Name of TEST_MAPPING file. Default is set to
+                `TEST_MAPPING`. The argument is added for testing purpose.
+            include_subdirs: True to include tests in TEST_MAPPING files in
+                sub-directories.
+            checked_files: Paths of TEST_MAPPING files that have been checked.
+
+        Returns:
+            A tuple of (tests, all_tests), where,
+            tests is a set of tests (test_mapping.TestDetail) defined in
+            TEST_MAPPING file of the given path, and its parent directories,
+            with matching test_group.
+            all_tests is a dictionary of all tests in TEST_MAPPING files,
+            grouped by test group.
+        """
+        path = os.path.realpath(path)
+        test_mapping_files = set()
+        all_tests = {}
+        test_mapping_file = os.path.join(path, file_name)
+        if os.path.exists(test_mapping_file):
+            test_mapping_files.add(test_mapping_file)
+        # Include all TEST_MAPPING files in sub-directories if `include_subdirs`
+        # is set to True.
+        if include_subdirs:
+            test_mapping_files.update(self._find_files(path, file_name))
+        # Include all possible TEST_MAPPING files in parent directories.
+        root_dir = os.environ.get(constants.ANDROID_BUILD_TOP, os.sep)
+        while path != root_dir and path != os.sep:
+            path = os.path.dirname(path)
+            test_mapping_file = os.path.join(path, file_name)
+            if os.path.exists(test_mapping_file):
+                test_mapping_files.add(test_mapping_file)
+
+        if checked_files is None:
+            checked_files = set()
+        test_mapping_files.difference_update(checked_files)
+        checked_files.update(test_mapping_files)
+        if not test_mapping_files:
+            return test_mapping_files, all_tests
+
+        tests, all_tests, imports = self._get_tests_from_test_mapping_files(
+            test_group, test_mapping_files)
+
+        # Load TEST_MAPPING files from imports recursively.
+        if imports:
+            for import_detail in imports:
+                path = import_detail.get_path()
+                # (b/110166535 #19) Import path might not exist if a project is
+                # located in different directory in different branches.
+                if path is None:
+                    logging.warning(
+                        'Failed to import TEST_MAPPING at %s', import_detail)
+                    continue
+                # Search for tests based on the imported search path.
+                import_tests, import_all_tests = (
+                    self._find_tests_by_test_mapping(
+                        path, test_group, file_name, include_subdirs,
+                        checked_files))
+                # Merge the collections
+                tests.update(import_tests)
+                for group, grouped_tests in import_all_tests.items():
+                    all_tests.setdefault(group, set()).update(grouped_tests)
+
+        return tests, all_tests
+
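+    # Search-order sketch for the method above: for path a/b/c (relative to
+    # ANDROID_BUILD_TOP), TEST_MAPPING files are collected from a/b/c plus
+    # every parent up to the build top, and from every sub-directory of a/b/c
+    # when include_subdirs is True; files already in checked_files are
+    # skipped to avoid re-processing imports.
+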
+    def _gather_build_targets(self, test_infos):
+        targets = set()
+        for test_info in test_infos:
+            targets |= test_info.build_targets
+        return targets
+
+    def _get_test_mapping_tests(self, args):
+        """Find the tests in TEST_MAPPING files.
+
+        Args:
+            args: arg parsed object.
+
+        Returns:
+            A tuple of (test_names, test_details_list), where
+            test_names: a list of test name
+            test_details_list: a list of test_mapping.TestDetail objects for
+                the tests in TEST_MAPPING files with matching test group.
+        """
+        # Pull out tests from test mapping
+        src_path = ''
+        test_group = constants.TEST_GROUP_PRESUBMIT
+        if args.tests:
+            if ':' in args.tests[0]:
+                src_path, test_group = args.tests[0].split(':')
+            else:
+                src_path = args.tests[0]
+
+        test_details, all_test_details = self._find_tests_by_test_mapping(
+            path=src_path, test_group=test_group,
+            include_subdirs=args.include_subdirs, checked_files=set())
+        test_details_list = list(test_details)
+        if not test_details_list:
+            logging.warning(
+                'No tests of group `%s` found in TEST_MAPPING at %s or its '
+                'parent directories.\nYou might be missing atest arguments;'
+                ' try `atest --help` for more information.',
+                test_group, os.path.realpath(''))
+            if all_test_details:
+                tests = ''
+                for test_group, test_list in all_test_details.items():
+                    tests += '%s:\n' % test_group
+                    for test_detail in sorted(test_list):
+                        tests += '\t%s\n' % test_detail
+                logging.warning(
+                    'All available tests in TEST_MAPPING files are:\n%s',
+                    tests)
+            metrics_utils.send_exit_event(constants.EXIT_CODE_TEST_NOT_FOUND)
+            sys.exit(constants.EXIT_CODE_TEST_NOT_FOUND)
+
+        logging.debug(
+            'Test details:\n%s',
+            '\n'.join([str(detail) for detail in test_details_list]))
+        test_names = [detail.name for detail in test_details_list]
+        return test_names, test_details_list
+
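+    # Illustrative examples of the path[:group] syntax parsed above:
+    #   'frameworks/base:postsubmit' -> path 'frameworks/base',
+    #                                   group 'postsubmit'
+    #   'frameworks/base'            -> path 'frameworks/base',
+    #                                   group TEST_GROUP_PRESUBMIT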
+
+    def translate(self, args):
+        """Translate atest command line into build targets and run commands.
+
+        Args:
+            args: arg parsed object.
+
+        Returns:
+            A tuple with set of build_target strings and list of TestInfos.
+        """
+        tests = args.tests
+        # Test details from TEST_MAPPING files
+        test_details_list = None
+        if atest_utils.is_test_mapping(args):
+            if args.enable_file_patterns:
+                self.enable_file_patterns = True
+            tests, test_details_list = self._get_test_mapping_tests(args)
+        atest_utils.colorful_print("\nFinding Tests...", constants.CYAN)
+        logging.debug('Finding Tests: %s', tests)
+        start = time.time()
+        test_infos = self._get_test_infos(tests, test_details_list)
+        logging.debug('Found tests in %ss', time.time() - start)
+        for test_info in test_infos:
+            logging.debug('%s\n', test_info)
+        build_targets = self._gather_build_targets(test_infos)
+        return build_targets, test_infos
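+
+# A minimal usage sketch (hypothetical module_info and parsed-args objects):
+#
+#     translator = CLITranslator(module_info=mod_info)
+#     build_targets, test_infos = translator.translate(parsed_args)
+#     # build_targets: a set of build target strings; test_infos: TestInfos.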
diff --git a/atest-py2/cli_translator_unittest.py b/atest-py2/cli_translator_unittest.py
new file mode 100755
index 0000000..0b39be2
--- /dev/null
+++ b/atest-py2/cli_translator_unittest.py
@@ -0,0 +1,375 @@
+#!/usr/bin/env python
+#
+# Copyright 2017, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for cli_translator."""
+
+import json
+import os
+import re
+import sys
+import unittest
+
+import mock
+
+import cli_translator as cli_t
+import constants
+import test_finder_handler
+import test_mapping
+import unittest_constants as uc
+import unittest_utils
+from metrics import metrics
+from test_finders import module_finder
+from test_finders import test_finder_base
+
+# Import StringIO in Python2/3 compatible way.
+if sys.version_info[0] == 2:
+    from StringIO import StringIO
+else:
+    from io import StringIO
+
+# TEST_MAPPING related consts
+TEST_MAPPING_TOP_DIR = os.path.join(uc.TEST_DATA_DIR, 'test_mapping')
+TEST_MAPPING_DIR = os.path.join(TEST_MAPPING_TOP_DIR, 'folder1')
+TEST_1 = test_mapping.TestDetail({'name': 'test1', 'host': True})
+TEST_2 = test_mapping.TestDetail({'name': 'test2'})
+TEST_3 = test_mapping.TestDetail({'name': 'test3'})
+TEST_4 = test_mapping.TestDetail({'name': 'test4'})
+TEST_5 = test_mapping.TestDetail({'name': 'test5'})
+TEST_6 = test_mapping.TestDetail({'name': 'test6'})
+TEST_7 = test_mapping.TestDetail({'name': 'test7'})
+TEST_8 = test_mapping.TestDetail({'name': 'test8'})
+TEST_9 = test_mapping.TestDetail({'name': 'test9'})
+TEST_10 = test_mapping.TestDetail({'name': 'test10'})
+
+SEARCH_DIR_RE = re.compile(r'^find ([^ ]*).*$')
+
+
+#pylint: disable=unused-argument
+def gettestinfos_side_effect(test_names, test_mapping_test_details=None):
+    """Mock return values for _get_test_info."""
+    test_infos = set()
+    for test_name in test_names:
+        if test_name == uc.MODULE_NAME:
+            test_infos.add(uc.MODULE_INFO)
+        if test_name == uc.CLASS_NAME:
+            test_infos.add(uc.CLASS_INFO)
+    return test_infos
+
+
+#pylint: disable=protected-access
+#pylint: disable=no-self-use
+class CLITranslatorUnittests(unittest.TestCase):
+    """Unit tests for cli_t.py"""
+
+    def setUp(self):
+        """Run before execution of every test"""
+        self.ctr = cli_t.CLITranslator()
+
+        # Create a mock of args.
+        self.args = mock.Mock()
+        self.args.tests = []
+        # Test mapping related args
+        self.args.test_mapping = False
+        self.args.include_subdirs = False
+        self.args.enable_file_patterns = False
+        # Cache finder related args
+        self.args.clear_cache = False
+        self.ctr.mod_info = mock.Mock()
+        self.ctr.mod_info.name_to_module_info = {}
+
+    def tearDown(self):
+        """Run after execution of every test"""
+        reload(uc)
+
+    @mock.patch('__builtin__.raw_input', return_value='n')
+    @mock.patch.object(module_finder.ModuleFinder, 'find_test_by_module_name')
+    @mock.patch.object(module_finder.ModuleFinder, 'get_fuzzy_searching_results')
+    @mock.patch.object(metrics, 'FindTestFinishEvent')
+    @mock.patch.object(test_finder_handler, 'get_find_methods_for_test')
+    # pylint: disable=too-many-locals
+    def test_get_test_infos(self, mock_getfindmethods, _metrics, mock_getfuzzyresults,
+                            mock_findtestbymodule, mock_raw_input):
+        """Test _get_test_infos method."""
+        ctr = cli_t.CLITranslator()
+        find_method_return_module_info = lambda x, y: uc.MODULE_INFOS
+        # pylint: disable=invalid-name
+        find_method_return_module_class_info = (lambda x, test: uc.MODULE_INFOS
+                                                if test == uc.MODULE_NAME
+                                                else uc.CLASS_INFOS)
+        find_method_return_nothing = lambda x, y: None
+        one_test = [uc.MODULE_NAME]
+        mult_test = [uc.MODULE_NAME, uc.CLASS_NAME]
+
+        # Let's make sure we return what we expect.
+        expected_test_infos = {uc.MODULE_INFO}
+        mock_getfindmethods.return_value = [
+            test_finder_base.Finder(None, find_method_return_module_info, None)]
+        unittest_utils.assert_strict_equal(
+            self, ctr._get_test_infos(one_test), expected_test_infos)
+
+        # Check we receive multiple test infos.
+        expected_test_infos = {uc.MODULE_INFO, uc.CLASS_INFO}
+        mock_getfindmethods.return_value = [
+            test_finder_base.Finder(None, find_method_return_module_class_info,
+                                    None)]
+        unittest_utils.assert_strict_equal(
+            self, ctr._get_test_infos(mult_test), expected_test_infos)
+
+        # Check that an empty set is returned when no tests are found, for
+        # both single and multiple test queries.
+        mock_getfindmethods.return_value = [
+            test_finder_base.Finder(None, find_method_return_nothing, None)]
+        null_test_info = set()
+        mock_getfuzzyresults.return_value = []
+        self.assertEqual(null_test_info, ctr._get_test_infos(one_test))
+        self.assertEqual(null_test_info, ctr._get_test_infos(mult_test))
+
+        # Check returning test_info when the user says Yes.
+        mock_raw_input.return_value = "Y"
+        mock_getfindmethods.return_value = [
+            test_finder_base.Finder(None, find_method_return_module_info, None)]
+        mock_getfuzzyresults.return_value = one_test
+        mock_findtestbymodule.return_value = uc.MODULE_INFO
+        unittest_utils.assert_strict_equal(
+            self, ctr._get_test_infos([uc.TYPO_MODULE_NAME]), {uc.MODULE_INFO})
+
+        # Check the method works for test mapping.
+        test_detail1 = test_mapping.TestDetail(uc.TEST_MAPPING_TEST)
+        test_detail2 = test_mapping.TestDetail(uc.TEST_MAPPING_TEST_WITH_OPTION)
+        expected_test_infos = {uc.MODULE_INFO, uc.CLASS_INFO}
+        mock_getfindmethods.return_value = [
+            test_finder_base.Finder(None, find_method_return_module_class_info,
+                                    None)]
+        test_infos = ctr._get_test_infos(
+            mult_test, [test_detail1, test_detail2])
+        unittest_utils.assert_strict_equal(
+            self, test_infos, expected_test_infos)
+        for test_info in test_infos:
+            if test_info == uc.MODULE_INFO:
+                self.assertEqual(
+                    test_detail1.options,
+                    test_info.data[constants.TI_MODULE_ARG])
+            else:
+                self.assertEqual(
+                    test_detail2.options,
+                    test_info.data[constants.TI_MODULE_ARG])
+
+    @mock.patch.object(metrics, 'FindTestFinishEvent')
+    @mock.patch.object(test_finder_handler, 'get_find_methods_for_test')
+    def test_get_test_infos_2(self, mock_getfindmethods, _metrics):
+        """Test _get_test_infos method."""
+        ctr = cli_t.CLITranslator()
+        find_method_return_module_info2 = lambda x, y: uc.MODULE_INFOS2
+        find_method_ret_mod_cls_info2 = (
+            lambda x, test: uc.MODULE_INFOS2
+            if test == uc.MODULE_NAME else uc.CLASS_INFOS2)
+        one_test = [uc.MODULE_NAME]
+        mult_test = [uc.MODULE_NAME, uc.CLASS_NAME]
+        # Let's make sure we return what we expect.
+        expected_test_infos = {uc.MODULE_INFO, uc.MODULE_INFO2}
+        mock_getfindmethods.return_value = [
+            test_finder_base.Finder(None, find_method_return_module_info2,
+                                    None)]
+        unittest_utils.assert_strict_equal(
+            self, ctr._get_test_infos(one_test), expected_test_infos)
+        # Check we receive multiple test infos.
+        expected_test_infos = {uc.MODULE_INFO, uc.CLASS_INFO, uc.MODULE_INFO2,
+                               uc.CLASS_INFO2}
+        mock_getfindmethods.return_value = [
+            test_finder_base.Finder(None, find_method_ret_mod_cls_info2,
+                                    None)]
+        unittest_utils.assert_strict_equal(
+            self, ctr._get_test_infos(mult_test), expected_test_infos)
+        # Check the method works for test mapping.
+        test_detail1 = test_mapping.TestDetail(uc.TEST_MAPPING_TEST)
+        test_detail2 = test_mapping.TestDetail(uc.TEST_MAPPING_TEST_WITH_OPTION)
+        expected_test_infos = {uc.MODULE_INFO, uc.CLASS_INFO, uc.MODULE_INFO2,
+                               uc.CLASS_INFO2}
+        mock_getfindmethods.return_value = [
+            test_finder_base.Finder(None, find_method_ret_mod_cls_info2,
+                                    None)]
+        test_infos = ctr._get_test_infos(
+            mult_test, [test_detail1, test_detail2])
+        unittest_utils.assert_strict_equal(
+            self, test_infos, expected_test_infos)
+        for test_info in test_infos:
+            if test_info in [uc.MODULE_INFO, uc.MODULE_INFO2]:
+                self.assertEqual(
+                    test_detail1.options,
+                    test_info.data[constants.TI_MODULE_ARG])
+            elif test_info in [uc.CLASS_INFO, uc.CLASS_INFO2]:
+                self.assertEqual(
+                    test_detail2.options,
+                    test_info.data[constants.TI_MODULE_ARG])
+
+    @mock.patch.object(cli_t.CLITranslator, '_get_test_infos',
+                       side_effect=gettestinfos_side_effect)
+    def test_translate_class(self, _info):
+        """Test translate method for tests by class name."""
+        # Check that we can find a class.
+        self.args.tests = [uc.CLASS_NAME]
+        targets, test_infos = self.ctr.translate(self.args)
+        unittest_utils.assert_strict_equal(
+            self, targets, uc.CLASS_BUILD_TARGETS)
+        unittest_utils.assert_strict_equal(self, test_infos, {uc.CLASS_INFO})
+
+    @mock.patch.object(cli_t.CLITranslator, '_get_test_infos',
+                       side_effect=gettestinfos_side_effect)
+    def test_translate_module(self, _info):
+        """Test translate method for tests by module or class name."""
+        # Check that we get all the build targets we expect.
+        self.args.tests = [uc.MODULE_NAME, uc.CLASS_NAME]
+        targets, test_infos = self.ctr.translate(self.args)
+        unittest_utils.assert_strict_equal(
+            self, targets, uc.MODULE_CLASS_COMBINED_BUILD_TARGETS)
+        unittest_utils.assert_strict_equal(self, test_infos, {uc.MODULE_INFO,
+                                                              uc.CLASS_INFO})
+
+    @mock.patch.object(cli_t.CLITranslator, '_find_tests_by_test_mapping')
+    @mock.patch.object(cli_t.CLITranslator, '_get_test_infos',
+                       side_effect=gettestinfos_side_effect)
+    def test_translate_test_mapping(self, _info, mock_testmapping):
+        """Test translate method for tests in test mapping."""
+        # Check that test mapping feeds into _get_test_infos properly.
+        test_detail1 = test_mapping.TestDetail(uc.TEST_MAPPING_TEST)
+        test_detail2 = test_mapping.TestDetail(uc.TEST_MAPPING_TEST_WITH_OPTION)
+        mock_testmapping.return_value = ([test_detail1, test_detail2], None)
+        self.args.tests = []
+        targets, test_infos = self.ctr.translate(self.args)
+        unittest_utils.assert_strict_equal(
+            self, targets, uc.MODULE_CLASS_COMBINED_BUILD_TARGETS)
+        unittest_utils.assert_strict_equal(self, test_infos, {uc.MODULE_INFO,
+                                                              uc.CLASS_INFO})
+
+    @mock.patch.object(cli_t.CLITranslator, '_find_tests_by_test_mapping')
+    @mock.patch.object(cli_t.CLITranslator, '_get_test_infos',
+                       side_effect=gettestinfos_side_effect)
+    def test_translate_test_mapping_all(self, _info, mock_testmapping):
+        """Test translate method for tests in test mapping."""
+        # Check that test mapping feeds into _get_test_infos properly.
+        test_detail1 = test_mapping.TestDetail(uc.TEST_MAPPING_TEST)
+        test_detail2 = test_mapping.TestDetail(uc.TEST_MAPPING_TEST_WITH_OPTION)
+        mock_testmapping.return_value = ([test_detail1, test_detail2], None)
+        self.args.tests = ['src_path:all']
+        self.args.test_mapping = True
+        targets, test_infos = self.ctr.translate(self.args)
+        unittest_utils.assert_strict_equal(
+            self, targets, uc.MODULE_CLASS_COMBINED_BUILD_TARGETS)
+        unittest_utils.assert_strict_equal(self, test_infos, {uc.MODULE_INFO,
+                                                              uc.CLASS_INFO})
+
+    def test_find_tests_by_test_mapping_presubmit(self):
+        """Test _find_tests_by_test_mapping method to locate presubmit tests."""
+        os_environ_mock = {constants.ANDROID_BUILD_TOP: uc.TEST_DATA_DIR}
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            tests, all_tests = self.ctr._find_tests_by_test_mapping(
+                path=TEST_MAPPING_DIR, file_name='test_mapping_sample',
+                checked_files=set())
+        expected = set([TEST_1, TEST_2, TEST_5, TEST_7, TEST_9])
+        expected_all_tests = {'presubmit': expected,
+                              'postsubmit': set(
+                                  [TEST_3, TEST_6, TEST_8, TEST_10]),
+                              'other_group': set([TEST_4])}
+        self.assertEqual(expected, tests)
+        self.assertEqual(expected_all_tests, all_tests)
+
+    def test_find_tests_by_test_mapping_postsubmit(self):
+        """Test _find_tests_by_test_mapping method to locate postsubmit tests.
+        """
+        os_environ_mock = {constants.ANDROID_BUILD_TOP: uc.TEST_DATA_DIR}
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            tests, all_tests = self.ctr._find_tests_by_test_mapping(
+                path=TEST_MAPPING_DIR,
+                test_group=constants.TEST_GROUP_POSTSUBMIT,
+                file_name='test_mapping_sample', checked_files=set())
+        expected_presubmit = set([TEST_1, TEST_2, TEST_5, TEST_7, TEST_9])
+        expected = set([TEST_3, TEST_6, TEST_8, TEST_10])
+        expected_all_tests = {'presubmit': expected_presubmit,
+                              'postsubmit': set(
+                                  [TEST_3, TEST_6, TEST_8, TEST_10]),
+                              'other_group': set([TEST_4])}
+        self.assertEqual(expected, tests)
+        self.assertEqual(expected_all_tests, all_tests)
+
+    def test_find_tests_by_test_mapping_all_group(self):
+        """Test _find_tests_by_test_mapping method to locate postsubmit tests.
+        """
+        os_environ_mock = {constants.ANDROID_BUILD_TOP: uc.TEST_DATA_DIR}
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            tests, all_tests = self.ctr._find_tests_by_test_mapping(
+                path=TEST_MAPPING_DIR, test_group=constants.TEST_GROUP_ALL,
+                file_name='test_mapping_sample', checked_files=set())
+        expected_presubmit = set([TEST_1, TEST_2, TEST_5, TEST_7, TEST_9])
+        expected = set([
+            TEST_1, TEST_2, TEST_3, TEST_4, TEST_5, TEST_6, TEST_7, TEST_8,
+            TEST_9, TEST_10])
+        expected_all_tests = {'presubmit': expected_presubmit,
+                              'postsubmit': set(
+                                  [TEST_3, TEST_6, TEST_8, TEST_10]),
+                              'other_group': set([TEST_4])}
+        self.assertEqual(expected, tests)
+        self.assertEqual(expected_all_tests, all_tests)
+
+    def test_find_tests_by_test_mapping_include_subdir(self):
+        """Test _find_tests_by_test_mapping method to include sub directory."""
+        os_environ_mock = {constants.ANDROID_BUILD_TOP: uc.TEST_DATA_DIR}
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            tests, all_tests = self.ctr._find_tests_by_test_mapping(
+                path=TEST_MAPPING_TOP_DIR, file_name='test_mapping_sample',
+                include_subdirs=True, checked_files=set())
+        expected = set([TEST_1, TEST_2, TEST_5, TEST_7, TEST_9])
+        expected_all_tests = {'presubmit': expected,
+                              'postsubmit': set([
+                                  TEST_3, TEST_6, TEST_8, TEST_10]),
+                              'other_group': set([TEST_4])}
+        self.assertEqual(expected, tests)
+        self.assertEqual(expected_all_tests, all_tests)
+
+    @mock.patch('__builtin__.raw_input', return_value='')
+    def test_confirm_running(self, mock_raw_input):
+        """Test _confirm_running method."""
+        self.assertTrue(self.ctr._confirm_running([TEST_1]))
+        mock_raw_input.return_value = 'N'
+        self.assertFalse(self.ctr._confirm_running([TEST_2]))
+
+    def test_print_fuzzy_searching_results(self):
+        """Test _print_fuzzy_searching_results"""
+        modules = [uc.MODULE_NAME, uc.MODULE2_NAME]
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        self.ctr._print_fuzzy_searching_results(modules)
+        sys.stdout = sys.__stdout__
+        output = 'Did you mean the following modules?\n{0}\n{1}\n'.format(
+            uc.MODULE_NAME, uc.MODULE2_NAME)
+        self.assertEqual(capture_output.getvalue(), output)
+
+    def test_filter_comments(self):
+        """Test filter_comments method"""
+        file_with_comments = os.path.join(TEST_MAPPING_TOP_DIR,
+                                          'folder6',
+                                          'test_mapping_sample_with_comments')
+        file_with_comments_golden = os.path.join(TEST_MAPPING_TOP_DIR,
+                                                 'folder6',
+                                                 'test_mapping_sample_golden')
+        test_mapping_dict = json.loads(
+            self.ctr.filter_comments(file_with_comments))
+        test_mapping_dict_golden = None
+        with open(file_with_comments_golden) as json_file:
+            test_mapping_dict_golden = json.load(json_file)
+
+        self.assertEqual(test_mapping_dict, test_mapping_dict_golden)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/constants.py b/atest-py2/constants.py
new file mode 100644
index 0000000..fad8ef5
--- /dev/null
+++ b/atest-py2/constants.py
@@ -0,0 +1,29 @@
+# Copyright 2017, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Imports the various constant files that are available (default, google, etc).
+"""
+#pylint: disable=wildcard-import
+#pylint: disable=unused-wildcard-import
+
+from constants_default import *
+
+
+# Now try to import the various constant files outside this repo to overwrite
+# the globals as desired.
+try:
+    from constants_google import *
+except ImportError:
+    pass
diff --git a/atest-py2/constants_default.py b/atest-py2/constants_default.py
new file mode 100644
index 0000000..adfba98
--- /dev/null
+++ b/atest-py2/constants_default.py
@@ -0,0 +1,242 @@
+# Copyright 2017, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Various globals used by atest.
+"""
+
+import os
+import re
+
+MODE = 'DEFAULT'
+
+# Result server constants for atest_utils.
+RESULT_SERVER = ''
+RESULT_SERVER_ARGS = []
+RESULT_SERVER_TIMEOUT = 5
+# Result arguments if tests are configured in TEST_MAPPING.
+TEST_MAPPING_RESULT_SERVER_ARGS = []
+
+# Google service key for gts tests.
+GTS_GOOGLE_SERVICE_ACCOUNT = ''
+
+# Arg constants.
+WAIT_FOR_DEBUGGER = 'WAIT_FOR_DEBUGGER'
+DISABLE_INSTALL = 'DISABLE_INSTALL'
+DISABLE_TEARDOWN = 'DISABLE_TEARDOWN'
+PRE_PATCH_ITERATIONS = 'PRE_PATCH_ITERATIONS'
+POST_PATCH_ITERATIONS = 'POST_PATCH_ITERATIONS'
+PRE_PATCH_FOLDER = 'PRE_PATCH_FOLDER'
+POST_PATCH_FOLDER = 'POST_PATCH_FOLDER'
+SERIAL = 'SERIAL'
+SHARDING = 'SHARDING'
+ALL_ABI = 'ALL_ABI'
+HOST = 'HOST'
+CUSTOM_ARGS = 'CUSTOM_ARGS'
+DRY_RUN = 'DRY_RUN'
+ANDROID_SERIAL = 'ANDROID_SERIAL'
+INSTANT = 'INSTANT'
+USER_TYPE = 'USER_TYPE'
+ITERATIONS = 'ITERATIONS'
+RERUN_UNTIL_FAILURE = 'RERUN_UNTIL_FAILURE'
+RETRY_ANY_FAILURE = 'RETRY_ANY_FAILURE'
+TF_DEBUG = 'TF_DEBUG'
+TF_TEMPLATE = 'TF_TEMPLATE'
+COLLECT_TESTS_ONLY = 'COLLECT_TESTS_ONLY'
+
+# Application exit codes.
+EXIT_CODE_SUCCESS = 0
+EXIT_CODE_ENV_NOT_SETUP = 1
+EXIT_CODE_BUILD_FAILURE = 2
+EXIT_CODE_ERROR = 3
+EXIT_CODE_TEST_NOT_FOUND = 4
+EXIT_CODE_TEST_FAILURE = 5
+EXIT_CODE_VERIFY_FAILURE = 6
+EXIT_CODE_OUTSIDE_ROOT = 7
+
+# Codes of specific events. These are exceptions that don't stop execution;
+# they only cause metrics to be sent.
+ACCESS_CACHE_FAILURE = 101
+ACCESS_HISTORY_FAILURE = 102
+IMPORT_FAILURE = 103
+MLOCATEDB_LOCKED = 104
+
+# Test finder constants.
+MODULE_CONFIG = 'AndroidTest.xml'
+MODULE_COMPATIBILITY_SUITES = 'compatibility_suites'
+MODULE_NAME = 'module_name'
+MODULE_PATH = 'path'
+MODULE_CLASS = 'class'
+MODULE_INSTALLED = 'installed'
+MODULE_CLASS_ROBOLECTRIC = 'ROBOLECTRIC'
+MODULE_CLASS_NATIVE_TESTS = 'NATIVE_TESTS'
+MODULE_CLASS_JAVA_LIBRARIES = 'JAVA_LIBRARIES'
+MODULE_TEST_CONFIG = 'test_config'
+
+# Env constants
+ANDROID_BUILD_TOP = 'ANDROID_BUILD_TOP'
+ANDROID_OUT = 'OUT'
+ANDROID_OUT_DIR = 'OUT_DIR'
+ANDROID_HOST_OUT = 'ANDROID_HOST_OUT'
+ANDROID_PRODUCT_OUT = 'ANDROID_PRODUCT_OUT'
+USER_FROM_TOOL = 'USER_FROM_TOOL'
+
+# Test Info data keys
+# Value of include-filter option.
+TI_FILTER = 'filter'
+TI_REL_CONFIG = 'rel_config'
+TI_MODULE_CLASS = 'module_class'
+# Value of module-arg option
+TI_MODULE_ARG = 'module-arg'
+
+# Google TF
+GTF_MODULE = 'google-tradefed'
+GTF_TARGET = 'google-tradefed-core'
+
+# TEST_MAPPING filename
+TEST_MAPPING = 'TEST_MAPPING'
+# Test group for tests in TEST_MAPPING
+TEST_GROUP_PRESUBMIT = 'presubmit'
+TEST_GROUP_POSTSUBMIT = 'postsubmit'
+TEST_GROUP_ALL = 'all'
+# Key in TEST_MAPPING file for a list of imported TEST_MAPPING file
+TEST_MAPPING_IMPORTS = 'imports'
+
+# TradeFed command line args
+TF_INCLUDE_FILTER_OPTION = 'include-filter'
+TF_EXCLUDE_FILTER_OPTION = 'exclude-filter'
+TF_INCLUDE_FILTER = '--include-filter'
+TF_EXCLUDE_FILTER = '--exclude-filter'
+TF_ATEST_INCLUDE_FILTER = '--atest-include-filter'
+TF_ATEST_INCLUDE_FILTER_VALUE_FMT = '{test_name}:{test_filter}'
+TF_MODULE_ARG = '--module-arg'
+TF_MODULE_ARG_VALUE_FMT = '{test_name}:{option_name}:{option_value}'
+TF_SUITE_FILTER_ARG_VALUE_FMT = '"{test_name} {option_value}"'
+TF_SKIP_LOADING_CONFIG_JAR = '--skip-loading-config-jar'
+
+# Suite Plans
+SUITE_PLANS = frozenset(['cts'])
+
+# Constants of Steps
+REBUILD_MODULE_INFO_FLAG = '--rebuild-module-info'
+BUILD_STEP = 'build'
+INSTALL_STEP = 'install'
+TEST_STEP = 'test'
+ALL_STEPS = [BUILD_STEP, INSTALL_STEP, TEST_STEP]
+
+# ANSI code shift for colorful print
+BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = range(8)
+
+# Answers equivalent to YES!
+AFFIRMATIVES = ['y', 'Y', 'yes', 'Yes', 'YES', '']
+LD_RANGE = 2
+
+# Types of Levenshtein Distance Cost
+COST_TYPO = (1, 1, 1)
+COST_SEARCH = (8, 1, 5)
+
+# Value of TestInfo install_locations.
+DEVICELESS_TEST = 'host'
+DEVICE_TEST = 'device'
+BOTH_TEST = 'both'
+
+# Metrics
+METRICS_URL = 'http://asuite-218222.appspot.com/atest/metrics'
+EXTERNAL = 'EXTERNAL_RUN'
+INTERNAL = 'INTERNAL_RUN'
+INTERNAL_EMAIL = '@google.com'
+INTERNAL_HOSTNAME = ['.google.com', 'c.googlers.com']
+CONTENT_LICENSES_URL = 'https://source.android.com/setup/start/licenses'
+CONTRIBUTOR_AGREEMENT_URL = {
+    'INTERNAL': 'https://cla.developers.google.com/',
+    'EXTERNAL': 'https://opensource.google.com/docs/cla/'
+}
+PRIVACY_POLICY_URL = 'https://policies.google.com/privacy'
+TERMS_SERVICE_URL = 'https://policies.google.com/terms'
+TOOL_NAME = 'atest'
+TF_PREPARATION = 'tf-preparation'
+
+# Detect type for local_detect_event.
+# Next expansion: DETECT_TYPE_XXX = 1
+DETECT_TYPE_BUG_DETECTED = 0
+# Considering a trade-off between speed and size, we set UPPER_LIMIT to 100000
+# so the history file is capped at roughly 10MB (100000 records * 100
+# bytes/record). Updating the history file therefore takes at most about
+# 1 second per run.
+UPPER_LIMIT = 100000
+TRIM_TO_SIZE = 50000
+
+# VTS plans
+VTS_STAGING_PLAN = 'vts-staging-default'
+
+# TreeHugger TEST_MAPPING SUITE_PLANS
+TEST_MAPPING_SUITES = ['device-tests', 'general-tests']
+
+# VTS10 TF
+VTS_TF_MODULE = 'vts10-tradefed'
+
+# VTS TF
+VTS_CORE_TF_MODULE = 'vts-tradefed'
+
+# VTS suite set
+VTS_CORE_SUITE = 'vts'
+
+# ATest TF
+ATEST_TF_MODULE = 'atest-tradefed'
+
+# Build environment variables for each atest build.
+# With RECORD_ALL_DEPS enabled, ${ANDROID_PRODUCT_OUT}/module-info.json will
+# include modules' dependency info when make runs.
+# With SOONG_COLLECT_JAVA_DEPS enabled, out/soong/module_bp_java_deps.json will
+# be generated when make runs.
+ATEST_BUILD_ENV = {'RECORD_ALL_DEPS':'true', 'SOONG_COLLECT_JAVA_DEPS':'true'}
+
+# Atest index path and relative dirs/caches.
+INDEX_DIR = os.path.join(os.getenv(ANDROID_HOST_OUT, ''), 'indexes')
+LOCATE_CACHE = os.path.join(INDEX_DIR, 'mlocate.db')
+INT_INDEX = os.path.join(INDEX_DIR, 'integration.idx')
+CLASS_INDEX = os.path.join(INDEX_DIR, 'classes.idx')
+CC_CLASS_INDEX = os.path.join(INDEX_DIR, 'cc_classes.idx')
+PACKAGE_INDEX = os.path.join(INDEX_DIR, 'packages.idx')
+QCLASS_INDEX = os.path.join(INDEX_DIR, 'fqcn.idx')
+MODULE_INDEX = os.path.join(INDEX_DIR, 'modules.idx')
+VERSION_FILE = os.path.join(os.path.dirname(__file__), 'VERSION')
+
+# Regular Expressions
+CC_EXT_RE = re.compile(r'.*\.(cc|cpp)$')
+JAVA_EXT_RE = re.compile(r'.*\.(java|kt)$')
+# e.g. /path/to/ccfile.cc: TEST_F(test_name, method_name){
+CC_OUTPUT_RE = re.compile(r'(?P<file_path>/.*):\s*TEST(_F|_P)?[ ]*\('
+                          r'(?P<test_name>\w+)\s*,\s*(?P<method_name>\w+)\)'
+                          r'\s*\{')
+CC_GREP_RE = r'^[ ]*TEST(_P|_F)?[ ]*\([[:alnum:]].*,'
+# e.g. /path/to/Javafile.java:package com.android.settings.accessibility
+# grab the path, Javafile(class) and com.android.settings.accessibility(package)
+CLASS_OUTPUT_RE = re.compile(r'(?P<java_path>.*/(?P<class>[A-Z]\w+)\.\w+)[:].*')
+QCLASS_OUTPUT_RE = re.compile(r'(?P<java_path>.*/(?P<class>[A-Z]\w+)\.\w+)'
+                              r'[:]\s*package\s+(?P<package>[^(;|\s)]+)\s*')
+PACKAGE_OUTPUT_RE = re.compile(r'(?P<java_dir>/.*/).*[.](java|kt)[:]\s*package\s+'
+                               r'(?P<package>[^(;|\s)]+)\s*')
+
+ATEST_RESULT_ROOT = '/tmp/atest_result'
+LATEST_RESULT_FILE = os.path.join(ATEST_RESULT_ROOT, 'LATEST', 'test_result')
+
+# List of tests that require vts_kernel_tests as a test dependency.
+REQUIRED_KERNEL_TEST_MODULES = [
+    'vts_ltp_test_arm',
+    'vts_ltp_test_arm_64',
+    'vts_linux_kselftest_arm_32',
+    'vts_linux_kselftest_arm_64',
+    'vts_linux_kselftest_x86_32',
+    'vts_linux_kselftest_x86_64'
+]
diff --git a/atest-py2/docs/atest_structure.md b/atest-py2/docs/atest_structure.md
new file mode 100644
index 0000000..1ff7b90
--- /dev/null
+++ b/atest-py2/docs/atest_structure.md
@@ -0,0 +1,116 @@
+# Atest Developer Guide
+
+You're here because you'd like to contribute to atest. To start off, we'll
+explain how atest is structured and where the major pieces live and what they
+do. If you're more interested in how to use atest, go to the [README](../README.md).
+
+##### Table of Contents
+1. [Overall Structure](#overall-structure)
+2. [Major Files and Dirs](#major-files-and-dirs)
+3. [Test Finders](#test-finders)
+4. [Test Runners](#test-runners)
+5. [Constants Override](#constants-override)
+
+## <a name="overall-structure">Overall Structure</a>
+
+Atest is primarily composed of 2 components: [test finders](#test-finders) and
+[test runners](#test-runners). At a high level, atest does the following:
+1. Parse args and verify environment
+2. Find test(s) based on user input
+3. Build test dependencies
+4. Run test(s)
+
+Let's walk through an example run and highlight what happens under the covers.
+
+> ```# atest hello_world_test```
+
+Atest will first check that the environment is set up and then load up the
+module-info.json file (building it if it's not detected or we want to rebuild
+it). That file is a critical piece that atest depends on: module-info.json
+contains a list of all modules in the android repo and some relevant info
+(e.g. compatibility_suite, auto_gen_config, etc.) that is used during the test
+finding process. We create the results dir for our test runners to dump
+results in and proceed to the first juicy part of atest: finding tests.
+
+The tests specified by the user are passed into the ```CLITranslator``` to
+transform the user input into a ```TestInfo``` object that contains all of the
+required and optional bits used to run the test as the user intended.
+Required info would be the test name, test dependencies, and the test runner
+used to run the test. Optional bits would be additional args for the test and
+method/class filters.
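+
+As a rough sketch, a ```TestInfo``` carries roughly the shape below. The field
+names here are illustrative; see ```test_finders/test_info.py``` for the real
+definition.
+
+```python
+import collections
+
+# Illustrative only -- mirror test_finders/test_info.py for the real class.
+TestInfoSketch = collections.namedtuple('TestInfoSketch', [
+    'test_name',      # e.g. 'hello_world_test'
+    'test_runner',    # e.g. 'AtestTradefedTestRunner'
+    'build_targets',  # set of build targets the test depends on
+    'data',           # dict of optional bits: filters, extra args, etc.
+])
+```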
+
+Once ```TestInfo``` objects have been constructed for all the tests passed in
+by the user, all of the test dependencies are built. This step can be bypassed
+if the user specifies only _-t_ or _-i_.
+
+The final step is to run the tests, which is where the test runners do their
+job.
+All of the ```TestInfo``` objects get passed into the
+```test_runner_handler```, which invokes the test runner each ```TestInfo```
+specifies. In this specific case, the ```AtestTradefedTestRunner``` is used to
+kick off ```hello_world_test```.
+
+Read on to learn more about the classes mentioned.
+
+## <a name="major-files-and-dirs">Major Files and Dirs</a>
+
+Here is a list of major files and dirs that are important to point out:
+* ```atest.py``` - Main entry point.
+* ```cli_translator.py``` - Home of ```CLITranslator``` class. Translates the
+  user input into something the test runners can understand.
+* ```test_finder_handler.py``` - Module that collects all test finders,
+  determines which test finder methods to use and returns them for
+  ```CLITranslator``` to utilize.
+* ```test_finders/``` - Location of test finder classes. More details on test
+  finders [below](#test-finders).
+* ```test_finders/test_info.py``` - Module that defines ```TestInfo``` class.
+* ```test_runner_handler.py``` - Module that collects all test runners and
+  contains logic to determine what test runner to use for a particular
+  ```TestInfo```.
+* ```test_runners/``` - Location of test runner classes. More details on test
+  runners [below](#test-runners).
+* ```constants_default.py``` - Location of constant defaults. Need to override
+  some of these constants for your private repo? [Instructions below](#constants-override).
+
+## <a name="test-finders">Test Finders</a>
+
+Test finders are classes that host find methods. The find methods are called by
+atest to find tests in the android repo based on the user's input (path,
+filename, class, etc.). Find methods also find the corresponding test
+dependencies for the supplied test, translate them into a form that a test
+runner can understand, and specify the test runner.
+
+For more details and instructions on how to create new test finders,
+[go here](./develop_test_finders.md)
+
+## <a name="test-runners">Test Runners</a>
+
+Test Runners are classes that execute the tests. They consume a ```TestInfo```
+and execute the test as specified.
+
+For more details and instructions on how to create new test runners, [go here](./develop_test_runners.md)
+
+## <a name="constants-override">Constants Override</a>
+
+You'd like to override some constants but not sure how?  Override them with your
+own constants_override.py that lives in your own private repo.
+
+1. Create new ```constants_override.py``` (or whatever you'd like to name it) in
+  your own repo. It can live anywhere but just for example purposes, let's
+  specify the path to be ```<private repo>/path/to/constants_override/constants_override.py```.
+  (A minimal sketch of such a file is shown after these steps.)
+2. Add a ```vendorsetup.sh``` script in ```//vendor/<somewhere>``` to export the
+  path of ```constants_override.py``` base path into ```PYTHONPATH```.
+```bash
+# This file is executed by build/envsetup.sh
+_path_to_constants_override="$(gettop)/path/to/constants_override"
+if [[ ! $PYTHONPATH == *${_path_to_constants_override}* ]]; then
+  export PYTHONPATH=${_path_to_constants_override}:$PYTHONPATH
+fi
+```
+3. Try-except import ```constants_override``` in ```constants.py```.
+```python
+try:
+    from constants_override import *
+except ImportError:
+    pass
+```
+4. You're done! To pick up the override, rerun build/envsetup.sh to kick off the
+  vendorsetup.sh script.
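+
+Here is a minimal sketch of what the ```constants_override.py``` from step 1
+might contain. The constant names come from ```constants_default.py```; the
+values are hypothetical placeholders:
+
+```python
+# constants_override.py -- illustrative only.
+# Any name defined here shadows the same name from constants_default.py once
+# the wildcard import in constants.py picks this module up.
+RESULT_SERVER = 'https://results.example.com/log'  # hypothetical URL
+RESULT_SERVER_ARGS = ['--hypothetical-result-server-arg']
+```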
diff --git a/atest-py2/docs/develop_test_finders.md b/atest-py2/docs/develop_test_finders.md
new file mode 100644
index 0000000..5235ef7
--- /dev/null
+++ b/atest-py2/docs/develop_test_finders.md
@@ -0,0 +1,64 @@
+# Test Finder Developer Guide
+
+Learn about test finders and how to create a new test finder class.
+
+##### Table of Contents
+1. [Test Finder Details](#test-finder-details)
+2. [Creating a Test Finder](#creating-a-test-finder)
+
+## <a name="test-finder-details">Test Finder Details</a>
+
+A test finder class holds find methods. A find method is given a string (the
+user input) and should try to resolve that string into a ```TestInfo``` object.
+A ```TestInfo``` object holds the test name, test dependencies, test runner, and
+a data field to hold misc bits like filters and extra args for the test.  The
+test finder class can hold multiple find methods. The find methods are grouped
+together in a class so they can share metadata for optimal test finding.
+Examples of metadata would be the ```ModuleInfo``` object or the dirs that hold
+the test configs for the ```TFIntegrationFinder```.
+
+**When should I create a new test finder class?**
+
+If the metadata used to find a test is unlike existing test finder classes,
+that is the right time to create a new class. Metadata can be anything like
+file name patterns, a special file in a dir to indicate it's a test, etc. The
+default test finder classes use the module-info.json and specific dir paths
+metadata (```ModuleFinder``` and ```TFIntegrationFinder``` respectively).
+
+## <a name="creating-a-test-finder">Creating a Test Finder</a>
+
+The first thing to choose is where to put the test finder. This primarily
+depends on whether the test finder will be public or private. If public,
+```test_finders/``` is the default location.
+
+> If it will be private, then you can
+> follow the same instructions for ```vendorsetup.sh``` in
+> [constants override](atest_structure.md#constants-override) where you will
+> add the path of where the test finder lives into ```$PYTHONPATH```. Same
+> rules apply, rerun ```build/envsetup.sh``` to update ```$PYTHONPATH```.
+
+Now define your class and decorate it with the
+```test_finder_base.find_method_register``` decorator. This decorator will
+create a list of find methods that ```test_finder_handler``` will use to collect
+the find methods from your test finder class. Take a look at
+```test_finders/example_test_finder.py``` as an example.
+
+Define the find methods in your test finder class. These find methods must
+return a ```TestInfo``` object. Extra bits of info can be stored in the data
+field as a dict.  Check out ```ExampleFinder``` to see how the data field is
+used.
+
+Decorate each find method with the ```test_finder_base.register``` decorator.
+This is used by the class decorator to identify the find methods of the class.
+
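+Putting these pieces together, a new finder might look roughly like the sketch
+below. The finder name, the suffix check, and the ```TestInfo``` arguments are
+illustrative; mirror ```test_finders/example_test_finder.py``` for the
+authoritative shape.
+
+```python
+from test_finders import test_finder_base
+from test_finders import test_info
+
+
+@test_finder_base.find_method_register
+class SampleFinder(object):
+    """Illustrative finder that resolves names ending in '_sample'."""
+    NAME = 'SAMPLE'
+    _TEST_RUNNER = 'ExampleTestRunner'  # hypothetical runner name
+
+    @test_finder_base.register()
+    def find_test_by_suffix(self, test_name):
+        """Return a TestInfo for matching input, None to pass on it."""
+        if not test_name.endswith('_sample'):
+            return None
+        return test_info.TestInfo(
+            test_name=test_name,
+            test_runner=self._TEST_RUNNER,
+            build_targets={test_name},
+            data={'filter': frozenset()})
+```
+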
+The final bit is to add your test finder class to ```test_finder_handler```.
+Try-except import it in the ```_get_test_finders``` method and that should be
+it. The find methods will be collected and executed before the default find
+methods.
+```python
+try:
+    from test_finders import new_test_finder
+    test_finders_list.add(new_test_finder.NewTestFinder)
+except ImportError:
+    pass
+```
diff --git a/atest-py2/docs/develop_test_runners.md b/atest-py2/docs/develop_test_runners.md
new file mode 100644
index 0000000..80388ac
--- /dev/null
+++ b/atest-py2/docs/develop_test_runners.md
@@ -0,0 +1,64 @@
+# Test Runner Developer Guide
+
+Learn about test runners and how to create a new test runner class.
+
+##### Table of Contents
+1. [Test Runner Details](#test-runner-details)
+2. [Creating a Test Runner](#creating-a-test-runner)
+
+## <a name="test-runner-details">Test Runner Details</a>
+
+The test runner class is responsible for test execution. Its primary logic
+involves constructing the command line from a ```TestInfo``` and the
+```extra_args``` passed into the ```run_tests``` method. The extra_args are
+top-level args consumed by atest passed onto the test runner. It is up to the
+test runner to translate those args into the specific args the test runner
+accepts. In this way, you can think of the test runner as a translator between
+the atest CLI and your test runner's CLI. The reason for this is so that atest
+can have a consistent CLI for args instead of requiring the users to remember
+the differing CLIs of various test runners.  The test runner should also
+determine its specific dependencies that need to be built prior to any test
+execution.
+
+## <a name="creating-a-test-runner">Creating a Test Runner</a>
+
+The first thing to choose is where to put the test runner. This primarily
+depends on whether the test runner will be public or private. If public,
+```test_runners/``` is the default location.
+
+> If it will be private, then you can
+> follow the same instructions for ```vendorsetup.sh``` in
+> [constants override](atest_structure.md#constants-override) where you will
+> add the path of where the test runner lives into ```$PYTHONPATH```. Same
+> rules apply, rerun ```build/envsetup.sh``` to update ```$PYTHONPATH```.
+
+To create a new test runner, create a new class that inherits
+```TestRunnerBase```. Take a look at ```test_runners/example_test_runner.py```
+to see what a simple test runner will look like.
+
+**Important Notes**
+You'll need to override the following parent methods:
+* ```host_env_check()```: Check if host environment is properly setup for the
+  test runner. Raise an exception if not.
+* ```get_test_runner_build_reqs()```: Return a set of build targets that need
+  to be built prior to test execution.
+* ```run_tests()```: Execute the test(s).
+
+And define the following class vars:
+* ```NAME```: Unique name of the test runner.
+* ```EXECUTABLE```: Test runner command, should be an absolute path if the
+  command cannot be found in ```$PATH```.
+
+There is a parent helper method (```run```) that should be used to execute the
+actual test command.
+
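+Putting it together, a new runner might look roughly like the sketch below.
+The runner name, executable, build target, and command construction are
+illustrative; mirror ```test_runners/example_test_runner.py``` for the
+authoritative shape.
+
+```python
+from test_runners import test_runner_base
+
+
+class SampleTestRunner(test_runner_base.TestRunnerBase):
+    """Illustrative runner that shells out to a hypothetical binary."""
+    NAME = 'SampleTestRunner'
+    EXECUTABLE = 'sample_run'  # use an absolute path if not in $PATH
+
+    def host_env_check(self):
+        """Raise an exception here if the host isn't set up properly."""
+        pass
+
+    def get_test_runner_build_reqs(self):
+        """Return build targets this runner needs built beforehand."""
+        return {'sample-runner-deps'}  # hypothetical build target
+
+    def run_tests(self, test_infos, extra_args, reporter):
+        """Translate extra_args into runner flags, then run each test."""
+        for t_info in test_infos:
+            cmd = '%s %s' % (self.EXECUTABLE, t_info.test_name)
+            self.run(cmd)  # parent helper executes the actual command
+```
+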
+Once the test runner class is created, you'll need to add it in
+```test_runner_handler``` so that atest is aware of it. Try-except import the
+test runner in ```_get_test_runners``` like how ```ExampleTestRunner``` is.
+```python
+try:
+    from test_runners import new_test_runner
+    test_runners_dict[new_test_runner.NewTestRunner.NAME] = new_test_runner.NewTestRunner
+except ImportError:
+    pass
+```
diff --git a/atest-py2/docs/developer_workflow.md b/atest-py2/docs/developer_workflow.md
new file mode 100644
index 0000000..d3c2a32
--- /dev/null
+++ b/atest-py2/docs/developer_workflow.md
@@ -0,0 +1,154 @@
+# Atest Developer Workflow
+
+This document explains the practical steps for contributing code to atest.
+
+##### Table of Contents
+1. [Identify the code you should work on](#identify-the-code-you-should-work-on)
+2. [Working on the Python Code](#working-on-the-python-code)
+3. [Working on the TradeFed Code](#working-on-the-tradefed-code)
+4. [Working on the VTS10-TradeFed Code](#working-on-the-vts10-tradefed-code)
+5. [Working on the Robolectric Code](#working-on-the-robolectric-code)
+
+
+## <a name="what-code">Identify the code you should work on</a>
+
+Atest is essentially a wrapper around various test runners. Because of
+this division, your first step should be to identify the code
+involved with your change. This will help determine what tests you write
+and run.  Note that the wrapper code is written in python, so we'll be
+referring to it as the "Python Code".
+
+##### The Python Code
+
+This code defines atest's command line interface.
+Its job is to translate user inputs into (1) build targets and (2)
+information needed for the test runner to run the test. It then invokes
+the appropriate test runner code to run the tests. As the tests
+are run it also parses the test runner's output into the output seen by
+the user. It uses Test Finder and Test Runner classes to do this work.
+If your contribution involves any of this functionality, this is the
+code you'll want to work on.
+
+<p>For more details on how this code works, check out the following docs:
+
+ - [General Structure](./atest_structure.md)
+ - [Test Finders](./develop_test_finders.md)
+ - [Test Runners](./develop_test_runners.md)
+
+##### The Test Runner Code
+
+This is the code that actually runs the test. If your change
+involves how the test is actually run, you'll need to work with this
+code.
+
+Each test runner will have a different workflow. Atest currently
+supports the following test runners:
+- TradeFed
+- VTS10-TradeFed
+- Robolectric
+
+
+## <a name="working-on-the-python-code">Working on the Python Code</a>
+
+##### Where does the Python code live?
+
+The python code lives here: `tools/tradefederation/core/atest/`
+(path relative to android repo root)
+
+##### Writing tests
+
+Test files go in the same directory as the file being tested. The test
+file should have the same name as the file it's testing, except it
+should have "_unittests" appended to the name. For example, tests
+for the logic in `cli_translator.py` go in the file
+`cli_translator_unittests.py` in the same directory.
+
+
+##### Running tests
+
+Python tests are just python files executable by the Python interpreter.
+You can run ALL the python tests by executing this bash script in the
+atest root dir: `./run_atest_unittests.sh`. Alternatively, you can
+directly execute any individual unittest file. However, you'll need to
+first add atest to your PYTHONPATH by entering in your terminal:
+`PYTHONPATH=<atest_dir>:$PYTHONPATH`.
+
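+For example, a run of the ```CLITranslator``` tests might look like this (the
+path assumes the default checkout location noted above):
+
+```
+$ cd tools/tradefederation/core/atest
+$ PYTHONPATH=$PWD:$PYTHONPATH python cli_translator_unittests.py
+```
+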
+All tests should be passing before you submit your change.
+
+## <a name="working-on-the-tradefed-code">Working on the TradeFed Code</a>
+
+##### Where does the TradeFed code live?
+
+The TradeFed code lives here:
+`tools/tradefederation/core/src/com/android/tradefed/` (path relative
+to android repo root).
+
+The `testtype/suite/AtestRunner.java` is the most important file in
+the TradeFed Code. It defines the TradeFed API used
+by the Python Code, specifically by
+`test_runners/atest_tf_test_runner.py`. This is the file you'll want
+to edit if you need to make changes to the TradeFed code.
+
+
+##### Writing tests
+
+Tradefed test files live in a parallel `/tests/` file tree here:
+`tools/tradefederation/core/tests/src/com/android/tradefed/`.
+A test file should have the same name as the file it's testing,
+except with the word "Test" appended to the end. <p>
+For example, the tests for `tools/tradefederation/core/src/com/android/tradefed/testtype/suite/AtestRunner.java`
+can be found in `tools/tradefederation/core/tests/src/com/android/tradefed/testtype/suite/AtestRunnerTest.java`.
+
+##### Running tests
+
+TradeFed itself is used to run the TradeFed unittests so you'll need
+to build TradeFed first. See the
+[TradeFed README](../../README.md) for information about setting up
+TradeFed. <p>
+There are so many TradeFed tests that you'll probably want to first run only
+the test file affected by your code change. The command to run an individual
+test file is:<br>
+
+`tradefed.sh run host -n --class <fully.qualified.ClassName>`
+
+Thus, to run all the tests in AtestRunnerTest.java, you'd enter:
+
+`tradefed.sh run host -n --class com.android.tradefed.testtype.suite.AtestRunnerTest`
+
+To run ALL the TradeFed unittests, enter:
+`./tools/tradefederation/core/tests/run_tradefed_tests.sh`
+(from android repo root)
+
+Before submitting code you should run all the TradeFed tests.
+
+## <a name="working-on-the-vts10-tradefed-code">Working on the VTS10-TradeFed Code</a>
+
+##### Where does the VTS10-TradeFed code live?
+
+The VTS10-Tradefed code lives here: `test/vts/tools/vts-tradefed/`
+(path relative to android repo root)
+
+##### Writing tests
+
+You shouldn't need to edit vts10-tradefed code, so there is no
+need to write vts10 tests. Reach out to the vts team
+if you need information on their unittests.
+
+##### Running tests
+
+Again, you shouldn't need to change vts10-tradefed code.
+
+## <a name="working-on-the-robolectric-code">Working on the Robolectric Code</a>
+
+##### Where does the Robolectric code live?
+
+The Robolectric code lives here: `prebuilts/misc/common/robolectric/3.6.1/`
+(path relative to android repo root)
+
+##### Writing tests
+
+You shouldn't need to edit this code, so no need to write tests.
+
+##### Running tests
+
+Again, you shouldn't need to edit this code, so no need to run tests.
diff --git a/atest/tools/tradefederation/core/proto/__init__.py b/atest-py2/metrics/__init__.py
old mode 100755
new mode 100644
similarity index 100%
copy from atest/tools/tradefederation/core/proto/__init__.py
copy to atest-py2/metrics/__init__.py
diff --git a/atest-py2/metrics/clearcut_client.py b/atest-py2/metrics/clearcut_client.py
new file mode 100644
index 0000000..ecb83c3
--- /dev/null
+++ b/atest-py2/metrics/clearcut_client.py
@@ -0,0 +1,176 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Python client library to write logs to Clearcut.
+
+This class is intended to be general-purpose, usable for any Clearcut LogSource.
+
+    Typical usage example:
+
+    client = clearcut.Clearcut(clientanalytics_pb2.LogRequest.MY_LOGSOURCE)
+    client.log(my_event)
+    client.flush_events()
+"""
+
+import logging
+import threading
+import time
+try:
+    # PYTHON2
+    from urllib2 import urlopen
+    from urllib2 import Request
+    from urllib2 import HTTPError
+    from urllib2 import URLError
+except ImportError:
+    # PYTHON3
+    from urllib.request import urlopen
+    from urllib.request import Request
+    from urllib.request import HTTPError
+    from urllib.request import URLError
+
+from proto import clientanalytics_pb2
+
+_CLEARCUT_PROD_URL = 'https://play.googleapis.com/log'
+_DEFAULT_BUFFER_SIZE = 100  # Maximum number of events to be buffered.
+_DEFAULT_FLUSH_INTERVAL_SEC = 60  # 1 Minute.
+_BUFFER_FLUSH_RATIO = 0.5  # Flush buffer when we exceed this ratio.
+_CLIENT_TYPE = 6
+
+class Clearcut(object):
+    """Handles logging to Clearcut."""
+
+    def __init__(self, log_source, url=None, buffer_size=None,
+                 flush_interval_sec=None):
+        """Initializes a Clearcut client.
+
+        Args:
+            log_source: The log source.
+            url: The Clearcut url to connect to.
+            buffer_size: The size of the client buffer in number of events.
+            flush_interval_sec: The flush interval in seconds.
+        """
+        self._clearcut_url = url if url else _CLEARCUT_PROD_URL
+        self._log_source = log_source
+        self._buffer_size = buffer_size if buffer_size else _DEFAULT_BUFFER_SIZE
+        self._pending_events = []
+        if flush_interval_sec:
+            self._flush_interval_sec = flush_interval_sec
+        else:
+            self._flush_interval_sec = _DEFAULT_FLUSH_INTERVAL_SEC
+        self._pending_events_lock = threading.Lock()
+        self._scheduled_flush_thread = None
+        self._scheduled_flush_time = float('inf')
+        self._min_next_request_time = 0
+
+    def log(self, event):
+        """Logs events to Clearcut.
+
+        Logging an event can potentially trigger a flush of queued events. Flushing
+        is triggered when the buffer is more than half full or after the flush
+        interval has passed.
+
+        Args:
+          event: A LogEvent to send to Clearcut.
+        """
+        self._append_events_to_buffer([event])
+
+    def flush_events(self):
+        """ Cancel whatever is scheduled and schedule an immediate flush."""
+        if self._scheduled_flush_thread:
+            self._scheduled_flush_thread.cancel()
+        self._min_next_request_time = 0
+        self._schedule_flush_thread(0)
+
+    def _serialize_events_to_proto(self, events):
+        log_request = clientanalytics_pb2.LogRequest()
+        log_request.request_time_ms = int(time.time() * 1000)
+        # pylint: disable=no-member
+        log_request.client_info.client_type = _CLIENT_TYPE
+        log_request.log_source = self._log_source
+        log_request.log_event.extend(events)
+        return log_request
+
+    def _append_events_to_buffer(self, events, retry=False):
+        with self._pending_events_lock:
+            self._pending_events.extend(events)
+            if len(self._pending_events) > self._buffer_size:
+                index = len(self._pending_events) - self._buffer_size
+                del self._pending_events[:index]
+            self._schedule_flush(retry)
+
+    def _schedule_flush(self, retry):
+        if (not retry
+                and len(self._pending_events) >= int(self._buffer_size *
+                                                     _BUFFER_FLUSH_RATIO)
+                and self._scheduled_flush_time > time.time()):
+            # Cancel whatever is scheduled and schedule an immediate flush.
+            if self._scheduled_flush_thread:
+                self._scheduled_flush_thread.cancel()
+            self._schedule_flush_thread(0)
+        elif self._pending_events and not self._scheduled_flush_thread:
+            # Schedule a flush to run later.
+            self._schedule_flush_thread(self._flush_interval_sec)
+
+    def _schedule_flush_thread(self, time_from_now):
+        min_wait_sec = self._min_next_request_time - time.time()
+        if min_wait_sec > time_from_now:
+            time_from_now = min_wait_sec
+        logging.debug('Scheduling thread to run in %f seconds', time_from_now)
+        self._scheduled_flush_thread = threading.Timer(time_from_now, self._flush)
+        self._scheduled_flush_time = time.time() + time_from_now
+        self._scheduled_flush_thread.start()
+
+    def _flush(self):
+        """Flush buffered events to Clearcut.
+
+        If the request is unsuccessful, the events will be appended back to
+        the buffer and rescheduled for the next flush.
+        """
+        with self._pending_events_lock:
+            self._scheduled_flush_time = float('inf')
+            self._scheduled_flush_thread = None
+            events = self._pending_events
+            self._pending_events = []
+        if self._min_next_request_time > time.time():
+            self._append_events_to_buffer(events, retry=True)
+            return
+        log_request = self._serialize_events_to_proto(events)
+        self._send_to_clearcut(log_request.SerializeToString())
+
+    #pylint: disable=broad-except
+    def _send_to_clearcut(self, data):
+        """Sends a POST request with data as the body.
+
+        Args:
+            data: The serialized proto to send to Clearcut.
+        """
+        request = Request(self._clearcut_url, data=data)
+        try:
+            response = urlopen(request)
+            msg = response.read()
+            logging.debug('LogRequest successfully sent to Clearcut.')
+            log_response = clientanalytics_pb2.LogResponse()
+            log_response.ParseFromString(msg)
+            # pylint: disable=no-member
+            # Throttle based on next_request_wait_millis value.
+            self._min_next_request_time = (log_response.next_request_wait_millis
+                                           / 1000 + time.time())
+            logging.debug('LogResponse: %s', log_response)
+        except HTTPError as e:
+            logging.debug('Failed to push events to Clearcut. Error code: %d',
+                          e.code)
+        except URLError:
+            logging.debug('Failed to push events to Clearcut.')
+        except Exception as e:
+            logging.debug(e)
diff --git a/atest-py2/metrics/metrics.py b/atest-py2/metrics/metrics.py
new file mode 100644
index 0000000..f6446a6
--- /dev/null
+++ b/atest-py2/metrics/metrics.py
@@ -0,0 +1,148 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Metrics class.
+"""
+
+import constants
+
+from . import metrics_base
+
+class AtestStartEvent(metrics_base.MetricsBase):
+    """
+    Create Atest start event and send to clearcut.
+
+    Usage:
+        metrics.AtestStartEvent(
+            command_line='example_atest_command',
+            test_references=['example_test_reference'],
+            cwd='example/working/dir',
+            os='example_os')
+    """
+    _EVENT_NAME = 'atest_start_event'
+    command_line = constants.INTERNAL
+    test_references = constants.INTERNAL
+    cwd = constants.INTERNAL
+    os = constants.INTERNAL
+
+class AtestExitEvent(metrics_base.MetricsBase):
+    """
+    Create Atest exit event and send to clearcut.
+
+    Usage:
+        metrics.AtestExitEvent(
+            duration=metrics_utils.convert_duration(end-start),
+            exit_code=0,
+            stacktrace='some_trace',
+            logs='some_logs')
+    """
+    _EVENT_NAME = 'atest_exit_event'
+    duration = constants.EXTERNAL
+    exit_code = constants.EXTERNAL
+    stacktrace = constants.INTERNAL
+    logs = constants.INTERNAL
+
+class FindTestFinishEvent(metrics_base.MetricsBase):
+    """
+    Create find test finish event and send to clearcut.
+
+    Occurs after a SINGLE test reference has been resolved to a test or
+    not found.
+
+    Usage:
+        metrics.FindTestFinishEvent(
+            duration=metrics_utils.convert_duration(end-start),
+            success=True,
+            test_reference='hello_world_test',
+            test_finders=['example_test_reference', 'ref2'],
+            test_info="test_name: hello_world_test -
+                test_runner:AtestTradefedTestRunner -
+                build_targets:
+                    set(['MODULES-IN-platform_testing-tests-example-native']) -
+                data:{'rel_config':
+                    'platform_testing/tests/example/native/AndroidTest.xml',
+                    'filter': frozenset([])} -
+                suite:None - module_class: ['NATIVE_TESTS'] -
+                install_locations:set(['device', 'host'])")
+    """
+    _EVENT_NAME = 'find_test_finish_event'
+    duration = constants.EXTERNAL
+    success = constants.EXTERNAL
+    test_reference = constants.INTERNAL
+    test_finders = constants.INTERNAL
+    test_info = constants.INTERNAL
+
+class BuildFinishEvent(metrics_base.MetricsBase):
+    """
+    Create build finish event and send to clearcut.
+
+    Occurs after the build finishes, either successfully or not.
+
+    Usage:
+        metrics.BuildFinishEvent(
+            duration=metrics_utils.convert_duration(end-start),
+            success=True,
+            targets=['target1', 'target2'])
+    """
+    _EVENT_NAME = 'build_finish_event'
+    duration = constants.EXTERNAL
+    success = constants.EXTERNAL
+    targets = constants.INTERNAL
+
+class RunnerFinishEvent(metrics_base.MetricsBase):
+    """
+    Create run finish event and send to clearcut.
+
+    Occurs when a single test runner has completed.
+
+    Usage:
+        metrics.RunnerFinishEvent(
+            duration=metrics_utils.convert_duration(end-start),
+            success=True,
+            runner_name='AtestTradefedTestRunner',
+            test=[{'name': 'hello_world_test', 'result': 0, 'stacktrace': ''},
+                  {'name': 'test2', 'result': 1, 'stacktrace': 'xxx'}])
+    """
+    _EVENT_NAME = 'runner_finish_event'
+    duration = constants.EXTERNAL
+    success = constants.EXTERNAL
+    runner_name = constants.EXTERNAL
+    test = constants.INTERNAL
+
+class RunTestsFinishEvent(metrics_base.MetricsBase):
+    """
+    Create run tests finish event and send to clearcut.
+
+    Occurs after all test runners and tests have finished.
+
+    Usage:
+        metrics.RunTestsFinishEvent(
+            duration=metrics_utils.convert_duration(end-start))
+    """
+    _EVENT_NAME = 'run_tests_finish_event'
+    duration = constants.EXTERNAL
+
+class LocalDetectEvent(metrics_base.MetricsBase):
+    """
+    Create local detection event and send it to clearcut.
+
+    Usage:
+        metrics.LocalDetectEvent(
+            detect_type=0,
+            result=0)
+    """
+    _EVENT_NAME = 'local_detect_event'
+    detect_type = constants.EXTERNAL
+    result = constants.EXTERNAL
diff --git a/atest-py2/metrics/metrics_base.py b/atest-py2/metrics/metrics_base.py
new file mode 100644
index 0000000..44b3819
--- /dev/null
+++ b/atest-py2/metrics/metrics_base.py
@@ -0,0 +1,146 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Metrics base class.
+"""
+
+from __future__ import print_function
+
+import logging
+import random
+import socket
+import subprocess
+import time
+import uuid
+
+import asuite_metrics
+import constants
+
+from proto import clientanalytics_pb2
+from proto import external_user_log_pb2
+from proto import internal_user_log_pb2
+
+from . import clearcut_client
+
+INTERNAL_USER = 0
+EXTERNAL_USER = 1
+
+ATEST_EVENTS = {
+    INTERNAL_USER: internal_user_log_pb2.AtestLogEventInternal,
+    EXTERNAL_USER: external_user_log_pb2.AtestLogEventExternal
+}
+# log source
+ATEST_LOG_SOURCE = {
+    INTERNAL_USER: 971,
+    EXTERNAL_USER: 934
+}
+
+
+def get_user_type():
+    """Get user type.
+
+    A user is determined to be internal by passing at least one check:
+      - their git email domain is a google domain
+      - their hostname is a google hostname
+    Otherwise the user is external.
+
+    Returns:
+        INTERNAL_USER if user is internal, EXTERNAL_USER otherwise.
+    """
+    try:
+        output = subprocess.check_output(['git', 'config', '--get', 'user.email'],
+                                         universal_newlines=True)
+        if output and output.strip().endswith(constants.INTERNAL_EMAIL):
+            return INTERNAL_USER
+    except OSError:
+        # OSError can be raised when running atest_unittests on a host
+        # without git being set up.
+        logging.debug('Unable to determine if this is an external run, git is '
+                      'not found.')
+    except subprocess.CalledProcessError:
+        logging.debug('Unable to determine if this is an external run, email '
+                      'is not found in git config.')
+    try:
+        hostname = socket.getfqdn()
+        if (hostname and
+                any([(x in hostname) for x in constants.INTERNAL_HOSTNAME])):
+            return INTERNAL_USER
+    except IOError:
+        logging.debug('Unable to determine if this is an external run, '
+                      'hostname is not found.')
+    return EXTERNAL_USER
+
+
+class MetricsBase(object):
+    """Class for separating allowed fields and sending metric."""
+
+    _run_id = str(uuid.uuid4())
+    try:
+        #pylint: disable=protected-access
+        _user_key = str(asuite_metrics._get_grouping_key())
+    #pylint: disable=broad-except
+    except Exception:
+        _user_key = asuite_metrics.UNUSED_UUID
+    _user_type = get_user_type()
+    _log_source = ATEST_LOG_SOURCE[_user_type]
+    cc = clearcut_client.Clearcut(_log_source)
+    tool_name = None
+
+    def __new__(cls, **kwargs):
+        """Send metric event to clearcut.
+
+        Args:
+            cls: this class object.
+            **kwargs: A dict of named arguments.
+
+        Returns:
+            A Clearcut instance.
+        """
+        # pylint: disable=no-member
+        if not cls.tool_name:
+            logging.debug('No tool_name set; metrics will not be sent.')
+            return None
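+        # Only keep fields the user type is allowed to log: external users
+        # log EXTERNAL fields only; internal users log both kinds.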
+        allowed = ({constants.EXTERNAL} if cls._user_type == EXTERNAL_USER
+                   else {constants.EXTERNAL, constants.INTERNAL})
+        fields = [k for k, v in vars(cls).items()
+                  if not k.startswith('_') and v in allowed]
+        fields_and_values = {}
+        for field in fields:
+            if field in kwargs:
+                fields_and_values[field] = kwargs.pop(field)
+        params = {'user_key': cls._user_key,
+                  'run_id': cls._run_id,
+                  'user_type': cls._user_type,
+                  'tool_name': cls.tool_name,
+                  cls._EVENT_NAME: fields_and_values}
+        log_event = cls._build_full_event(ATEST_EVENTS[cls._user_type](**params))
+        cls.cc.log(log_event)
+        return cls.cc
+
+    @classmethod
+    def _build_full_event(cls, atest_event):
+        """This is all protobuf building you can ignore.
+
+        Args:
+            cls: this class object.
+            atest_event: A client_pb2.AtestLogEvent instance.
+
+        Returns:
+            A clientanalytics_pb2.LogEvent instance.
+        """
+        log_event = clientanalytics_pb2.LogEvent()
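+        # Shift the recorded event time back by a random 1-600 seconds.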
+        log_event.event_time_ms = int((time.time() - random.randint(1, 600)) * 1000)
+        log_event.source_extension = atest_event.SerializeToString()
+        return log_event
diff --git a/atest-py2/metrics/metrics_utils.py b/atest-py2/metrics/metrics_utils.py
new file mode 100644
index 0000000..a43b8f6
--- /dev/null
+++ b/atest-py2/metrics/metrics_utils.py
@@ -0,0 +1,128 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Utility functions for metrics.
+"""
+
+import os
+import platform
+import sys
+import time
+import traceback
+
+from . import metrics
+from . import metrics_base
+
+
+def static_var(varname, value):
+    """Decorator to cache static variable."""
+    def fun_var_decorate(func):
+        """Set the static variable in a function."""
+        setattr(func, varname, value)
+        return func
+    return fun_var_decorate
+
+
+@static_var("start_time", [])
+def get_start_time():
+    """Get start time.
+
+    Returns:
+        start_time: Start time in seconds. Returns the cached start_time if it
+        exists, time.time() otherwise.
+    """
+    if not get_start_time.start_time:
+        get_start_time.start_time = time.time()
+    return get_start_time.start_time
+
+
+def convert_duration(diff_time_sec):
+    """Compute duration from time difference.
+
+    A Duration represents a signed, fixed-length span of time represented
+    as a count of seconds and fractions of seconds at nanosecond
+    resolution.
+
+    Args:
+        diff_time_sec: The time difference in seconds as a floating point
+                       number.
+
+    Returns:
+        A dict of Duration.
+    """
+    seconds = int(diff_time_sec)
+    nanos = int((diff_time_sec - seconds)*10**9)
+    return {'seconds': seconds, 'nanos': nanos}
+
+
+# pylint: disable=broad-except
+def handle_exc_and_send_exit_event(exit_code):
+    """handle exceptions and send exit event.
+
+    Args:
+        exit_code: An integer of exit code.
+    """
+    stacktrace = logs = ''
+    try:
+        exc_type, exc_msg, _ = sys.exc_info()
+        stacktrace = traceback.format_exc()
+        if exc_type:
+            logs = '{etype}: {value}'.format(etype=exc_type.__name__,
+                                             value=exc_msg)
+    except Exception:
+        pass
+    send_exit_event(exit_code, stacktrace=stacktrace, logs=logs)
+
+
+def send_exit_event(exit_code, stacktrace='', logs=''):
+    """Log exit event and flush all events to clearcut.
+
+    Args:
+        exit_code: An integer of exit code.
+        stacktrace: A string of stacktrace.
+        logs: A string of logs.
+    """
+    clearcut = metrics.AtestExitEvent(
+        duration=convert_duration(time.time()-get_start_time()),
+        exit_code=exit_code,
+        stacktrace=stacktrace,
+        logs=logs)
+    # pylint: disable=no-member
+    if clearcut:
+        clearcut.flush_events()
+
+
+def send_start_event(tool_name, command_line='', test_references='',
+                     cwd=None, operating_system=None):
+    """Log start event of clearcut.
+
+    Args:
+        tool_name: A string of the asuite product name.
+        command_line: A string of the user input command.
+        test_references: A string of the input tests.
+        cwd: A string of current path.
+        operating_system: A string of user's operating system.
+    """
+    if not cwd:
+        cwd = os.getcwd()
+    if not operating_system:
+        operating_system = platform.platform()
+    # Without tool_name information, asuite's clearcut client will not send
+    # events to the server.
+    metrics_base.MetricsBase.tool_name = tool_name
+    get_start_time()
+    metrics.AtestStartEvent(command_line=command_line,
+                            test_references=test_references,
+                            cwd=cwd,
+                            os=operating_system)
diff --git a/atest-py2/module_info.py b/atest-py2/module_info.py
new file mode 100644
index 0000000..d925548
--- /dev/null
+++ b/atest-py2/module_info.py
@@ -0,0 +1,336 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Module Info class used to hold cached module-info.json.
+"""
+
+import json
+import logging
+import os
+
+import atest_utils
+import constants
+
+# JSON file generated by build system that lists all buildable targets.
+_MODULE_INFO = 'module-info.json'
+
+
+class ModuleInfo(object):
+    """Class that offers fast/easy lookup for Module related details."""
+
+    def __init__(self, force_build=False, module_file=None):
+        """Initialize the ModuleInfo object.
+
+        Load up the module-info.json file and initialize the helper vars.
+
+        Args:
+            force_build: Boolean to indicate if we should rebuild the
+                         module_info file regardless of whether it exists.
+            module_file: String of path to file to load up. Used for testing.
+        """
+        module_info_target, name_to_module_info = self._load_module_info_file(
+            force_build, module_file)
+        self.name_to_module_info = name_to_module_info
+        self.module_info_target = module_info_target
+        self.path_to_module_info = self._get_path_to_module_info(
+            self.name_to_module_info)
+        self.root_dir = os.environ.get(constants.ANDROID_BUILD_TOP)
+
+    @staticmethod
+    def _discover_mod_file_and_target(force_build):
+        """Find the module file.
+
+        Args:
+            force_build: Boolean to indicate if we should rebuild the
+                         module_info file regardless of whether it exists.
+
+        Returns:
+            Tuple of module_info_target and path to module file.
+        """
+        module_info_target = None
+        root_dir = os.environ.get(constants.ANDROID_BUILD_TOP, '/')
+        out_dir = os.environ.get(constants.ANDROID_PRODUCT_OUT, root_dir)
+        module_file_path = os.path.join(out_dir, _MODULE_INFO)
+
+        # Check if the user set a custom out directory by comparing the out_dir
+        # to the root_dir.
+        if out_dir.find(root_dir) == 0:
+            # The make target is simply the file path relative to the root.
+            module_info_target = os.path.relpath(module_file_path, root_dir)
+        else:
+            # If the user has set a custom out directory, generate an absolute
+            # path for module info targets.
+            logging.debug('User customized out dir!')
+            module_file_path = os.path.join(
+                os.environ.get(constants.ANDROID_PRODUCT_OUT), _MODULE_INFO)
+            module_info_target = module_file_path
+        if not os.path.isfile(module_file_path) or force_build:
+            logging.debug('Generating %s - this is required for '
+                          'initial runs.', _MODULE_INFO)
+            build_env = dict(constants.ATEST_BUILD_ENV)
+            atest_utils.build([module_info_target],
+                              verbose=logging.getLogger().isEnabledFor(logging.DEBUG),
+                              env_vars=build_env)
+        return module_info_target, module_file_path
+
+    def _load_module_info_file(self, force_build, module_file):
+        """Load the module file.
+
+        Args:
+            force_build: Boolean to indicate if we should rebuild the
+                         module_info file regardless of whether it exists.
+            module_file: String of path to file to load up. Used for testing.
+
+        Returns:
+            Tuple of module_info_target and dict of json.
+        """
+        # If module_file is specified, we're testing so we don't care if
+        # module_info_target stays None.
+        module_info_target = None
+        file_path = module_file
+        if not file_path:
+            module_info_target, file_path = self._discover_mod_file_and_target(
+                force_build)
+        with open(file_path) as json_file:
+            mod_info = json.load(json_file)
+        return module_info_target, mod_info
+
+    @staticmethod
+    def _get_path_to_module_info(name_to_module_info):
+        """Return the path_to_module_info dict.
+
+        Args:
+            name_to_module_info: Dict of module name to module info dict.
+
+        Returns:
+            Dict of module path to module info dict.
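+
+        Example (hypothetical entries; modules sharing a path are grouped
+        under one key):
+            {'mod1': {'module_name': 'mod1', 'path': ['a/b']},
+             'mod2': {'module_name': 'mod2', 'path': ['a/b']}}
+            becomes
+            {'a/b': [<mod1 info>, <mod2 info>]}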
+        """
+        path_to_module_info = {}
+        for mod_name, mod_info in name_to_module_info.items():
+            # Cross-compiled and multi-arch modules all belong to a single
+            # target, so filter out the extra entries.
+            if mod_name != mod_info.get(constants.MODULE_NAME, ''):
+                continue
+            for path in mod_info.get(constants.MODULE_PATH, []):
+                mod_info[constants.MODULE_NAME] = mod_name
+                # There could be multiple modules in a path.
+                if path in path_to_module_info:
+                    path_to_module_info[path].append(mod_info)
+                else:
+                    path_to_module_info[path] = [mod_info]
+        return path_to_module_info
+
+    def is_module(self, name):
+        """Return True if name is a module, False otherwise."""
+        return name in self.name_to_module_info
+
+    def get_paths(self, name):
+        """Return paths of supplied module name, Empty list if non-existent."""
+        info = self.name_to_module_info.get(name)
+        if info:
+            return info.get(constants.MODULE_PATH, [])
+        return []
+
+    def get_module_names(self, rel_module_path):
+        """Get the modules that all have module_path.
+
+        Args:
+            rel_module_path: Relative path of the module in module-info.json.
+
+        Returns:
+            List of module names.
+        """
+        return [m.get(constants.MODULE_NAME)
+                for m in self.path_to_module_info.get(rel_module_path, [])]
+
+    def get_module_info(self, mod_name):
+        """Return dict of info for given module name, None if non-existent."""
+        module_info = self.name_to_module_info.get(mod_name)
+        # Android's build system automatically appends a 2nd-arch bitness
+        # suffix to some module names, which keeps atest from finding a
+        # direct match. Rescan module-info for an entry whose real module
+        # name (without the bitness suffix) matches.
+        if not module_info:
+            for info in self.name_to_module_info.values():
+                if mod_name == info.get(constants.MODULE_NAME, ''):
+                    return info
+        return module_info
+
+    def is_suite_in_compatibility_suites(self, suite, mod_info):
+        """Check if suite exists in the compatibility_suites of module-info.
+
+        Args:
+            suite: A string of suite name.
+            mod_info: Dict of module info to check.
+
+        Returns:
+            True if it exists in mod_info, False otherwise.
+        """
+        return suite in mod_info.get(constants.MODULE_COMPATIBILITY_SUITES, [])
+
+    def get_testable_modules(self, suite=None):
+        """Return the testable modules of the given suite name.
+
+        Args:
+            suite: A string of suite name. Set to None to return all
+                   testable modules.
+
+        Returns:
+            Set of testable module names; empty set if none exist.
+            If suite is None, return all the testable modules in module-info.
+        """
+        modules = set()
+        for info in self.name_to_module_info.values():
+            if self.is_testable_module(info):
+                if suite:
+                    if self.is_suite_in_compatibility_suites(suite, info):
+                        modules.add(info.get(constants.MODULE_NAME))
+                else:
+                    modules.add(info.get(constants.MODULE_NAME))
+        return modules
+
+    def is_testable_module(self, mod_info):
+        """Check if module is something we can test.
+
+        A module is testable if:
+          - it's installed and has a test config, or
+          - it's a robolectric module (or shares a path with one).
+
+        Args:
+            mod_info: Dict of module info to check.
+
+        Returns:
+            True if we can test this module, False otherwise.
+        """
+        if not mod_info:
+            return False
+        if mod_info.get(constants.MODULE_INSTALLED) and self.has_test_config(mod_info):
+            return True
+        if self.is_robolectric_test(mod_info.get(constants.MODULE_NAME)):
+            return True
+        return False
+
+    def has_test_config(self, mod_info):
+        """Validate if this module has a test config.
+
+        A module can have a test config in one of the following ways:
+          - An AndroidTest.xml file at the module path.
+          - test_config is set in module-info.json.
+          - An auto-generated config via the auto_test_config key in
+            module-info.json.
+
+        Args:
+            mod_info: Dict of module info to check.
+
+        Returns:
+            True if this module has a test config, False otherwise.
+        """
+        # Check if test_config in module-info is set.
+        for test_config in mod_info.get(constants.MODULE_TEST_CONFIG, []):
+            if os.path.isfile(os.path.join(self.root_dir, test_config)):
+                return True
+        # Check for AndroidTest.xml at the module path.
+        for path in mod_info.get(constants.MODULE_PATH, []):
+            if os.path.isfile(os.path.join(self.root_dir, path,
+                                           constants.MODULE_CONFIG)):
+                return True
+        # Check if the module has an auto-generated config.
+        return self.is_auto_gen_test_config(mod_info.get(constants.MODULE_NAME))
+
+    def get_robolectric_test_name(self, module_name):
+        """Returns runnable robolectric module name.
+
+        There are at least two modules in every robolectric module path;
+        return the one that can be run as a build target.
+
+        Args:
+            module_name: String of the module name.
+
+        Returns:
+            String of the runnable robolectric module name, or None if none
+            could be found.
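+
+        Example (hypothetical): for path 'a/robo' shared by modules
+        ['RunFooRoboTests', 'FooRoboLib'], the module whose class is
+        ROBOLECTRIC ('RunFooRoboTests') is returned.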
+        """
+        module_name_info = self.name_to_module_info.get(module_name)
+        if not module_name_info:
+            return None
+        module_paths = module_name_info.get(constants.MODULE_PATH, [])
+        if module_paths:
+            for mod in self.get_module_names(module_paths[0]):
+                mod_info = self.get_module_info(mod)
+                if self.is_robolectric_module(mod_info):
+                    return mod
+        return None
+
+    def is_robolectric_test(self, module_name):
+        """Check if module is a robolectric test.
+
+        A module can be a robolectric test if it has its class set to
+        ROBOLECTRIC (or shares its path with a module that does).
+
+        Args:
+            module_name: String of module to check.
+
+        Returns:
+            True if the module is a robolectric module, else False.
+        """
+        # Check 1, module class is ROBOLECTRIC
+        mod_info = self.get_module_info(module_name)
+        if self.is_robolectric_module(mod_info):
+            return True
+        # Check 2, a module sharing the path has class ROBOLECTRIC.
+        if self.get_robolectric_test_name(module_name):
+            return True
+        return False
+
+    def is_auto_gen_test_config(self, module_name):
+        """Check if the test config file will be generated automatically.
+
+        Args:
+            module_name: A string of the module name.
+
+        Returns:
+            True if the test config file will be generated automatically,
+            False otherwise.
+        """
+        if self.is_module(module_name):
+            mod_info = self.name_to_module_info.get(module_name)
+            auto_test_config = mod_info.get('auto_test_config', [])
+            return auto_test_config and auto_test_config[0]
+        return False
+
+    def is_robolectric_module(self, mod_info):
+        """Check if a module is a robolectric module.
+
+        Args:
+            mod_info: Dict of module info to check.
+
+        Returns:
+            True if module is a robolectric module, False otherwise.
+        """
+        if mod_info:
+            return (mod_info.get(constants.MODULE_CLASS, [None])[0] ==
+                    constants.MODULE_CLASS_ROBOLECTRIC)
+        return False
+
+    def is_native_test(self, module_name):
+        """Check if the input module is a native test.
+
+        Args:
+            module_name: A string of the module name.
+
+        Returns:
+            True if the test is a native test, False otherwise.
+        """
+        mod_info = self.get_module_info(module_name)
+        return constants.MODULE_CLASS_NATIVE_TESTS in mod_info.get(
+            constants.MODULE_CLASS, [])
diff --git a/atest-py2/module_info_unittest.py b/atest-py2/module_info_unittest.py
new file mode 100755
index 0000000..4e48977
--- /dev/null
+++ b/atest-py2/module_info_unittest.py
@@ -0,0 +1,287 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for module_info."""
+
+import os
+import unittest
+import mock
+
+import constants
+import module_info
+import unittest_constants as uc
+
+JSON_FILE_PATH = os.path.join(uc.TEST_DATA_DIR, uc.JSON_FILE)
+EXPECTED_MOD_TARGET = 'tradefed'
+EXPECTED_MOD_TARGET_PATH = ['tf/core']
+UNEXPECTED_MOD_TARGET = 'this_should_not_be_in_module-info.json'
+MOD_NO_PATH = 'module-no-path'
+PATH_TO_MULT_MODULES = 'shared/path/to/be/used'
+MULT_MODULES_WITH_SHARED_PATH = ['module2', 'module1']
+PATH_TO_MULT_MODULES_WITH_MULTI_ARCH = 'shared/path/to/be/used2'
+TESTABLE_MODULES_WITH_SHARED_PATH = ['multiarch1', 'multiarch2', 'multiarch3', 'multiarch3_32']
+
+ROBO_MOD_PATH = ['/shared/robo/path']
+NON_RUN_ROBO_MOD_NAME = 'robo_mod'
+RUN_ROBO_MOD_NAME = 'run_robo_mod'
+NON_RUN_ROBO_MOD = {constants.MODULE_NAME: NON_RUN_ROBO_MOD_NAME,
+                    constants.MODULE_PATH: ROBO_MOD_PATH,
+                    constants.MODULE_CLASS: ['random_class']}
+RUN_ROBO_MOD = {constants.MODULE_NAME: RUN_ROBO_MOD_NAME,
+                constants.MODULE_PATH: ROBO_MOD_PATH,
+                constants.MODULE_CLASS: [constants.MODULE_CLASS_ROBOLECTRIC]}
+MOD_PATH_INFO_DICT = {ROBO_MOD_PATH[0]: [RUN_ROBO_MOD, NON_RUN_ROBO_MOD]}
+MOD_NAME_INFO_DICT = {
+    RUN_ROBO_MOD_NAME: RUN_ROBO_MOD,
+    NON_RUN_ROBO_MOD_NAME: NON_RUN_ROBO_MOD}
+MOD_NAME1 = 'mod1'
+MOD_NAME2 = 'mod2'
+MOD_NAME3 = 'mod3'
+MOD_NAME4 = 'mod4'
+MOD_INFO_DICT = {}
+MODULE_INFO = {constants.MODULE_NAME: 'random_name',
+               constants.MODULE_PATH: 'a/b/c/path',
+               constants.MODULE_CLASS: ['random_class']}
+NAME_TO_MODULE_INFO = {'random_name' : MODULE_INFO}
+
+#pylint: disable=protected-access
+class ModuleInfoUnittests(unittest.TestCase):
+    """Unit tests for module_info.py"""
+
+    @mock.patch('json.load', return_value={})
+    @mock.patch('__builtin__.open', new_callable=mock.mock_open)
+    @mock.patch('os.path.isfile', return_value=True)
+    def test_load_module_info_file_out_dir_handling(self, _isfile, _open, _json):
+        """Test _load_module_info_file out dir handling."""
+        # Test that the default out dir is used.
+        build_top = '/path/to/top'
+        default_out_dir = os.path.join(build_top, 'out/dir/here')
+        os_environ_mock = {'ANDROID_PRODUCT_OUT': default_out_dir,
+                           constants.ANDROID_BUILD_TOP: build_top}
+        default_out_dir_mod_targ = 'out/dir/here/module-info.json'
+        # Make sure module_info_target is what we think it is.
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            mod_info = module_info.ModuleInfo()
+            self.assertEqual(default_out_dir_mod_targ,
+                             mod_info.module_info_target)
+
+        # Test that a custom out dir is used (OUT_DIR=out2).
+        custom_out_dir = os.path.join(build_top, 'out2/dir/here')
+        os_environ_mock = {'ANDROID_PRODUCT_OUT': custom_out_dir,
+                           constants.ANDROID_BUILD_TOP: build_top}
+        custom_out_dir_mod_targ = 'out2/dir/here/module-info.json'
+        # Make sure module_info_target is what we think it is.
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            mod_info = module_info.ModuleInfo()
+            self.assertEqual(custom_out_dir_mod_targ,
+                             mod_info.module_info_target)
+
+        # Test that a custom absolute out dir is used (OUT_DIR=/tmp/out/dir).
+        abs_custom_out_dir = '/tmp/out/dir'
+        os_environ_mock = {'ANDROID_PRODUCT_OUT': abs_custom_out_dir,
+                           constants.ANDROID_BUILD_TOP: build_top}
+        custom_abs_out_dir_mod_targ = '/tmp/out/dir/module-info.json'
+        # Make sure module_info_target is what we think it is.
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            mod_info = module_info.ModuleInfo()
+            self.assertEqual(custom_abs_out_dir_mod_targ,
+                             mod_info.module_info_target)
+
+    @mock.patch.object(module_info.ModuleInfo, '_load_module_info_file',)
+    def test_get_path_to_module_info(self, mock_load_module):
+        """Test that we correctly create the path to module info dict."""
+        mod_one = 'mod1'
+        mod_two = 'mod2'
+        mod_path_one = '/path/to/mod1'
+        mod_path_two = '/path/to/mod2'
+        mod_info_dict = {mod_one: {constants.MODULE_PATH: [mod_path_one],
+                                   constants.MODULE_NAME: mod_one},
+                         mod_two: {constants.MODULE_PATH: [mod_path_two],
+                                   constants.MODULE_NAME: mod_two}}
+        mock_load_module.return_value = ('mod_target', mod_info_dict)
+        path_to_mod_info = {mod_path_one: [{constants.MODULE_NAME: mod_one,
+                                            constants.MODULE_PATH: [mod_path_one]}],
+                            mod_path_two: [{constants.MODULE_NAME: mod_two,
+                                            constants.MODULE_PATH: [mod_path_two]}]}
+        mod_info = module_info.ModuleInfo()
+        self.assertDictEqual(path_to_mod_info,
+                             mod_info._get_path_to_module_info(mod_info_dict))
+
+    def test_is_module(self):
+        """Test that we get the module when it's properly loaded."""
+        # Load up the test json file and check that module is in it
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        self.assertTrue(mod_info.is_module(EXPECTED_MOD_TARGET))
+        self.assertFalse(mod_info.is_module(UNEXPECTED_MOD_TARGET))
+
+    def test_get_path(self):
+        """Test that we get the module path when it's properly loaded."""
+        # Load up the test json file and check that module is in it
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        self.assertEqual(mod_info.get_paths(EXPECTED_MOD_TARGET),
+                         EXPECTED_MOD_TARGET_PATH)
+        self.assertEqual(mod_info.get_paths(MOD_NO_PATH), [])
+
+    def test_get_module_names(self):
+        """test that we get the module name properly."""
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        self.assertEqual(mod_info.get_module_names(EXPECTED_MOD_TARGET_PATH[0]),
+                         [EXPECTED_MOD_TARGET])
+        self.assertEqual(mod_info.get_module_names(PATH_TO_MULT_MODULES),
+                         MULT_MODULES_WITH_SHARED_PATH)
+
+    def test_path_to_mod_info(self):
+        """test that we get the module name properly."""
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        module_list = []
+        for path_to_mod_info in mod_info.path_to_module_info[PATH_TO_MULT_MODULES_WITH_MULTI_ARCH]:
+            module_list.append(path_to_mod_info.get(constants.MODULE_NAME))
+        module_list.sort()
+        TESTABLE_MODULES_WITH_SHARED_PATH.sort()
+        self.assertEqual(module_list, TESTABLE_MODULES_WITH_SHARED_PATH)
+
+    def test_is_suite_in_compatibility_suites(self):
+        """Test is_suite_in_compatibility_suites."""
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        info = {'compatibility_suites': []}
+        self.assertFalse(mod_info.is_suite_in_compatibility_suites("cts", info))
+        info2 = {'compatibility_suites': ["cts"]}
+        self.assertTrue(mod_info.is_suite_in_compatibility_suites("cts", info2))
+        self.assertFalse(mod_info.is_suite_in_compatibility_suites("vts10", info2))
+        info3 = {'compatibility_suites': ["cts", "vts10"]}
+        self.assertTrue(mod_info.is_suite_in_compatibility_suites("cts", info3))
+        self.assertTrue(mod_info.is_suite_in_compatibility_suites("vts10", info3))
+        self.assertFalse(mod_info.is_suite_in_compatibility_suites("ats", info3))
+
+    @mock.patch.object(module_info.ModuleInfo, 'is_testable_module')
+    @mock.patch.object(module_info.ModuleInfo, 'is_suite_in_compatibility_suites')
+    def test_get_testable_modules(self, mock_is_suite_exist, mock_is_testable):
+        """Test get_testable_modules."""
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        mock_is_testable.return_value = False
+        self.assertEqual(mod_info.get_testable_modules(), set())
+        mod_info.name_to_module_info = NAME_TO_MODULE_INFO
+        mock_is_testable.return_value = True
+        mock_is_suite_exist.return_value = True
+        self.assertEqual(1, len(mod_info.get_testable_modules('test_suite')))
+        mock_is_suite_exist.return_value = False
+        self.assertEqual(0, len(mod_info.get_testable_modules('test_suite')))
+        self.assertEqual(1, len(mod_info.get_testable_modules()))
+
+    @mock.patch.object(module_info.ModuleInfo, 'has_test_config')
+    @mock.patch.object(module_info.ModuleInfo, 'is_robolectric_test')
+    def test_is_testable_module(self, mock_is_robo_test, mock_has_test_config):
+        """Test is_testable_module."""
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        mock_is_robo_test.return_value = False
+        mock_has_test_config.return_value = True
+        installed_module_info = {constants.MODULE_INSTALLED:
+                                 uc.DEFAULT_INSTALL_PATH}
+        non_installed_module_info = {constants.MODULE_NAME: 'rand_name'}
+        # Empty mod_info or a non-installed module.
+        self.assertFalse(mod_info.is_testable_module(non_installed_module_info))
+        self.assertFalse(mod_info.is_testable_module({}))
+        # An installed module with a test config is testable.
+        self.assertTrue(mod_info.is_testable_module(installed_module_info))
+        mock_has_test_config.return_value = False
+        self.assertFalse(mod_info.is_testable_module(installed_module_info))
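+        # A robolectric module is testable even when it is not installed.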
+        mock_is_robo_test.return_value = True
+        self.assertTrue(mod_info.is_testable_module(non_installed_module_info))
+
+    @mock.patch.object(module_info.ModuleInfo, 'is_auto_gen_test_config')
+    def test_has_test_config(self, mock_is_auto_gen):
+        """Test has_test_config."""
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        info = {constants.MODULE_PATH:[uc.TEST_DATA_DIR]}
+        mock_is_auto_gen.return_value = True
+        # Validate we see the config when it's auto-generated.
+        self.assertTrue(mod_info.has_test_config(info))
+        self.assertTrue(mod_info.has_test_config({}))
+        # Validate when actual config exists and there's no auto-generated config.
+        mock_is_auto_gen.return_value = False
+        self.assertTrue(mod_info.has_test_config(info))
+        self.assertFalse(mod_info.has_test_config({}))
+        # Validate the case where MODULE_TEST_CONFIG is set in mod_info.
+        info2 = {constants.MODULE_PATH:[uc.TEST_CONFIG_DATA_DIR],
+                 constants.MODULE_TEST_CONFIG:[os.path.join(uc.TEST_CONFIG_DATA_DIR, "a.xml")]}
+        self.assertTrue(mod_info.has_test_config(info2))
+
+    @mock.patch.object(module_info.ModuleInfo, 'get_module_names')
+    def test_get_robolectric_test_name(self, mock_get_module_names):
+        """Test get_robolectric_test_name."""
+        # Happy path testing, make sure we get the run robo target.
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        mod_info.name_to_module_info = MOD_NAME_INFO_DICT
+        mod_info.path_to_module_info = MOD_PATH_INFO_DICT
+        mock_get_module_names.return_value = [RUN_ROBO_MOD_NAME, NON_RUN_ROBO_MOD_NAME]
+        self.assertEqual(mod_info.get_robolectric_test_name(
+            NON_RUN_ROBO_MOD_NAME), RUN_ROBO_MOD_NAME)
+        # Let's also make sure we don't return anything when we're not supposed
+        # to.
+        mock_get_module_names.return_value = [NON_RUN_ROBO_MOD_NAME]
+        self.assertEqual(mod_info.get_robolectric_test_name(
+            NON_RUN_ROBO_MOD_NAME), None)
+
+    @mock.patch.object(module_info.ModuleInfo, 'get_module_info')
+    @mock.patch.object(module_info.ModuleInfo, 'get_module_names')
+    def test_is_robolectric_test(self, mock_get_module_names, mock_get_module_info):
+        """Test is_robolectric_test."""
+        # Happy path testing, make sure we get the run robo target.
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        mod_info.name_to_module_info = MOD_NAME_INFO_DICT
+        mod_info.path_to_module_info = MOD_PATH_INFO_DICT
+        mock_get_module_names.return_value = [RUN_ROBO_MOD_NAME, NON_RUN_ROBO_MOD_NAME]
+        mock_get_module_info.return_value = RUN_ROBO_MOD
+        # Test on a run robo module.
+        self.assertTrue(mod_info.is_robolectric_test(RUN_ROBO_MOD_NAME))
+        # Test on a non-run robo module but shares with a run robo module.
+        self.assertTrue(mod_info.is_robolectric_test(NON_RUN_ROBO_MOD_NAME))
+        # Make sure we don't find robo tests where they don't exist.
+        mock_get_module_info.return_value = None
+        self.assertFalse(mod_info.is_robolectric_test('rand_mod'))
+
+    @mock.patch.object(module_info.ModuleInfo, 'is_module')
+    def test_is_auto_gen_test_config(self, mock_is_module):
+        """Test is_auto_gen_test_config correctly detects the module."""
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        mock_is_module.return_value = True
+        is_auto_test_config = {'auto_test_config': [True]}
+        is_not_auto_test_config = {'auto_test_config': [False]}
+        is_not_auto_test_config_again = {'auto_test_config': []}
+        MOD_INFO_DICT[MOD_NAME1] = is_auto_test_config
+        MOD_INFO_DICT[MOD_NAME2] = is_not_auto_test_config
+        MOD_INFO_DICT[MOD_NAME3] = is_not_auto_test_config_again
+        MOD_INFO_DICT[MOD_NAME4] = {}
+        mod_info.name_to_module_info = MOD_INFO_DICT
+        self.assertTrue(mod_info.is_auto_gen_test_config(MOD_NAME1))
+        self.assertFalse(mod_info.is_auto_gen_test_config(MOD_NAME2))
+        self.assertFalse(mod_info.is_auto_gen_test_config(MOD_NAME3))
+        self.assertFalse(mod_info.is_auto_gen_test_config(MOD_NAME4))
+
+    def test_is_robolectric_module(self):
+        """Test is_robolectric_module correctly detects the module."""
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        is_robolectric_module = {'class': ['ROBOLECTRIC']}
+        is_not_robolectric_module = {'class': ['OTHERS']}
+        MOD_INFO_DICT[MOD_NAME1] = is_robolectric_module
+        MOD_INFO_DICT[MOD_NAME2] = is_not_robolectric_module
+        mod_info.name_to_module_info = MOD_INFO_DICT
+        self.assertTrue(mod_info.is_robolectric_module(MOD_INFO_DICT[MOD_NAME1]))
+        self.assertFalse(mod_info.is_robolectric_module(MOD_INFO_DICT[MOD_NAME2]))
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest/tools/tradefederation/core/proto/__init__.py b/atest-py2/proto/__init__.py
old mode 100755
new mode 100644
similarity index 100%
copy from atest/tools/tradefederation/core/proto/__init__.py
copy to atest-py2/proto/__init__.py
diff --git a/atest-py2/proto/clientanalytics.proto b/atest-py2/proto/clientanalytics.proto
new file mode 100644
index 0000000..e75bf78
--- /dev/null
+++ b/atest-py2/proto/clientanalytics.proto
@@ -0,0 +1,22 @@
+syntax = "proto2";
+
+option java_package = "com.android.asuite.clearcut";
+
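+// A LogRequest batches LogEvents for upload; each LogEvent carries its
+// payload as an opaque source_extension (for atest, a serialized log
+// event such as AtestLogEventExternal).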
+message LogRequest {
+  optional ClientInfo client_info = 1;
+  optional int32 log_source = 2;
+  optional int64 request_time_ms = 4;
+  repeated LogEvent log_event = 3;
+}
+message ClientInfo {
+  optional int32 client_type = 1;
+}
+
+message LogResponse {
+  optional int64 next_request_wait_millis = 1;
+}
+
+message LogEvent {
+  optional int64 event_time_ms = 1;
+  optional bytes source_extension = 6;
+}
diff --git a/atest-py2/proto/clientanalytics_pb2.py b/atest-py2/proto/clientanalytics_pb2.py
new file mode 100644
index 0000000..b58dcc7
--- /dev/null
+++ b/atest-py2/proto/clientanalytics_pb2.py
@@ -0,0 +1,217 @@
+# pylint: skip-file
+# Generated by the protocol buffer compiler.  DO NOT EDIT!
+# source: proto/clientanalytics.proto
+
+import sys
+_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+from google.protobuf import descriptor_pb2
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+
+
+DESCRIPTOR = _descriptor.FileDescriptor(
+  name='proto/clientanalytics.proto',
+  package='',
+  syntax='proto2',
+  serialized_pb=_b('\n\x1bproto/clientanalytics.proto\"y\n\nLogRequest\x12 \n\x0b\x63lient_info\x18\x01 \x01(\x0b\x32\x0b.ClientInfo\x12\x12\n\nlog_source\x18\x02 \x01(\x05\x12\x17\n\x0frequest_time_ms\x18\x04 \x01(\x03\x12\x1c\n\tlog_event\x18\x03 \x03(\x0b\x32\t.LogEvent\"!\n\nClientInfo\x12\x13\n\x0b\x63lient_type\x18\x01 \x01(\x05\"/\n\x0bLogResponse\x12 \n\x18next_request_wait_millis\x18\x01 \x01(\x03\";\n\x08LogEvent\x12\x15\n\revent_time_ms\x18\x01 \x01(\x03\x12\x18\n\x10source_extension\x18\x06 \x01(\x0c')
+)
+_sym_db.RegisterFileDescriptor(DESCRIPTOR)
+
+
+
+
+_LOGREQUEST = _descriptor.Descriptor(
+  name='LogRequest',
+  full_name='LogRequest',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='client_info', full_name='LogRequest.client_info', index=0,
+      number=1, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='log_source', full_name='LogRequest.log_source', index=1,
+      number=2, type=5, cpp_type=1, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='request_time_ms', full_name='LogRequest.request_time_ms', index=2,
+      number=4, type=3, cpp_type=2, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='log_event', full_name='LogRequest.log_event', index=3,
+      number=3, type=11, cpp_type=10, label=3,
+      has_default_value=False, default_value=[],
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=31,
+  serialized_end=152,
+)
+
+
+_CLIENTINFO = _descriptor.Descriptor(
+  name='ClientInfo',
+  full_name='ClientInfo',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='client_type', full_name='ClientInfo.client_type', index=0,
+      number=1, type=5, cpp_type=1, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=154,
+  serialized_end=187,
+)
+
+
+_LOGRESPONSE = _descriptor.Descriptor(
+  name='LogResponse',
+  full_name='LogResponse',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='next_request_wait_millis', full_name='LogResponse.next_request_wait_millis', index=0,
+      number=1, type=3, cpp_type=2, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=189,
+  serialized_end=236,
+)
+
+
+_LOGEVENT = _descriptor.Descriptor(
+  name='LogEvent',
+  full_name='LogEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='event_time_ms', full_name='LogEvent.event_time_ms', index=0,
+      number=1, type=3, cpp_type=2, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='source_extension', full_name='LogEvent.source_extension', index=1,
+      number=6, type=12, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b(""),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=238,
+  serialized_end=297,
+)
+
+_LOGREQUEST.fields_by_name['client_info'].message_type = _CLIENTINFO
+_LOGREQUEST.fields_by_name['log_event'].message_type = _LOGEVENT
+DESCRIPTOR.message_types_by_name['LogRequest'] = _LOGREQUEST
+DESCRIPTOR.message_types_by_name['ClientInfo'] = _CLIENTINFO
+DESCRIPTOR.message_types_by_name['LogResponse'] = _LOGRESPONSE
+DESCRIPTOR.message_types_by_name['LogEvent'] = _LOGEVENT
+
+LogRequest = _reflection.GeneratedProtocolMessageType('LogRequest', (_message.Message,), dict(
+  DESCRIPTOR = _LOGREQUEST,
+  __module__ = 'proto.clientanalytics_pb2'
+  # @@protoc_insertion_point(class_scope:LogRequest)
+  ))
+_sym_db.RegisterMessage(LogRequest)
+
+ClientInfo = _reflection.GeneratedProtocolMessageType('ClientInfo', (_message.Message,), dict(
+  DESCRIPTOR = _CLIENTINFO,
+  __module__ = 'proto.clientanalytics_pb2'
+  # @@protoc_insertion_point(class_scope:ClientInfo)
+  ))
+_sym_db.RegisterMessage(ClientInfo)
+
+LogResponse = _reflection.GeneratedProtocolMessageType('LogResponse', (_message.Message,), dict(
+  DESCRIPTOR = _LOGRESPONSE,
+  __module__ = 'proto.clientanalytics_pb2'
+  # @@protoc_insertion_point(class_scope:LogResponse)
+  ))
+_sym_db.RegisterMessage(LogResponse)
+
+LogEvent = _reflection.GeneratedProtocolMessageType('LogEvent', (_message.Message,), dict(
+  DESCRIPTOR = _LOGEVENT,
+  __module__ = 'proto.clientanalytics_pb2'
+  # @@protoc_insertion_point(class_scope:LogEvent)
+  ))
+_sym_db.RegisterMessage(LogEvent)
+
+
+# @@protoc_insertion_point(module_scope)
diff --git a/atest-py2/proto/common.proto b/atest-py2/proto/common.proto
new file mode 100644
index 0000000..49cc48c
--- /dev/null
+++ b/atest-py2/proto/common.proto
@@ -0,0 +1,16 @@
+syntax = "proto2";
+
+option java_package = "com.android.asuite.clearcut";
+
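+// A wall-clock span as whole seconds plus nanoseconds, mirroring the shape
+// of the well-known google.protobuf.Duration type.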
+message Duration {
+  required int64 seconds = 1;
+  required int32 nanos = 2;
+}
+
+// ----------------
+// ENUM DEFINITIONS
+// ----------------
+enum UserType {
+  GOOGLE = 0;
+  EXTERNAL = 1;
+}
diff --git a/atest-py2/proto/common_pb2.py b/atest-py2/proto/common_pb2.py
new file mode 100644
index 0000000..5b7bd2e
--- /dev/null
+++ b/atest-py2/proto/common_pb2.py
@@ -0,0 +1,104 @@
+# pylint: skip-file
+# Generated by the protocol buffer compiler.  DO NOT EDIT!
+# source: proto/common.proto
+
+import sys
+_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
+from google.protobuf.internal import enum_type_wrapper
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+from google.protobuf import descriptor_pb2
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+
+
+DESCRIPTOR = _descriptor.FileDescriptor(
+  name='proto/common.proto',
+  package='',
+  syntax='proto2',
+  serialized_pb=_b('\n\x12proto/common.proto\"*\n\x08\x44uration\x12\x0f\n\x07seconds\x18\x01 \x02(\x03\x12\r\n\x05nanos\x18\x02 \x02(\x05*$\n\x08UserType\x12\n\n\x06GOOGLE\x10\x00\x12\x0c\n\x08\x45XTERNAL\x10\x01')
+)
+_sym_db.RegisterFileDescriptor(DESCRIPTOR)
+
+_USERTYPE = _descriptor.EnumDescriptor(
+  name='UserType',
+  full_name='UserType',
+  filename=None,
+  file=DESCRIPTOR,
+  values=[
+    _descriptor.EnumValueDescriptor(
+      name='GOOGLE', index=0, number=0,
+      options=None,
+      type=None),
+    _descriptor.EnumValueDescriptor(
+      name='EXTERNAL', index=1, number=1,
+      options=None,
+      type=None),
+  ],
+  containing_type=None,
+  options=None,
+  serialized_start=66,
+  serialized_end=102,
+)
+_sym_db.RegisterEnumDescriptor(_USERTYPE)
+
+UserType = enum_type_wrapper.EnumTypeWrapper(_USERTYPE)
+GOOGLE = 0
+EXTERNAL = 1
+
+
+
+_DURATION = _descriptor.Descriptor(
+  name='Duration',
+  full_name='Duration',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='seconds', full_name='Duration.seconds', index=0,
+      number=1, type=3, cpp_type=2, label=2,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='nanos', full_name='Duration.nanos', index=1,
+      number=2, type=5, cpp_type=1, label=2,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=22,
+  serialized_end=64,
+)
+
+DESCRIPTOR.message_types_by_name['Duration'] = _DURATION
+DESCRIPTOR.enum_types_by_name['UserType'] = _USERTYPE
+
+Duration = _reflection.GeneratedProtocolMessageType('Duration', (_message.Message,), dict(
+  DESCRIPTOR = _DURATION,
+  __module__ = 'proto.common_pb2'
+  # @@protoc_insertion_point(class_scope:Duration)
+  ))
+_sym_db.RegisterMessage(Duration)
+
+
+# @@protoc_insertion_point(module_scope)
diff --git a/atest-py2/proto/external_user_log.proto b/atest-py2/proto/external_user_log.proto
new file mode 100644
index 0000000..533ff0a
--- /dev/null
+++ b/atest-py2/proto/external_user_log.proto
@@ -0,0 +1,70 @@
+syntax = "proto2";
+
+import "proto/common.proto";
+
+option java_package = "com.android.asuite.clearcut";
+
+// Proto used by Atest CLI Tool for External Non-PII Users
+message AtestLogEventExternal {
+
+  // ------------------------
+  // EVENT DEFINITIONS
+  // ------------------------
+  // Occurs immediately upon execution of atest
+  message AtestStartEvent {
+  }
+
+  // Occurs when atest exits for any reason
+  message AtestExitEvent {
+    optional Duration duration = 1;
+    optional int32 exit_code = 2;
+  }
+
+  // Occurs after a SINGLE test reference has been resolved to a test or
+  // not found
+  message FindTestFinishEvent {
+    optional Duration duration = 1;
+    optional bool success = 2;
+  }
+
+  // Occurs after the build finishes, either successfully or not.
+  message BuildFinishEvent {
+    optional Duration duration = 1;
+    optional bool success = 2;
+  }
+
+  // Occurs when a single test runner has completed
+  message RunnerFinishEvent {
+    optional Duration duration = 1;
+    optional bool success = 2;
+    optional string runner_name = 3;
+  }
+
+  // Occurs after all test runners and tests have finished
+  message RunTestsFinishEvent {
+    optional Duration duration = 1;
+  }
+
+  // Occurs after atest's local bug detection has finished
+  message LocalDetectEvent {
+    optional int32 detect_type = 1;
+    optional int32 result = 2;
+  }
+
+  // ------------------------
+  // FIELDS FOR ATESTLOGEVENT
+  // ------------------------
+  optional string user_key = 1;
+  optional string run_id = 2;
+  optional UserType user_type = 3;
+  optional string tool_name = 10;
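+  // Exactly one of the following events is set per log message.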
+  oneof event {
+    AtestStartEvent atest_start_event = 4;
+    AtestExitEvent atest_exit_event = 5;
+    FindTestFinishEvent find_test_finish_event = 6;
+    BuildFinishEvent build_finish_event = 7;
+    RunnerFinishEvent runner_finish_event = 8;
+    RunTestsFinishEvent run_tests_finish_event = 9;
+    LocalDetectEvent local_detect_event = 11;
+  }
+}
diff --git a/atest-py2/proto/external_user_log_pb2.py b/atest-py2/proto/external_user_log_pb2.py
new file mode 100644
index 0000000..ba33fd4
--- /dev/null
+++ b/atest-py2/proto/external_user_log_pb2.py
@@ -0,0 +1,487 @@
+# pylint: skip-file
+# Generated by the protocol buffer compiler.  DO NOT EDIT!
+# source: proto/external_user_log.proto
+
+import sys
+_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+from google.protobuf import descriptor_pb2
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+from proto import common_pb2 as proto_dot_common__pb2
+
+
+DESCRIPTOR = _descriptor.FileDescriptor(
+  name='proto/external_user_log.proto',
+  package='',
+  syntax='proto2',
+  serialized_pb=_b('\n\x1dproto/external_user_log.proto\x1a\x12proto/common.proto\"\x8f\x08\n\x15\x41testLogEventExternal\x12\x10\n\x08user_key\x18\x01 \x01(\t\x12\x0e\n\x06run_id\x18\x02 \x01(\t\x12\x1c\n\tuser_type\x18\x03 \x01(\x0e\x32\t.UserType\x12\x11\n\ttool_name\x18\n \x01(\t\x12\x43\n\x11\x61test_start_event\x18\x04 \x01(\x0b\x32&.AtestLogEventExternal.AtestStartEventH\x00\x12\x41\n\x10\x61test_exit_event\x18\x05 \x01(\x0b\x32%.AtestLogEventExternal.AtestExitEventH\x00\x12L\n\x16\x66ind_test_finish_event\x18\x06 \x01(\x0b\x32*.AtestLogEventExternal.FindTestFinishEventH\x00\x12\x45\n\x12\x62uild_finish_event\x18\x07 \x01(\x0b\x32\'.AtestLogEventExternal.BuildFinishEventH\x00\x12G\n\x13runner_finish_event\x18\x08 \x01(\x0b\x32(.AtestLogEventExternal.RunnerFinishEventH\x00\x12L\n\x16run_tests_finish_event\x18\t \x01(\x0b\x32*.AtestLogEventExternal.RunTestsFinishEventH\x00\x12\x45\n\x12local_detect_event\x18\x0b \x01(\x0b\x32\'.AtestLogEventExternal.LocalDetectEventH\x00\x1a\x11\n\x0f\x41testStartEvent\x1a@\n\x0e\x41testExitEvent\x12\x1b\n\x08\x64uration\x18\x01 \x01(\x0b\x32\t.Duration\x12\x11\n\texit_code\x18\x02 \x01(\x05\x1a\x43\n\x13\x46indTestFinishEvent\x12\x1b\n\x08\x64uration\x18\x01 \x01(\x0b\x32\t.Duration\x12\x0f\n\x07success\x18\x02 \x01(\x08\x1a@\n\x10\x42uildFinishEvent\x12\x1b\n\x08\x64uration\x18\x01 \x01(\x0b\x32\t.Duration\x12\x0f\n\x07success\x18\x02 \x01(\x08\x1aV\n\x11RunnerFinishEvent\x12\x1b\n\x08\x64uration\x18\x01 \x01(\x0b\x32\t.Duration\x12\x0f\n\x07success\x18\x02 \x01(\x08\x12\x13\n\x0brunner_name\x18\x03 \x01(\t\x1a\x32\n\x13RunTestsFinishEvent\x12\x1b\n\x08\x64uration\x18\x01 \x01(\x0b\x32\t.Duration\x1a\x37\n\x10LocalDetectEvent\x12\x13\n\x0b\x64\x65tect_type\x18\x01 \x01(\x05\x12\x0e\n\x06result\x18\x02 \x01(\x05\x42\x07\n\x05\x65vent')
+  ,
+  dependencies=[proto_dot_common__pb2.DESCRIPTOR,])
+_sym_db.RegisterFileDescriptor(DESCRIPTOR)
+
+
+
+
+_ATESTLOGEVENTEXTERNAL_ATESTSTARTEVENT = _descriptor.Descriptor(
+  name='AtestStartEvent',
+  full_name='AtestLogEventExternal.AtestStartEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=669,
+  serialized_end=686,
+)
+
+_ATESTLOGEVENTEXTERNAL_ATESTEXITEVENT = _descriptor.Descriptor(
+  name='AtestExitEvent',
+  full_name='AtestLogEventExternal.AtestExitEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='duration', full_name='AtestLogEventExternal.AtestExitEvent.duration', index=0,
+      number=1, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='exit_code', full_name='AtestLogEventExternal.AtestExitEvent.exit_code', index=1,
+      number=2, type=5, cpp_type=1, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=688,
+  serialized_end=752,
+)
+
+_ATESTLOGEVENTEXTERNAL_FINDTESTFINISHEVENT = _descriptor.Descriptor(
+  name='FindTestFinishEvent',
+  full_name='AtestLogEventExternal.FindTestFinishEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='duration', full_name='AtestLogEventExternal.FindTestFinishEvent.duration', index=0,
+      number=1, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='success', full_name='AtestLogEventExternal.FindTestFinishEvent.success', index=1,
+      number=2, type=8, cpp_type=7, label=1,
+      has_default_value=False, default_value=False,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=754,
+  serialized_end=821,
+)
+
+_ATESTLOGEVENTEXTERNAL_BUILDFINISHEVENT = _descriptor.Descriptor(
+  name='BuildFinishEvent',
+  full_name='AtestLogEventExternal.BuildFinishEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='duration', full_name='AtestLogEventExternal.BuildFinishEvent.duration', index=0,
+      number=1, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='success', full_name='AtestLogEventExternal.BuildFinishEvent.success', index=1,
+      number=2, type=8, cpp_type=7, label=1,
+      has_default_value=False, default_value=False,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=823,
+  serialized_end=887,
+)
+
+_ATESTLOGEVENTEXTERNAL_RUNNERFINISHEVENT = _descriptor.Descriptor(
+  name='RunnerFinishEvent',
+  full_name='AtestLogEventExternal.RunnerFinishEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='duration', full_name='AtestLogEventExternal.RunnerFinishEvent.duration', index=0,
+      number=1, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='success', full_name='AtestLogEventExternal.RunnerFinishEvent.success', index=1,
+      number=2, type=8, cpp_type=7, label=1,
+      has_default_value=False, default_value=False,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='runner_name', full_name='AtestLogEventExternal.RunnerFinishEvent.runner_name', index=2,
+      number=3, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=889,
+  serialized_end=975,
+)
+
+_ATESTLOGEVENTEXTERNAL_RUNTESTSFINISHEVENT = _descriptor.Descriptor(
+  name='RunTestsFinishEvent',
+  full_name='AtestLogEventExternal.RunTestsFinishEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='duration', full_name='AtestLogEventExternal.RunTestsFinishEvent.duration', index=0,
+      number=1, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=977,
+  serialized_end=1027,
+)
+
+_ATESTLOGEVENTEXTERNAL_LOCALDETECTEVENT = _descriptor.Descriptor(
+  name='LocalDetectEvent',
+  full_name='AtestLogEventExternal.LocalDetectEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='detect_type', full_name='AtestLogEventExternal.LocalDetectEvent.detect_type', index=0,
+      number=1, type=5, cpp_type=1, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='result', full_name='AtestLogEventExternal.LocalDetectEvent.result', index=1,
+      number=2, type=5, cpp_type=1, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=1029,
+  serialized_end=1084,
+)
+
+_ATESTLOGEVENTEXTERNAL = _descriptor.Descriptor(
+  name='AtestLogEventExternal',
+  full_name='AtestLogEventExternal',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='user_key', full_name='AtestLogEventExternal.user_key', index=0,
+      number=1, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='run_id', full_name='AtestLogEventExternal.run_id', index=1,
+      number=2, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='user_type', full_name='AtestLogEventExternal.user_type', index=2,
+      number=3, type=14, cpp_type=8, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='tool_name', full_name='AtestLogEventExternal.tool_name', index=3,
+      number=10, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='atest_start_event', full_name='AtestLogEventExternal.atest_start_event', index=4,
+      number=4, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='atest_exit_event', full_name='AtestLogEventExternal.atest_exit_event', index=5,
+      number=5, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='find_test_finish_event', full_name='AtestLogEventExternal.find_test_finish_event', index=6,
+      number=6, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='build_finish_event', full_name='AtestLogEventExternal.build_finish_event', index=7,
+      number=7, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='runner_finish_event', full_name='AtestLogEventExternal.runner_finish_event', index=8,
+      number=8, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='run_tests_finish_event', full_name='AtestLogEventExternal.run_tests_finish_event', index=9,
+      number=9, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='local_detect_event', full_name='AtestLogEventExternal.local_detect_event', index=10,
+      number=11, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[_ATESTLOGEVENTEXTERNAL_ATESTSTARTEVENT, _ATESTLOGEVENTEXTERNAL_ATESTEXITEVENT, _ATESTLOGEVENTEXTERNAL_FINDTESTFINISHEVENT, _ATESTLOGEVENTEXTERNAL_BUILDFINISHEVENT, _ATESTLOGEVENTEXTERNAL_RUNNERFINISHEVENT, _ATESTLOGEVENTEXTERNAL_RUNTESTSFINISHEVENT, _ATESTLOGEVENTEXTERNAL_LOCALDETECTEVENT, ],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+    _descriptor.OneofDescriptor(
+      name='event', full_name='AtestLogEventExternal.event',
+      index=0, containing_type=None, fields=[]),
+  ],
+  serialized_start=54,
+  serialized_end=1093,
+)
+
+_ATESTLOGEVENTEXTERNAL_ATESTSTARTEVENT.containing_type = _ATESTLOGEVENTEXTERNAL
+_ATESTLOGEVENTEXTERNAL_ATESTEXITEVENT.fields_by_name['duration'].message_type = proto_dot_common__pb2._DURATION
+_ATESTLOGEVENTEXTERNAL_ATESTEXITEVENT.containing_type = _ATESTLOGEVENTEXTERNAL
+_ATESTLOGEVENTEXTERNAL_FINDTESTFINISHEVENT.fields_by_name['duration'].message_type = proto_dot_common__pb2._DURATION
+_ATESTLOGEVENTEXTERNAL_FINDTESTFINISHEVENT.containing_type = _ATESTLOGEVENTEXTERNAL
+_ATESTLOGEVENTEXTERNAL_BUILDFINISHEVENT.fields_by_name['duration'].message_type = proto_dot_common__pb2._DURATION
+_ATESTLOGEVENTEXTERNAL_BUILDFINISHEVENT.containing_type = _ATESTLOGEVENTEXTERNAL
+_ATESTLOGEVENTEXTERNAL_RUNNERFINISHEVENT.fields_by_name['duration'].message_type = proto_dot_common__pb2._DURATION
+_ATESTLOGEVENTEXTERNAL_RUNNERFINISHEVENT.containing_type = _ATESTLOGEVENTEXTERNAL
+_ATESTLOGEVENTEXTERNAL_RUNTESTSFINISHEVENT.fields_by_name['duration'].message_type = proto_dot_common__pb2._DURATION
+_ATESTLOGEVENTEXTERNAL_RUNTESTSFINISHEVENT.containing_type = _ATESTLOGEVENTEXTERNAL
+_ATESTLOGEVENTEXTERNAL_LOCALDETECTEVENT.containing_type = _ATESTLOGEVENTEXTERNAL
+_ATESTLOGEVENTEXTERNAL.fields_by_name['user_type'].enum_type = proto_dot_common__pb2._USERTYPE
+_ATESTLOGEVENTEXTERNAL.fields_by_name['atest_start_event'].message_type = _ATESTLOGEVENTEXTERNAL_ATESTSTARTEVENT
+_ATESTLOGEVENTEXTERNAL.fields_by_name['atest_exit_event'].message_type = _ATESTLOGEVENTEXTERNAL_ATESTEXITEVENT
+_ATESTLOGEVENTEXTERNAL.fields_by_name['find_test_finish_event'].message_type = _ATESTLOGEVENTEXTERNAL_FINDTESTFINISHEVENT
+_ATESTLOGEVENTEXTERNAL.fields_by_name['build_finish_event'].message_type = _ATESTLOGEVENTEXTERNAL_BUILDFINISHEVENT
+_ATESTLOGEVENTEXTERNAL.fields_by_name['runner_finish_event'].message_type = _ATESTLOGEVENTEXTERNAL_RUNNERFINISHEVENT
+_ATESTLOGEVENTEXTERNAL.fields_by_name['run_tests_finish_event'].message_type = _ATESTLOGEVENTEXTERNAL_RUNTESTSFINISHEVENT
+_ATESTLOGEVENTEXTERNAL.fields_by_name['local_detect_event'].message_type = _ATESTLOGEVENTEXTERNAL_LOCALDETECTEVENT
+_ATESTLOGEVENTEXTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTEXTERNAL.fields_by_name['atest_start_event'])
+_ATESTLOGEVENTEXTERNAL.fields_by_name['atest_start_event'].containing_oneof = _ATESTLOGEVENTEXTERNAL.oneofs_by_name['event']
+_ATESTLOGEVENTEXTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTEXTERNAL.fields_by_name['atest_exit_event'])
+_ATESTLOGEVENTEXTERNAL.fields_by_name['atest_exit_event'].containing_oneof = _ATESTLOGEVENTEXTERNAL.oneofs_by_name['event']
+_ATESTLOGEVENTEXTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTEXTERNAL.fields_by_name['find_test_finish_event'])
+_ATESTLOGEVENTEXTERNAL.fields_by_name['find_test_finish_event'].containing_oneof = _ATESTLOGEVENTEXTERNAL.oneofs_by_name['event']
+_ATESTLOGEVENTEXTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTEXTERNAL.fields_by_name['build_finish_event'])
+_ATESTLOGEVENTEXTERNAL.fields_by_name['build_finish_event'].containing_oneof = _ATESTLOGEVENTEXTERNAL.oneofs_by_name['event']
+_ATESTLOGEVENTEXTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTEXTERNAL.fields_by_name['runner_finish_event'])
+_ATESTLOGEVENTEXTERNAL.fields_by_name['runner_finish_event'].containing_oneof = _ATESTLOGEVENTEXTERNAL.oneofs_by_name['event']
+_ATESTLOGEVENTEXTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTEXTERNAL.fields_by_name['run_tests_finish_event'])
+_ATESTLOGEVENTEXTERNAL.fields_by_name['run_tests_finish_event'].containing_oneof = _ATESTLOGEVENTEXTERNAL.oneofs_by_name['event']
+_ATESTLOGEVENTEXTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTEXTERNAL.fields_by_name['local_detect_event'])
+_ATESTLOGEVENTEXTERNAL.fields_by_name['local_detect_event'].containing_oneof = _ATESTLOGEVENTEXTERNAL.oneofs_by_name['event']
+DESCRIPTOR.message_types_by_name['AtestLogEventExternal'] = _ATESTLOGEVENTEXTERNAL
+
+AtestLogEventExternal = _reflection.GeneratedProtocolMessageType('AtestLogEventExternal', (_message.Message,), dict(
+
+  AtestStartEvent = _reflection.GeneratedProtocolMessageType('AtestStartEvent', (_message.Message,), dict(
+    DESCRIPTOR = _ATESTLOGEVENTEXTERNAL_ATESTSTARTEVENT,
+    __module__ = 'proto.external_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventExternal.AtestStartEvent)
+    ))
+  ,
+
+  AtestExitEvent = _reflection.GeneratedProtocolMessageType('AtestExitEvent', (_message.Message,), dict(
+    DESCRIPTOR = _ATESTLOGEVENTEXTERNAL_ATESTEXITEVENT,
+    __module__ = 'proto.external_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventExternal.AtestExitEvent)
+    ))
+  ,
+
+  FindTestFinishEvent = _reflection.GeneratedProtocolMessageType('FindTestFinishEvent', (_message.Message,), dict(
+    DESCRIPTOR = _ATESTLOGEVENTEXTERNAL_FINDTESTFINISHEVENT,
+    __module__ = 'proto.external_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventExternal.FindTestFinishEvent)
+    ))
+  ,
+
+  BuildFinishEvent = _reflection.GeneratedProtocolMessageType('BuildFinishEvent', (_message.Message,), dict(
+    DESCRIPTOR = _ATESTLOGEVENTEXTERNAL_BUILDFINISHEVENT,
+    __module__ = 'proto.external_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventExternal.BuildFinishEvent)
+    ))
+  ,
+
+  RunnerFinishEvent = _reflection.GeneratedProtocolMessageType('RunnerFinishEvent', (_message.Message,), dict(
+    DESCRIPTOR = _ATESTLOGEVENTEXTERNAL_RUNNERFINISHEVENT,
+    __module__ = 'proto.external_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventExternal.RunnerFinishEvent)
+    ))
+  ,
+
+  RunTestsFinishEvent = _reflection.GeneratedProtocolMessageType('RunTestsFinishEvent', (_message.Message,), dict(
+    DESCRIPTOR = _ATESTLOGEVENTEXTERNAL_RUNTESTSFINISHEVENT,
+    __module__ = 'proto.external_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventExternal.RunTestsFinishEvent)
+    ))
+  ,
+
+  LocalDetectEvent = _reflection.GeneratedProtocolMessageType('LocalDetectEvent', (_message.Message,), dict(
+    DESCRIPTOR = _ATESTLOGEVENTEXTERNAL_LOCALDETECTEVENT,
+    __module__ = 'proto.external_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventExternal.LocalDetectEvent)
+    ))
+  ,
+  DESCRIPTOR = _ATESTLOGEVENTEXTERNAL,
+  __module__ = 'proto.external_user_log_pb2'
+  # @@protoc_insertion_point(class_scope:AtestLogEventExternal)
+  ))
+_sym_db.RegisterMessage(AtestLogEventExternal)
+_sym_db.RegisterMessage(AtestLogEventExternal.AtestStartEvent)
+_sym_db.RegisterMessage(AtestLogEventExternal.AtestExitEvent)
+_sym_db.RegisterMessage(AtestLogEventExternal.FindTestFinishEvent)
+_sym_db.RegisterMessage(AtestLogEventExternal.BuildFinishEvent)
+_sym_db.RegisterMessage(AtestLogEventExternal.RunnerFinishEvent)
+_sym_db.RegisterMessage(AtestLogEventExternal.RunTestsFinishEvent)
+_sym_db.RegisterMessage(AtestLogEventExternal.LocalDetectEvent)
+
+
+# @@protoc_insertion_point(module_scope)
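For reference, the generated bindings can be exercised directly. A minimal sketch of the `event` oneof semantics follows (it assumes the `proto` package is importable, e.g. from the atest source root; the values are illustrative only):

```
from proto import external_user_log_pb2

event = external_user_log_pb2.AtestLogEventExternal()
# Selecting one arm of the 'event' oneof...
event.atest_start_event.SetInParent()
assert event.WhichOneof('event') == 'atest_start_event'
# ...and selecting another arm clears the first.
event.atest_exit_event.SetInParent()
assert event.WhichOneof('event') == 'atest_exit_event'
payload = event.SerializeToString()
```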
diff --git a/atest-py2/proto/internal_user_log.proto b/atest-py2/proto/internal_user_log.proto
new file mode 100644
index 0000000..05d4dee
--- /dev/null
+++ b/atest-py2/proto/internal_user_log.proto
@@ -0,0 +1,86 @@
+syntax = "proto2";
+
+import "proto/common.proto";
+
+option java_package = "com.android.asuite.clearcut";
+
+// Proto used by the Atest CLI tool for internal users
+message AtestLogEventInternal {
+
+  // ------------------------
+  // EVENT DEFINITIONS
+  // ------------------------
+  // Occurs immediately upon execution of atest
+  message AtestStartEvent {
+    optional string command_line = 1;
+    repeated string test_references = 2;
+    optional string cwd = 3;
+    optional string os = 4;
+  }
+
+  // Occurs when atest exits for any reason
+  message AtestExitEvent {
+    optional Duration duration = 1;
+    optional int32 exit_code = 2;
+    optional string stacktrace = 3;
+    optional string logs = 4;
+  }
+
+  // Occurs after a SINGLE test reference has been resolved to a test, or
+  // was not found
+  message FindTestFinishEvent {
+    optional Duration duration = 1;
+    optional bool success = 2;
+    optional string test_reference = 3;
+    repeated string test_finders = 4;
+    optional string test_info = 5;
+  }
+
+  // Occurs after the build finishes, either successfully or not.
+  message BuildFinishEvent {
+    optional Duration duration = 1;
+    optional bool success = 2;
+    repeated string targets = 3;
+  }
+
+  // Occurs when a single test runner has completed
+  message RunnerFinishEvent {
+    optional Duration duration = 1;
+    optional bool success = 2;
+    optional string runner_name = 3;
+    message Test {
+      optional string name = 1;
+      optional int32 result = 2;
+      optional string stacktrace = 3;
+    }
+    repeated Test test = 4;
+  }
+
+  // Occurs after all test runners and tests have finished
+  message RunTestsFinishEvent {
+    optional Duration duration = 1;
+  }
+
+  // Occurs after atest's local bug detection has finished
+  message LocalDetectEvent {
+    optional int32 detect_type = 1;
+    optional int32 result = 2;
+  }
+
+  // ------------------------
+  // FIELDS FOR ATESTLOGEVENT
+  // ------------------------
+  optional string user_key = 1;
+  optional string run_id = 2;
+  optional UserType user_type = 3;
+  optional string tool_name = 10;
+  oneof event {
+    AtestStartEvent atest_start_event = 4;
+    AtestExitEvent atest_exit_event = 5;
+    FindTestFinishEvent find_test_finish_event = 6;
+    BuildFinishEvent build_finish_event = 7;
+    RunnerFinishEvent runner_finish_event = 8;
+    RunTestsFinishEvent run_tests_finish_event = 9;
+    LocalDetectEvent local_detect_event = 11;
+  }
+}
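The message above is what atest serializes for internal users. A minimal construction sketch using the generated Python bindings (the field values are made up for illustration):

```
from proto import internal_user_log_pb2

log_event = internal_user_log_pb2.AtestLogEventInternal(
    user_key='example-user-key',   # illustrative value
    run_id='example-run-id',       # illustrative value
    tool_name='atest')
# Populating one arm of the 'event' oneof.
log_event.atest_start_event.command_line = 'atest HelloWorldTests'
log_event.atest_start_event.test_references.extend(['HelloWorldTests'])
serialized = log_event.SerializeToString()
round_trip = internal_user_log_pb2.AtestLogEventInternal.FromString(serialized)
assert round_trip.WhichOneof('event') == 'atest_start_event'
```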
diff --git a/atest-py2/proto/internal_user_log_pb2.py b/atest-py2/proto/internal_user_log_pb2.py
new file mode 100644
index 0000000..e8585dc
--- /dev/null
+++ b/atest-py2/proto/internal_user_log_pb2.py
@@ -0,0 +1,618 @@
+# pylint: skip-file
+# Generated by the protocol buffer compiler.  DO NOT EDIT!
+# source: proto/internal_user_log.proto
+
+import sys
+_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+from google.protobuf import descriptor_pb2
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+from proto import common_pb2 as proto_dot_common__pb2
+
+
+DESCRIPTOR = _descriptor.FileDescriptor(
+  name='proto/internal_user_log.proto',
+  package='',
+  syntax='proto2',
+  serialized_pb=_b('\n\x1dproto/internal_user_log.proto\x1a\x12proto/common.proto\"\xc4\n\n\x15\x41testLogEventInternal\x12\x10\n\x08user_key\x18\x01 \x01(\t\x12\x0e\n\x06run_id\x18\x02 \x01(\t\x12\x1c\n\tuser_type\x18\x03 \x01(\x0e\x32\t.UserType\x12\x11\n\ttool_name\x18\n \x01(\t\x12\x43\n\x11\x61test_start_event\x18\x04 \x01(\x0b\x32&.AtestLogEventInternal.AtestStartEventH\x00\x12\x41\n\x10\x61test_exit_event\x18\x05 \x01(\x0b\x32%.AtestLogEventInternal.AtestExitEventH\x00\x12L\n\x16\x66ind_test_finish_event\x18\x06 \x01(\x0b\x32*.AtestLogEventInternal.FindTestFinishEventH\x00\x12\x45\n\x12\x62uild_finish_event\x18\x07 \x01(\x0b\x32\'.AtestLogEventInternal.BuildFinishEventH\x00\x12G\n\x13runner_finish_event\x18\x08 \x01(\x0b\x32(.AtestLogEventInternal.RunnerFinishEventH\x00\x12L\n\x16run_tests_finish_event\x18\t \x01(\x0b\x32*.AtestLogEventInternal.RunTestsFinishEventH\x00\x12\x45\n\x12local_detect_event\x18\x0b \x01(\x0b\x32\'.AtestLogEventInternal.LocalDetectEventH\x00\x1aY\n\x0f\x41testStartEvent\x12\x14\n\x0c\x63ommand_line\x18\x01 \x01(\t\x12\x17\n\x0ftest_references\x18\x02 \x03(\t\x12\x0b\n\x03\x63wd\x18\x03 \x01(\t\x12\n\n\x02os\x18\x04 \x01(\t\x1a\x62\n\x0e\x41testExitEvent\x12\x1b\n\x08\x64uration\x18\x01 \x01(\x0b\x32\t.Duration\x12\x11\n\texit_code\x18\x02 \x01(\x05\x12\x12\n\nstacktrace\x18\x03 \x01(\t\x12\x0c\n\x04logs\x18\x04 \x01(\t\x1a\x84\x01\n\x13\x46indTestFinishEvent\x12\x1b\n\x08\x64uration\x18\x01 \x01(\x0b\x32\t.Duration\x12\x0f\n\x07success\x18\x02 \x01(\x08\x12\x16\n\x0etest_reference\x18\x03 \x01(\t\x12\x14\n\x0ctest_finders\x18\x04 \x03(\t\x12\x11\n\ttest_info\x18\x05 \x01(\t\x1aQ\n\x10\x42uildFinishEvent\x12\x1b\n\x08\x64uration\x18\x01 \x01(\x0b\x32\t.Duration\x12\x0f\n\x07success\x18\x02 \x01(\x08\x12\x0f\n\x07targets\x18\x03 \x03(\t\x1a\xcd\x01\n\x11RunnerFinishEvent\x12\x1b\n\x08\x64uration\x18\x01 \x01(\x0b\x32\t.Duration\x12\x0f\n\x07success\x18\x02 \x01(\x08\x12\x13\n\x0brunner_name\x18\x03 \x01(\t\x12;\n\x04test\x18\x04 \x03(\x0b\x32-.AtestLogEventInternal.RunnerFinishEvent.Test\x1a\x38\n\x04Test\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0e\n\x06result\x18\x02 \x01(\x05\x12\x12\n\nstacktrace\x18\x03 \x01(\t\x1a\x32\n\x13RunTestsFinishEvent\x12\x1b\n\x08\x64uration\x18\x01 \x01(\x0b\x32\t.Duration\x1a\x37\n\x10LocalDetectEvent\x12\x13\n\x0b\x64\x65tect_type\x18\x01 \x01(\x05\x12\x0e\n\x06result\x18\x02 \x01(\x05\x42\x07\n\x05\x65vent')
+  ,
+  dependencies=[proto_dot_common__pb2.DESCRIPTOR,])
+_sym_db.RegisterFileDescriptor(DESCRIPTOR)
+
+
+
+
+_ATESTLOGEVENTINTERNAL_ATESTSTARTEVENT = _descriptor.Descriptor(
+  name='AtestStartEvent',
+  full_name='AtestLogEventInternal.AtestStartEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='command_line', full_name='AtestLogEventInternal.AtestStartEvent.command_line', index=0,
+      number=1, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='test_references', full_name='AtestLogEventInternal.AtestStartEvent.test_references', index=1,
+      number=2, type=9, cpp_type=9, label=3,
+      has_default_value=False, default_value=[],
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='cwd', full_name='AtestLogEventInternal.AtestStartEvent.cwd', index=2,
+      number=3, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='os', full_name='AtestLogEventInternal.AtestStartEvent.os', index=3,
+      number=4, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=669,
+  serialized_end=758,
+)
+
+_ATESTLOGEVENTINTERNAL_ATESTEXITEVENT = _descriptor.Descriptor(
+  name='AtestExitEvent',
+  full_name='AtestLogEventInternal.AtestExitEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='duration', full_name='AtestLogEventInternal.AtestExitEvent.duration', index=0,
+      number=1, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='exit_code', full_name='AtestLogEventInternal.AtestExitEvent.exit_code', index=1,
+      number=2, type=5, cpp_type=1, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='stacktrace', full_name='AtestLogEventInternal.AtestExitEvent.stacktrace', index=2,
+      number=3, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='logs', full_name='AtestLogEventInternal.AtestExitEvent.logs', index=3,
+      number=4, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=760,
+  serialized_end=858,
+)
+
+_ATESTLOGEVENTINTERNAL_FINDTESTFINISHEVENT = _descriptor.Descriptor(
+  name='FindTestFinishEvent',
+  full_name='AtestLogEventInternal.FindTestFinishEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='duration', full_name='AtestLogEventInternal.FindTestFinishEvent.duration', index=0,
+      number=1, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='success', full_name='AtestLogEventInternal.FindTestFinishEvent.success', index=1,
+      number=2, type=8, cpp_type=7, label=1,
+      has_default_value=False, default_value=False,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='test_reference', full_name='AtestLogEventInternal.FindTestFinishEvent.test_reference', index=2,
+      number=3, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='test_finders', full_name='AtestLogEventInternal.FindTestFinishEvent.test_finders', index=3,
+      number=4, type=9, cpp_type=9, label=3,
+      has_default_value=False, default_value=[],
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='test_info', full_name='AtestLogEventInternal.FindTestFinishEvent.test_info', index=4,
+      number=5, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=861,
+  serialized_end=993,
+)
+
+_ATESTLOGEVENTINTERNAL_BUILDFINISHEVENT = _descriptor.Descriptor(
+  name='BuildFinishEvent',
+  full_name='AtestLogEventInternal.BuildFinishEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='duration', full_name='AtestLogEventInternal.BuildFinishEvent.duration', index=0,
+      number=1, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='success', full_name='AtestLogEventInternal.BuildFinishEvent.success', index=1,
+      number=2, type=8, cpp_type=7, label=1,
+      has_default_value=False, default_value=False,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='targets', full_name='AtestLogEventInternal.BuildFinishEvent.targets', index=2,
+      number=3, type=9, cpp_type=9, label=3,
+      has_default_value=False, default_value=[],
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=995,
+  serialized_end=1076,
+)
+
+_ATESTLOGEVENTINTERNAL_RUNNERFINISHEVENT_TEST = _descriptor.Descriptor(
+  name='Test',
+  full_name='AtestLogEventInternal.RunnerFinishEvent.Test',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='name', full_name='AtestLogEventInternal.RunnerFinishEvent.Test.name', index=0,
+      number=1, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='result', full_name='AtestLogEventInternal.RunnerFinishEvent.Test.result', index=1,
+      number=2, type=5, cpp_type=1, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='stacktrace', full_name='AtestLogEventInternal.RunnerFinishEvent.Test.stacktrace', index=2,
+      number=3, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=1228,
+  serialized_end=1284,
+)
+
+_ATESTLOGEVENTINTERNAL_RUNNERFINISHEVENT = _descriptor.Descriptor(
+  name='RunnerFinishEvent',
+  full_name='AtestLogEventInternal.RunnerFinishEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='duration', full_name='AtestLogEventInternal.RunnerFinishEvent.duration', index=0,
+      number=1, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='success', full_name='AtestLogEventInternal.RunnerFinishEvent.success', index=1,
+      number=2, type=8, cpp_type=7, label=1,
+      has_default_value=False, default_value=False,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='runner_name', full_name='AtestLogEventInternal.RunnerFinishEvent.runner_name', index=2,
+      number=3, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='test', full_name='AtestLogEventInternal.RunnerFinishEvent.test', index=3,
+      number=4, type=11, cpp_type=10, label=3,
+      has_default_value=False, default_value=[],
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[_ATESTLOGEVENTINTERNAL_RUNNERFINISHEVENT_TEST, ],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=1079,
+  serialized_end=1284,
+)
+
+_ATESTLOGEVENTINTERNAL_RUNTESTSFINISHEVENT = _descriptor.Descriptor(
+  name='RunTestsFinishEvent',
+  full_name='AtestLogEventInternal.RunTestsFinishEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='duration', full_name='AtestLogEventInternal.RunTestsFinishEvent.duration', index=0,
+      number=1, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=1286,
+  serialized_end=1336,
+)
+
+_ATESTLOGEVENTINTERNAL_LOCALDETECTEVENT = _descriptor.Descriptor(
+  name='LocalDetectEvent',
+  full_name='AtestLogEventInternal.LocalDetectEvent',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='detect_type', full_name='AtestLogEventInternal.LocalDetectEvent.detect_type', index=0,
+      number=1, type=5, cpp_type=1, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='result', full_name='AtestLogEventInternal.LocalDetectEvent.result', index=1,
+      number=2, type=5, cpp_type=1, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+  ],
+  serialized_start=1338,
+  serialized_end=1393,
+)
+
+_ATESTLOGEVENTINTERNAL = _descriptor.Descriptor(
+  name='AtestLogEventInternal',
+  full_name='AtestLogEventInternal',
+  filename=None,
+  file=DESCRIPTOR,
+  containing_type=None,
+  fields=[
+    _descriptor.FieldDescriptor(
+      name='user_key', full_name='AtestLogEventInternal.user_key', index=0,
+      number=1, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='run_id', full_name='AtestLogEventInternal.run_id', index=1,
+      number=2, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='user_type', full_name='AtestLogEventInternal.user_type', index=2,
+      number=3, type=14, cpp_type=8, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='tool_name', full_name='AtestLogEventInternal.tool_name', index=3,
+      number=10, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='atest_start_event', full_name='AtestLogEventInternal.atest_start_event', index=4,
+      number=4, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='atest_exit_event', full_name='AtestLogEventInternal.atest_exit_event', index=5,
+      number=5, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='find_test_finish_event', full_name='AtestLogEventInternal.find_test_finish_event', index=6,
+      number=6, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='build_finish_event', full_name='AtestLogEventInternal.build_finish_event', index=7,
+      number=7, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='runner_finish_event', full_name='AtestLogEventInternal.runner_finish_event', index=8,
+      number=8, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='run_tests_finish_event', full_name='AtestLogEventInternal.run_tests_finish_event', index=9,
+      number=9, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+    _descriptor.FieldDescriptor(
+      name='local_detect_event', full_name='AtestLogEventInternal.local_detect_event', index=10,
+      number=11, type=11, cpp_type=10, label=1,
+      has_default_value=False, default_value=None,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      options=None),
+  ],
+  extensions=[
+  ],
+  nested_types=[_ATESTLOGEVENTINTERNAL_ATESTSTARTEVENT, _ATESTLOGEVENTINTERNAL_ATESTEXITEVENT, _ATESTLOGEVENTINTERNAL_FINDTESTFINISHEVENT, _ATESTLOGEVENTINTERNAL_BUILDFINISHEVENT, _ATESTLOGEVENTINTERNAL_RUNNERFINISHEVENT, _ATESTLOGEVENTINTERNAL_RUNTESTSFINISHEVENT, _ATESTLOGEVENTINTERNAL_LOCALDETECTEVENT, ],
+  enum_types=[
+  ],
+  options=None,
+  is_extendable=False,
+  syntax='proto2',
+  extension_ranges=[],
+  oneofs=[
+    _descriptor.OneofDescriptor(
+      name='event', full_name='AtestLogEventInternal.event',
+      index=0, containing_type=None, fields=[]),
+  ],
+  serialized_start=54,
+  serialized_end=1402,
+)
+
+_ATESTLOGEVENTINTERNAL_ATESTSTARTEVENT.containing_type = _ATESTLOGEVENTINTERNAL
+_ATESTLOGEVENTINTERNAL_ATESTEXITEVENT.fields_by_name['duration'].message_type = proto_dot_common__pb2._DURATION
+_ATESTLOGEVENTINTERNAL_ATESTEXITEVENT.containing_type = _ATESTLOGEVENTINTERNAL
+_ATESTLOGEVENTINTERNAL_FINDTESTFINISHEVENT.fields_by_name['duration'].message_type = proto_dot_common__pb2._DURATION
+_ATESTLOGEVENTINTERNAL_FINDTESTFINISHEVENT.containing_type = _ATESTLOGEVENTINTERNAL
+_ATESTLOGEVENTINTERNAL_BUILDFINISHEVENT.fields_by_name['duration'].message_type = proto_dot_common__pb2._DURATION
+_ATESTLOGEVENTINTERNAL_BUILDFINISHEVENT.containing_type = _ATESTLOGEVENTINTERNAL
+_ATESTLOGEVENTINTERNAL_RUNNERFINISHEVENT_TEST.containing_type = _ATESTLOGEVENTINTERNAL_RUNNERFINISHEVENT
+_ATESTLOGEVENTINTERNAL_RUNNERFINISHEVENT.fields_by_name['duration'].message_type = proto_dot_common__pb2._DURATION
+_ATESTLOGEVENTINTERNAL_RUNNERFINISHEVENT.fields_by_name['test'].message_type = _ATESTLOGEVENTINTERNAL_RUNNERFINISHEVENT_TEST
+_ATESTLOGEVENTINTERNAL_RUNNERFINISHEVENT.containing_type = _ATESTLOGEVENTINTERNAL
+_ATESTLOGEVENTINTERNAL_RUNTESTSFINISHEVENT.fields_by_name['duration'].message_type = proto_dot_common__pb2._DURATION
+_ATESTLOGEVENTINTERNAL_RUNTESTSFINISHEVENT.containing_type = _ATESTLOGEVENTINTERNAL
+_ATESTLOGEVENTINTERNAL_LOCALDETECTEVENT.containing_type = _ATESTLOGEVENTINTERNAL
+_ATESTLOGEVENTINTERNAL.fields_by_name['user_type'].enum_type = proto_dot_common__pb2._USERTYPE
+_ATESTLOGEVENTINTERNAL.fields_by_name['atest_start_event'].message_type = _ATESTLOGEVENTINTERNAL_ATESTSTARTEVENT
+_ATESTLOGEVENTINTERNAL.fields_by_name['atest_exit_event'].message_type = _ATESTLOGEVENTINTERNAL_ATESTEXITEVENT
+_ATESTLOGEVENTINTERNAL.fields_by_name['find_test_finish_event'].message_type = _ATESTLOGEVENTINTERNAL_FINDTESTFINISHEVENT
+_ATESTLOGEVENTINTERNAL.fields_by_name['build_finish_event'].message_type = _ATESTLOGEVENTINTERNAL_BUILDFINISHEVENT
+_ATESTLOGEVENTINTERNAL.fields_by_name['runner_finish_event'].message_type = _ATESTLOGEVENTINTERNAL_RUNNERFINISHEVENT
+_ATESTLOGEVENTINTERNAL.fields_by_name['run_tests_finish_event'].message_type = _ATESTLOGEVENTINTERNAL_RUNTESTSFINISHEVENT
+_ATESTLOGEVENTINTERNAL.fields_by_name['local_detect_event'].message_type = _ATESTLOGEVENTINTERNAL_LOCALDETECTEVENT
+_ATESTLOGEVENTINTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTINTERNAL.fields_by_name['atest_start_event'])
+_ATESTLOGEVENTINTERNAL.fields_by_name['atest_start_event'].containing_oneof = _ATESTLOGEVENTINTERNAL.oneofs_by_name['event']
+_ATESTLOGEVENTINTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTINTERNAL.fields_by_name['atest_exit_event'])
+_ATESTLOGEVENTINTERNAL.fields_by_name['atest_exit_event'].containing_oneof = _ATESTLOGEVENTINTERNAL.oneofs_by_name['event']
+_ATESTLOGEVENTINTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTINTERNAL.fields_by_name['find_test_finish_event'])
+_ATESTLOGEVENTINTERNAL.fields_by_name['find_test_finish_event'].containing_oneof = _ATESTLOGEVENTINTERNAL.oneofs_by_name['event']
+_ATESTLOGEVENTINTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTINTERNAL.fields_by_name['build_finish_event'])
+_ATESTLOGEVENTINTERNAL.fields_by_name['build_finish_event'].containing_oneof = _ATESTLOGEVENTINTERNAL.oneofs_by_name['event']
+_ATESTLOGEVENTINTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTINTERNAL.fields_by_name['runner_finish_event'])
+_ATESTLOGEVENTINTERNAL.fields_by_name['runner_finish_event'].containing_oneof = _ATESTLOGEVENTINTERNAL.oneofs_by_name['event']
+_ATESTLOGEVENTINTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTINTERNAL.fields_by_name['run_tests_finish_event'])
+_ATESTLOGEVENTINTERNAL.fields_by_name['run_tests_finish_event'].containing_oneof = _ATESTLOGEVENTINTERNAL.oneofs_by_name['event']
+_ATESTLOGEVENTINTERNAL.oneofs_by_name['event'].fields.append(
+  _ATESTLOGEVENTINTERNAL.fields_by_name['local_detect_event'])
+_ATESTLOGEVENTINTERNAL.fields_by_name['local_detect_event'].containing_oneof = _ATESTLOGEVENTINTERNAL.oneofs_by_name['event']
+DESCRIPTOR.message_types_by_name['AtestLogEventInternal'] = _ATESTLOGEVENTINTERNAL
+
+AtestLogEventInternal = _reflection.GeneratedProtocolMessageType('AtestLogEventInternal', (_message.Message,), dict(
+
+  AtestStartEvent = _reflection.GeneratedProtocolMessageType('AtestStartEvent', (_message.Message,), dict(
+    DESCRIPTOR = _ATESTLOGEVENTINTERNAL_ATESTSTARTEVENT,
+    __module__ = 'proto.internal_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventInternal.AtestStartEvent)
+    ))
+  ,
+
+  AtestExitEvent = _reflection.GeneratedProtocolMessageType('AtestExitEvent', (_message.Message,), dict(
+    DESCRIPTOR = _ATESTLOGEVENTINTERNAL_ATESTEXITEVENT,
+    __module__ = 'proto.internal_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventInternal.AtestExitEvent)
+    ))
+  ,
+
+  FindTestFinishEvent = _reflection.GeneratedProtocolMessageType('FindTestFinishEvent', (_message.Message,), dict(
+    DESCRIPTOR = _ATESTLOGEVENTINTERNAL_FINDTESTFINISHEVENT,
+    __module__ = 'proto.internal_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventInternal.FindTestFinishEvent)
+    ))
+  ,
+
+  BuildFinishEvent = _reflection.GeneratedProtocolMessageType('BuildFinishEvent', (_message.Message,), dict(
+    DESCRIPTOR = _ATESTLOGEVENTINTERNAL_BUILDFINISHEVENT,
+    __module__ = 'proto.internal_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventInternal.BuildFinishEvent)
+    ))
+  ,
+
+  RunnerFinishEvent = _reflection.GeneratedProtocolMessageType('RunnerFinishEvent', (_message.Message,), dict(
+
+    Test = _reflection.GeneratedProtocolMessageType('Test', (_message.Message,), dict(
+      DESCRIPTOR = _ATESTLOGEVENTINTERNAL_RUNNERFINISHEVENT_TEST,
+      __module__ = 'proto.internal_user_log_pb2'
+      # @@protoc_insertion_point(class_scope:AtestLogEventInternal.RunnerFinishEvent.Test)
+      ))
+    ,
+    DESCRIPTOR = _ATESTLOGEVENTINTERNAL_RUNNERFINISHEVENT,
+    __module__ = 'proto.internal_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventInternal.RunnerFinishEvent)
+    ))
+  ,
+
+  RunTestsFinishEvent = _reflection.GeneratedProtocolMessageType('RunTestsFinishEvent', (_message.Message,), dict(
+    DESCRIPTOR = _ATESTLOGEVENTINTERNAL_RUNTESTSFINISHEVENT,
+    __module__ = 'proto.internal_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventInternal.RunTestsFinishEvent)
+    ))
+  ,
+
+  LocalDetectEvent = _reflection.GeneratedProtocolMessageType('LocalDetectEvent', (_message.Message,), dict(
+    DESCRIPTOR = _ATESTLOGEVENTINTERNAL_LOCALDETECTEVENT,
+    __module__ = 'proto.internal_user_log_pb2'
+    # @@protoc_insertion_point(class_scope:AtestLogEventInternal.LocalDetectEvent)
+    ))
+  ,
+  DESCRIPTOR = _ATESTLOGEVENTINTERNAL,
+  __module__ = 'proto.internal_user_log_pb2'
+  # @@protoc_insertion_point(class_scope:AtestLogEventInternal)
+  ))
+_sym_db.RegisterMessage(AtestLogEventInternal)
+_sym_db.RegisterMessage(AtestLogEventInternal.AtestStartEvent)
+_sym_db.RegisterMessage(AtestLogEventInternal.AtestExitEvent)
+_sym_db.RegisterMessage(AtestLogEventInternal.FindTestFinishEvent)
+_sym_db.RegisterMessage(AtestLogEventInternal.BuildFinishEvent)
+_sym_db.RegisterMessage(AtestLogEventInternal.RunnerFinishEvent)
+_sym_db.RegisterMessage(AtestLogEventInternal.RunnerFinishEvent.Test)
+_sym_db.RegisterMessage(AtestLogEventInternal.RunTestsFinishEvent)
+_sym_db.RegisterMessage(AtestLogEventInternal.LocalDetectEvent)
+
+
+# @@protoc_insertion_point(module_scope)
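RunnerFinishEvent carries a repeated nested Test message. A short sketch of attaching per-test results (runner and test names are illustrative, not taken from atest):

```
from proto import internal_user_log_pb2

event = internal_user_log_pb2.AtestLogEventInternal()
runner = event.runner_finish_event
runner.success = True
runner.runner_name = 'ExampleRunner'  # illustrative name
# add() appends a new entry to the repeated Test field.
runner.test.add(name='com.example.FooTest#testBar', result=0)
runner.test.add(name='com.example.FooTest#testBaz', result=1,
                stacktrace='example stacktrace')
assert len(runner.test) == 2
```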
diff --git a/atest-py2/result_reporter.py b/atest-py2/result_reporter.py
new file mode 100644
index 0000000..17032cd
--- /dev/null
+++ b/atest-py2/result_reporter.py
@@ -0,0 +1,524 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+Result Reporter
+
+The result reporter formats and prints test results.
+
+----
+Example output for a command that runs the following tests:
+CtsAnimationTestCases:EvaluatorTest, HelloWorldTests, and WmTests
+
+Running Tests ...
+
+CtsAnimationTestCases
+---------------------
+
+android.animation.cts.EvaluatorTest.UnitTests (7 Tests)
+[1/7] android.animation.cts.EvaluatorTest#testRectEvaluator: PASSED (153ms)
+[2/7] android.animation.cts.EvaluatorTest#testIntArrayEvaluator: PASSED (0ms)
+[3/7] android.animation.cts.EvaluatorTest#testIntEvaluator: PASSED (0ms)
+[4/7] android.animation.cts.EvaluatorTest#testFloatArrayEvaluator: PASSED (1ms)
+[5/7] android.animation.cts.EvaluatorTest#testPointFEvaluator: PASSED (1ms)
+[6/7] android.animation.cts.EvaluatorTest#testArgbEvaluator: PASSED (0ms)
+[7/7] android.animation.cts.EvaluatorTest#testFloatEvaluator: PASSED (1ms)
+
+HelloWorldTests
+---------------
+
+android.test.example.helloworld.UnitTests (2 Tests)
+[1/2] android.test.example.helloworld.HelloWorldTest#testHalloWelt: PASSED (0ms)
+[2/2] android.test.example.helloworld.HelloWorldTest#testHelloWorld: PASSED (1ms)
+
+WmTests
+-------
+
+com.android.tradefed.targetprep.UnitTests (1 Test)
+RUNNER ERROR: com.android.tradefed.targetprep.TargetSetupError:
+Failed to install WmTests.apk on 127.0.0.1:54373. Reason:
+    error message ...
+
+
+Summary
+-------
+CtsAnimationTestCases: Passed: 7, Failed: 0
+HelloWorldTests: Passed: 2, Failed: 0
+WmTests: Passed: 0, Failed: 0 (Completed With ERRORS)
+
+1 test failed
+"""
+
+from __future__ import print_function
+from collections import OrderedDict
+
+import constants
+import atest_utils as au
+
+from test_runners import test_runner_base
+
+UNSUPPORTED_FLAG = 'UNSUPPORTED_RUNNER'
+FAILURE_FLAG = 'RUNNER_FAILURE'
+BENCHMARK_ESSENTIAL_KEYS = {'repetition_index', 'cpu_time', 'name', 'repetitions',
+                            'run_type', 'threads', 'time_unit', 'iterations',
+                            'run_name', 'real_time'}
+# TODO(b/146875480): handle the optional benchmark events
+BENCHMARK_OPTIONAL_KEYS = {'bytes_per_second', 'label'}
+BENCHMARK_EVENT_KEYS = BENCHMARK_ESSENTIAL_KEYS.union(BENCHMARK_OPTIONAL_KEYS)
+INT_KEYS = {'cpu_time', 'real_time'}
+
+class PerfInfo(object):
+    """Class for storing performance test of a test run."""
+
+    def __init__(self):
+        """Initialize a new instance of PerfInfo class."""
+        # perf_info: A list of benchmark_info(dict).
+        self.perf_info = []
+
+    def update_perf_info(self, test):
+        """Update perf_info with the given result of a single test.
+
+        Args:
+            test: A TestResult namedtuple.
+        """
+        all_additional_keys = set(test.additional_info.keys())
+        # Skip results that are missing any essential benchmark key.
+        if not BENCHMARK_ESSENTIAL_KEYS.issubset(all_additional_keys):
+            return
+        benchmark_info = {}
+        benchmark_info['test_name'] = test.test_name
+        for key, data in test.additional_info.items():
+            if key in INT_KEYS:
+                # Timing values arrive as strings; keep only the integer part.
+                data_to_int = data.split('.')[0]
+                benchmark_info[key] = data_to_int
+            elif key in BENCHMARK_EVENT_KEYS:
+                benchmark_info[key] = data
+        if benchmark_info:
+            self.perf_info.append(benchmark_info)
+
+    def print_perf_info(self):
+        """Print summary of a perf_info."""
+        if not self.perf_info:
+            return
+        classify_perf_info, max_len = self._classify_perf_info()
+        separator = '-' * au.get_terminal_size()[0]
+        print(separator)
+        print("{:{name}}    {:^{real_time}}    {:^{cpu_time}}    "
+              "{:>{iterations}}".format(
+                  'Benchmark', 'Time', 'CPU', 'Iteration',
+                  name=max_len['name']+3,
+                  real_time=max_len['real_time']+max_len['time_unit']+1,
+                  cpu_time=max_len['cpu_time']+max_len['time_unit']+1,
+                  iterations=max_len['iterations']))
+        print(separator)
+        for module_name, module_perf_info in classify_perf_info.items():
+            print("{}:".format(module_name))
+            for benchmark_info in module_perf_info:
+                # BpfBenchMark/MapWriteNewEntry/1    1530 ns     1522 ns   460517
+                print("  #{:{name}}    {:>{real_time}} {:{time_unit}}    "
+                      "{:>{cpu_time}} {:{time_unit}}    "
+                      "{:>{iterations}}".format(benchmark_info['name'],
+                                                benchmark_info['real_time'],
+                                                benchmark_info['time_unit'],
+                                                benchmark_info['cpu_time'],
+                                                benchmark_info['time_unit'],
+                                                benchmark_info['iterations'],
+                                                name=max_len['name'],
+                                                real_time=max_len['real_time'],
+                                                time_unit=max_len['time_unit'],
+                                                cpu_time=max_len['cpu_time'],
+                                                iterations=max_len['iterations']))
+
+    def _classify_perf_info(self):
+        """Classify the perf_info by test module name.
+
+        Returns:
+            A tuple of (classified_perf_info, max_len), where
+            classified_perf_info: A dict that maps each module name to the
+                                  perf_info entries belonging to that module.
+                e.g.
+                    { module_name_01: [perf_info of module_1],
+                      module_name_02: [perf_info of module_2], ...}
+            max_len: A dict which stores the max length of each event.
+                     It contains the max string length of 'name', 'real_time',
+                     'time_unit', 'cpu_time', 'iterations'.
+                e.g.
+                    {name: 56, real_time: 9, time_unit: 2, cpu_time: 8,
+                     iterations: 12}
+        """
+        module_categories = set()
+        max_len = {}
+        all_name = []
+        all_real_time = []
+        all_time_unit = []
+        all_cpu_time = []
+        all_iterations = ['Iteration']
+        for benchmark_info in self.perf_info:
+            module_categories.add(benchmark_info['test_name'].split('#')[0])
+            all_name.append(benchmark_info['name'])
+            all_real_time.append(benchmark_info['real_time'])
+            all_time_unit.append(benchmark_info['time_unit'])
+            all_cpu_time.append(benchmark_info['cpu_time'])
+            all_iterations.append(benchmark_info['iterations'])
+        classified_perf_info = {}
+        for module_name in module_categories:
+            module_perf_info = []
+            for benchmark_info in self.perf_info:
+                if benchmark_info['test_name'].split('#')[0] == module_name:
+                    module_perf_info.append(benchmark_info)
+            classified_perf_info[module_name] = module_perf_info
+        max_len = {'name': len(max(all_name, key=len)),
+                   'real_time': len(max(all_real_time, key=len)),
+                   'time_unit': len(max(all_time_unit, key=len)),
+                   'cpu_time': len(max(all_cpu_time, key=len)),
+                   'iterations': len(max(all_iterations, key=len))}
+        return classified_perf_info, max_len
+
+
+class RunStat(object):
+    """Class for storing stats of a test run."""
+
+    def __init__(self, passed=0, failed=0, ignored=0, run_errors=False,
+                 assumption_failed=0):
+        """Initialize a new instance of RunStat class.
+
+        Args:
+            passed: Count of passing tests.
+            failed: Count of failed tests.
+            ignored: Count of ignored tests.
+            assumption_failed: Count of assumption failure tests.
+            run_errors: A boolean indicating whether there were run errors.
+        """
+        # TODO(b/109822985): Track group and run estimated totals for updating
+        # summary line
+        self.passed = passed
+        self.failed = failed
+        self.ignored = ignored
+        self.assumption_failed = assumption_failed
+        self.perf_info = PerfInfo()
+        # Run errors are not for particular tests, they are runner errors.
+        self.run_errors = run_errors
+
+    @property
+    def total(self):
+        """Getter for total tests actually ran. Accessed via self.total"""
+        return self.passed + self.failed
+
+
+class ResultReporter(object):
+    """Result Reporter class.
+
+    As each test is run, the test runner will call self.process_test_result()
+    with a TestResult namedtuple that contains the following information:
+    - runner_name:   Name of the test runner
+    - group_name:    Name of the test group if any.
+                     In Tradefed that's the Module name.
+    - test_name:     Name of the test.
+                     In Tradefed that's qualified.class#Method
+    - status:        The strings FAILED or PASSED.
+    - stacktrace:    The stacktrace if the test failed.
+    - group_total:   The total tests scheduled to be run for a group.
+                     In Tradefed this is provided when the Module starts.
+    - runner_total:  The total tests scheduled to be run for the runner.
+                     In Tradefed this is not available so is None.
+
+    The Result Reporter will print the results of this test and then update
+    its stats state.
+
+    Test stats are stored in the following structure:
+    - self.run_stats: A RunStat instance containing stats for the overall run.
+                      This includes pass/fail counts across ALL test runners.
+
+    - self.runners:  Is of the form: {RunnerName: {GroupName: RunStat Instance}}
+                     Where {} is an ordered dict.
+
+                     The stats instance contains stats for each test group.
+                     If the runner doesn't support groups, then the group
+                     name will be None.
+
+    For example this could be a state of ResultReporter:
+
+    run_stats: RunStat(passed:10, failed:5)
+    runners: {'AtestTradefedTestRunner':
+                            {'Module1': RunStat(passed:1, failed:1),
+                             'Module2': RunStat(passed:0, failed:4)},
+              'RobolectricTestRunner': {None: RunStat(passed:5, failed:0)},
+              'VtsTradefedTestRunner': {'Module1': RunStat(passed:4, failed:0)}}
+    """
+
+    def __init__(self, silent=False):
+        """Init ResultReporter.
+
+        Args:
+            silent: A boolean; if True, suppress test result output.
+        """
+        self.run_stats = RunStat()
+        self.runners = OrderedDict()
+        self.failed_tests = []
+        self.all_test_results = []
+        self.pre_test = None
+        self.log_path = None
+        self.silent = silent
+        self.rerun_options = ''
+
+    def process_test_result(self, test):
+        """Given the results of a single test, update stats and print results.
+
+        Args:
+            test: A TestResult namedtuple.
+        """
+        if test.runner_name not in self.runners:
+            self.runners[test.runner_name] = OrderedDict()
+        assert self.runners[test.runner_name] != FAILURE_FLAG
+        self.all_test_results.append(test)
+        if test.group_name not in self.runners[test.runner_name]:
+            self.runners[test.runner_name][test.group_name] = RunStat()
+            self._print_group_title(test)
+        self._update_stats(test,
+                           self.runners[test.runner_name][test.group_name])
+        self._print_result(test)
+
+    def runner_failure(self, runner_name, failure_msg):
+        """Report a runner failure.
+
+        Use instead of process_test_result() when the runner fails separately
+        from any particular test, e.g. during runner setup.
+
+        Args:
+            runner_name: A string of the name of the runner.
+            failure_msg: A string of the failure message to pass to user.
+        """
+        self.runners[runner_name] = FAILURE_FLAG
+        print('\n', runner_name, '\n', '-' * len(runner_name), sep='')
+        print('Runner encountered a critical failure. Skipping.\n'
+              'FAILURE: %s' % failure_msg)
+
+    def register_unsupported_runner(self, runner_name):
+        """Register an unsupported runner.
+
+           Prints the following to the screen:
+
+           RunnerName
+           ----------
+           This runner does not support normal results formatting.
+           Below is the raw output of the test runner.
+
+           RAW OUTPUT:
+           <Raw Runner Output>
+
+           Args:
+              runner_name: A String of the test runner's name.
+        """
+        assert runner_name not in self.runners
+        self.runners[runner_name] = UNSUPPORTED_FLAG
+        print('\n', runner_name, '\n', '-' * len(runner_name), sep='')
+        print('This runner does not support normal results formatting. Below '
+              'is the raw output of the test runner.\n\nRAW OUTPUT:')
+
+    def print_starting_text(self):
+        """Print starting text for running tests."""
+        print(au.colorize('\nRunning Tests...', constants.CYAN))
+
+    def print_summary(self):
+        """Print summary of all test runs.
+
+        Returns:
+            0 if all tests pass, non-zero otherwise.
+
+        """
+        tests_ret = constants.EXIT_CODE_SUCCESS
+        if not self.runners:
+            return tests_ret
+        print('\n%s' % au.colorize('Summary', constants.CYAN))
+        print('-------')
+        if self.rerun_options:
+            print(self.rerun_options)
+        failed_sum = len(self.failed_tests)
+        for runner_name, groups in self.runners.items():
+            if groups == UNSUPPORTED_FLAG:
+                print(runner_name, 'Unsupported. See raw output above.')
+                continue
+            if groups == FAILURE_FLAG:
+                tests_ret = constants.EXIT_CODE_TEST_FAILURE
+                print(runner_name, 'Crashed. No results to report.')
+                failed_sum += 1
+                continue
+            for group_name, stats in groups.items():
+                name = group_name if group_name else runner_name
+                summary = self.process_summary(name, stats)
+                if stats.failed > 0:
+                    tests_ret = constants.EXIT_CODE_TEST_FAILURE
+                if stats.run_errors:
+                    tests_ret = constants.EXIT_CODE_TEST_FAILURE
+                    failed_sum += 1 if not stats.failed else 0
+                print(summary)
+        self.run_stats.perf_info.print_perf_info()
+        print()
+        if tests_ret == constants.EXIT_CODE_SUCCESS:
+            print(au.colorize('All tests passed!', constants.GREEN))
+        else:
+            message = '%d %s failed' % (failed_sum,
+                                        'tests' if failed_sum > 1 else 'test')
+            print(au.colorize(message, constants.RED))
+            print('-'*len(message))
+            self.print_failed_tests()
+        if self.log_path:
+            print('Test logs have been saved in %s' % self.log_path)
+        return tests_ret
+
+    def print_failed_tests(self):
+        """Print the failed tests if existed."""
+        if self.failed_tests:
+            for test_name in self.failed_tests:
+                print('%s' % test_name)
+
+    def process_summary(self, name, stats):
+        """Process the summary line.
+
+        Strategy:
+            Error status happens ->
+                SomeTests: Passed: 2, Failed: 0 <red>(Completed With ERRORS)</red>
+                SomeTests: Passed: 2, <red>Failed</red>: 2 <red>(Completed With ERRORS)</red>
+            More than 1 test fails ->
+                SomeTests: Passed: 2, <red>Failed</red>: 5
+            No test fails ->
+                SomeTests: <green>Passed</green>: 2, Failed: 0
+
+        Args:
+            name: A string of test name.
+            stats: A RunStat instance for a test group.
+
+        Returns:
+            A summary of the test result.
+        """
+        passed_label = 'Passed'
+        failed_label = 'Failed'
+        ignored_label = 'Ignored'
+        assumption_failed_label = 'Assumption Failed'
+        error_label = ''
+        if stats.failed > 0:
+            failed_label = au.colorize(failed_label, constants.RED)
+        if stats.run_errors:
+            error_label = au.colorize('(Completed With ERRORS)', constants.RED)
+        elif stats.failed == 0:
+            passed_label = au.colorize(passed_label, constants.GREEN)
+        summary = '%s: %s: %s, %s: %s, %s: %s, %s: %s %s' % (name,
+                                                             passed_label,
+                                                             stats.passed,
+                                                             failed_label,
+                                                             stats.failed,
+                                                             ignored_label,
+                                                             stats.ignored,
+                                                             assumption_failed_label,
+                                                             stats.assumption_failed,
+                                                             error_label)
+        return summary
+
+    def _update_stats(self, test, group):
+        """Given the results of a single test, update test run stats.
+
+        Args:
+            test: a TestResult namedtuple.
+            group: a RunStat instance for a test group.
+        """
+        # TODO(b/109822985): Track group and run estimated totals for updating
+        # summary line
+        if test.status == test_runner_base.PASSED_STATUS:
+            self.run_stats.passed += 1
+            group.passed += 1
+        elif test.status == test_runner_base.IGNORED_STATUS:
+            self.run_stats.ignored += 1
+            group.ignored += 1
+        elif test.status == test_runner_base.ASSUMPTION_FAILED:
+            self.run_stats.assumption_failed += 1
+            group.assumption_failed += 1
+        elif test.status == test_runner_base.FAILED_STATUS:
+            self.run_stats.failed += 1
+            self.failed_tests.append(test.test_name)
+            group.failed += 1
+        elif test.status == test_runner_base.ERROR_STATUS:
+            self.run_stats.run_errors = True
+            group.run_errors = True
+        self.run_stats.perf_info.update_perf_info(test)
+
+    def _print_group_title(self, test):
+        """Print the title line for a test group.
+
+        Test Group/Runner Name
+        ----------------------
+
+        Args:
+            test: A TestResult namedtuple.
+        """
+        if self.silent:
+            return
+        title = test.group_name or test.runner_name
+        underline = '-' * (len(title))
+        print('\n%s\n%s' % (title, underline))
+
+    def _print_result(self, test):
+        """Print the results of a single test.
+
+           Looks like:
+           fully.qualified.class#TestMethod: PASSED/FAILED
+
+        Args:
+            test: a TestResult namedtuple.
+        """
+        if self.silent:
+            return
+        if not self.pre_test or (test.test_run_name !=
+                                 self.pre_test.test_run_name):
+            print('%s (%s %s)' % (au.colorize(test.test_run_name,
+                                              constants.BLUE),
+                                  test.group_total,
+                                  'Test' if test.group_total <= 1 else 'Tests'))
+        if test.status == test_runner_base.ERROR_STATUS:
+            print('RUNNER ERROR: %s\n' % test.details)
+            self.pre_test = test
+            return
+        if test.test_name:
+            if test.status == test_runner_base.PASSED_STATUS:
+                # Example of output:
+                # [78/92] test_name: PASSED (92ms)
+                print('[%s/%s] %s: %s %s' % (test.test_count,
+                                             test.group_total,
+                                             test.test_name,
+                                             au.colorize(
+                                                 test.status,
+                                                 constants.GREEN),
+                                             test.test_time))
+                for key, data in test.additional_info.items():
+                    if key not in BENCHMARK_EVENT_KEYS:
+                        print('\t%s: %s' % (au.colorize(key, constants.BLUE), data))
+            elif test.status == test_runner_base.IGNORED_STATUS:
+                # Example: [33/92] test_name: IGNORED (12ms)
+                print('[%s/%s] %s: %s %s' % (test.test_count, test.group_total,
+                                             test.test_name, au.colorize(
+                                                 test.status, constants.MAGENTA),
+                                             test.test_time))
+            elif test.status == test_runner_base.ASSUMPTION_FAILED:
+                # Example: [33/92] test_name: ASSUMPTION_FAILED (12ms)
+                print('[%s/%s] %s: %s %s' % (test.test_count, test.group_total,
+                                             test.test_name, au.colorize(
+                                                 test.status, constants.MAGENTA),
+                                             test.test_time))
+            else:
+                # Example: [26/92] test_name: FAILED (32ms)
+                print('[%s/%s] %s: %s %s' % (test.test_count, test.group_total,
+                                             test.test_name, au.colorize(
+                                                 test.status, constants.RED),
+                                             test.test_time))
+        if test.status == test_runner_base.FAILED_STATUS:
+            print('\nSTACKTRACE:\n%s' % test.details)
+        self.pre_test = test
diff --git a/atest-py2/result_reporter_unittest.py b/atest-py2/result_reporter_unittest.py
new file mode 100755
index 0000000..9c56dc5
--- /dev/null
+++ b/atest-py2/result_reporter_unittest.py
@@ -0,0 +1,546 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for result_reporter."""
+
+import sys
+import unittest
+import mock
+
+import result_reporter
+from test_runners import test_runner_base
+
+if sys.version_info[0] == 2:
+    from StringIO import StringIO
+else:
+    from io import StringIO
+
+RESULT_PASSED_TEST = test_runner_base.TestResult(
+    runner_name='someTestRunner',
+    group_name='someTestModule',
+    test_name='someClassName#sostName',
+    status=test_runner_base.PASSED_STATUS,
+    details=None,
+    test_count=1,
+    test_time='(10ms)',
+    runner_total=None,
+    group_total=2,
+    additional_info={},
+    test_run_name='com.android.UnitTests'
+)
+
+RESULT_PASSED_TEST_MODULE_2 = test_runner_base.TestResult(
+    runner_name='someTestRunner',
+    group_name='someTestModule2',
+    test_name='someClassName#sostName',
+    status=test_runner_base.PASSED_STATUS,
+    details=None,
+    test_count=1,
+    test_time='(10ms)',
+    runner_total=None,
+    group_total=2,
+    additional_info={},
+    test_run_name='com.android.UnitTests'
+)
+
+RESULT_PASSED_TEST_RUNNER_2_NO_MODULE = test_runner_base.TestResult(
+    runner_name='someTestRunner2',
+    group_name=None,
+    test_name='someClassName#sostName',
+    status=test_runner_base.PASSED_STATUS,
+    details=None,
+    test_count=1,
+    test_time='(10ms)',
+    runner_total=None,
+    group_total=2,
+    additional_info={},
+    test_run_name='com.android.UnitTests'
+)
+
+RESULT_FAILED_TEST = test_runner_base.TestResult(
+    runner_name='someTestRunner',
+    group_name='someTestModule',
+    test_name='someClassName2#sestName2',
+    status=test_runner_base.FAILED_STATUS,
+    details='someTrace',
+    test_count=1,
+    test_time='',
+    runner_total=None,
+    group_total=2,
+    additional_info={},
+    test_run_name='com.android.UnitTests'
+)
+
+RESULT_RUN_FAILURE = test_runner_base.TestResult(
+    runner_name='someTestRunner',
+    group_name='someTestModule',
+    test_name='someClassName#sostName',
+    status=test_runner_base.ERROR_STATUS,
+    details='someRunFailureReason',
+    test_count=1,
+    test_time='',
+    runner_total=None,
+    group_total=2,
+    additional_info={},
+    test_run_name='com.android.UnitTests'
+)
+
+RESULT_INVOCATION_FAILURE = test_runner_base.TestResult(
+    runner_name='someTestRunner',
+    group_name=None,
+    test_name=None,
+    status=test_runner_base.ERROR_STATUS,
+    details='someInvocationFailureReason',
+    test_count=1,
+    test_time='',
+    runner_total=None,
+    group_total=None,
+    additional_info={},
+    test_run_name='com.android.UnitTests'
+)
+
+RESULT_IGNORED_TEST = test_runner_base.TestResult(
+    runner_name='someTestRunner',
+    group_name='someTestModule',
+    test_name='someClassName#sostName',
+    status=test_runner_base.IGNORED_STATUS,
+    details=None,
+    test_count=1,
+    test_time='(10ms)',
+    runner_total=None,
+    group_total=2,
+    additional_info={},
+    test_run_name='com.android.UnitTests'
+)
+
+RESULT_ASSUMPTION_FAILED_TEST = test_runner_base.TestResult(
+    runner_name='someTestRunner',
+    group_name='someTestModule',
+    test_name='someClassName#sostName',
+    status=test_runner_base.ASSUMPTION_FAILED,
+    details=None,
+    test_count=1,
+    test_time='(10ms)',
+    runner_total=None,
+    group_total=2,
+    additional_info={},
+    test_run_name='com.android.UnitTests'
+)
+
+ADDITIONAL_INFO_PERF01_TEST01 = {u'repetition_index': u'0',
+                                 u'cpu_time': u'10001.10001',
+                                 u'name': u'perfName01',
+                                 u'repetitions': u'0', u'run_type': u'iteration',
+                                 u'label': u'2123', u'threads': u'1',
+                                 u'time_unit': u'ns', u'iterations': u'1001',
+                                 u'run_name': u'perfName01',
+                                 u'real_time': u'11001.11001'}
+
+RESULT_PERF01_TEST01 = test_runner_base.TestResult(
+    runner_name='someTestRunner',
+    group_name='someTestModule',
+    test_name='somePerfClass01#perfName01',
+    status=test_runner_base.PASSED_STATUS,
+    details=None,
+    test_count=1,
+    test_time='(10ms)',
+    runner_total=None,
+    group_total=2,
+    additional_info=ADDITIONAL_INFO_PERF01_TEST01,
+    test_run_name='com.android.UnitTests'
+)
+
+RESULT_PERF01_TEST02 = test_runner_base.TestResult(
+    runner_name='someTestRunner',
+    group_name='someTestModule',
+    test_name='somePerfClass01#perfName02',
+    status=test_runner_base.PASSED_STATUS,
+    details=None,
+    test_count=1,
+    test_time='(10ms)',
+    runner_total=None,
+    group_total=2,
+    additional_info={u'repetition_index': u'0', u'cpu_time': u'10002.10002',
+                     u'name': u'perfName02',
+                     u'repetitions': u'0', u'run_type': u'iteration',
+                     u'label': u'2123', u'threads': u'1',
+                     u'time_unit': u'ns', u'iterations': u'1002',
+                     u'run_name': u'perfName02',
+                     u'real_time': u'11002.11002'},
+    test_run_name='com.android.UnitTests'
+)
+
+RESULT_PERF01_TEST03_NO_CPU_TIME = test_runner_base.TestResult(
+    runner_name='someTestRunner',
+    group_name='someTestModule',
+    test_name='somePerfClass01#perfName03',
+    status=test_runner_base.PASSED_STATUS,
+    details=None,
+    test_count=1,
+    test_time='(10ms)',
+    runner_total=None,
+    group_total=2,
+    additional_info={u'repetition_index': u'0',
+                     u'name': u'perfName03',
+                     u'repetitions': u'0', u'run_type': u'iteration',
+                     u'label': u'2123', u'threads': u'1',
+                     u'time_unit': u'ns', u'iterations': u'1003',
+                     u'run_name': u'perfName03',
+                     u'real_time': u'11003.11003'},
+    test_run_name='com.android.UnitTests'
+)
+
+RESULT_PERF02_TEST01 = test_runner_base.TestResult(
+    runner_name='someTestRunner',
+    group_name='someTestModule',
+    test_name='somePerfClass02#perfName11',
+    status=test_runner_base.PASSED_STATUS,
+    details=None,
+    test_count=1,
+    test_time='(10ms)',
+    runner_total=None,
+    group_total=2,
+    additional_info={u'repetition_index': u'0', u'cpu_time': u'20001.20001',
+                     u'name': u'perfName11',
+                     u'repetitions': u'0', u'run_type': u'iteration',
+                     u'label': u'2123', u'threads': u'1',
+                     u'time_unit': u'ns', u'iterations': u'2001',
+                     u'run_name': u'perfName11',
+                     u'real_time': u'210001.21001'},
+    test_run_name='com.android.UnitTests'
+)
+
+#pylint: disable=protected-access
+#pylint: disable=invalid-name
+class ResultReporterUnittests(unittest.TestCase):
+    """Unit tests for result_reporter.py"""
+
+    def setUp(self):
+        self.rr = result_reporter.ResultReporter()
+
+    def tearDown(self):
+        mock.patch.stopall()
+
+    @mock.patch.object(result_reporter.ResultReporter, '_print_group_title')
+    @mock.patch.object(result_reporter.ResultReporter, '_update_stats')
+    @mock.patch.object(result_reporter.ResultReporter, '_print_result')
+    def test_process_test_result(self, mock_print, mock_update, mock_title):
+        """Test process_test_result method."""
+        # Passed Test
+        self.assertTrue('someTestRunner' not in self.rr.runners)
+        self.rr.process_test_result(RESULT_PASSED_TEST)
+        self.assertTrue('someTestRunner' in self.rr.runners)
+        group = self.rr.runners['someTestRunner'].get('someTestModule')
+        self.assertIsNotNone(group)
+        mock_title.assert_called_with(RESULT_PASSED_TEST)
+        mock_update.assert_called_with(RESULT_PASSED_TEST, group)
+        mock_print.assert_called_with(RESULT_PASSED_TEST)
+        # Failed Test
+        mock_title.reset_mock()
+        self.rr.process_test_result(RESULT_FAILED_TEST)
+        mock_title.assert_not_called()
+        mock_update.assert_called_with(RESULT_FAILED_TEST, group)
+        mock_print.assert_called_with(RESULT_FAILED_TEST)
+        # Test with new Group
+        mock_title.reset_mock()
+        self.rr.process_test_result(RESULT_PASSED_TEST_MODULE_2)
+        self.assertTrue('someTestModule2' in self.rr.runners['someTestRunner'])
+        mock_title.assert_called_with(RESULT_PASSED_TEST_MODULE_2)
+        # Test with new Runner
+        mock_title.reset_mock()
+        self.rr.process_test_result(RESULT_PASSED_TEST_RUNNER_2_NO_MODULE)
+        self.assertTrue('someTestRunner2' in self.rr.runners)
+        mock_title.assert_called_with(RESULT_PASSED_TEST_RUNNER_2_NO_MODULE)
+
+    def test_print_result_run_name(self):
+        """Test print run name function in print_result method."""
+        try:
+            rr = result_reporter.ResultReporter()
+            capture_output = StringIO()
+            sys.stdout = capture_output
+            run_name = 'com.android.UnitTests'
+            rr._print_result(test_runner_base.TestResult(
+                runner_name='runner_name',
+                group_name='someTestModule',
+                test_name='someClassName#someTestName',
+                status=test_runner_base.FAILED_STATUS,
+                details='someTrace',
+                test_count=2,
+                test_time='(2h44m36.402s)',
+                runner_total=None,
+                group_total=2,
+                additional_info={},
+                test_run_name=run_name
+            ))
+            # Make sure the run name is in the first line.
+            capture_output_str = capture_output.getvalue().strip()
+            self.assertTrue(run_name in capture_output_str.split('\n')[0])
+            run_name2 = 'com.android.UnitTests2'
+            capture_output = StringIO()
+            sys.stdout = capture_output
+            rr._print_result(test_runner_base.TestResult(
+                runner_name='runner_name',
+                group_name='someTestModule',
+                test_name='someClassName#someTestName',
+                status=test_runner_base.FAILED_STATUS,
+                details='someTrace',
+                test_count=2,
+                test_time='(2h43m36.402s)',
+                runner_total=None,
+                group_total=2,
+                additional_info={},
+                test_run_name=run_name2
+            ))
+            # Make sure the run name is in the first line.
+            capture_output_str = capture_output.getvalue().strip()
+            self.assertTrue(run_name2 in capture_output_str.split('\n')[0])
+        finally:
+            sys.stdout = sys.__stdout__
+
+    def test_register_unsupported_runner(self):
+        """Test register_unsupported_runner method."""
+        self.rr.register_unsupported_runner('NotSupported')
+        runner = self.rr.runners['NotSupported']
+        self.assertIsNotNone(runner)
+        self.assertEquals(runner, result_reporter.UNSUPPORTED_FLAG)
+
+    def test_update_stats_passed(self):
+        """Test _update_stats method."""
+        # Passed Test
+        group = result_reporter.RunStat()
+        self.rr._update_stats(RESULT_PASSED_TEST, group)
+        self.assertEquals(self.rr.run_stats.passed, 1)
+        self.assertEquals(self.rr.run_stats.failed, 0)
+        self.assertEquals(self.rr.run_stats.run_errors, False)
+        self.assertEquals(self.rr.failed_tests, [])
+        self.assertEquals(group.passed, 1)
+        self.assertEquals(group.failed, 0)
+        self.assertEquals(group.ignored, 0)
+        self.assertEquals(group.run_errors, False)
+        # Passed Test New Group
+        group2 = result_reporter.RunStat()
+        self.rr._update_stats(RESULT_PASSED_TEST_MODULE_2, group2)
+        self.assertEquals(self.rr.run_stats.passed, 2)
+        self.assertEquals(self.rr.run_stats.failed, 0)
+        self.assertEquals(self.rr.run_stats.run_errors, False)
+        self.assertEquals(self.rr.failed_tests, [])
+        self.assertEquals(group2.passed, 1)
+        self.assertEquals(group2.failed, 0)
+        self.assertEquals(group.ignored, 0)
+        self.assertEquals(group2.run_errors, False)
+
+    def test_update_stats_failed(self):
+        """Test _update_stats method."""
+        # Passed Test
+        group = result_reporter.RunStat()
+        self.rr._update_stats(RESULT_PASSED_TEST, group)
+        # Passed Test New Group
+        group2 = result_reporter.RunStat()
+        self.rr._update_stats(RESULT_PASSED_TEST_MODULE_2, group2)
+        # Failed Test Old Group
+        self.rr._update_stats(RESULT_FAILED_TEST, group)
+        self.assertEquals(self.rr.run_stats.passed, 2)
+        self.assertEquals(self.rr.run_stats.failed, 1)
+        self.assertEquals(self.rr.run_stats.run_errors, False)
+        self.assertEquals(self.rr.failed_tests, [RESULT_FAILED_TEST.test_name])
+        self.assertEquals(group.passed, 1)
+        self.assertEquals(group.failed, 1)
+        self.assertEquals(group.ignored, 0)
+        self.assertEquals(group.total, 2)
+        self.assertEquals(group2.total, 1)
+        self.assertEquals(group.run_errors, False)
+        # Test Run Failure
+        self.rr._update_stats(RESULT_RUN_FAILURE, group)
+        self.assertEquals(self.rr.run_stats.passed, 2)
+        self.assertEquals(self.rr.run_stats.failed, 1)
+        self.assertEquals(self.rr.run_stats.run_errors, True)
+        self.assertEquals(self.rr.failed_tests, [RESULT_FAILED_TEST.test_name])
+        self.assertEquals(group.passed, 1)
+        self.assertEquals(group.failed, 1)
+        self.assertEquals(group.ignored, 0)
+        self.assertEquals(group.run_errors, True)
+        self.assertEquals(group2.run_errors, False)
+        # Invocation Failure
+        self.rr._update_stats(RESULT_INVOCATION_FAILURE, group)
+        self.assertEquals(self.rr.run_stats.passed, 2)
+        self.assertEquals(self.rr.run_stats.failed, 1)
+        self.assertEquals(self.rr.run_stats.run_errors, True)
+        self.assertEquals(self.rr.failed_tests, [RESULT_FAILED_TEST.test_name])
+        self.assertEquals(group.passed, 1)
+        self.assertEquals(group.failed, 1)
+        self.assertEquals(group.ignored, 0)
+        self.assertEquals(group.run_errors, True)
+
+    def test_update_stats_ignored_and_assumption_failure(self):
+        """Test _update_stats method."""
+        # Passed Test
+        group = result_reporter.RunStat()
+        self.rr._update_stats(RESULT_PASSED_TEST, group)
+        # Passed Test New Group
+        group2 = result_reporter.RunStat()
+        self.rr._update_stats(RESULT_PASSED_TEST_MODULE_2, group2)
+        # Failed Test Old Group
+        self.rr._update_stats(RESULT_FAILED_TEST, group)
+        # Test Run Failure
+        self.rr._update_stats(RESULT_RUN_FAILURE, group)
+        # Invocation Failure
+        self.rr._update_stats(RESULT_INVOCATION_FAILURE, group)
+        # Ignored Test
+        self.rr._update_stats(RESULT_IGNORED_TEST, group)
+        self.assertEquals(self.rr.run_stats.passed, 2)
+        self.assertEquals(self.rr.run_stats.failed, 1)
+        self.assertEquals(self.rr.run_stats.run_errors, True)
+        self.assertEquals(self.rr.failed_tests, [RESULT_FAILED_TEST.test_name])
+        self.assertEquals(group.passed, 1)
+        self.assertEquals(group.failed, 1)
+        self.assertEquals(group.ignored, 1)
+        self.assertEquals(group.run_errors, True)
+        # 2nd Ignored Test
+        self.rr._update_stats(RESULT_IGNORED_TEST, group)
+        self.assertEquals(self.rr.run_stats.passed, 2)
+        self.assertEquals(self.rr.run_stats.failed, 1)
+        self.assertEquals(self.rr.run_stats.run_errors, True)
+        self.assertEquals(self.rr.failed_tests, [RESULT_FAILED_TEST.test_name])
+        self.assertEquals(group.passed, 1)
+        self.assertEquals(group.failed, 1)
+        self.assertEquals(group.ignored, 2)
+        self.assertEquals(group.run_errors, True)
+
+        # Assumption_Failure test
+        self.rr._update_stats(RESULT_ASSUMPTION_FAILED_TEST, group)
+        self.assertEquals(group.assumption_failed, 1)
+        # 2nd Assumption_Failure test
+        self.rr._update_stats(RESULT_ASSUMPTION_FAILED_TEST, group)
+        self.assertEquals(group.assumption_failed, 2)
+
+    def test_print_summary_ret_val(self):
+        """Test print_summary method's return value."""
+        # PASS Case
+        self.rr.process_test_result(RESULT_PASSED_TEST)
+        self.assertEquals(0, self.rr.print_summary())
+        # PASS Case + Fail Case
+        self.rr.process_test_result(RESULT_FAILED_TEST)
+        self.assertNotEqual(0, self.rr.print_summary())
+        # PASS Case + Fail Case + PASS Case
+        self.rr.process_test_result(RESULT_PASSED_TEST_MODULE_2)
+        self.assertNotEqual(0, self.rr.print_summary())
+
+    def test_print_summary_ret_val_err_stat(self):
+        """Test print_summary method's return value."""
+        # PASS Case
+        self.rr.process_test_result(RESULT_PASSED_TEST)
+        self.assertEquals(0, self.rr.print_summary())
+        # PASS Case + Fail Case
+        self.rr.process_test_result(RESULT_RUN_FAILURE)
+        self.assertNotEqual(0, self.rr.print_summary())
+        # PASS Case + Fail Case + PASS Case
+        self.rr.process_test_result(RESULT_PASSED_TEST_MODULE_2)
+        self.assertNotEqual(0, self.rr.print_summary())
+
+    def test_update_perf_info(self):
+        """Test update_perf_info method."""
+        group = result_reporter.RunStat()
+        # 1. Test PerfInfo after RESULT_PERF01_TEST01
+        # _update_stats() will call perf_info.update_perf_info()
+        self.rr._update_stats(RESULT_PERF01_TEST01, group)
+        correct_perf_info = []
+        # trim the time from 10001.10001 to 10001
+        trim_perf01_test01 = {u'repetition_index': u'0', u'cpu_time': u'10001',
+                              u'name': u'perfName01',
+                              u'repetitions': u'0', u'run_type': u'iteration',
+                              u'label': u'2123', u'threads': u'1',
+                              u'time_unit': u'ns', u'iterations': u'1001',
+                              u'run_name': u'perfName01',
+                              u'real_time': u'11001',
+                              'test_name': 'somePerfClass01#perfName01'}
+        correct_perf_info.append(trim_perf01_test01)
+        self.assertEquals(self.rr.run_stats.perf_info.perf_info,
+                          correct_perf_info)
+        # 2. Test PerfInfo after RESULT_PERF01_TEST02
+        self.rr._update_stats(RESULT_PERF01_TEST02, group)
+        trim_perf01_test02 = {u'repetition_index': u'0', u'cpu_time': u'10002',
+                              u'name': u'perfName02',
+                              u'repetitions': u'0', u'run_type': u'iteration',
+                              u'label': u'2123', u'threads': u'1',
+                              u'time_unit': u'ns', u'iterations': u'1002',
+                              u'run_name': u'perfName02',
+                              u'real_time': u'11002',
+                              'test_name': 'somePerfClass01#perfName02'}
+        correct_perf_info.append(trim_perf01_test02)
+        self.assertEquals(self.rr.run_stats.perf_info.perf_info,
+                          correct_perf_info)
+        # 3. Test PerfInfo after RESULT_PERF02_TEST01
+        self.rr._update_stats(RESULT_PERF02_TEST01, group)
+        trim_perf02_test01 = {u'repetition_index': u'0', u'cpu_time': u'20001',
+                              u'name': u'perfName11',
+                              u'repetitions': u'0', u'run_type': u'iteration',
+                              u'label': u'2123', u'threads': u'1',
+                              u'time_unit': u'ns', u'iterations': u'2001',
+                              u'run_name': u'perfName11',
+                              u'real_time': u'210001',
+                              'test_name': 'somePerfClass02#perfName11'}
+        correct_perf_info.append(trim_perf02_test01)
+        self.assertEquals(self.rr.run_stats.perf_info.perf_info,
+                          correct_perf_info)
+        # 4. Test PerfInfo after RESULT_PERF01_TEST03_NO_CPU_TIME
+        self.rr._update_stats(RESULT_PERF01_TEST03_NO_CPU_TIME, group)
+        # Nothing is added since RESULT_PERF01_TEST03_NO_CPU_TIME lacks cpu_time
+        self.assertEquals(self.rr.run_stats.perf_info.perf_info,
+                          correct_perf_info)
+
+    def test_classify_perf_info(self):
+        """Test _classify_perf_info method."""
+        group = result_reporter.RunStat()
+        self.rr._update_stats(RESULT_PERF01_TEST01, group)
+        self.rr._update_stats(RESULT_PERF01_TEST02, group)
+        self.rr._update_stats(RESULT_PERF02_TEST01, group)
+        # trim the time from 10001.10001 to 10001
+        trim_perf01_test01 = {u'repetition_index': u'0', u'cpu_time': u'10001',
+                              u'name': u'perfName01',
+                              u'repetitions': u'0', u'run_type': u'iteration',
+                              u'label': u'2123', u'threads': u'1',
+                              u'time_unit': u'ns', u'iterations': u'1001',
+                              u'run_name': u'perfName01',
+                              u'real_time': u'11001',
+                              'test_name': 'somePerfClass01#perfName01'}
+        trim_perf01_test02 = {u'repetition_index': u'0', u'cpu_time': u'10002',
+                              u'name': u'perfName02',
+                              u'repetitions': u'0', u'run_type': u'iteration',
+                              u'label': u'2123', u'threads': u'1',
+                              u'time_unit': u'ns', u'iterations': u'1002',
+                              u'run_name': u'perfName02',
+                              u'real_time': u'11002',
+                              'test_name': 'somePerfClass01#perfName02'}
+        trim_perf02_test01 = {u'repetition_index': u'0', u'cpu_time': u'20001',
+                              u'name': u'perfName11',
+                              u'repetitions': u'0', u'run_type': u'iteration',
+                              u'label': u'2123', u'threads': u'1',
+                              u'time_unit': u'ns', u'iterations': u'2001',
+                              u'run_name': u'perfName11',
+                              u'real_time': u'210001',
+                              'test_name': 'somePerfClass02#perfName11'}
+        correct_classify_perf_info = {"somePerfClass01":[trim_perf01_test01,
+                                                         trim_perf01_test02],
+                                      "somePerfClass02":[trim_perf02_test01]}
+        classify_perf_info, max_len = self.rr.run_stats.perf_info._classify_perf_info()
+        correct_max_len = {'real_time': 6, 'cpu_time': 5, 'name': 10,
+                           'iterations': 9, 'time_unit': 2}
+        self.assertEquals(max_len, correct_max_len)
+        self.assertEquals(classify_perf_info, correct_classify_perf_info)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/run_atest_unittests.sh b/atest-py2/run_atest_unittests.sh
new file mode 100755
index 0000000..db28ac5
--- /dev/null
+++ b/atest-py2/run_atest_unittests.sh
@@ -0,0 +1,83 @@
+#!/bin/bash
+
+# Copyright (C) 2017 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# A simple helper script that runs all of the atest unit tests.
+# There are 2 situations that we take care of:
+#   1. User wants to invoke this script directly.
+#   2. PREUPLOAD hook invokes this script.
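+#
+# Illustrative invocations (hypothetical, assuming you run it from this
+# script's directory):
+#   $ ./run_atest_unittests.sh            # run every *_unittest.py found
+#   $ ./run_atest_unittests.sh coverage   # also collect and report coverage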
+
+ATEST_DIR=$(dirname $0)
+[ "$(uname -s)" == "Darwin" ] && { realpath(){ echo "$(cd $(dirname $1);pwd -P)/$(basename $1)"; }; }
+ATEST_REAL_PATH=$(realpath $ATEST_DIR)
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+NC='\033[0m' # No Color
+COVERAGE=false
+
+function get_pythonpath() {
+    echo "$ATEST_REAL_PATH:$PYTHONPATH"
+}
+
+function print_summary() {
+    local test_results=$1
+    if [[ $COVERAGE == true ]]; then
+        coverage report -m
+        coverage html
+    fi
+    if [[ $test_results -eq 0 ]]; then
+        echo -e "${GREEN}All unittests pass${NC}!"
+    else
+        echo -e "${RED}There was a unittest failure${NC}"
+    fi
+}
+
+function run_atest_unittests() {
+    echo "Running tests..."
+    local run_cmd="python"
+    local rc=0
+    if [[ $COVERAGE == true ]]; then
+        # Clear previously collected coverage data.
+        python -m coverage erase
+        # Collect coverage data.
+        run_cmd="coverage run --source $ATEST_REAL_PATH --append"
+    fi
+
+    for test_file in $(find $ATEST_DIR -name "*_unittest.py"); do
+        if ! PYTHONPATH=$(get_pythonpath) $run_cmd $test_file; then
+          rc=1
+          echo -e "${RED}$t failed${NC}"
+        fi
+    done
+    echo
+    print_summary $rc
+    return $rc
+}
+
+# If nothing is passed in, assume the user is invoking the script directly;
+# if we get a list of files, assume it's the PREUPLOAD hook.
+read -ra PREUPLOAD_FILES <<< "$@"
+if [[ ${#PREUPLOAD_FILES[@]} -eq 0 ]]; then
+    run_atest_unittests; exit $?
+elif [[ "${#PREUPLOAD_FILES[@]}" -eq 1 && "${PREUPLOAD_FILES}" == "coverage" ]]; then
+    COVERAGE=true run_atest_unittests; exit $?
+else
+    for f in "${PREUPLOAD_FILES[@]}"; do
+        # We only want to run this unittest if atest files have been touched.
+        if [[ $f == atest/* ]]; then
+            run_atest_unittests; exit $?
+        fi
+    done
+fi
diff --git a/atest-py2/test_data/test_commands.json b/atest-py2/test_data/test_commands.json
new file mode 100644
index 0000000..ec64e16
--- /dev/null
+++ b/atest-py2/test_data/test_commands.json
@@ -0,0 +1,59 @@
+{
+"hello_world_test": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter hello_world_test --log-level WARN --skip-loading-config-jar --logcat-on-failure --no-enable-granular-attempts"
+],
+"packages/apps/Car/Messenger/tests/robotests/src/com/android/car/messenger/MessengerDelegateTest.java": [
+"./build/soong/soong_ui.bash --make-mode RunCarMessengerRoboTests"
+],
+"CtsAnimationTestCases:AnimatorTest": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter CtsAnimationTestCases --atest-include-filter CtsAnimationTestCases:android.animation.cts.AnimatorTest --log-level WARN --skip-loading-config-jar --logcat-on-failure --no-enable-granular-attempts"
+],
+"CtsSampleDeviceTestCases:android.sample.cts.SampleDeviceReportLogTest": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter CtsSampleDeviceTestCases --atest-include-filter CtsSampleDeviceTestCases:android.sample.cts.SampleDeviceReportLogTest --log-level WARN --skip-loading-config-jar --logcat-on-failure --no-enable-granular-attempts"
+],
+"CtsAnimationTestCases CtsSampleDeviceTestCases": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter CtsAnimationTestCases --include-filter CtsSampleDeviceTestCases --log-level WARN --skip-loading-config-jar --logcat-on-failure --no-enable-granular-attempts"
+],
+"AnimatorTest": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter CtsAnimationTestCases --atest-include-filter CtsAnimationTestCases:android.animation.cts.AnimatorTest --log-level WARN --skip-loading-config-jar --logcat-on-failure --no-enable-granular-attempts"
+],
+"PacketFragmenterTest": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter net_test_hci --atest-include-filter net_test_hci:PacketFragmenterTest.* --log-level WARN --skip-loading-config-jar --logcat-on-failure --no-enable-granular-attempts"
+],
+"android.animation.cts": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter CtsAnimationTestCases --atest-include-filter CtsAnimationTestCases:android.animation.cts --log-level WARN --skip-loading-config-jar --logcat-on-failure --no-enable-granular-attempts"
+],
+"platform_testing/tests/example/native/Android.bp": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter hello_world_test --log-level WARN --skip-loading-config-jar --logcat-on-failure --no-enable-granular-attempts"
+],
+"tools/tradefederation/core/res/config/native-benchmark.xml": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter native-benchmark --log-level WARN --logcat-on-failure --no-enable-granular-attempts"
+],
+"native-benchmark": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter native-benchmark --log-level WARN --logcat-on-failure --no-enable-granular-attempts"
+],
+"platform_testing/tests/example/native": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter hello_world_test --log-level WARN --skip-loading-config-jar --logcat-on-failure --no-enable-granular-attempts"
+],
+"VtsCodelabHelloWorldTest": [
+"vts10-tradefed run commandAndExit vts-staging-default -m VtsCodelabHelloWorldTest --skip-all-system-status-check --skip-preconditions --primary-abi-only"
+],
+"aidegen_unittests": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --atest-log-file-path=/tmp/atest_run_1568627341_v33kdA/log --include-filter aidegen_unittests --log-level WARN"
+],
+"HelloWorldTests": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter HelloWorldTests --log-level WARN --skip-loading-config-jar --logcat-on-failure --no-enable-granular-attempts"
+],
+"CtsSampleDeviceTestCases:SampleDeviceTest#testSharedPreferences": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter CtsSampleDeviceTestCases --atest-include-filter CtsSampleDeviceTestCases:android.sample.cts.SampleDeviceTest#testSharedPreferences --log-level WARN --skip-loading-config-jar --logcat-on-failure --no-enable-granular-attempts"
+],
+"CtsSampleDeviceTestCases:android.sample.cts": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter CtsSampleDeviceTestCases --atest-include-filter CtsSampleDeviceTestCases:android.sample.cts --log-level WARN --skip-loading-config-jar --logcat-on-failure --no-enable-granular-attempts"
+],
+"PacketFragmenterTest#test_no_fragment_necessary,test_ble_fragment_necessary": [
+"atest_tradefed.sh template/atest_local_min --template:map test=atest --include-filter net_test_hci --atest-include-filter net_test_hci:PacketFragmenterTest.test_ble_fragment_necessary:PacketFragmenterTest.test_no_fragment_necessary --log-level WARN --skip-loading-config-jar --logcat-on-failure --no-enable-granular-attempts"
+],
+"CarMessengerRoboTests": [
+"./build/soong/soong_ui.bash --make-mode RunCarMessengerRoboTests"
+]
+}
diff --git a/atest-py2/test_finder_handler.py b/atest-py2/test_finder_handler.py
new file mode 100644
index 0000000..5736e1d
--- /dev/null
+++ b/atest-py2/test_finder_handler.py
@@ -0,0 +1,257 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Test Finder Handler module.
+"""
+
+import logging
+
+import atest_enum
+from test_finders import cache_finder
+from test_finders import test_finder_base
+from test_finders import suite_plan_finder
+from test_finders import tf_integration_finder
+from test_finders import module_finder
+
+# List of default test finder classes.
+_TEST_FINDERS = {
+    suite_plan_finder.SuitePlanFinder,
+    tf_integration_finder.TFIntegrationFinder,
+    module_finder.ModuleFinder,
+    cache_finder.CacheFinder,
+}
+
+# Explanation of REFERENCE_TYPEs:
+# ----------------------------------
+# 0. MODULE: LOCAL_MODULE or LOCAL_PACKAGE_NAME value in Android.mk/Android.bp.
+# 1. CLASS: Names that match a ClassName.java/kt file.
+# 2. QUALIFIED_CLASS: String like "a.b.c.ClassName".
+# 3. MODULE_CLASS: Combo of MODULE and CLASS as "module:class".
+# 4. PACKAGE: Package in a java file; same as the file path to the java file.
+# 5. MODULE_PACKAGE: Combo of MODULE and PACKAGE as "module:package".
+# 6. MODULE_FILE_PATH: File path to dir of tests or test itself.
+# 7. INTEGRATION_FILE_PATH: File path to config xml in one of the 4 integration
+#                           config directories.
+# 8. INTEGRATION: xml file name in one of the 4 integration config directories.
+# 9. SUITE: Value of the "run-suite-tag" in xml config file in 4 config dirs.
+#           Same as value of "test-suite-tag" in AndroidTest.xml files.
+# 10. CC_CLASS: Test case in cc file.
+# 11. SUITE_PLAN: Suite name such as cts.
+# 12. SUITE_PLAN_FILE_PATH: File path to config xml in the suite config directories.
+# 13. CACHE: A pseudo type that returns cached results via cache_finder
+#            instead of finding the test for real.
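+#
+# Illustrative examples (a single string can match several types):
+#   'CtsAnimationTestCases'              -> MODULE or CLASS
+#   'android.animation.cts.AnimatorTest' -> QUALIFIED_CLASS
+#   'CtsAnimationTestCases:AnimatorTest' -> MODULE_CLASS or INTEGRATION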
+_REFERENCE_TYPE = atest_enum.AtestEnum(['MODULE', 'CLASS', 'QUALIFIED_CLASS',
+                                        'MODULE_CLASS', 'PACKAGE',
+                                        'MODULE_PACKAGE', 'MODULE_FILE_PATH',
+                                        'INTEGRATION_FILE_PATH', 'INTEGRATION',
+                                        'SUITE', 'CC_CLASS', 'SUITE_PLAN',
+                                        'SUITE_PLAN_FILE_PATH', 'CACHE'])
+
+_REF_TYPE_TO_FUNC_MAP = {
+    _REFERENCE_TYPE.MODULE: module_finder.ModuleFinder.find_test_by_module_name,
+    _REFERENCE_TYPE.CLASS: module_finder.ModuleFinder.find_test_by_class_name,
+    _REFERENCE_TYPE.MODULE_CLASS: module_finder.ModuleFinder.find_test_by_module_and_class,
+    _REFERENCE_TYPE.QUALIFIED_CLASS: module_finder.ModuleFinder.find_test_by_class_name,
+    _REFERENCE_TYPE.PACKAGE: module_finder.ModuleFinder.find_test_by_package_name,
+    _REFERENCE_TYPE.MODULE_PACKAGE: module_finder.ModuleFinder.find_test_by_module_and_package,
+    _REFERENCE_TYPE.MODULE_FILE_PATH: module_finder.ModuleFinder.find_test_by_path,
+    _REFERENCE_TYPE.INTEGRATION_FILE_PATH:
+        tf_integration_finder.TFIntegrationFinder.find_int_test_by_path,
+    _REFERENCE_TYPE.INTEGRATION:
+        tf_integration_finder.TFIntegrationFinder.find_test_by_integration_name,
+    _REFERENCE_TYPE.CC_CLASS:
+        module_finder.ModuleFinder.find_test_by_cc_class_name,
+    _REFERENCE_TYPE.SUITE_PLAN:
+        suite_plan_finder.SuitePlanFinder.find_test_by_suite_name,
+    _REFERENCE_TYPE.SUITE_PLAN_FILE_PATH:
+        suite_plan_finder.SuitePlanFinder.find_test_by_suite_path,
+    _REFERENCE_TYPE.CACHE: cache_finder.CacheFinder.find_test_by_cache,
+}
+
+
+def _get_finder_instance_dict(module_info):
+    """Return dict of finder instances.
+
+    Args:
+        module_info: ModuleInfo for finder classes to use.
+
+    Returns:
+        Dict of finder instances keyed by their name.
+    """
+    instance_dict = {}
+    for finder in _get_test_finders():
+        instance_dict[finder.NAME] = finder(module_info=module_info)
+    return instance_dict
+
+
+def _get_test_finders():
+    """Returns the test finders.
+
+    If external test types are defined outside atest, they can be imported
+    here inside a try-except block.
+
+    Returns:
+        Set of test finder classes.
+    """
+    test_finders_list = _TEST_FINDERS
+    # Example import of external test finder:
+    try:
+        from test_finders import example_finder
+        test_finders_list.add(example_finder.ExampleFinder)
+    except ImportError:
+        pass
+    return test_finders_list
+
+# pylint: disable=too-many-return-statements
+def _get_test_reference_types(ref):
+    """Determine type of test reference based on the content of string.
+
+    Examples:
+        The string 'SequentialRWTest' could be a reference to
+        a Module or a Class name.
+
+        The string 'cts/tests/filesystem' could be a Path, Integration
+        or Suite reference.
+
+    Args:
+        ref: A string referencing a test.
+
+    Returns:
+        A list of possible REFERENCE_TYPEs (ints) for the reference string.
+    """
+    if ref.startswith('.') or '..' in ref:
+        return [_REFERENCE_TYPE.CACHE,
+                _REFERENCE_TYPE.INTEGRATION_FILE_PATH,
+                _REFERENCE_TYPE.MODULE_FILE_PATH,
+                _REFERENCE_TYPE.SUITE_PLAN_FILE_PATH]
+    if '/' in ref:
+        if ref.startswith('/'):
+            return [_REFERENCE_TYPE.CACHE,
+                    _REFERENCE_TYPE.INTEGRATION_FILE_PATH,
+                    _REFERENCE_TYPE.MODULE_FILE_PATH,
+                    _REFERENCE_TYPE.SUITE_PLAN_FILE_PATH]
+        return [_REFERENCE_TYPE.CACHE,
+                _REFERENCE_TYPE.INTEGRATION_FILE_PATH,
+                _REFERENCE_TYPE.MODULE_FILE_PATH,
+                _REFERENCE_TYPE.INTEGRATION,
+                _REFERENCE_TYPE.SUITE_PLAN_FILE_PATH,
+                # TODO: Uncomment SUITE when it's supported
+                # _REFERENCE_TYPE.SUITE
+               ]
+    if '.' in ref:
+        ref_end = ref.rsplit('.', 1)[-1]
+        ref_end_is_upper = ref_end[0].isupper()
+    if ':' in ref:
+        if '.' in ref:
+            if ref_end_is_upper:
+                # Module:fully.qualified.Class or Integration:fully.q.Class
+                return [_REFERENCE_TYPE.CACHE,
+                        _REFERENCE_TYPE.INTEGRATION,
+                        _REFERENCE_TYPE.MODULE_CLASS]
+            # Module:some.package
+            return [_REFERENCE_TYPE.CACHE, _REFERENCE_TYPE.MODULE_PACKAGE,
+                    _REFERENCE_TYPE.MODULE_CLASS]
+        # Module:Class or IntegrationName:Class
+        return [_REFERENCE_TYPE.CACHE,
+                _REFERENCE_TYPE.INTEGRATION,
+                _REFERENCE_TYPE.MODULE_CLASS]
+    if '.' in ref:
+        # ref_end may include a specific method, e.g. foo.java#method, so
+        # keep only the part before the '#'.
+        if "#" in ref_end:
+            ref_end = ref_end.split('#')[0]
+        if ref_end in ('java', 'kt', 'bp', 'mk', 'cc', 'cpp'):
+            return [_REFERENCE_TYPE.CACHE, _REFERENCE_TYPE.MODULE_FILE_PATH]
+        if ref_end == 'xml':
+            return [_REFERENCE_TYPE.CACHE,
+                    _REFERENCE_TYPE.INTEGRATION_FILE_PATH,
+                    _REFERENCE_TYPE.SUITE_PLAN_FILE_PATH]
+        if ref_end_is_upper:
+            return [_REFERENCE_TYPE.CACHE, _REFERENCE_TYPE.QUALIFIED_CLASS]
+        return [_REFERENCE_TYPE.CACHE,
+                _REFERENCE_TYPE.MODULE,
+                _REFERENCE_TYPE.PACKAGE]
+    # Note: We assume that if you're referencing a file in your cwd,
+    # that file must have a '.' in its name, e.g. foo.java, foo.xml.
+    # If this ever becomes not the case, then we need to include path below.
+    return [_REFERENCE_TYPE.CACHE,
+            _REFERENCE_TYPE.INTEGRATION,
+            # TODO: Uncomment SUITE when it's supported
+            # _REFERENCE_TYPE.SUITE,
+            _REFERENCE_TYPE.MODULE,
+            _REFERENCE_TYPE.SUITE_PLAN,
+            _REFERENCE_TYPE.CLASS,
+            _REFERENCE_TYPE.CC_CLASS]
+
+
+def _get_registered_find_methods(module_info):
+    """Return list of registered find methods.
+
+    This is used to return find methods that were not listed in the
+    default find methods but just registered in the finder classes. These
+    find methods will run before the default find methods.
+
+    Args:
+        module_info: ModuleInfo for finder classes to instantiate with.
+
+    Returns:
+        List of registered find methods.
+    """
+    find_methods = []
+    finder_instance_dict = _get_finder_instance_dict(module_info)
+    for finder in _get_test_finders():
+        finder_instance = finder_instance_dict[finder.NAME]
+        for find_method_info in finder_instance.get_all_find_methods():
+            find_methods.append(test_finder_base.Finder(
+                finder_instance, find_method_info.find_method, finder.NAME))
+    return find_methods
+
+
+def _get_default_find_methods(module_info, test):
+    """Default find methods to be used based on the given test name.
+
+    Args:
+        module_info: ModuleInfo for finder instances to use.
+        test: String of test name to help determine which find methods
+              to utilize.
+
+    Returns:
+        List of find methods to use.
+    """
+    find_methods = []
+    finder_instance_dict = _get_finder_instance_dict(module_info)
+    test_ref_types = _get_test_reference_types(test)
+    logging.debug('Resolved input to possible references: %s', [
+        _REFERENCE_TYPE[t] for t in test_ref_types])
+    for test_ref_type in test_ref_types:
+        find_method = _REF_TYPE_TO_FUNC_MAP[test_ref_type]
+        finder_instance = finder_instance_dict[find_method.im_class.NAME]
+        finder_info = _REFERENCE_TYPE[test_ref_type]
+        find_methods.append(test_finder_base.Finder(finder_instance,
+                                                    find_method,
+                                                    finder_info))
+    return find_methods
+
+
+def get_find_methods_for_test(module_info, test):
+    """Return a list of ordered find methods.
+
+    Args:
+        module_info: ModuleInfo for finder classes to use.
+        test: String of test name to get find methods for.
+
+    Returns:
+        List of ordered find methods.
+    """
+    registered_find_methods = _get_registered_find_methods(module_info)
+    default_find_methods = _get_default_find_methods(module_info, test)
+    return registered_find_methods + default_find_methods
diff --git a/atest-py2/test_finder_handler_unittest.py b/atest-py2/test_finder_handler_unittest.py
new file mode 100755
index 0000000..9fc1ef8
--- /dev/null
+++ b/atest-py2/test_finder_handler_unittest.py
@@ -0,0 +1,265 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for test_finder_handler."""
+
+import unittest
+import mock
+
+import atest_error
+import test_finder_handler
+from test_finders import test_info
+from test_finders import test_finder_base
+
+#pylint: disable=protected-access
+REF_TYPE = test_finder_handler._REFERENCE_TYPE
+
+_EXAMPLE_FINDER_A = 'EXAMPLE_A'
+
+
+#pylint: disable=no-self-use
+@test_finder_base.find_method_register
+class ExampleFinderA(test_finder_base.TestFinderBase):
+    """Example finder class A."""
+    NAME = _EXAMPLE_FINDER_A
+    _TEST_RUNNER = 'TEST_RUNNER'
+
+    @test_finder_base.register()
+    def registered_find_method_from_example_finder(self, test):
+        """Registered Example find method."""
+        if test == 'ExampleFinderATrigger':
+            return test_info.TestInfo(test_name=test,
+                                      test_runner=self._TEST_RUNNER,
+                                      build_targets=set())
+        return None
+
+    def unregistered_find_method_from_example_finder(self, _test):
+        """Unregistered Example find method, should never be called."""
+        raise atest_error.ShouldNeverBeCalledError()
+
+
+_TEST_FINDERS_PATCH = {
+    ExampleFinderA,
+}
+
+
+_FINDER_INSTANCES = {
+    _EXAMPLE_FINDER_A: ExampleFinderA(),
+}
+
+
+class TestFinderHandlerUnittests(unittest.TestCase):
+    """Unit tests for test_finder_handler.py"""
+
+    def setUp(self):
+        """Set up for testing."""
+        # pylint: disable=invalid-name
+        # This is so we can see the full diffs when there are mismatches.
+        self.maxDiff = None
+        self.empty_mod_info = None
+        # We want to control the finders we return.
+        mock.patch('test_finder_handler._get_test_finders',
+                   lambda: _TEST_FINDERS_PATCH).start()
+        # Since we're going to be comparing instance objects, we'll need to keep
+        # track of the objects so they align.
+        mock.patch('test_finder_handler._get_finder_instance_dict',
+                   lambda x: _FINDER_INSTANCES).start()
+        # We want to mock out the default find methods to make sure we got all
+        # the methods we expect.
+        mock.patch('test_finder_handler._get_default_find_methods',
+                   lambda x, y: [test_finder_base.Finder(
+                       _FINDER_INSTANCES[_EXAMPLE_FINDER_A],
+                       ExampleFinderA.unregistered_find_method_from_example_finder,
+                       _EXAMPLE_FINDER_A)]).start()
+
+    def tearDown(self):
+        """Tear down."""
+        mock.patch.stopall()
+
+    def test_get_test_reference_types(self):
+        """Test _get_test_reference_types parses reference types correctly."""
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('ModuleOrClassName'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION, REF_TYPE.MODULE,
+             REF_TYPE.SUITE_PLAN, REF_TYPE.CLASS, REF_TYPE.CC_CLASS]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('Module_or_Class_name'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION, REF_TYPE.MODULE,
+             REF_TYPE.SUITE_PLAN, REF_TYPE.CLASS, REF_TYPE.CC_CLASS]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('SuiteName'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION, REF_TYPE.MODULE,
+             REF_TYPE.SUITE_PLAN, REF_TYPE.CLASS, REF_TYPE.CC_CLASS]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('Suite-Name'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION, REF_TYPE.MODULE,
+             REF_TYPE.SUITE_PLAN, REF_TYPE.CLASS, REF_TYPE.CC_CLASS]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('some.package'),
+            [REF_TYPE.CACHE, REF_TYPE.MODULE, REF_TYPE.PACKAGE]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('fully.q.Class'),
+            [REF_TYPE.CACHE, REF_TYPE.QUALIFIED_CLASS]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('Integration.xml'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION_FILE_PATH,
+             REF_TYPE.SUITE_PLAN_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('SomeClass.java'),
+            [REF_TYPE.CACHE, REF_TYPE.MODULE_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('SomeClass.kt'),
+            [REF_TYPE.CACHE, REF_TYPE.MODULE_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('Android.mk'),
+            [REF_TYPE.CACHE, REF_TYPE.MODULE_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('Android.bp'),
+            [REF_TYPE.CACHE, REF_TYPE.MODULE_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('SomeTest.cc'),
+            [REF_TYPE.CACHE, REF_TYPE.MODULE_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('SomeTest.cpp'),
+            [REF_TYPE.CACHE, REF_TYPE.MODULE_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('SomeTest.cc#method'),
+            [REF_TYPE.CACHE, REF_TYPE.MODULE_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('module:Class'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION, REF_TYPE.MODULE_CLASS]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('module:f.q.Class'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION, REF_TYPE.MODULE_CLASS]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('module:a.package'),
+            [REF_TYPE.CACHE, REF_TYPE.MODULE_PACKAGE, REF_TYPE.MODULE_CLASS]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('.'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION_FILE_PATH,
+             REF_TYPE.MODULE_FILE_PATH, REF_TYPE.SUITE_PLAN_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('..'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION_FILE_PATH,
+             REF_TYPE.MODULE_FILE_PATH, REF_TYPE.SUITE_PLAN_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('./rel/path/to/test'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION_FILE_PATH,
+             REF_TYPE.MODULE_FILE_PATH, REF_TYPE.SUITE_PLAN_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('rel/path/to/test'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION_FILE_PATH,
+             REF_TYPE.MODULE_FILE_PATH, REF_TYPE.INTEGRATION,
+             REF_TYPE.SUITE_PLAN_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('/abs/path/to/test'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION_FILE_PATH,
+             REF_TYPE.MODULE_FILE_PATH, REF_TYPE.SUITE_PLAN_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('int/test'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION_FILE_PATH,
+             REF_TYPE.MODULE_FILE_PATH, REF_TYPE.INTEGRATION,
+             REF_TYPE.SUITE_PLAN_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('int/test:fully.qual.Class#m'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION_FILE_PATH,
+             REF_TYPE.MODULE_FILE_PATH, REF_TYPE.INTEGRATION,
+             REF_TYPE.SUITE_PLAN_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('int/test:Class#method'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION_FILE_PATH,
+             REF_TYPE.MODULE_FILE_PATH, REF_TYPE.INTEGRATION,
+             REF_TYPE.SUITE_PLAN_FILE_PATH]
+        )
+        self.assertEqual(
+            test_finder_handler._get_test_reference_types('int_name_no_slash:Class#m'),
+            [REF_TYPE.CACHE, REF_TYPE.INTEGRATION, REF_TYPE.MODULE_CLASS]
+        )
+
+    def test_get_registered_find_methods(self):
+        """Test that we get the registered find methods."""
+        empty_mod_info = None
+        example_finder_a_instance = test_finder_handler._get_finder_instance_dict(
+            empty_mod_info)[_EXAMPLE_FINDER_A]
+        should_equal = [
+            test_finder_base.Finder(
+                example_finder_a_instance,
+                ExampleFinderA.registered_find_method_from_example_finder,
+                _EXAMPLE_FINDER_A)]
+        should_not_equal = [
+            test_finder_base.Finder(
+                example_finder_a_instance,
+                ExampleFinderA.unregistered_find_method_from_example_finder,
+                _EXAMPLE_FINDER_A)]
+        # Let's make sure we see the registered method.
+        self.assertEqual(
+            should_equal,
+            test_finder_handler._get_registered_find_methods(empty_mod_info)
+        )
+        # Make sure we don't see the unregistered method here.
+        self.assertNotEqual(
+            should_not_equal,
+            test_finder_handler._get_registered_find_methods(empty_mod_info)
+        )
+
+    def test_get_find_methods_for_test(self):
+        """Test that we get the find methods we expect."""
+        # Let's see that we get the unregistered and registered find methods in
+        # the order we expect.
+        test = ''
+        registered_find_methods = [
+            test_finder_base.Finder(
+                _FINDER_INSTANCES[_EXAMPLE_FINDER_A],
+                ExampleFinderA.registered_find_method_from_example_finder,
+                _EXAMPLE_FINDER_A)]
+        default_find_methods = [
+            test_finder_base.Finder(
+                _FINDER_INSTANCES[_EXAMPLE_FINDER_A],
+                ExampleFinderA.unregistered_find_method_from_example_finder,
+                _EXAMPLE_FINDER_A)]
+        should_equal = registered_find_methods + default_find_methods
+        self.assertEqual(
+            should_equal,
+            test_finder_handler.get_find_methods_for_test(self.empty_mod_info,
+                                                          test))
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest/tools/tradefederation/core/proto/__init__.py b/atest-py2/test_finders/__init__.py
old mode 100755
new mode 100644
similarity index 100%
copy from atest/tools/tradefederation/core/proto/__init__.py
copy to atest-py2/test_finders/__init__.py
diff --git a/atest-py2/test_finders/cache_finder.py b/atest-py2/test_finders/cache_finder.py
new file mode 100644
index 0000000..5b7bd07
--- /dev/null
+++ b/atest-py2/test_finders/cache_finder.py
@@ -0,0 +1,61 @@
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Cache Finder class.
+"""
+
+import atest_utils
+from test_finders import test_finder_base
+from test_finders import test_info
+
+class CacheFinder(test_finder_base.TestFinderBase):
+    """Cache Finder class."""
+    NAME = 'CACHE'
+
+    def __init__(self, **kwargs):
+        super(CacheFinder, self).__init__()
+
+    def _is_latest_testinfos(self, test_infos):
+        """Check whether test_infos are up-to-date.
+
+        Args:
+            test_infos: A list of TestInfo.
+
+        Returns:
+            True if the keys of every cached TestInfo match the keys of the
+            current TestInfo class. Otherwise, False.
+        """
+        sorted_base_ti = sorted(
+            vars(test_info.TestInfo(None, None, None)).keys())
+        for cached_test_info in test_infos:
+            sorted_cache_ti = sorted(vars(cached_test_info).keys())
+            if sorted_cache_ti != sorted_base_ti:
+                return False
+        return True
+
+    def find_test_by_cache(self, test_reference):
+        """Find the matched test_infos in saved caches.
+
+        Args:
+            test_reference: A string of the path to the test's file or dir.
+
+        Returns:
+            A list of TestInfo namedtuples if a cache is found and it is in
+            the latest TestInfo format, else None.
+        """
+        test_infos = atest_utils.load_test_info_cache(test_reference)
+        if test_infos and self._is_latest_testinfos(test_infos):
+            return test_infos
+        return None
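+
+# A minimal usage sketch (hypothetical; assumes a cache for the reference was
+# written by an earlier atest run):
+#
+#     finder = CacheFinder()
+#     cached_infos = finder.find_test_by_cache('hello_world_test')
+#     if cached_infos is None:
+#         pass  # fall back to the slower test finders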
diff --git a/atest-py2/test_finders/cache_finder_unittest.py b/atest-py2/test_finders/cache_finder_unittest.py
new file mode 100755
index 0000000..7797ea3
--- /dev/null
+++ b/atest-py2/test_finders/cache_finder_unittest.py
@@ -0,0 +1,62 @@
+#!/usr/bin/env python
+#
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for cache_finder."""
+
+import unittest
+import os
+import mock
+
+# pylint: disable=import-error
+import atest_utils
+import unittest_constants as uc
+from test_finders import cache_finder
+
+
+#pylint: disable=protected-access
+class CacheFinderUnittests(unittest.TestCase):
+    """Unit tests for cache_finder.py"""
+    def setUp(self):
+        """Set up stuff for testing."""
+        self.cache_finder = cache_finder.CacheFinder()
+
+    @mock.patch.object(atest_utils, 'get_test_info_cache_path')
+    def test_find_test_by_cache(self, mock_get_cache_path):
+        """Test find_test_by_cache method."""
+        uncached_test = 'mytest1'
+        cached_test = 'hello_world_test'
+        uncached_test2 = 'mytest2'
+        test_cache_root = os.path.join(uc.TEST_DATA_DIR, 'cache_root')
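+        # get_test_info_cache_path is mocked to point at pre-baked,
+        # hash-named cache fixtures under the test data directory.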
+        # Hit matched cache file but no original_finder in it,
+        # should return None.
+        mock_get_cache_path.return_value = os.path.join(
+            test_cache_root,
+            'cd66f9f5ad63b42d0d77a9334de6bb73.cache')
+        self.assertIsNone(self.cache_finder.find_test_by_cache(uncached_test))
+        # Hit matched cache file and original_finder is in it,
+        # should return cached test infos.
+        mock_get_cache_path.return_value = os.path.join(
+            test_cache_root,
+            '78ea54ef315f5613f7c11dd1a87f10c7.cache')
+        self.assertIsNotNone(self.cache_finder.find_test_by_cache(cached_test))
+        # Does not hit a matched cache file, should return None.
+        mock_get_cache_path.return_value = os.path.join(
+            test_cache_root,
+            '39488b7ac83c56d5a7d285519fe3e3fd.cache')
+        self.assertIsNone(self.cache_finder.find_test_by_cache(uncached_test2))
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/test_finders/example_finder.py b/atest-py2/test_finders/example_finder.py
new file mode 100644
index 0000000..d1fc33b
--- /dev/null
+++ b/atest-py2/test_finders/example_finder.py
@@ -0,0 +1,38 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Example Finder class.
+"""
+
+# pylint: disable=import-error
+from test_finders import test_info
+from test_finders import test_finder_base
+from test_runners import example_test_runner
+
+
+@test_finder_base.find_method_register
+class ExampleFinder(test_finder_base.TestFinderBase):
+    """Example finder class."""
+    NAME = 'EXAMPLE'
+    _TEST_RUNNER = example_test_runner.ExampleTestRunner.NAME
+
+    @test_finder_base.register()
+    def find_method_from_example_finder(self, test):
+        """Example find method to demonstrate how to register it."""
+        if test == 'ExampleFinderTest':
+            return test_info.TestInfo(test_name=test,
+                                      test_runner=self._TEST_RUNNER,
+                                      build_targets=set())
+        return None
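+
+# Note: the find_method_register class decorator and the register() method
+# decorator work together; only methods tagged with register() are exposed
+# by test_finder_handler as registered find methods.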
diff --git a/atest-py2/test_finders/module_finder.py b/atest-py2/test_finders/module_finder.py
new file mode 100644
index 0000000..049658e
--- /dev/null
+++ b/atest-py2/test_finders/module_finder.py
@@ -0,0 +1,652 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Module Finder class.
+"""
+
+import logging
+import os
+
+# pylint: disable=import-error
+import atest_error
+import atest_utils
+import constants
+from test_finders import test_info
+from test_finders import test_finder_base
+from test_finders import test_finder_utils
+from test_runners import atest_tf_test_runner
+from test_runners import robolectric_test_runner
+from test_runners import vts_tf_test_runner
+
+_MODULES_IN = 'MODULES-IN-%s'
+_ANDROID_MK = 'Android.mk'
+
+# These are suites in LOCAL_COMPATIBILITY_SUITE that aren't really suites so
+# we can ignore them.
+_SUITES_TO_IGNORE = frozenset({'general-tests', 'device-tests', 'tests'})
+
+class ModuleFinder(test_finder_base.TestFinderBase):
+    """Module finder class."""
+    NAME = 'MODULE'
+    _TEST_RUNNER = atest_tf_test_runner.AtestTradefedTestRunner.NAME
+    _ROBOLECTRIC_RUNNER = robolectric_test_runner.RobolectricTestRunner.NAME
+    _VTS_TEST_RUNNER = vts_tf_test_runner.VtsTradefedTestRunner.NAME
+
+    def __init__(self, module_info=None):
+        super(ModuleFinder, self).__init__()
+        self.root_dir = os.environ.get(constants.ANDROID_BUILD_TOP)
+        self.module_info = module_info
+
+    def _determine_testable_module(self, path):
+        """Determine which module the user is trying to test.
+
+        Returns the module to test. If there are multiple possibilities, it
+        will ask the user; otherwise it returns the only module found.
+
+        Args:
+            path: String path of module to look for.
+
+        Returns:
+            A list of the module names.
+        """
+        testable_modules = []
+        for mod in self.module_info.get_module_names(path):
+            mod_info = self.module_info.get_module_info(mod)
+            # Robolectric tests always come in pairs: one module to build
+            # the test and another to run it. For now, we assume they are
+            # isolated in their own folders and will return once we find one.
+            if self.module_info.is_robolectric_test(mod):
+                # return a list with one module name if it is robolectric.
+                return [mod]
+            if self.module_info.is_testable_module(mod_info):
+                testable_modules.append(mod_info.get(constants.MODULE_NAME))
+        return test_finder_utils.extract_test_from_tests(testable_modules)
+
+    def _is_vts_module(self, module_name):
+        """Returns True if the module is a vts10 module, else False."""
+        mod_info = self.module_info.get_module_info(module_name)
+        suites = []
+        if mod_info:
+            suites = mod_info.get('compatibility_suites', [])
+        # Pull out all *ts (cts, tvts, etc) suites.
+        suites = [suite for suite in suites if suite not in _SUITES_TO_IGNORE]
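+        # A module is vts10-only when 'vts10' is the sole suite left after
+        # filtering.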
+        return len(suites) == 1 and 'vts10' in suites
+
+    def _update_to_vts_test_info(self, test):
+        """Fill in the fields with vts10 specific info.
+
+        We need to update the runner to use the vts10 runner and also find the
+        test specific dependencies.
+
+        Args:
+            test: TestInfo to update with vts10 specific details.
+
+        Returns:
+            TestInfo that is ready for the vts10 test runner.
+        """
+        test.test_runner = self._VTS_TEST_RUNNER
+        config_file = os.path.join(self.root_dir,
+                                   test.data[constants.TI_REL_CONFIG])
+        # Need to get out dir (special logic is to account for custom out dirs).
+        # The out dir is used to construct the build targets for the test deps.
+        out_dir = os.environ.get(constants.ANDROID_HOST_OUT)
+        custom_out_dir = os.environ.get(constants.ANDROID_OUT_DIR)
+        # If we're not an absolute custom out dir, get relative out dir path.
+        if custom_out_dir is None or not os.path.isabs(custom_out_dir):
+            out_dir = os.path.relpath(out_dir, self.root_dir)
+        vts_out_dir = os.path.join(out_dir, 'vts10', 'android-vts10', 'testcases')
+        # Parse dependency of default staging plans.
+        xml_paths = test_finder_utils.search_integration_dirs(
+            constants.VTS_STAGING_PLAN,
+            self.module_info.get_paths(constants.VTS_TF_MODULE))
+        vts_xmls = set()
+        vts_xmls.add(config_file)
+        for xml_path in xml_paths:
+            vts_xmls |= test_finder_utils.get_plans_from_vts_xml(xml_path)
+        for config_file in vts_xmls:
+            # Add in vts10 test build targets.
+            test.build_targets |= test_finder_utils.get_targets_from_vts_xml(
+                config_file, vts_out_dir, self.module_info)
+        test.build_targets.add('vts-test-core')
+        test.build_targets.add(test.test_name)
+        return test
+
+    def _update_to_robolectric_test_info(self, test):
+        """Update the fields for a robolectric test.
+
+        Args:
+            test: TestInfo to be updated with robolectric fields.
+
+        Returns:
+            TestInfo with robolectric fields.
+        """
+        test.test_runner = self._ROBOLECTRIC_RUNNER
+        test.test_name = self.module_info.get_robolectric_test_name(test.test_name)
+        return test
+
+    def _process_test_info(self, test):
+        """Process the test info and return some fields updated/changed.
+
+        We need to check if the test found is a special module (like vts10) and
+        update the test_info fields (like test_runner) appropriately.
+
+        Args:
+            test: TestInfo that has been filled out by a find method.
+
+        Returns:
+            TestInfo that has been modified as needed, or None if the
+            module can't be found in the module_info.
+        """
+        module_name = test.test_name
+        mod_info = self.module_info.get_module_info(module_name)
+        if not mod_info:
+            return None
+        test.module_class = mod_info['class']
+        test.install_locations = test_finder_utils.get_install_locations(
+            mod_info['installed'])
+        # Check if this is only a vts10 module.
+        if self._is_vts_module(test.test_name):
+            return self._update_to_vts_test_info(test)
+        elif self.module_info.is_robolectric_test(test.test_name):
+            return self._update_to_robolectric_test_info(test)
+        rel_config = test.data[constants.TI_REL_CONFIG]
+        test.build_targets = self._get_build_targets(module_name, rel_config)
+        return test
+
+    def _get_build_targets(self, module_name, rel_config):
+        """Get the test deps.
+
+        Args:
+            module_name: name of the test.
+            rel_config: XML for the given test.
+
+        Returns:
+            Set of build targets.
+        """
+        targets = set()
+        if not self.module_info.is_auto_gen_test_config(module_name):
+            config_file = os.path.join(self.root_dir, rel_config)
+            targets = test_finder_utils.get_targets_from_xml(config_file,
+                                                             self.module_info)
+        if constants.VTS_CORE_SUITE in self.module_info.get_module_info(
+                module_name).get(constants.MODULE_COMPATIBILITY_SUITES, []):
+            targets.add(constants.VTS_CORE_TF_MODULE)
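+        # MODULES-IN-<dir> (with '/' replaced by '-') is the build goal that
+        # builds every module defined under that directory.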
+        for module_path in self.module_info.get_paths(module_name):
+            mod_dir = module_path.replace('/', '-')
+            targets.add(_MODULES_IN % mod_dir)
+        # (b/156457698) Force add vts_kernel_tests as build target if our test
+        # belong to REQUIRED_KERNEL_TEST_MODULES due to required_module option
+        # not working for sh_test in soong.
+        if module_name in constants.REQUIRED_KERNEL_TEST_MODULES:
+            targets.add('vts_kernel_tests')
+        return targets
+
+    def _get_module_test_config(self, module_name, rel_config=None):
+        """Get the value of test_config in module_info.
+
+        Return the value of 'test_config' from module_info if the module's
+        auto_test_config is not true, i.e. the test_config was specified by
+        the user. Otherwise, return rel_config.
+
+        Args:
+            module_name: A string of the test's module name.
+            rel_config: XML for the given test.
+
+        Returns:
+            A string of test_config path if found, else return rel_config.
+        """
+        mod_info = self.module_info.get_module_info(module_name)
+        if mod_info:
+            test_config = ''
+            test_config_list = mod_info.get(constants.MODULE_TEST_CONFIG, [])
+            if test_config_list:
+                test_config = test_config_list[0]
+            if not self.module_info.is_auto_gen_test_config(module_name) and test_config != '':
+                return test_config
+        return rel_config
+
+    def _get_test_info_filter(self, path, methods, **kwargs):
+        """Get test info filter.
+
+        Args:
+            path: A string of the test's path.
+            methods: A set of method name strings.
+            rel_module_dir: Optional. A string of the module dir relative to
+                root.
+            class_name: Optional. A string of the class name.
+            is_native_test: Optional. A boolean variable of whether to search
+                for a native test or not.
+
+        Returns:
+            A set of test info filter.
+        """
+        _, file_name = test_finder_utils.get_dir_path_and_filename(path)
+        ti_filter = frozenset()
+        if kwargs.get('is_native_test', None):
+            ti_filter = frozenset([test_info.TestFilter(
+                test_finder_utils.get_cc_filter(
+                    kwargs.get('class_name', '*'), methods), frozenset())])
+        # Path to java file.
+        elif file_name and constants.JAVA_EXT_RE.match(file_name):
+            full_class_name = test_finder_utils.get_fully_qualified_class_name(
+                path)
+            ti_filter = frozenset(
+                [test_info.TestFilter(full_class_name, methods)])
+        # Path to cc file.
+        elif file_name and constants.CC_EXT_RE.match(file_name):
+            if not test_finder_utils.has_cc_class(path):
+                raise atest_error.MissingCCTestCaseError(
+                    "Can't find CC class in %s" % path)
+            if methods:
+                ti_filter = frozenset(
+                    [test_info.TestFilter(test_finder_utils.get_cc_filter(
+                        kwargs.get('class_name', '*'), methods), frozenset())])
+        # Path to non-module dir, treat as package.
+        elif (not file_name
+              and kwargs.get('rel_module_dir', None) !=
+              os.path.relpath(path, self.root_dir)):
+            dir_items = [os.path.join(path, f) for f in os.listdir(path)]
+            for dir_item in dir_items:
+                if constants.JAVA_EXT_RE.match(dir_item):
+                    package_name = test_finder_utils.get_package_name(dir_item)
+                    if package_name:
+                        # methods should be empty frozenset for package.
+                        if methods:
+                            raise atest_error.MethodWithoutClassError(
+                                '%s: Method filtering requires class'
+                                % str(methods))
+                        ti_filter = frozenset(
+                            [test_info.TestFilter(package_name, methods)])
+                        break
+        return ti_filter
+
+    def _get_rel_config(self, test_path):
+        """Get config file's relative path.
+
+        Args:
+            test_path: A string of the test absolute path.
+
+        Returns:
+            A string of config's relative path, else None.
+        """
+        test_dir = os.path.dirname(test_path)
+        rel_module_dir = test_finder_utils.find_parent_module_dir(
+            self.root_dir, test_dir, self.module_info)
+        if rel_module_dir:
+            return os.path.join(rel_module_dir, constants.MODULE_CONFIG)
+        return None
+
+    def _get_test_infos(self, test_path, rel_config, module_name, test_filter):
+        """Get test_info for test_path.
+
+        Args:
+            test_path: A string of the test path.
+            rel_config: A string of rel path of config.
+            module_name: A string of the module name to use.
+            test_filter: A test info filter.
+
+        Returns:
+            A list of TestInfo namedtuple if found, else None.
+        """
+        if not rel_config:
+            rel_config = self._get_rel_config(test_path)
+            if not rel_config:
+                return None
+        if module_name:
+            module_names = [module_name]
+        else:
+            module_names = self._determine_testable_module(
+                os.path.dirname(rel_config))
+        test_infos = []
+        if module_names:
+            for mname in module_names:
+                # The real test config might be recorded in module-info.
+                rel_config = self._get_module_test_config(mname,
+                                                          rel_config=rel_config)
+                mod_info = self.module_info.get_module_info(mname)
+                tinfo = self._process_test_info(test_info.TestInfo(
+                    test_name=mname,
+                    test_runner=self._TEST_RUNNER,
+                    build_targets=set(),
+                    data={constants.TI_FILTER: test_filter,
+                          constants.TI_REL_CONFIG: rel_config},
+                    compatibility_suites=mod_info.get(
+                        constants.MODULE_COMPATIBILITY_SUITES, [])))
+                if tinfo:
+                    test_infos.append(tinfo)
+        return test_infos
+
+    def find_test_by_module_name(self, module_name):
+        """Find test for the given module name.
+
+        Args:
+            module_name: A string of the test's module name.
+
+        Returns:
+            A list that includes only 1 populated TestInfo namedtuple
+            if found, otherwise None.
+        """
+        mod_info = self.module_info.get_module_info(module_name)
+        if self.module_info.is_testable_module(mod_info):
+            # path is a list with only 1 element.
+            rel_config = os.path.join(mod_info['path'][0],
+                                      constants.MODULE_CONFIG)
+            rel_config = self._get_module_test_config(module_name, rel_config=rel_config)
+            tinfo = self._process_test_info(test_info.TestInfo(
+                test_name=module_name,
+                test_runner=self._TEST_RUNNER,
+                build_targets=set(),
+                data={constants.TI_REL_CONFIG: rel_config,
+                      constants.TI_FILTER: frozenset()},
+                compatibility_suites=mod_info.get(
+                    constants.MODULE_COMPATIBILITY_SUITES, [])))
+            if tinfo:
+                return [tinfo]
+        return None
+
+    def find_test_by_kernel_class_name(self, module_name, class_name):
+        """Find kernel test for the given class name.
+
+        Args:
+            module_name: A string of the module name to use.
+            class_name: A string of the test's class name.
+
+        Returns:
+            A list of populated TestInfo namedtuple if test found, else None.
+        """
+        class_name, methods = test_finder_utils.split_methods(class_name)
+        test_config = self._get_module_test_config(module_name)
+        test_config_path = os.path.join(self.root_dir, test_config)
+        mod_info = self.module_info.get_module_info(module_name)
+        ti_filter = frozenset(
+            [test_info.TestFilter(class_name, methods)])
+        if test_finder_utils.is_test_from_kernel_xml(test_config_path, class_name):
+            tinfo = self._process_test_info(test_info.TestInfo(
+                test_name=module_name,
+                test_runner=self._TEST_RUNNER,
+                build_targets=set(),
+                data={constants.TI_REL_CONFIG: test_config,
+                      constants.TI_FILTER: ti_filter},
+                compatibility_suites=mod_info.get(
+                    constants.MODULE_COMPATIBILITY_SUITES, [])))
+            if tinfo:
+                return [tinfo]
+        return None
+
+    def find_test_by_class_name(self, class_name, module_name=None,
+                                rel_config=None, is_native_test=False):
+        """Find test files given a class name.
+
+        If module_name and rel_config are not given, they will be determined
+        by looking up the tree from the class file.
+
+        Args:
+            class_name: A string of the test's class name.
+            module_name: Optional. A string of the module name to use.
+            rel_config: Optional. A string of module dir relative to repo root.
+            is_native_test: A boolean variable of whether to search for a
+                native test or not.
+
+        Returns:
+            A list of populated TestInfo namedtuple if test found, else None.
+        """
+        class_name, methods = test_finder_utils.split_methods(class_name)
+        if rel_config:
+            search_dir = os.path.join(self.root_dir,
+                                      os.path.dirname(rel_config))
+        else:
+            search_dir = self.root_dir
+        test_paths = test_finder_utils.find_class_file(search_dir, class_name,
+                                                       is_native_test, methods)
+        if not test_paths and rel_config:
+            logging.info('Did not find class (%s) under module path (%s), '
+                         're-searching from repo root.', class_name, rel_config)
+            test_paths = test_finder_utils.find_class_file(self.root_dir,
+                                                           class_name,
+                                                           is_native_test,
+                                                           methods)
+        if not test_paths:
+            return None
+        tinfos = []
+        for test_path in test_paths:
+            test_filter = self._get_test_info_filter(
+                test_path, methods, class_name=class_name,
+                is_native_test=is_native_test)
+            tinfo = self._get_test_infos(test_path, rel_config,
+                                         module_name, test_filter)
+            if tinfo:
+                tinfos.extend(tinfo)
+        return tinfos
+
+    def find_test_by_module_and_class(self, module_class):
+        """Find the test info given a MODULE:CLASS string.
+
+        Args:
+            module_class: A string of form MODULE:CLASS or MODULE:CLASS#METHOD.
+
+        Returns:
+            A list of populated TestInfo namedtuple if found, else None.
+        """
+        if ':' not in module_class:
+            return None
+        module_name, class_name = module_class.split(':')
+        # module_infos is a list with at most 1 element.
+        module_infos = self.find_test_by_module_name(module_name)
+        module_info = module_infos[0] if module_infos else None
+        if not module_info:
+            return None
+        find_result = None
+        # If the target module is NATIVE_TEST, search CC classes only.
+        if not self.module_info.is_native_test(module_name):
+            # Find by java class.
+            find_result = self.find_test_by_class_name(
+                class_name, module_info.test_name,
+                module_info.data.get(constants.TI_REL_CONFIG))
+        # Kernel target tests are also defined as NATIVE_TEST in the build
+        # system.
+        # TODO (b/157210083) Update find_test_by_kernel_class_name method to
+        # support gen_rule use case.
+        if not find_result:
+            find_result = self.find_test_by_kernel_class_name(
+                module_name, class_name)
+        # Find by cc class.
+        if not find_result:
+            find_result = self.find_test_by_cc_class_name(
+                class_name, module_info.test_name,
+                module_info.data.get(constants.TI_REL_CONFIG))
+        return find_result
+
+    def find_test_by_package_name(self, package, module_name=None,
+                                  rel_config=None):
+        """Find the test info given a PACKAGE string.
+
+        Args:
+            package: A string of the package name.
+            module_name: Optional. A string of the module name.
+            rel_config: Optional. A string of rel path of config.
+
+        Returns:
+            A list of populated TestInfo namedtuple if found, else None.
+        """
+        _, methods = test_finder_utils.split_methods(package)
+        if methods:
+            raise atest_error.MethodWithoutClassError('%s: Method filtering '
+                                                      'requires class' % (
+                                                          methods))
+        # Confirm that packages exists and get user input for multiples.
+        if rel_config:
+            search_dir = os.path.join(self.root_dir,
+                                      os.path.dirname(rel_config))
+        else:
+            search_dir = self.root_dir
+        package_paths = test_finder_utils.run_find_cmd(
+            test_finder_utils.FIND_REFERENCE_TYPE.PACKAGE, search_dir, package)
+        # Package path will be the full path to the dir represented by package.
+        if not package_paths:
+            return None
+        test_filter = frozenset([test_info.TestFilter(package, frozenset())])
+        test_infos = []
+        for package_path in package_paths:
+            tinfo = self._get_test_infos(package_path, rel_config,
+                                         module_name, test_filter)
+            if tinfo:
+                test_infos.extend(tinfo)
+        return test_infos
+
+    def find_test_by_module_and_package(self, module_package):
+        """Find the test info given a MODULE:PACKAGE string.
+
+        Args:
+            module_package: A string of form MODULE:PACKAGE
+
+        Returns:
+            A list of populated TestInfo namedtuple if found, else None.
+        """
+        module_name, package = module_package.split(':')
+        # module_infos is a list with at most 1 element.
+        module_infos = self.find_test_by_module_name(module_name)
+        module_info = module_infos[0] if module_infos else None
+        if not module_info:
+            return None
+        return self.find_test_by_package_name(
+            package, module_info.test_name,
+            module_info.data.get(constants.TI_REL_CONFIG))
+
+    def find_test_by_path(self, path):
+        """Find the first test info matching the given path.
+
+        Strategy:
+            path_to_java_file --> Resolve to CLASS
+            path_to_cc_file --> Resolve to CC CLASS
+            path_to_module_file --> Resolve to MODULE
+            path_to_module_dir --> Resolve to MODULE
+            path_to_dir_with_class_files --> Resolve to PACKAGE
+            path_to_any_other_dir --> Resolve as MODULE
+
+        Args:
+            path: A string of the test's path.
+
+        Returns:
+            A list of populated TestInfo namedtuple if test found, else None.
+        """
+        logging.debug('Finding test by path: %s', path)
+        path, methods = test_finder_utils.split_methods(path)
+        # TODO: See if this can be generalized and shared with methods above
+        # create absolute path from cwd and remove symbolic links
+        path = os.path.realpath(path)
+        if not os.path.exists(path):
+            return None
+        if (methods and
+                not test_finder_utils.has_method_in_file(path, methods)):
+            return None
+        dir_path, _ = test_finder_utils.get_dir_path_and_filename(path)
+        # Module/Class
+        rel_module_dir = test_finder_utils.find_parent_module_dir(
+            self.root_dir, dir_path, self.module_info)
+        if not rel_module_dir:
+            return None
+        rel_config = os.path.join(rel_module_dir, constants.MODULE_CONFIG)
+        test_filter = self._get_test_info_filter(path, methods,
+                                                 rel_module_dir=rel_module_dir)
+        return self._get_test_infos(path, rel_config, None, test_filter)
+
+    def find_test_by_cc_class_name(self, class_name, module_name=None,
+                                   rel_config=None):
+        """Find test files given a cc class name.
+
+        If module_name and rel_config are not given, the test will be
+        determined by looking up the tree for files which have the input
+        class.
+
+        Args:
+            class_name: A string of the test's class name.
+            module_name: Optional. A string of the module name to use.
+            rel_config: Optional. A string of module dir relative to repo root.
+
+        Returns:
+            A list of populated TestInfo namedtuple if test found, else None.
+        """
+        # Check if class_name is prepended with file name. If so, trim the
+        # prefix and keep only the class_name.
+        if '.' in class_name:
+            # Assume the class name has a format of file_name.class_name
+            class_name = class_name[class_name.rindex('.')+1:]
+            logging.info('Search with updated class name: %s', class_name)
+        return self.find_test_by_class_name(
+            class_name, module_name, rel_config, is_native_test=True)
+
+    def get_testable_modules_with_ld(self, user_input, ld_range=0):
+        """Calculate the edit distances of the input and testable modules.
+
+        The user input is compared against all testable modules, producing
+        integers generated by the Levenshtein Distance algorithm. To speed
+        up the calculation, a bound can be applied to this method to avoid
+        calculating against every testable module.
+
+        Guessing from typos, e.g. atest atest_unitests, implies a tangible
+        range of name lengths that Atest needs to search within; the default
+        bound is 2.
+
+        Guessing from keywords, however, e.g. atest --search Camera, means
+        that the uncertainty of the module name is much higher, so Atest
+        should walk through all testable modules and return the most likely
+        candidates.
+
+        Args:
+            user_input: A string of the user input.
+            ld_range: An integer that bounds the searching scope. If the
+                      length of user_input is 10 and ld_range is 2, Atest will
+                      only calculate modules whose name length is between 8
+                      and 12. 0 is equivalent to unlimited.
+
+        Returns:
+            A list of LDs and possible module names. If the user_input is
+            "fax", the output will look like:
+            [[2, "fog"], [2, "Fix"], [4, "duck"], [7, "Duckies"]]
+
+            which means the most likely matches for "fax" are "fog" and "Fix"
+            (LD=2), while "Duckies" is the least likely one (LD=7).
+        """
+        atest_utils.colorful_print('\nSearching for similar module names using '
+                                   'fuzzy search...', constants.CYAN)
+        testable_modules = sorted(self.module_info.get_testable_modules(), key=len)
+        lower_bound = len(user_input) - ld_range
+        upper_bound = len(user_input) + ld_range
+        testable_modules_with_ld = []
+        for module_name in testable_modules:
+            # Discard modules that are too short or too long.
+            if ld_range != 0:
+                if len(module_name) < lower_bound:
+                    continue
+                elif len(module_name) > upper_bound:
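+                    # The module list is sorted by name length, so every
+                    # remaining module is also too long; stop early.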
+                    break
+            testable_modules_with_ld.append(
+                [test_finder_utils.get_levenshtein_distance(
+                    user_input, module_name), module_name])
+        return testable_modules_with_ld
+
+    def get_fuzzy_searching_results(self, user_input):
+        """Give results which have no more than allowance of edit distances.
+
+        Args:
+            user_input: the target module name for fuzzy searching.
+
+        Returns:
+            A list of guessed modules.
+        """
+        modules_with_ld = self.get_testable_modules_with_ld(user_input,
+                                                            ld_range=constants.LD_RANGE)
+        guessed_modules = []
+        for _distance, _module in modules_with_ld:
+            if _distance <= abs(constants.LD_RANGE):
+                guessed_modules.append(_module)
+        return guessed_modules
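+
+# A minimal usage sketch (hypothetical; assumes module_info.ModuleInfo() can
+# be constructed with defaults in the current environment):
+#
+#     import module_info
+#     finder = ModuleFinder(module_info=module_info.ModuleInfo())
+#     guesses = finder.get_fuzzy_searching_results('HelloWorldTest')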
diff --git a/atest-py2/test_finders/module_finder_unittest.py b/atest-py2/test_finders/module_finder_unittest.py
new file mode 100755
index 0000000..20d99e4
--- /dev/null
+++ b/atest-py2/test_finders/module_finder_unittest.py
@@ -0,0 +1,594 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for module_finder."""
+
+import re
+import unittest
+import os
+import mock
+
+# pylint: disable=import-error
+import atest_error
+import constants
+import module_info
+import unittest_constants as uc
+import unittest_utils
+from test_finders import module_finder
+from test_finders import test_finder_utils
+from test_finders import test_info
+from test_runners import atest_tf_test_runner as atf_tr
+
+MODULE_CLASS = '%s:%s' % (uc.MODULE_NAME, uc.CLASS_NAME)
+MODULE_PACKAGE = '%s:%s' % (uc.MODULE_NAME, uc.PACKAGE)
+CC_MODULE_CLASS = '%s:%s' % (uc.CC_MODULE_NAME, uc.CC_CLASS_NAME)
+KERNEL_TEST_CLASS = 'test_class_1'
+KERNEL_TEST_CONFIG = 'KernelTest.xml'
+KERNEL_MODULE_CLASS = '%s:%s' % (constants.REQUIRED_KERNEL_TEST_MODULES[0],
+                                 KERNEL_TEST_CLASS)
+KERNEL_CONFIG_FILE = os.path.join(uc.TEST_DATA_DIR, KERNEL_TEST_CONFIG)
+KERNEL_CLASS_FILTER = test_info.TestFilter(KERNEL_TEST_CLASS, frozenset())
+KERNEL_MODULE_CLASS_DATA = {constants.TI_REL_CONFIG: KERNEL_CONFIG_FILE,
+                            constants.TI_FILTER: frozenset([KERNEL_CLASS_FILTER])}
+KERNEL_MODULE_CLASS_INFO = test_info.TestInfo(
+    constants.REQUIRED_KERNEL_TEST_MODULES[0],
+    atf_tr.AtestTradefedTestRunner.NAME,
+    uc.CLASS_BUILD_TARGETS, KERNEL_MODULE_CLASS_DATA)
+FLAT_METHOD_INFO = test_info.TestInfo(
+    uc.MODULE_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    uc.MODULE_BUILD_TARGETS,
+    data={constants.TI_FILTER: frozenset([uc.FLAT_METHOD_FILTER]),
+          constants.TI_REL_CONFIG: uc.CONFIG_FILE})
+MODULE_CLASS_METHOD = '%s#%s' % (MODULE_CLASS, uc.METHOD_NAME)
+CC_MODULE_CLASS_METHOD = '%s#%s' % (CC_MODULE_CLASS, uc.CC_METHOD_NAME)
+CLASS_INFO_MODULE_2 = test_info.TestInfo(
+    uc.MODULE2_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    uc.CLASS_BUILD_TARGETS,
+    data={constants.TI_FILTER: frozenset([uc.CLASS_FILTER]),
+          constants.TI_REL_CONFIG: uc.CONFIG2_FILE})
+CC_CLASS_INFO_MODULE_2 = test_info.TestInfo(
+    uc.CC_MODULE2_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    uc.CLASS_BUILD_TARGETS,
+    data={constants.TI_FILTER: frozenset([uc.CC_CLASS_FILTER]),
+          constants.TI_REL_CONFIG: uc.CC_CONFIG2_FILE})
+DEFAULT_INSTALL_PATH = ['/path/to/install']
+ROBO_MOD_PATH = ['/shared/robo/path']
+NON_RUN_ROBO_MOD_NAME = 'robo_mod'
+RUN_ROBO_MOD_NAME = 'run_robo_mod'
+NON_RUN_ROBO_MOD = {constants.MODULE_NAME: NON_RUN_ROBO_MOD_NAME,
+                    constants.MODULE_PATH: ROBO_MOD_PATH,
+                    constants.MODULE_CLASS: ['random_class']}
+RUN_ROBO_MOD = {constants.MODULE_NAME: RUN_ROBO_MOD_NAME,
+                constants.MODULE_PATH: ROBO_MOD_PATH,
+                constants.MODULE_CLASS: [constants.MODULE_CLASS_ROBOLECTRIC]}
+
+SEARCH_DIR_RE = re.compile(r'^find ([^ ]*).*$')
+
+#pylint: disable=unused-argument
+def classoutside_side_effect(find_cmd, shell=False):
+    """Mock the check output of a find cmd where class outside module path."""
+    search_dir = SEARCH_DIR_RE.match(find_cmd).group(1).strip()
+    if search_dir == uc.ROOT:
+        return uc.FIND_ONE
+    return None
+
+
+#pylint: disable=protected-access
+class ModuleFinderUnittests(unittest.TestCase):
+    """Unit tests for module_finder.py"""
+
+    def setUp(self):
+        """Set up stuff for testing."""
+        self.mod_finder = module_finder.ModuleFinder()
+        self.mod_finder.module_info = mock.Mock(spec=module_info.ModuleInfo)
+        self.mod_finder.module_info.path_to_module_info = {}
+        self.mod_finder.root_dir = uc.ROOT
+
+    def test_is_vts_module(self):
+        """Test _load_module_info_file regular operation."""
+        mod_name = 'mod'
+        is_vts_module_info = {'compatibility_suites': ['vts10', 'tests']}
+        self.mod_finder.module_info.get_module_info.return_value = is_vts_module_info
+        self.assertTrue(self.mod_finder._is_vts_module(mod_name))
+
+        is_not_vts_module = {'compatibility_suites': ['vts10', 'cts']}
+        self.mod_finder.module_info.get_module_info.return_value = is_not_vts_module
+        self.assertFalse(self.mod_finder._is_vts_module(mod_name))
+
+    # pylint: disable=unused-argument
+    @mock.patch.object(module_finder.ModuleFinder, '_get_build_targets',
+                       return_value=uc.MODULE_BUILD_TARGETS)
+    def test_find_test_by_module_name(self, _get_targ):
+        """Test find_test_by_module_name."""
+        self.mod_finder.module_info.is_robolectric_test.return_value = False
+        self.mod_finder.module_info.has_test_config.return_value = True
+        mod_info = {'installed': ['/path/to/install'],
+                    'path': [uc.MODULE_DIR],
+                    constants.MODULE_CLASS: [],
+                    constants.MODULE_COMPATIBILITY_SUITES: []}
+        self.mod_finder.module_info.get_module_info.return_value = mod_info
+        t_infos = self.mod_finder.find_test_by_module_name(uc.MODULE_NAME)
+        unittest_utils.assert_equal_testinfos(
+            self,
+            t_infos[0],
+            uc.MODULE_INFO)
+        self.mod_finder.module_info.get_module_info.return_value = None
+        self.mod_finder.module_info.is_testable_module.return_value = False
+        self.assertIsNone(self.mod_finder.find_test_by_module_name('Not_Module'))
+
+    @mock.patch.object(test_finder_utils, 'has_method_in_file',
+                       return_value=True)
+    @mock.patch.object(module_finder.ModuleFinder, '_is_vts_module',
+                       return_value=False)
+    @mock.patch.object(module_finder.ModuleFinder, '_get_build_targets')
+    @mock.patch('subprocess.check_output', return_value=uc.FIND_ONE)
+    @mock.patch.object(test_finder_utils, 'get_fully_qualified_class_name',
+                       return_value=uc.FULL_CLASS_NAME)
+    @mock.patch('os.path.isfile', side_effect=unittest_utils.isfile_side_effect)
+    @mock.patch('os.path.isdir', return_value=True)
+    #pylint: disable=unused-argument
+    def test_find_test_by_class_name(self, _isdir, _isfile, _fqcn,
+                                     mock_checkoutput, mock_build,
+                                     _vts, _has_method_in_file):
+        """Test find_test_by_class_name."""
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+        self.mod_finder.module_info.is_auto_gen_test_config.return_value = False
+        self.mod_finder.module_info.is_robolectric_test.return_value = False
+        self.mod_finder.module_info.has_test_config.return_value = True
+        self.mod_finder.module_info.get_module_names.return_value = [uc.MODULE_NAME]
+        self.mod_finder.module_info.get_module_info.return_value = {
+            constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+            constants.MODULE_NAME: uc.MODULE_NAME,
+            constants.MODULE_CLASS: [],
+            constants.MODULE_COMPATIBILITY_SUITES: []}
+        t_infos = self.mod_finder.find_test_by_class_name(uc.CLASS_NAME)
+        unittest_utils.assert_equal_testinfos(
+            self, t_infos[0], uc.CLASS_INFO)
+
+        # with method
+        mock_build.return_value = uc.MODULE_BUILD_TARGETS
+        class_with_method = '%s#%s' % (uc.CLASS_NAME, uc.METHOD_NAME)
+        t_infos = self.mod_finder.find_test_by_class_name(class_with_method)
+        unittest_utils.assert_equal_testinfos(
+            self, t_infos[0], uc.METHOD_INFO)
+        mock_build.return_value = uc.MODULE_BUILD_TARGETS
+        class_methods = '%s,%s' % (class_with_method, uc.METHOD2_NAME)
+        t_infos = self.mod_finder.find_test_by_class_name(class_methods)
+        unittest_utils.assert_equal_testinfos(
+            self, t_infos[0],
+            FLAT_METHOD_INFO)
+        # module and rel_config passed in
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+        t_infos = self.mod_finder.find_test_by_class_name(
+            uc.CLASS_NAME, uc.MODULE_NAME, uc.CONFIG_FILE)
+        unittest_utils.assert_equal_testinfos(
+            self, t_infos[0], uc.CLASS_INFO)
+        # find output fails to find class file
+        mock_checkoutput.return_value = ''
+        self.assertIsNone(self.mod_finder.find_test_by_class_name('Not class'))
+        # class is outside given module path
+        mock_checkoutput.side_effect = classoutside_side_effect
+        t_infos = self.mod_finder.find_test_by_class_name(uc.CLASS_NAME,
+                                                          uc.MODULE2_NAME,
+                                                          uc.CONFIG2_FILE)
+        unittest_utils.assert_equal_testinfos(
+            self, t_infos[0],
+            CLASS_INFO_MODULE_2)
+
+    @mock.patch.object(test_finder_utils, 'has_method_in_file',
+                       return_value=True)
+    @mock.patch.object(module_finder.ModuleFinder, '_is_vts_module',
+                       return_value=False)
+    @mock.patch.object(module_finder.ModuleFinder, '_get_build_targets')
+    @mock.patch('subprocess.check_output', return_value=uc.FIND_ONE)
+    @mock.patch.object(test_finder_utils, 'get_fully_qualified_class_name',
+                       return_value=uc.FULL_CLASS_NAME)
+    @mock.patch('os.path.isfile', side_effect=unittest_utils.isfile_side_effect)
+    #pylint: disable=unused-argument
+    def test_find_test_by_module_and_class(self, _isfile, _fqcn,
+                                           mock_checkoutput, mock_build,
+                                           _vts, _has_method_in_file):
+        """Test find_test_by_module_and_class."""
+        # Native test was tested in test_find_test_by_cc_class_name().
+        self.mod_finder.module_info.is_native_test.return_value = False
+        self.mod_finder.module_info.is_auto_gen_test_config.return_value = False
+        self.mod_finder.module_info.is_robolectric_test.return_value = False
+        self.mod_finder.module_info.has_test_config.return_value = True
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+        mod_info = {constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+                    constants.MODULE_PATH: [uc.MODULE_DIR],
+                    constants.MODULE_CLASS: [],
+                    constants.MODULE_COMPATIBILITY_SUITES: []}
+        self.mod_finder.module_info.get_module_info.return_value = mod_info
+        t_infos = self.mod_finder.find_test_by_module_and_class(MODULE_CLASS)
+        unittest_utils.assert_equal_testinfos(self, t_infos[0], uc.CLASS_INFO)
+        # with method
+        mock_build.return_value = uc.MODULE_BUILD_TARGETS
+        t_infos = self.mod_finder.find_test_by_module_and_class(MODULE_CLASS_METHOD)
+        unittest_utils.assert_equal_testinfos(self, t_infos[0], uc.METHOD_INFO)
+        self.mod_finder.module_info.is_testable_module.return_value = False
+        # bad module, good class, returns None
+        bad_module = '%s:%s' % ('BadMod', uc.CLASS_NAME)
+        self.mod_finder.module_info.get_module_info.return_value = None
+        self.assertIsNone(self.mod_finder.find_test_by_module_and_class(bad_module))
+        # find output fails to find class file
+        mock_checkoutput.return_value = ''
+        bad_class = '%s:%s' % (uc.MODULE_NAME, 'Anything')
+        self.mod_finder.module_info.get_module_info.return_value = mod_info
+        self.assertIsNone(self.mod_finder.find_test_by_module_and_class(bad_class))
+
+    @mock.patch.object(module_finder.ModuleFinder, 'find_test_by_kernel_class_name',
+                       return_value=None)
+    @mock.patch.object(module_finder.ModuleFinder, '_is_vts_module',
+                       return_value=False)
+    @mock.patch.object(module_finder.ModuleFinder, '_get_build_targets')
+    @mock.patch('subprocess.check_output', return_value=uc.FIND_CC_ONE)
+    @mock.patch.object(test_finder_utils, 'find_class_file',
+                       side_effect=[None, None, '/'])
+    @mock.patch('os.path.isfile', side_effect=unittest_utils.isfile_side_effect)
+    #pylint: disable=unused-argument
+    def test_find_test_by_module_and_class_part_2(self, _isfile, mock_fcf,
+                                                  mock_checkoutput, mock_build,
+                                                  _vts, _find_kernel):
+        """Test find_test_by_module_and_class for MODULE:CC_CLASS."""
+        # Native test was tested in test_find_test_by_cc_class_name()
+        self.mod_finder.module_info.is_native_test.return_value = False
+        self.mod_finder.module_info.is_auto_gen_test_config.return_value = False
+        self.mod_finder.module_info.is_robolectric_test.return_value = False
+        self.mod_finder.module_info.has_test_config.return_value = True
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+        mod_info = {constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+                    constants.MODULE_PATH: [uc.CC_MODULE_DIR],
+                    constants.MODULE_CLASS: [],
+                    constants.MODULE_COMPATIBILITY_SUITES: []}
+        self.mod_finder.module_info.get_module_info.return_value = mod_info
+        t_infos = self.mod_finder.find_test_by_module_and_class(CC_MODULE_CLASS)
+        unittest_utils.assert_equal_testinfos(self, t_infos[0], uc.CC_MODULE_CLASS_INFO)
+        # with method
+        mock_build.return_value = uc.MODULE_BUILD_TARGETS
+        mock_fcf.side_effect = [None, None, '/']
+        t_infos = self.mod_finder.find_test_by_module_and_class(CC_MODULE_CLASS_METHOD)
+        unittest_utils.assert_equal_testinfos(self, t_infos[0], uc.CC_METHOD_INFO)
+        # bad module, good class, returns None
+        bad_module = '%s:%s' % ('BadMod', uc.CC_CLASS_NAME)
+        self.mod_finder.module_info.get_module_info.return_value = None
+        self.mod_finder.module_info.is_testable_module.return_value = False
+        self.assertIsNone(self.mod_finder.find_test_by_module_and_class(bad_module))
+
+    @mock.patch.object(module_finder.ModuleFinder, '_get_module_test_config',
+                       return_value=KERNEL_CONFIG_FILE)
+    @mock.patch.object(module_finder.ModuleFinder, '_is_vts_module',
+                       return_value=False)
+    @mock.patch.object(module_finder.ModuleFinder, '_get_build_targets')
+    @mock.patch('subprocess.check_output', return_value=uc.FIND_CC_ONE)
+    @mock.patch.object(test_finder_utils, 'find_class_file',
+                       side_effect=[None, None, '/'])
+    @mock.patch('os.path.isfile', side_effect=unittest_utils.isfile_side_effect)
+    #pylint: disable=unused-argument
+    def test_find_test_by_module_and_class_for_kernel_test(
+            self, _isfile, mock_fcf, mock_checkoutput, mock_build, _vts,
+            _test_config):
+        """Test find_test_by_module_and_class for MODULE:CC_CLASS."""
+        # Kernel test was tested in find_test_by_kernel_class_name()
+        self.mod_finder.module_info.is_native_test.return_value = False
+        self.mod_finder.module_info.is_auto_gen_test_config.return_value = False
+        self.mod_finder.module_info.is_robolectric_test.return_value = False
+        self.mod_finder.module_info.has_test_config.return_value = True
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+        mod_info = {constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+                    constants.MODULE_PATH: [uc.CC_MODULE_DIR],
+                    constants.MODULE_CLASS: [],
+                    constants.MODULE_COMPATIBILITY_SUITES: []}
+        self.mod_finder.module_info.get_module_info.return_value = mod_info
+        t_infos = self.mod_finder.find_test_by_module_and_class(KERNEL_MODULE_CLASS)
+        unittest_utils.assert_equal_testinfos(self, t_infos[0], KERNEL_MODULE_CLASS_INFO)
+
+    @mock.patch.object(module_finder.ModuleFinder, '_is_vts_module',
+                       return_value=False)
+    @mock.patch.object(module_finder.ModuleFinder, '_get_build_targets')
+    @mock.patch('subprocess.check_output', return_value=uc.FIND_PKG)
+    @mock.patch('os.path.isfile', side_effect=unittest_utils.isfile_side_effect)
+    @mock.patch('os.path.isdir', return_value=True)
+    #pylint: disable=unused-argument
+    def test_find_test_by_package_name(self, _isdir, _isfile, mock_checkoutput,
+                                       mock_build, _vts):
+        """Test find_test_by_package_name."""
+        self.mod_finder.module_info.is_auto_gen_test_config.return_value = False
+        self.mod_finder.module_info.is_robolectric_test.return_value = False
+        self.mod_finder.module_info.has_test_config.return_value = True
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+        self.mod_finder.module_info.get_module_names.return_value = [uc.MODULE_NAME]
+        self.mod_finder.module_info.get_module_info.return_value = {
+            constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+            constants.MODULE_NAME: uc.MODULE_NAME,
+            constants.MODULE_CLASS: [],
+            constants.MODULE_COMPATIBILITY_SUITES: []
+            }
+        t_infos = self.mod_finder.find_test_by_package_name(uc.PACKAGE)
+        unittest_utils.assert_equal_testinfos(
+            self, t_infos[0],
+            uc.PACKAGE_INFO)
+        # with method, should raise
+        pkg_with_method = '%s#%s' % (uc.PACKAGE, uc.METHOD_NAME)
+        self.assertRaises(atest_error.MethodWithoutClassError,
+                          self.mod_finder.find_test_by_package_name,
+                          pkg_with_method)
+        # module and rel_config passed in
+        t_infos = self.mod_finder.find_test_by_package_name(
+            uc.PACKAGE, uc.MODULE_NAME, uc.CONFIG_FILE)
+        unittest_utils.assert_equal_testinfos(
+            self, t_infos[0], uc.PACKAGE_INFO)
+        # find output fails to find class file
+        mock_checkoutput.return_value = ''
+        self.assertIsNone(self.mod_finder.find_test_by_package_name('Not pkg'))
+
+    @mock.patch('os.path.isdir', return_value=False)
+    @mock.patch.object(module_finder.ModuleFinder, '_is_vts_module',
+                       return_value=False)
+    @mock.patch.object(module_finder.ModuleFinder, '_get_build_targets')
+    @mock.patch('subprocess.check_output', return_value=uc.FIND_PKG)
+    @mock.patch('os.path.isfile', side_effect=unittest_utils.isfile_side_effect)
+    #pylint: disable=unused-argument
+    def test_find_test_by_module_and_package(self, _isfile, mock_checkoutput,
+                                             mock_build, _vts, _isdir):
+        """Test find_test_by_module_and_package."""
+        self.mod_finder.module_info.is_auto_gen_test_config.return_value = False
+        self.mod_finder.module_info.is_robolectric_test.return_value = False
+        self.mod_finder.module_info.has_test_config.return_value = True
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+        mod_info = {constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+                    constants.MODULE_PATH: [uc.MODULE_DIR],
+                    constants.MODULE_CLASS: [],
+                    constants.MODULE_COMPATIBILITY_SUITES: []}
+        self.mod_finder.module_info.get_module_info.return_value = mod_info
+        t_infos = self.mod_finder.find_test_by_module_and_package(MODULE_PACKAGE)
+        self.assertIsNone(t_infos)
+        _isdir.return_value = True
+        t_infos = self.mod_finder.find_test_by_module_and_package(MODULE_PACKAGE)
+        unittest_utils.assert_equal_testinfos(self, t_infos[0], uc.PACKAGE_INFO)
+
+        # with method, raises
+        module_pkg_with_method = '%s:%s#%s' % (uc.MODULE2_NAME, uc.PACKAGE,
+                                               uc.METHOD_NAME)
+        self.assertRaises(atest_error.MethodWithoutClassError,
+                          self.mod_finder.find_test_by_module_and_package,
+                          module_pkg_with_method)
+        # bad module, good pkg, returns None
+        self.mod_finder.module_info.is_testable_module.return_value = False
+        bad_module = '%s:%s' % ('BadMod', uc.PACKAGE)
+        self.mod_finder.module_info.get_module_info.return_value = None
+        self.assertIsNone(self.mod_finder.find_test_by_module_and_package(bad_module))
+        # find output fails to find package path
+        mock_checkoutput.return_value = ''
+        bad_pkg = '%s:%s' % (uc.MODULE_NAME, 'Anything')
+        self.mod_finder.module_info.get_module_info.return_value = mod_info
+        self.assertIsNone(self.mod_finder.find_test_by_module_and_package(bad_pkg))
+
+    @mock.patch.object(test_finder_utils, 'has_method_in_file',
+                       return_value=True)
+    @mock.patch.object(test_finder_utils, 'has_cc_class',
+                       return_value=True)
+    @mock.patch.object(module_finder.ModuleFinder, '_get_build_targets')
+    @mock.patch.object(module_finder.ModuleFinder, '_is_vts_module',
+                       return_value=False)
+    @mock.patch.object(test_finder_utils, 'get_fully_qualified_class_name',
+                       return_value=uc.FULL_CLASS_NAME)
+    @mock.patch('os.path.realpath',
+                side_effect=unittest_utils.realpath_side_effect)
+    @mock.patch('os.path.isfile', side_effect=unittest_utils.isfile_side_effect)
+    @mock.patch.object(test_finder_utils, 'find_parent_module_dir')
+    @mock.patch('os.path.exists')
+    #pylint: disable=unused-argument
+    def test_find_test_by_path(self, mock_pathexists, mock_dir, _isfile, _real,
+                               _fqcn, _vts, mock_build, _has_cc_class,
+                               _has_method_in_file):
+        """Test find_test_by_path."""
+        self.mod_finder.module_info.is_robolectric_test.return_value = False
+        self.mod_finder.module_info.has_test_config.return_value = True
+        mock_build.return_value = set()
+        # Check that we don't return anything with invalid test references.
+        mock_pathexists.return_value = False
+        unittest_utils.assert_equal_testinfos(
+            self, None, self.mod_finder.find_test_by_path('bad/path'))
+        mock_pathexists.return_value = True
+        mock_dir.return_value = None
+        unittest_utils.assert_equal_testinfos(
+            self, None, self.mod_finder.find_test_by_path('no/module'))
+        self.mod_finder.module_info.get_module_names.return_value = [uc.MODULE_NAME]
+        self.mod_finder.module_info.get_module_info.return_value = {
+            constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+            constants.MODULE_NAME: uc.MODULE_NAME,
+            constants.MODULE_CLASS: [],
+            constants.MODULE_COMPATIBILITY_SUITES: []}
+
+        # Happy path testing.
+        mock_dir.return_value = uc.MODULE_DIR
+
+        class_path = '%s.kt' % uc.CLASS_NAME
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+        t_infos = self.mod_finder.find_test_by_path(class_path)
+        unittest_utils.assert_equal_testinfos(
+            self, uc.CLASS_INFO, t_infos[0])
+
+        class_path = '%s.java' % uc.CLASS_NAME
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+        t_infos = self.mod_finder.find_test_by_path(class_path)
+        unittest_utils.assert_equal_testinfos(
+            self, uc.CLASS_INFO, t_infos[0])
+
+        class_with_method = '%s#%s' % (class_path, uc.METHOD_NAME)
+        mock_build.return_value = uc.MODULE_BUILD_TARGETS
+        t_infos = self.mod_finder.find_test_by_path(class_with_method)
+        unittest_utils.assert_equal_testinfos(
+            self, t_infos[0], uc.METHOD_INFO)
+
+        class_with_methods = '%s,%s' % (class_with_method, uc.METHOD2_NAME)
+        mock_build.return_value = uc.MODULE_BUILD_TARGETS
+        t_infos = self.mod_finder.find_test_by_path(class_with_methods)
+        unittest_utils.assert_equal_testinfos(
+            self, t_infos[0],
+            FLAT_METHOD_INFO)
+
+        # Cc path testing.
+        self.mod_finder.module_info.get_module_names.return_value = [uc.CC_MODULE_NAME]
+        self.mod_finder.module_info.get_module_info.return_value = {
+            constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+            constants.MODULE_NAME: uc.CC_MODULE_NAME,
+            constants.MODULE_CLASS: [],
+            constants.MODULE_COMPATIBILITY_SUITES: []}
+        mock_dir.return_value = uc.CC_MODULE_DIR
+        class_path = uc.CC_PATH
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+        t_infos = self.mod_finder.find_test_by_path(class_path)
+        unittest_utils.assert_equal_testinfos(
+            self, uc.CC_PATH_INFO2, t_infos[0])
+
+    @mock.patch.object(module_finder.ModuleFinder, '_get_build_targets',
+                       return_value=uc.MODULE_BUILD_TARGETS)
+    @mock.patch.object(module_finder.ModuleFinder, '_is_vts_module',
+                       return_value=False)
+    @mock.patch.object(test_finder_utils, 'find_parent_module_dir',
+                       return_value=os.path.relpath(uc.TEST_DATA_DIR, uc.ROOT))
+    #pylint: disable=unused-argument
+    def test_find_test_by_path_part_2(self, _find_parent, _is_vts, _get_build):
+        """Test find_test_by_path for directories."""
+        self.mod_finder.module_info.is_auto_gen_test_config.return_value = False
+        self.mod_finder.module_info.is_robolectric_test.return_value = False
+        self.mod_finder.module_info.has_test_config.return_value = True
+        # Dir with java files in it, should run as package
+        class_dir = os.path.join(uc.TEST_DATA_DIR, 'path_testing')
+        self.mod_finder.module_info.get_module_names.return_value = [uc.MODULE_NAME]
+        self.mod_finder.module_info.get_module_info.return_value = {
+            constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+            constants.MODULE_NAME: uc.MODULE_NAME,
+            constants.MODULE_CLASS: [],
+            constants.MODULE_COMPATIBILITY_SUITES: []}
+        t_infos = self.mod_finder.find_test_by_path(class_dir)
+        unittest_utils.assert_equal_testinfos(
+            self, uc.PATH_INFO, t_infos[0])
+        # Dir with no java files in it, should run whole module
+        empty_dir = os.path.join(uc.TEST_DATA_DIR, 'path_testing_empty')
+        t_infos = self.mod_finder.find_test_by_path(empty_dir)
+        unittest_utils.assert_equal_testinfos(
+            self, uc.EMPTY_PATH_INFO,
+            t_infos[0])
+        # Dir with cc files in it, should run as cc class
+        class_dir = os.path.join(uc.TEST_DATA_DIR, 'cc_path_testing')
+        self.mod_finder.module_info.get_module_names.return_value = [uc.CC_MODULE_NAME]
+        self.mod_finder.module_info.get_module_info.return_value = {
+            constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+            constants.MODULE_NAME: uc.CC_MODULE_NAME,
+            constants.MODULE_CLASS: [],
+            constants.MODULE_COMPATIBILITY_SUITES: []}
+        t_infos = self.mod_finder.find_test_by_path(class_dir)
+        unittest_utils.assert_equal_testinfos(
+            self, uc.CC_PATH_INFO, t_infos[0])
+
+    @mock.patch.object(test_finder_utils, 'has_method_in_file',
+                       return_value=True)
+    @mock.patch.object(module_finder.ModuleFinder, '_is_vts_module',
+                       return_value=False)
+    @mock.patch.object(module_finder.ModuleFinder, '_get_build_targets')
+    @mock.patch('subprocess.check_output', return_value=uc.CC_FIND_ONE)
+    @mock.patch('os.path.isfile', side_effect=unittest_utils.isfile_side_effect)
+    @mock.patch('os.path.isdir', return_value=True)
+    #pylint: disable=unused-argument
+    def test_find_test_by_cc_class_name(self, _isdir, _isfile,
+                                        mock_checkoutput, mock_build,
+                                        _vts, _has_method):
+        """Test find_test_by_cc_class_name."""
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+        self.mod_finder.module_info.is_auto_gen_test_config.return_value = False
+        self.mod_finder.module_info.is_robolectric_test.return_value = False
+        self.mod_finder.module_info.has_test_config.return_value = True
+        self.mod_finder.module_info.get_module_names.return_value = [uc.CC_MODULE_NAME]
+        self.mod_finder.module_info.get_module_info.return_value = {
+            constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+            constants.MODULE_NAME: uc.CC_MODULE_NAME,
+            constants.MODULE_CLASS: [],
+            constants.MODULE_COMPATIBILITY_SUITES: []}
+        t_infos = self.mod_finder.find_test_by_cc_class_name(uc.CC_CLASS_NAME)
+        unittest_utils.assert_equal_testinfos(
+            self, t_infos[0], uc.CC_CLASS_INFO)
+
+        # with method
+        mock_build.return_value = uc.MODULE_BUILD_TARGETS
+        class_with_method = '%s#%s' % (uc.CC_CLASS_NAME, uc.CC_METHOD_NAME)
+        t_infos = self.mod_finder.find_test_by_cc_class_name(class_with_method)
+        unittest_utils.assert_equal_testinfos(
+            self,
+            t_infos[0],
+            uc.CC_METHOD_INFO)
+        mock_build.return_value = uc.MODULE_BUILD_TARGETS
+        class_methods = '%s,%s' % (class_with_method, uc.CC_METHOD2_NAME)
+        t_infos = self.mod_finder.find_test_by_cc_class_name(class_methods)
+        unittest_utils.assert_equal_testinfos(
+            self, t_infos[0],
+            uc.CC_METHOD2_INFO)
+        # module and rel_config passed in
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+        t_infos = self.mod_finder.find_test_by_cc_class_name(
+            uc.CC_CLASS_NAME, uc.CC_MODULE_NAME, uc.CC_CONFIG_FILE)
+        unittest_utils.assert_equal_testinfos(
+            self, t_infos[0], uc.CC_CLASS_INFO)
+        # find output fails to find class file
+        mock_checkoutput.return_value = ''
+        self.assertIsNone(self.mod_finder.find_test_by_cc_class_name(
+            'Not class'))
+        # class is outside given module path
+        mock_checkoutput.return_value = uc.CC_FIND_ONE
+        t_infos = self.mod_finder.find_test_by_cc_class_name(
+            uc.CC_CLASS_NAME,
+            uc.CC_MODULE2_NAME,
+            uc.CC_CONFIG2_FILE)
+        unittest_utils.assert_equal_testinfos(
+            self, t_infos[0],
+            CC_CLASS_INFO_MODULE_2)
+
+    def test_get_testable_modules_with_ld(self):
+        """Test get_testable_modules_with_ld"""
+        self.mod_finder.module_info.get_testable_modules.return_value = [
+            uc.MODULE_NAME, uc.MODULE2_NAME]
+        # Without a misfit constraint
+        ld1 = self.mod_finder.get_testable_modules_with_ld(uc.TYPO_MODULE_NAME)
+        self.assertEqual([[16, uc.MODULE2_NAME], [1, uc.MODULE_NAME]], ld1)
+        # With a misfit constraint
+        ld2 = self.mod_finder.get_testable_modules_with_ld(uc.TYPO_MODULE_NAME, 2)
+        self.assertEqual([[1, uc.MODULE_NAME]], ld2)
+
+    def test_get_fuzzy_searching_modules(self):
+        """Test get_fuzzy_searching_modules"""
+        self.mod_finder.module_info.get_testable_modules.return_value = [
+            uc.MODULE_NAME, uc.MODULE2_NAME]
+        result = self.mod_finder.get_fuzzy_searching_results(uc.TYPO_MODULE_NAME)
+        self.assertEqual(uc.MODULE_NAME, result[0])
+
+    def test_get_build_targets_w_vts_core(self):
+        """Test _get_build_targets."""
+        self.mod_finder.module_info.is_auto_gen_test_config.return_value = True
+        self.mod_finder.module_info.get_paths.return_value = []
+        mod_info = {constants.MODULE_COMPATIBILITY_SUITES:
+                        [constants.VTS_CORE_SUITE]}
+        self.mod_finder.module_info.get_module_info.return_value = mod_info
+        self.assertEqual(self.mod_finder._get_build_targets('', ''),
+                         {constants.VTS_CORE_TF_MODULE})
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/test_finders/suite_plan_finder.py b/atest-py2/test_finders/suite_plan_finder.py
new file mode 100644
index 0000000..a33da2d
--- /dev/null
+++ b/atest-py2/test_finders/suite_plan_finder.py
@@ -0,0 +1,158 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Suite Plan Finder class.
+"""
+
+import logging
+import os
+import re
+
+# pylint: disable=import-error
+import constants
+from test_finders import test_finder_base
+from test_finders import test_finder_utils
+from test_finders import test_info
+from test_runners import suite_plan_test_runner
+
+_SUITE_PLAN_NAME_RE = re.compile(r'^.*\/(?P<suite>.*)-tradefed\/res\/config\/'
+                                 r'(?P<suite_plan_name>.*).xml$')
+
+
+class SuitePlanFinder(test_finder_base.TestFinderBase):
+    """Suite Plan Finder class."""
+    NAME = 'SUITE_PLAN'
+    _SUITE_PLAN_TEST_RUNNER = suite_plan_test_runner.SuitePlanTestRunner.NAME
+
+    def __init__(self, module_info=None):
+        super(SuitePlanFinder, self).__init__()
+        self.root_dir = os.environ.get(constants.ANDROID_BUILD_TOP)
+        self.mod_info = module_info
+        self.suite_plan_dirs = self._get_suite_plan_dirs()
+
+    def _get_mod_paths(self, module_name):
+        """Return the paths of the given module name."""
+        if self.mod_info:
+            return self.mod_info.get_paths(module_name)
+        return []
+
+    def _get_suite_plan_dirs(self):
+        """Get suite plan dirs from MODULE_INFO based on targets.
+
+        Strategy:
+            Search module-info.json using SUITE_PLANS to get all the suite
+            plan dirs.
+
+        Returns:
+            A list of strings of the suite plan dirs relative to the repo
+            root; the list is empty if none are found in module-info.json.
+        """
+        return [d for x in constants.SUITE_PLANS for d in
+                self._get_mod_paths(x+'-tradefed') if d is not None]
+
+    def _get_test_info_from_path(self, path, suite_name=None):
+        """Get the test info from the result of using regular expression
+        matching with the give path.
+
+        Args:
+            path: A string of the test's absolute or relative path.
+            suite_name: A string of the suite name.
+
+        Returns:
+            A populated TestInfo namedtuple if regular expression
+            matches, else None.
+        """
+        # Don't use names that simply match the path; the name must be the
+        # actual one used by *TS to run the test.
+        match = _SUITE_PLAN_NAME_RE.match(path)
+        if not match:
+            logging.error('Suite plan test outside config dir: %s', path)
+            return None
+        suite = match.group('suite')
+        suite_plan_name = match.group('suite_plan_name')
+        if suite_name:
+            if suite_plan_name != suite_name:
+                logging.warning('Input (%s) is not a valid suite plan name, '
+                                'did you mean: %s?', suite_name, suite_plan_name)
+                return None
+        return test_info.TestInfo(
+            test_name=suite_plan_name,
+            test_runner=self._SUITE_PLAN_TEST_RUNNER,
+            build_targets=set([suite]),
+            suite=suite)
+
+    def find_test_by_suite_path(self, suite_path):
+        """Find the first test info matching the given path.
+
+        Strategy:
+            If suite_path points to a file --> Return TestInfo if the file
+            exists in the suite plan dirs, else return None.
+            If suite_path points to a dir --> Return None.
+
+        Args:
+            suite_path: A string of the path to the test's file or dir.
+
+        Returns:
+            A list of populated TestInfo namedtuple if test found, else None.
+            This is a list with at most 1 element.
+        """
+        path, _ = test_finder_utils.split_methods(suite_path)
+        # Make sure we're looking for a config.
+        if not path.endswith('.xml'):
+            return None
+        path = os.path.realpath(path)
+        suite_plan_dir = test_finder_utils.get_int_dir_from_path(
+            path, self.suite_plan_dirs)
+        if suite_plan_dir:
+            rel_config = os.path.relpath(path, self.root_dir)
+            return [self._get_test_info_from_path(rel_config)]
+        return None
+
+    def find_test_by_suite_name(self, suite_name):
+        """Find the test for the given suite name.
+
+        Strategy:
+            If suite_name is cts --> Return a TestInfo that tells the suite
+            runner to build cts and run the test using cts-tradefed.
+            If suite_name is cts-common --> Return the same kind of TestInfo
+            if a matching file exists in the suite plan dirs, else return
+            None.
+
+        Args:
+            suite_name: A string of suite name.
+
+        Returns:
+            A list of populated TestInfo namedtuples if suite_name matches
+            a suite in constants.SUITE_PLANS; otherwise, search the suite
+            plan dirs for a matching file and return None if nothing is
+            found.
+        """
+        logging.debug('Finding test by suite: %s', suite_name)
+        test_infos = []
+        if suite_name in constants.SUITE_PLANS:
+            test_infos.append(test_info.TestInfo(
+                test_name=suite_name,
+                test_runner=self._SUITE_PLAN_TEST_RUNNER,
+                build_targets=set([suite_name]),
+                suite=suite_name))
+        else:
+            test_files = test_finder_utils.search_integration_dirs(
+                suite_name, self.suite_plan_dirs)
+            if not test_files:
+                return None
+            for test_file in test_files:
+                _test_info = self._get_test_info_from_path(test_file, suite_name)
+                if _test_info:
+                    test_infos.append(_test_info)
+        return test_infos
diff --git a/atest-py2/test_finders/suite_plan_finder_unittest.py b/atest-py2/test_finders/suite_plan_finder_unittest.py
new file mode 100755
index 0000000..0fed2d2
--- /dev/null
+++ b/atest-py2/test_finders/suite_plan_finder_unittest.py
@@ -0,0 +1,184 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Unittests for suite_plan_finder."""
+
+import os
+import unittest
+import mock
+
+# pylint: disable=import-error
+import unittest_constants as uc
+import unittest_utils
+from test_finders import test_finder_utils
+from test_finders import test_info
+from test_finders import suite_plan_finder
+from test_runners import suite_plan_test_runner
+
+
+# pylint: disable=protected-access
+class SuitePlanFinderUnittests(unittest.TestCase):
+    """Unit tests for suite_plan_finder.py"""
+
+    def setUp(self):
+        """Set up stuff for testing."""
+        self.suite_plan_finder = suite_plan_finder.SuitePlanFinder()
+        self.suite_plan_finder.suite_plan_dirs = [os.path.join(uc.ROOT, uc.CTS_INT_DIR)]
+        self.suite_plan_finder.root_dir = uc.ROOT
+
+    def test_get_test_info_from_path(self):
+        """Test _get_test_info_from_path.
+        Strategy:
+            If suite_path points to the cts file -->
+                test_info: test_name=cts,
+                           test_runner=SuitePlanTestRunner,
+                           build_targets=set(['cts']),
+                           suite='cts'
+            If suite_path points to the cts-common file -->
+                test_info: test_name=cts-common,
+                           test_runner=SuitePlanTestRunner,
+                           build_targets=set(['cts']),
+                           suite='cts'
+            If suite_path points to a common file --> test_info: None
+            If suite_path points to a non-existing file --> test_info: None
+        """
+        suite_plan = 'cts'
+        path = os.path.join(uc.ROOT, uc.CTS_INT_DIR, suite_plan+'.xml')
+        want_info = test_info.TestInfo(test_name=suite_plan,
+                                       test_runner=suite_plan_test_runner.SuitePlanTestRunner.NAME,
+                                       build_targets={suite_plan},
+                                       suite=suite_plan)
+        unittest_utils.assert_equal_testinfos(
+            self, want_info, self.suite_plan_finder._get_test_info_from_path(path))
+
+        suite_plan = 'cts-common'
+        path = os.path.join(uc.ROOT, uc.CTS_INT_DIR, suite_plan+'.xml')
+        want_info = test_info.TestInfo(test_name=suite_plan,
+                                       test_runner=suite_plan_test_runner.SuitePlanTestRunner.NAME,
+                                       build_targets={'cts'},
+                                       suite='cts')
+        unittest_utils.assert_equal_testinfos(
+            self, want_info, self.suite_plan_finder._get_test_info_from_path(path))
+
+        suite_plan = 'common'
+        path = os.path.join(uc.ROOT, uc.CTS_INT_DIR, 'cts-common.xml')
+        want_info = None
+        unittest_utils.assert_equal_testinfos(
+            self, want_info, self.suite_plan_finder._get_test_info_from_path(path, suite_plan))
+
+        path = os.path.join(uc.ROOT, 'cts-common.xml')
+        want_info = None
+        unittest_utils.assert_equal_testinfos(
+            self, want_info, self.suite_plan_finder._get_test_info_from_path(path))
+
+    @mock.patch.object(test_finder_utils, 'search_integration_dirs')
+    def test_find_test_by_suite_name(self, _search):
+        """Test find_test_by_suite_name.
+        Strategy:
+            suite_name: cts --> test_info: test_name=cts,
+                                           test_runner=SuitePlanTestRunner,
+                                           build_targets=set(['cts']),
+                                           suite='cts'
+            suite_name: CTS --> test_info: None
+            suite_name: cts-common --> test_info: test_name=cts-common,
+                                                  test_runner=SuitePlanTestRunner,
+                                                  build_targets=set(['cts']),
+                                                  suite='cts'
+        """
+        suite_name = 'cts'
+        t_info = self.suite_plan_finder.find_test_by_suite_name(suite_name)
+        want_info = test_info.TestInfo(test_name=suite_name,
+                                       test_runner=suite_plan_test_runner.SuitePlanTestRunner.NAME,
+                                       build_targets={suite_name},
+                                       suite=suite_name)
+        unittest_utils.assert_equal_testinfos(self, t_info[0], want_info)
+
+        suite_name = 'CTS'
+        _search.return_value = None
+        t_info = self.suite_plan_finder.find_test_by_suite_name(suite_name)
+        want_info = None
+        unittest_utils.assert_equal_testinfos(self, t_info, want_info)
+
+        suite_name = 'cts-common'
+        suite = 'cts'
+        _search.return_value = [os.path.join(uc.ROOT, uc.CTS_INT_DIR, suite_name + '.xml')]
+        t_info = self.suite_plan_finder.find_test_by_suite_name(suite_name)
+        want_info = test_info.TestInfo(test_name=suite_name,
+                                       test_runner=suite_plan_test_runner.SuitePlanTestRunner.NAME,
+                                       build_targets=set([suite]),
+                                       suite=suite)
+        unittest_utils.assert_equal_testinfos(self, t_info[0], want_info)
+
+    @mock.patch('os.path.realpath',
+                side_effect=unittest_utils.realpath_side_effect)
+    @mock.patch('os.path.isdir', return_value=True)
+    @mock.patch('os.path.isfile', return_value=True)
+    @mock.patch.object(test_finder_utils, 'get_int_dir_from_path')
+    @mock.patch('os.path.exists', return_value=True)
+    def test_find_suite_plan_test_by_suite_path(self, _exists, _find, _isfile, _isdir, _real):
+        """Test find_test_by_suite_name.
+        Strategy:
+            suite_name: cts.xml --> test_info:
+                                        test_name=cts,
+                                        test_runner=TestSuiteTestRunner,
+                                        build_target=set(['cts']
+                                        suite='cts')
+            suite_name: cts-common.xml --> test_info:
+                                               test_name=cts-common,
+                                               test_runner=TestSuiteTestRunner,
+                                               build_target=set(['cts'],
+                                               suite='cts')
+            suite_name: cts-camera.xml --> test_info:
+                                               test_name=cts-camera,
+                                               test_runner=TestSuiteTestRunner,
+                                               build_target=set(['cts'],
+                                               suite='cts')
+        """
+        suite_int_name = 'cts'
+        suite = 'cts'
+        path = os.path.join(uc.CTS_INT_DIR, suite_int_name + '.xml')
+        _find.return_value = uc.CTS_INT_DIR
+        t_info = self.suite_plan_finder.find_test_by_suite_path(path)
+        want_info = test_info.TestInfo(test_name=suite_int_name,
+                                       test_runner=suite_plan_test_runner.SuitePlanTestRunner.NAME,
+                                       build_targets=set([suite]),
+                                       suite=suite)
+        unittest_utils.assert_equal_testinfos(self, t_info[0], want_info)
+
+        suite_int_name = 'cts-common'
+        suite = 'cts'
+        path = os.path.join(uc.CTS_INT_DIR, suite_int_name + '.xml')
+        _find.return_value = uc.CTS_INT_DIR
+        t_info = self.suite_plan_finder.find_test_by_suite_path(path)
+        want_info = test_info.TestInfo(test_name=suite_int_name,
+                                       test_runner=suite_plan_test_runner.SuitePlanTestRunner.NAME,
+                                       build_targets=set([suite]),
+                                       suite=suite)
+        unittest_utils.assert_equal_testinfos(self, t_info[0], want_info)
+
+        suite_int_name = 'cts-camera'
+        suite = 'cts'
+        path = os.path.join(uc.CTS_INT_DIR, suite_int_name + '.xml')
+        _find.return_value = uc.CTS_INT_DIR
+        t_info = self.suite_plan_finder.find_test_by_suite_path(path)
+        want_info = test_info.TestInfo(test_name=suite_int_name,
+                                       test_runner=suite_plan_test_runner.SuitePlanTestRunner.NAME,
+                                       build_targets=set([suite]),
+                                       suite=suite)
+        unittest_utils.assert_equal_testinfos(self, t_info[0], want_info)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/test_finders/test_finder_base.py b/atest-py2/test_finders/test_finder_base.py
new file mode 100644
index 0000000..14fc079
--- /dev/null
+++ b/atest-py2/test_finders/test_finder_base.py
@@ -0,0 +1,54 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Test finder base class.
+"""
+from collections import namedtuple
+
+
+Finder = namedtuple('Finder', ['test_finder_instance', 'find_method',
+                               'finder_info'])
+
+
+def find_method_register(cls):
+    """Class decorater to find all registered find methods."""
+    cls.find_methods = []
+    cls.get_all_find_methods = lambda x: x.find_methods
+    for methodname in dir(cls):
+        method = getattr(cls, methodname)
+        if hasattr(method, '_registered'):
+            cls.find_methods.append(Finder(None, method, None))
+    return cls
+
+
+def register():
+    """Decorator to register find methods."""
+
+    def wrapper(func):
+        """Wrapper for the register decorator."""
+        #pylint: disable=protected-access
+        func._registered = True
+        return func
+    return wrapper
+
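+# A minimal usage sketch, assuming a hypothetical ExampleFinder (illustrative
+# only): a concrete finder marks its lookup methods with @register() so that
+# the @find_method_register class decorator can collect them into
+# find_methods:
+#
+#     @find_method_register
+#     class ExampleFinder(TestFinderBase):
+#         @register()
+#         def find_test_by_example_name(self, name):
+#             return None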
+
+# This doesn't really do anything since there are no find methods defined, but
+# it's here anyway as an example for other test type classes.
+@find_method_register
+class TestFinderBase(object):
+    """Base class for test finder class."""
+
+    def __init__(self, *args, **kwargs):
+        pass
diff --git a/atest-py2/test_finders/test_finder_utils.py b/atest-py2/test_finders/test_finder_utils.py
new file mode 100644
index 0000000..681d77a
--- /dev/null
+++ b/atest-py2/test_finders/test_finder_utils.py
@@ -0,0 +1,984 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Utils for finder classes.
+"""
+
+from __future__ import print_function
+import logging
+import multiprocessing
+import os
+import pickle
+import re
+import subprocess
+import time
+import xml.etree.ElementTree as ET
+
+# pylint: disable=import-error
+import atest_decorator
+import atest_error
+import atest_enum
+import constants
+
+from metrics import metrics_utils
+
+# Helps find apk files listed in a test config (AndroidTest.xml) file.
+# Matches "filename.apk" in <option name="foo", value="filename.apk" />
+# We want to make sure we don't grab apks with paths in their name since we
+# assume the apk name is the build target.
+_APK_RE = re.compile(r'^[^/]+\.apk$', re.I)
+# RE for checking if TEST or TEST_F is in a cc file or not.
+_CC_CLASS_RE = re.compile(r'^[ ]*TEST(_F|_P)?[ ]*\(', re.I)
+# RE for checking if there exists one of the methods in java file.
+_JAVA_METHODS_PATTERN = r'.*[ ]+({0})\(.*'
+# RE for checking if there exists one of the methods in cc file.
+_CC_METHODS_PATTERN = r'^[ ]*TEST(_F|_P)?[ ]*\(.*,[ ]*({0})\).*'
+# Parse package name from the package declaration line of a java or a kotlin file.
+# Group matches "foo.bar" of line "package foo.bar;" or "package foo.bar"
+_PACKAGE_RE = re.compile(r'\s*package\s+(?P<package>[^(;|\s)]+)\s*', re.I)
+# Matches install paths in module_info to install location(host or device).
+_HOST_PATH_RE = re.compile(r'.*\/host\/.*', re.I)
+_DEVICE_PATH_RE = re.compile(r'.*\/target\/.*', re.I)
+
+# Explanation of FIND_REFERENCE_TYPEs:
+# ----------------------------------
+# 0. CLASS: Name of a java/kotlin class, usually file is named the same
+#    (HostTest lives in HostTest.java or HostTest.kt)
+# 1. QUALIFIED_CLASS: Like CLASS but also contains the package in front like
+#                     com.android.tradefed.testtype.HostTest.
+# 2. PACKAGE: Name of a java package.
+# 3. INTEGRATION: XML file name in one of the 4 integration config directories.
+# 4. CC_CLASS: Name of a cc class.
+
+FIND_REFERENCE_TYPE = atest_enum.AtestEnum(['CLASS', 'QUALIFIED_CLASS',
+                                            'PACKAGE', 'INTEGRATION', 'CC_CLASS'])
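+# Note (behavior inferred from usage below): AtestEnum maps names to int
+# values and back, e.g. FIND_REFERENCE_TYPE.CLASS == 0 and
+# FIND_REFERENCE_TYPE[0] == 'CLASS'; run_find_cmd() relies on the reverse
+# lookup for logging.
+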
+# Number of parallel processes for xargs -P: 0 (no limit) works for GNU xargs
+# on Linux; elsewhere, fall back to the CPU count.
+_CPU_COUNT = 0 if os.uname()[0] == 'Linux' else multiprocessing.cpu_count()
+
+# Unix find commands for searching for test files based on test type input.
+# Note: Find (unlike grep) exits with status 0 if nothing found.
+FIND_CMDS = {
+    FIND_REFERENCE_TYPE.CLASS: r"find {0} {1} -type f"
+                               r"| egrep '.*/{2}\.(kt|java)$' || true",
+    FIND_REFERENCE_TYPE.QUALIFIED_CLASS: r"find {0} {1} -type f"
+                                         r"| egrep '.*{2}\.(kt|java)$' || true",
+    FIND_REFERENCE_TYPE.PACKAGE: r"find {0} {1} -wholename "
+                                 r"'*{2}' -type d -print",
+    FIND_REFERENCE_TYPE.INTEGRATION: r"find {0} {1} -wholename "
+                                     r"'*{2}.xml' -print",
+    # Searching a test among files where the absolute paths contain *test*.
+    # If users complain atest couldn't find a CC_CLASS, ask them to follow the
+    # convention that the filename or dirname must contain *test*, where *test*
+    # is case-insensitive.
+    FIND_REFERENCE_TYPE.CC_CLASS: r"find {0} {1} -type f -print"
+                                  r"| egrep -i '/*test.*\.(cc|cpp)$'"
+                                  r"| xargs -P" + str(_CPU_COUNT) +
+                                  r" egrep -sH '^[ ]*TEST(_F|_P)?[ ]*\({2}' || true"
+}
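+# For reference, the CLASS template above expands to roughly:
+#   find <search_dir> <prune_cond> -type f| egrep '.*/HostTest\.(kt|java)$' || true
+# where 'HostTest' stands in for the class being searched.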
+
+# Map ref_type with its index file.
+FIND_INDEXES = {
+    FIND_REFERENCE_TYPE.CLASS: constants.CLASS_INDEX,
+    FIND_REFERENCE_TYPE.QUALIFIED_CLASS: constants.QCLASS_INDEX,
+    FIND_REFERENCE_TYPE.PACKAGE: constants.PACKAGE_INDEX,
+    FIND_REFERENCE_TYPE.INTEGRATION: constants.INT_INDEX,
+    FIND_REFERENCE_TYPE.CC_CLASS: constants.CC_CLASS_INDEX
+}
+
+# XML parsing related constants.
+_COMPATIBILITY_PACKAGE_PREFIX = "com.android.compatibility"
+_CTS_JAR = "cts-tradefed"
+_XML_PUSH_DELIM = '->'
+_APK_SUFFIX = '.apk'
+# Setup script for device perf tests.
+_PERF_SETUP_LABEL = 'perf-setup.sh'
+
+# XML tags.
+_XML_NAME = 'name'
+_XML_VALUE = 'value'
+
+# VTS xml parsing constants.
+_VTS_TEST_MODULE = 'test-module-name'
+_VTS_MODULE = 'module-name'
+_VTS_BINARY_SRC = 'binary-test-source'
+_VTS_PUSH_GROUP = 'push-group'
+_VTS_PUSH = 'push'
+_VTS_BINARY_SRC_DELIM = '::'
+_VTS_PUSH_DIR = os.path.join(os.environ.get(constants.ANDROID_BUILD_TOP, ''),
+                             'test', 'vts', 'tools', 'vts-tradefed', 'res',
+                             'push_groups')
+_VTS_PUSH_SUFFIX = '.push'
+_VTS_BITNESS = 'append-bitness'
+_VTS_BITNESS_TRUE = 'true'
+_VTS_BITNESS_32 = '32'
+_VTS_BITNESS_64 = '64'
+_VTS_TEST_FILE = 'test-file-name'
+_VTS_APK = 'apk'
+# Matches 'DATA/target' in '_32bit::DATA/target'
+_VTS_BINARY_SRC_DELIM_RE = re.compile(r'.*::(?P<target>.*)$')
+_VTS_OUT_DATA_APP_PATH = 'DATA/app'
+
+# pylint: disable=inconsistent-return-statements
+def split_methods(user_input):
+    """Split user input string into test reference and list of methods.
+
+    Args:
+        user_input: A string of the user's input.
+                    Examples:
+                        class_name
+                        class_name#method1,method2
+                        path
+                        path#method1,method2
+    Returns:
+        A tuple. The first element is a string of the test ref and the second
+        is a frozenset of method names, empty if no methods were included.
+    Raises:
+        atest_error.TooManyMethodsError: Raised when the input string
+        specifies too many methods in a single positional argument.
+
+        Examples of unsupported input strings:
+            module:class#method,class#method
+            class1#method,class2#method
+            path1#method,path2#method
+    """
+    parts = user_input.split('#')
+    if len(parts) == 1:
+        return parts[0], frozenset()
+    elif len(parts) == 2:
+        return parts[0], frozenset(parts[1].split(','))
+    raise atest_error.TooManyMethodsError(
+        'Too many methods specified with # character in user input: %s.'
+        '\n\nOnly one class#method combination supported per positional'
+        ' argument. Multiple classes should be separated by spaces: '
+        'class#method class#method')
+
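+# For example, split_methods('FooTest#testA,testB') returns
+# ('FooTest', frozenset(['testA', 'testB'])), while split_methods('FooTest')
+# returns ('FooTest', frozenset()).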
+
+# pylint: disable=inconsistent-return-statements
+def get_fully_qualified_class_name(test_path):
+    """Parse the fully qualified name from the class java file.
+
+    Args:
+        test_path: A string of absolute path to the java class file.
+
+    Returns:
+        A string of the fully qualified class name.
+
+    Raises:
+        atest_error.MissingPackageName if no class name can be found.
+    """
+    with open(test_path) as class_file:
+        for line in class_file:
+            match = _PACKAGE_RE.match(line)
+            if match:
+                package = match.group('package')
+                cls = os.path.splitext(os.path.split(test_path)[1])[0]
+                return '%s.%s' % (package, cls)
+    raise atest_error.MissingPackageNameError('%s: Test class java file '
+                                              'does not contain a package '
+                                              'name.' % test_path)
+
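+# For example, given /src/tests/FooTest.java containing the line
+# 'package com.example.tests;', this returns 'com.example.tests.FooTest'.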
+
+def has_cc_class(test_path):
+    """Find out if there is any test case in the cc file.
+
+    Args:
+        test_path: A string of absolute path to the cc file.
+
+    Returns:
+        Boolean: has cc class in test_path or not.
+    """
+    with open(test_path) as class_file:
+        for line in class_file:
+            match = _CC_CLASS_RE.match(line)
+            if match:
+                return True
+    return False
+
+
+def get_package_name(file_name):
+    """Parse the package name from a java file.
+
+    Args:
+        file_name: A string of the absolute path to the java file.
+
+    Returns:
+        A string of the package name or None.
+    """
+    with open(file_name) as data:
+        for line in data:
+            match = _PACKAGE_RE.match(line)
+            if match:
+                return match.group('package')
+
+
+def has_method_in_file(test_path, methods):
+    """Find out if there is at least one method in the file.
+
+    Note: This method doesn't check whether the method is inside a comment.
+    If the file contains the method anywhere (even in a comment), it will
+    return True.
+
+    Args:
+        test_path: A string of absolute path to the test file.
+        methods: A set of method names.
+
+    Returns:
+        Boolean: there is at least one method in test_path.
+    """
+    if not os.path.isfile(test_path):
+        return False
+    methods_re = None
+    if constants.JAVA_EXT_RE.match(test_path):
+        methods_re = re.compile(_JAVA_METHODS_PATTERN.format(
+            '|'.join([r'%s' % x for x in methods])))
+    elif constants.CC_EXT_RE.match(test_path):
+        methods_re = re.compile(_CC_METHODS_PATTERN.format(
+            '|'.join([r'%s' % x for x in methods])))
+    if methods_re:
+        with open(test_path) as test_file:
+            for line in test_file:
+                match = re.match(methods_re, line)
+                if match:
+                    return True
+    return False
+
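+# For example, with methods={'testFoo'} the java pattern becomes
+# r'.*[ ]+(testFoo)\(.*', which matches a declaration line such as
+# '    public void testFoo() {'.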
+
+def extract_test_path(output, methods=None):
+    """Extract the test path from the output of a unix 'find' command.
+
+    Example of find output for CLASS find cmd:
+    /<some_root>/cts/tests/jank/src/android/jank/cts/ui/CtsDeviceJankUi.java
+
+    Args:
+        output: A string or list output of a unix 'find' command.
+        methods: A set of method names.
+
+    Returns:
+        A list of the test paths or None if output is '' or None.
+    """
+    if not output:
+        return None
+    verified_tests = set()
+    if isinstance(output, str):
+        output = output.splitlines()
+    for test in output:
+        # compare CC_OUTPUT_RE with output
+        match_obj = constants.CC_OUTPUT_RE.match(test)
+        if match_obj:
+            # cc/cpp
+            fpath = match_obj.group('file_path')
+            if not methods or match_obj.group('method_name') in methods:
+                verified_tests.add(fpath)
+        else:
+            # TODO (b/138997521) - Atest checks has_method_in_file of a class
+            #  without traversing its parent classes. A workaround for this is
+            #  do not check has_method_in_file. Uncomment below when a solution
+            #  to it is applied.
+            # java/kt
+            #if not methods or has_method_in_file(test, methods):
+            verified_tests.add(test)
+    return extract_test_from_tests(list(verified_tests))
+
+
+def extract_test_from_tests(tests):
+    """Extract the test path from the tests.
+
+    Return the test to run from tests. If there is more than one option,
+    prompt the user to select one or more. Supported formats:
+    - An integer. E.g. 0
+    - Comma-separated integers. E.g. 1,3,5
+    - A range of integers denoted by the starting integer separated from
+      the end integer by a dash, '-'. E.g. 1-3
+
+    Args:
+        tests: A string list which contains multiple test paths.
+
+    Returns:
+        A string list of paths.
+    """
+    count = len(tests)
+    if count <= 1:
+        return tests if count else None
+    mtests = set()
+    try:
+        numbered_list = ['%s: %s' % (i, t) for i, t in enumerate(tests)]
+        numbered_list.append('%s: All' % count)
+        print('Multiple tests found:\n{0}'.format('\n'.join(numbered_list)))
+        test_indices = raw_input("Please enter the numbers of the tests to "
+                                 "use. If none of the above options match, "
+                                 "keep searching for other possible tests."
+                                 "\n(multiple selection is supported,"
+                                 " e.g. '1' or '0,1' or '0-2'): ")
+        for idx in re.sub(r'(\s)', '', test_indices).split(','):
+            indices = idx.split('-')
+            len_indices = len(indices)
+            if len_indices > 0:
+                start_index = min(int(indices[0]), int(indices[len_indices-1]))
+                end_index = max(int(indices[0]), int(indices[len_indices-1]))
+                # One of input is 'All', return all options.
+                if start_index == count or end_index == count:
+                    return tests
+                mtests.update(tests[start_index:(end_index+1)])
+    except (ValueError, IndexError, AttributeError, TypeError) as err:
+        logging.debug('%s', err)
+        print('None of the above options matched; keep searching for other'
+              ' possible tests...')
+    return list(mtests)
+
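+# For example, with tests=['t0', 't1', 't2'] the prompt lists indices 0-2
+# plus '3: All'; entering '0-1' selects ['t0', 't1'] and entering '3'
+# returns the full list.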
+
+@atest_decorator.static_var("cached_ignore_dirs", [])
+def _get_ignored_dirs():
+    """Get ignore dirs in find command.
+
+    Since we can't construct a single find cmd to find the target and
+    filter-out the dir with .out-dir, .find-ignore and $OUT-DIR. We have
+    to run the 1st find cmd to find these dirs. Then, we can use these
+    results to generate the real find cmd.
+
+    Return:
+        A list of the ignore dirs.
+    """
+    out_dirs = _get_ignored_dirs.cached_ignore_dirs
+    if not out_dirs:
+        build_top = os.environ.get(constants.ANDROID_BUILD_TOP)
+        find_out_dir_cmd = (r'find %s -maxdepth 2 '
+                            r'-type f \( -name ".out-dir" -o -name '
+                            r'".find-ignore" \)') % build_top
+        out_files = subprocess.check_output(find_out_dir_cmd, shell=True)
+        # Get all dirs with .out-dir or .find-ignore
+        if out_files:
+            out_files = out_files.splitlines()
+            for out_file in out_files:
+                if out_file:
+                    out_dirs.append(os.path.dirname(out_file.strip()))
+        # Get the out folder if user specified $OUT_DIR
+        custom_out_dir = os.environ.get(constants.ANDROID_OUT_DIR)
+        if custom_out_dir:
+            user_out_dir = None
+            if os.path.isabs(custom_out_dir):
+                user_out_dir = custom_out_dir
+            else:
+                user_out_dir = os.path.join(build_top, custom_out_dir)
+            # Only ignore the out_dir when it is under $ANDROID_BUILD_TOP.
+            if build_top in user_out_dir:
+                if user_out_dir not in out_dirs:
+                    out_dirs.append(user_out_dir)
+        _get_ignored_dirs.cached_ignore_dirs = out_dirs
+    return out_dirs
+
+
+def _get_prune_cond_of_ignored_dirs():
+    """Get the prune condition of ignore dirs.
+
+    Generation a string of the prune condition in the find command.
+    It will filter-out the dir with .out-dir, .find-ignore and $OUT-DIR.
+    Because they are the out dirs, we don't have to find them.
+
+    Return:
+        A string of the prune condition of the ignore dirs.
+    """
+    out_dirs = _get_ignored_dirs()
+    prune_cond = r'-type d \( -name ".*"'
+    for out_dir in out_dirs:
+        prune_cond += r' -o -path %s' % out_dir
+    prune_cond += r' \) -prune -o'
+    return prune_cond
+
+
+def run_find_cmd(ref_type, search_dir, target, methods=None):
+    """Find a path to a target given a search dir and a target name.
+
+    Args:
+        ref_type: An AtestEnum of the reference type.
+        search_dir: A string of the dirpath to search in.
+        target: A string of what you're trying to find.
+        methods: A set of method names.
+
+    Return:
+        A list of the path to the target.
+        If the search_dir is inexistent, None will be returned.
+    """
+    # If module_info.json is outdated, finding in the search_dir can result in
+    # raising exception. Return null immediately can guild users to run
+    # --rebuild-module-info to resolve the problem.
+    if not os.path.isdir(search_dir):
+        logging.debug('\'%s\' does not exist!', search_dir)
+        return None
+    ref_name = FIND_REFERENCE_TYPE[ref_type]
+    start = time.time()
+    if os.path.isfile(FIND_INDEXES[ref_type]):
+        _dict, out = {}, None
+        with open(FIND_INDEXES[ref_type], 'rb') as index:
+            try:
+                _dict = pickle.load(index)
+            except (IOError, EOFError, pickle.UnpicklingError) as err:
+                logging.debug('Exception raised: %s', err)
+                metrics_utils.handle_exc_and_send_exit_event(
+                    constants.ACCESS_CACHE_FAILURE)
+                os.remove(FIND_INDEXES[ref_type])
+        if _dict.get(target):
+            logging.debug('Found %s in %s', target, FIND_INDEXES[ref_type])
+            out = [path for path in _dict.get(target) if search_dir in path]
+    else:
+        prune_cond = _get_prune_cond_of_ignored_dirs()
+        if '.' in target:
+            target = target.replace('.', '/')
+        find_cmd = FIND_CMDS[ref_type].format(search_dir, prune_cond, target)
+        logging.debug('Executing %s find cmd: %s', ref_name, find_cmd)
+        out = subprocess.check_output(find_cmd, shell=True)
+        logging.debug('%s find cmd out: %s', ref_name, out)
+    logging.debug('%s find completed in %ss', ref_name, time.time() - start)
+    return extract_test_path(out, methods)
+
+
+def find_class_file(search_dir, class_name, is_native_test=False, methods=None):
+    """Find a path to a class file given a search dir and a class name.
+
+    Args:
+        search_dir: A string of the dirpath to search in.
+        class_name: A string of the class to search for.
+        is_native_test: A boolean of whether to search for a native test
+                        or not.
+        methods: A set of method names.
+
+    Return:
+        A list of the path to the java/cc file.
+    """
+    if is_native_test:
+        ref_type = FIND_REFERENCE_TYPE.CC_CLASS
+    elif '.' in class_name:
+        ref_type = FIND_REFERENCE_TYPE.QUALIFIED_CLASS
+    else:
+        ref_type = FIND_REFERENCE_TYPE.CLASS
+    return run_find_cmd(ref_type, search_dir, class_name, methods)
+
+
+def is_equal_or_sub_dir(sub_dir, parent_dir):
+    """Return True sub_dir is sub dir or equal to parent_dir.
+
+    Args:
+      sub_dir: A string of the sub directory path.
+      parent_dir: A string of the parent directory path.
+
+    Returns:
+        A boolean of whether both are dirs and sub_dir is sub of parent_dir
+        or is equal to parent_dir.
+    """
+    # avoid symlink issues with real path
+    parent_dir = os.path.realpath(parent_dir)
+    sub_dir = os.path.realpath(sub_dir)
+    if not os.path.isdir(sub_dir) or not os.path.isdir(parent_dir):
+        return False
+    return os.path.commonprefix([sub_dir, parent_dir]) == parent_dir
+
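+# For example, when both paths are existing dirs,
+# is_equal_or_sub_dir('/src/project/tests', '/src/project') and
+# is_equal_or_sub_dir('/src/project', '/src/project') both return True.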
+
+def find_parent_module_dir(root_dir, start_dir, module_info):
+    """From current dir search up file tree until root dir for module dir.
+
+    Args:
+        root_dir: A string of the dir that is the parent of the start dir.
+        start_dir: A string of the dir to start searching up from.
+        module_info: ModuleInfo object containing module information from the
+                     build system.
+
+    Returns:
+        A string of the module dir relative to root, None if no Module Dir
+        found. There may be multiple testable modules at this level.
+
+    Raises:
+        ValueError: Raised if start_dir is not a dir or not a subdir of
+                    root_dir.
+    """
+    if not is_equal_or_sub_dir(start_dir, root_dir):
+        raise ValueError('%s not in repo %s' % (start_dir, root_dir))
+    auto_gen_dir = None
+    current_dir = start_dir
+    while current_dir != root_dir:
+        # TODO (b/112904944) - migrate module_finder functions to here and
+        # reuse them.
+        rel_dir = os.path.relpath(current_dir, root_dir)
+        # Check if actual config file here
+        if os.path.isfile(os.path.join(current_dir, constants.MODULE_CONFIG)):
+            return rel_dir
+        # Check module_info if auto_gen config or robo (non-config) here
+        for mod in module_info.path_to_module_info.get(rel_dir, []):
+            if module_info.is_robolectric_module(mod):
+                return rel_dir
+            for test_config in mod.get(constants.MODULE_TEST_CONFIG, []):
+                if os.path.isfile(os.path.join(root_dir, test_config)):
+                    return rel_dir
+            if mod.get('auto_test_config'):
+                auto_gen_dir = rel_dir
+                # Don't return for auto_gen; keep checking for a real config,
+                # because in CTS it's common for a class in an apk to be part
+                # of a host-side test setup.
+        current_dir = os.path.dirname(current_dir)
+    return auto_gen_dir
+
+
+def get_targets_from_xml(xml_file, module_info):
+    """Retrieve build targets from the given xml.
+
+    Just a helper func on top of get_targets_from_xml_root.
+
+    Args:
+        xml_file: abs path to xml file.
+        module_info: ModuleInfo class used to verify targets are valid modules.
+
+    Returns:
+        A set of build targets based on the signals found in the xml file.
+    """
+    xml_root = ET.parse(xml_file).getroot()
+    return get_targets_from_xml_root(xml_root, module_info)
+
+
+def _get_apk_target(apk_target):
+    """Return the sanitized apk_target string from the xml.
+
+    The apk_target string can be of 2 forms:
+      - apk_target.apk
+      - apk_target.apk->/path/to/install/apk_target.apk
+
+    We want to return apk_target in both cases.
+
+    Args:
+        apk_target: String of target name to clean.
+
+    Returns:
+        String of apk_target to build.
+    """
+    apk = apk_target.split(_XML_PUSH_DELIM, 1)[0].strip()
+    return apk[:-len(_APK_SUFFIX)]
+
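+# For example, _get_apk_target('Foo.apk') and
+# _get_apk_target('Foo.apk->/data/app/Foo.apk') both return 'Foo'.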
+
+def _is_apk_target(name, value):
+    """Return True if XML option is an apk target.
+
+    We have some scenarios where an XML option can be an apk target:
+      - value is an apk file.
+      - name is a 'push' option where value holds the apk_file + other stuff.
+
+    Args:
+        name: String name of XML option.
+        value: String value of the XML option.
+
+    Returns:
+        True if it's an apk target we should build, False otherwise.
+    """
+    if _APK_RE.match(value):
+        return True
+    if name == 'push' and value.endswith(_APK_SUFFIX):
+        return True
+    return False
+
+
+def get_targets_from_xml_root(xml_root, module_info):
+    """Retrieve build targets from the given xml root.
+
+    We're going to pull the following bits of info:
+      - Parse any .apk files listed in the config file.
+      - Parse option value for "test-module-name" (for vts10 tests).
+      - Look for the perf script.
+
+    Args:
+        xml_root: ElementTree xml_root for us to look through.
+        module_info: ModuleInfo class used to verify targets are valid modules.
+
+    Returns:
+        A set of build targets based on the signals found in the xml file.
+    """
+    targets = set()
+    option_tags = xml_root.findall('.//option')
+    for tag in option_tags:
+        target_to_add = None
+        name = tag.attrib[_XML_NAME].strip()
+        value = tag.attrib[_XML_VALUE].strip()
+        if _is_apk_target(name, value):
+            target_to_add = _get_apk_target(value)
+        elif _PERF_SETUP_LABEL in value:
+            targets.add(_PERF_SETUP_LABEL)
+            continue
+
+        # Let's make sure we can actually build the target.
+        if target_to_add and module_info.is_module(target_to_add):
+            targets.add(target_to_add)
+        elif target_to_add:
+            logging.warning('Build target (%s) not present in module info, '
+                            'skipping build', target_to_add)
+
+    # TODO (b/70813166): Remove this lookup once all runtime dependencies
+    # can be listed as a build dependencies or are in the base test harness.
+    nodes_with_class = xml_root.findall(".//*[@class]")
+    for class_attr in nodes_with_class:
+        fqcn = class_attr.attrib['class'].strip()
+        if fqcn.startswith(_COMPATIBILITY_PACKAGE_PREFIX):
+            targets.add(_CTS_JAR)
+    logging.debug('Targets found in config file: %s', targets)
+    return targets
+
+
+def _get_vts_push_group_targets(push_file, rel_out_dir):
+    """Retrieve vts10 push group build targets.
+
+    A push group file is a file that list out test dependencies and other push
+    group files. Go through the push file and gather all the test deps we need.
+
+    Args:
+        push_file: Name of the push file in the VTS push dir.
+        rel_out_dir: Abs path to the out dir to help create vts10 build targets.
+
+    Returns:
+        Set of string which represent build targets.
+    """
+    targets = set()
+    full_push_file_path = os.path.join(_VTS_PUSH_DIR, push_file)
+    # pylint: disable=invalid-name
+    with open(full_push_file_path) as f:
+        for line in f:
+            target = line.strip()
+            # Skip empty lines.
+            if not target:
+                continue
+
+            # This is a push file, get the targets from it.
+            if target.endswith(_VTS_PUSH_SUFFIX):
+                targets |= _get_vts_push_group_targets(line.strip(),
+                                                       rel_out_dir)
+                continue
+            sanitized_target = target.split(_XML_PUSH_DELIM, 1)[0].strip()
+            targets.add(os.path.join(rel_out_dir, sanitized_target))
+    return targets
+
+
+def _specified_bitness(xml_root):
+    """Check if the xml file contains the option append-bitness.
+
+    Args:
+        xml_root: ElementTree xml_root to look through.
+
+    Returns:
+        True if xml specifies to append-bitness, False otherwise.
+    """
+    option_tags = xml_root.findall('.//option')
+    for tag in option_tags:
+        value = tag.attrib[_XML_VALUE].strip()
+        name = tag.attrib[_XML_NAME].strip()
+        if name == _VTS_BITNESS and value == _VTS_BITNESS_TRUE:
+            return True
+    return False
+
+
+def _get_vts_binary_src_target(value, rel_out_dir):
+    """Parse out the vts10 binary src target.
+
+    The value can be in the following pattern:
+      - {_32bit,_64bit,_IPC32_32bit}::DATA/target (DATA/target)
+      - DATA/target->/data/target (DATA/target)
+      - out/host/linux-x86/bin/VtsSecuritySelinuxPolicyHostTest (the string as
+        is)
+
+    Args:
+        value: String of the XML option value to parse.
+        rel_out_dir: String path of out dir to prepend to target when required.
+
+    Returns:
+        String of the target to build.
+    """
+    # We'll assume right off the bat we can use the value as is and modify it if
+    # necessary, e.g. out/host/linux-x86/bin...
+    target = value
+    # _32bit::DATA/target
+    match = _VTS_BINARY_SRC_DELIM_RE.match(value)
+    if match:
+        target = os.path.join(rel_out_dir, match.group('target'))
+    # DATA/target->/data/target
+    elif _XML_PUSH_DELIM in value:
+        target = value.split(_XML_PUSH_DELIM, 1)[0].strip()
+        target = os.path.join(rel_out_dir, target)
+    return target
+
+
+def get_plans_from_vts_xml(xml_file):
+    """Get configs which are included by xml_file.
+
+    We're looking for option(include) to get all dependency plan configs.
+
+    Args:
+        xml_file: Absolute path to xml file.
+
+    Returns:
+        A set of plan config paths on which xml_file depends.
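+        e.g. (illustrative) an <include name="vts-base" /> tag adds
+        xml_dir/vts-base.xml and, recursively, the plans it includes.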
+    """
+    if not os.path.exists(xml_file):
+        raise atest_error.XmlNotExistError('%s: The xml file does '
+                                           'not exist' % xml_file)
+    plans = set()
+    xml_root = ET.parse(xml_file).getroot()
+    plans.add(xml_file)
+    option_tags = xml_root.findall('.//include')
+    if not option_tags:
+        return plans
+    # Currently, all vts10 xmls live in the same dir:
+    # https://android.googlesource.com/platform/test/vts/+/master/tools/vts-tradefed/res/config/
+    # If the vts10 plans start using folders to organize the plans, the logic here
+    # should be changed.
+    xml_dir = os.path.dirname(xml_file)
+    for tag in option_tags:
+        name = tag.attrib[_XML_NAME].strip()
+        plans |= get_plans_from_vts_xml(os.path.join(xml_dir, name + ".xml"))
+    return plans
+
+
+def get_targets_from_vts_xml(xml_file, rel_out_dir, module_info):
+    """Parse a vts10 xml for test dependencies we need to build.
+
+    We have a separate vts10 parsing function because we make big assumptions
+    about the targets (the way they're formatted and what they represent), and
+    we also create these build targets in a special manner.
+    The 6 options we're looking for are:
+      - binary-test-source
+      - push-group
+      - push
+      - test-module-name
+      - test-file-name
+      - apk
+
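+    For example (illustrative values), an option like
+      <option name="push" value="DATA/bin/foo->/data/bin/foo" />
+    maps to the build target os.path.join(rel_out_dir, 'DATA/bin/foo'),
+    with 32/64-bit suffixes appended when append-bitness is set.
+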
+    Args:
+        xml_file: Abs path to the xml file.
+        rel_out_dir: Abs path to the out dir to help create vts10 build targets.
+        module_info: ModuleInfo class used to verify targets are valid modules.
+
+    Returns:
+        A set of build targets based on the signals found in the xml file.
+    """
+    xml_root = ET.parse(xml_file).getroot()
+    targets = set()
+    option_tags = xml_root.findall('.//option')
+    for tag in option_tags:
+        value = tag.attrib[_XML_VALUE].strip()
+        name = tag.attrib[_XML_NAME].strip()
+        if name in [_VTS_TEST_MODULE, _VTS_MODULE]:
+            if module_info.is_module(value):
+                targets.add(value)
+            else:
+                logging.warning('vts10 test module (%s) not present in module '
+                                'info, skipping build', value)
+        elif name == _VTS_BINARY_SRC:
+            targets.add(_get_vts_binary_src_target(value, rel_out_dir))
+        elif name == _VTS_PUSH_GROUP:
+            # Look up the push file and parse out build artifacts (as well as
+            # other push group files to parse).
+            targets |= _get_vts_push_group_targets(value, rel_out_dir)
+        elif name == _VTS_PUSH:
+            # Parse out the build artifact directly.
+            push_target = value.split(_XML_PUSH_DELIM, 1)[0].strip()
+            # If the config specified append-bitness, append the bits suffixes
+            # to the target.
+            if _specified_bitness(xml_root):
+                targets.add(os.path.join(rel_out_dir, push_target + _VTS_BITNESS_32))
+                targets.add(os.path.join(rel_out_dir, push_target + _VTS_BITNESS_64))
+            else:
+                targets.add(os.path.join(rel_out_dir, push_target))
+        elif name == _VTS_TEST_FILE:
+            # The _VTS_TEST_FILE values can be set in 2 possible ways:
+            #   1. test_file.apk
+            #   2. DATA/app/test_file/test_file.apk
+            # We'll assume that test_file.apk (#1) is in an expected path (but
+            # that is not true, see b/76158619) and create the full path for it
+            # and then append the _VTS_TEST_FILE value to targets to build.
+            target = os.path.join(rel_out_dir, value)
+            # If value is just an APK, specify the path that we expect it to be in
+            # e.g. out/host/linux-x86/vts10/android-vts10/testcases/DATA/app/test_file/test_file.apk
+            head, _ = os.path.split(value)
+            if not head:
+                target = os.path.join(rel_out_dir, _VTS_OUT_DATA_APP_PATH,
+                                      _get_apk_target(value), value)
+            targets.add(target)
+        elif name == _VTS_APK:
+            targets.add(os.path.join(rel_out_dir, value))
+    logging.debug('Targets found in config file: %s', targets)
+    return targets
+
+
+def get_dir_path_and_filename(path):
+    """Return tuple of dir and file name from given path.
+
+    Args:
+        path: String of path to break up.
+
+    Returns:
+        Tuple of (dir, file) paths.
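+        e.g. '/a/b/c.txt' -> ('/a/b', 'c.txt') when the path is an
+        existing file, and '/a/b' -> ('/a/b', None) otherwise.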
+    """
+    if os.path.isfile(path):
+        dir_path, file_path = os.path.split(path)
+    else:
+        dir_path, file_path = path, None
+    return dir_path, file_path
+
+
+def get_cc_filter(class_name, methods):
+    """Get the cc filter.
+
+    Args:
+        class_name: class name of the cc test.
+        methods: a list of method names.
+
+    Returns:
+        A formatted string for cc filter.
+        Ex: "class1.method1:class1.method2" or "class1.*"
+    """
+    if methods:
+        return ":".join(["%s.%s" % (class_name, x) for x in methods])
+    return "%s.*" % class_name
+
+
+def search_integration_dirs(name, int_dirs):
+    """Search integration dirs for name and return full path.
+
+    Args:
+        name: A string of the plan name to find.
+        int_dirs: A list of paths to search.
+
+    Returns:
+        A list of test paths; the user is asked to select one if multiple
+        tests are found. None if no matching test is found.
+    """
+    root_dir = os.environ.get(constants.ANDROID_BUILD_TOP)
+    test_files = []
+    for integration_dir in int_dirs:
+        abs_path = os.path.join(root_dir, integration_dir)
+        test_paths = run_find_cmd(FIND_REFERENCE_TYPE.INTEGRATION, abs_path,
+                                  name)
+        if test_paths:
+            test_files.extend(test_paths)
+    return extract_test_from_tests(test_files)
+
+
+def get_int_dir_from_path(path, int_dirs):
+    """Search integration dirs for the given path and return path of dir.
+
+    Args:
+        path: A string of the path to find.
+        int_dirs: A list of paths to search.
+
+    Returns:
+        A string of the test dir. None if no matching path is found.
+    """
+    root_dir = os.environ.get(constants.ANDROID_BUILD_TOP)
+    if not os.path.exists(path):
+        return None
+    dir_path, file_name = get_dir_path_and_filename(path)
+    int_dir = None
+    for possible_dir in int_dirs:
+        abs_int_dir = os.path.join(root_dir, possible_dir)
+        if is_equal_or_sub_dir(dir_path, abs_int_dir):
+            int_dir = abs_int_dir
+            break
+    if not file_name:
+        logging.warning('Found dir (%s) matching input (%s).'
+                        ' Referencing an entire Integration/Suite dir'
+                        ' is not supported. If you are trying to reference'
+                        ' a test by its path, please input the path to'
+                        ' the integration/suite config file itself.',
+                        int_dir, path)
+        return None
+    return int_dir
+
+
+def get_install_locations(installed_paths):
+    """Get install locations from installed paths.
+
+    Args:
+        installed_paths: List of installed_paths from module_info.
+
+    Returns:
+        Set of install locations from module_info installed_paths. e.g.
+        set(['host', 'device'])
+    """
+    install_locations = set()
+    for path in installed_paths:
+        if _HOST_PATH_RE.match(path):
+            install_locations.add(constants.DEVICELESS_TEST)
+        elif _DEVICE_PATH_RE.match(path):
+            install_locations.add(constants.DEVICE_TEST)
+    return install_locations
+
+
+def get_levenshtein_distance(test_name, module_name, dir_costs=constants.COST_TYPO):
+    """Return an edit distance between test_name and module_name.
+
+    Levenshtein Distance has 3 actions: delete, insert and replace.
+    dir_costs makes each action weigh differently.
+
+    Args:
+        test_name: A keyword from the users.
+        module_name: A testable module name.
+        dir_costs: A tuple of 3 integers that weigh Deletion, Insertion and
+                   Replacement respectively (hence 'dir').
+                   For guessing typos: (1, 1, 1) gives the best result.
+                   For searching keywords, (8, 1, 5) gives the best result.
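+                   e.g. with unit costs (1, 1, 1), the distance between
+                   'kitten' and 'sitting' is 3 (two replacements and one
+                   insertion).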
+
+    Returns:
+        An edit distance integer between test_name and module_name.
+    """
+    rows = len(test_name) + 1
+    cols = len(module_name) + 1
+    deletion, insertion, replacement = dir_costs
+
+    # Creating a Dynamic Programming Matrix and weighting accordingly.
+    dp_matrix = [[0 for _ in range(cols)] for _ in range(rows)]
+    # Weigh rows/deletion
+    for row in range(1, rows):
+        dp_matrix[row][0] = row * deletion
+    # Weigh cols/insertion
+    for col in range(1, cols):
+        dp_matrix[0][col] = col * insertion
+    # The core logic of LD
+    for col in range(1, cols):
+        for row in range(1, rows):
+            if test_name[row-1] == module_name[col-1]:
+                cost = 0
+            else:
+                cost = replacement
+            dp_matrix[row][col] = min(dp_matrix[row-1][col] + deletion,
+                                      dp_matrix[row][col-1] + insertion,
+                                      dp_matrix[row-1][col-1] + cost)
+
+    return dp_matrix[rows-1][cols-1]
+
+
+def is_test_from_kernel_xml(xml_file, test_name):
+    """Check if test defined in xml_file.
+
+    A kernel test can be defined like:
+    <option name="test-command-line" key="test_class_1" value="command 1" />
+    where key is the name of test class and method of the runner. This method
+    returns True if the test_name was defined in the given xml_file.
+
+    Args:
+        xml_file: Absolute path to xml file.
+        test_name: The test name to find.
+
+    Returns:
+        True if test_name in xml_file, False otherwise.
+    """
+    if not os.path.exists(xml_file):
+        raise atest_error.XmlNotExistError('%s: The xml file does '
+                                           'not exist' % xml_file)
+    xml_root = ET.parse(xml_file).getroot()
+    option_tags = xml_root.findall('.//option')
+    for option_tag in option_tags:
+        if option_tag.attrib['name'] == 'test-command-line':
+            if option_tag.attrib['key'] == test_name:
+                return True
+    return False
diff --git a/atest-py2/test_finders/test_finder_utils_unittest.py b/atest-py2/test_finders/test_finder_utils_unittest.py
new file mode 100755
index 0000000..db0496b
--- /dev/null
+++ b/atest-py2/test_finders/test_finder_utils_unittest.py
@@ -0,0 +1,585 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for test_finder_utils."""
+
+import os
+import unittest
+import mock
+
+# pylint: disable=import-error
+import atest_error
+import constants
+import module_info
+import unittest_constants as uc
+import unittest_utils
+from test_finders import test_finder_utils
+
+CLASS_DIR = 'foo/bar/jank/src/android/jank/cts/ui'
+OTHER_DIR = 'other/dir/'
+OTHER_CLASS_NAME = 'test.java'
+CLASS_NAME3 = 'test2'
+INT_DIR1 = os.path.join(uc.TEST_DATA_DIR, 'integration_dir_testing/int_dir1')
+INT_DIR2 = os.path.join(uc.TEST_DATA_DIR, 'integration_dir_testing/int_dir2')
+INT_FILE_NAME = 'int_dir_testing'
+FIND_TWO = uc.ROOT + 'other/dir/test.java\n' + uc.FIND_ONE
+FIND_THREE = '/a/b/c.java\n/d/e/f.java\n/g/h/i.java'
+FIND_THREE_LIST = ['/a/b/c.java', '/d/e/f.java', '/g/h/i.java']
+VTS_XML = 'VtsAndroidTest.xml'
+VTS_BITNESS_XML = 'VtsBitnessAndroidTest.xml'
+VTS_PUSH_DIR = 'vts_push_files'
+VTS_PLAN_DIR = 'vts_plan_files'
+VTS_XML_TARGETS = {'VtsTestName',
+                   'DATA/nativetest/vts_treble_vintf_test/vts_treble_vintf_test',
+                   'DATA/nativetest64/vts_treble_vintf_test/vts_treble_vintf_test',
+                   'DATA/lib/libhidl-gen-hash.so',
+                   'DATA/lib64/libhidl-gen-hash.so',
+                   'hal-hidl-hash/frameworks/hardware/interfaces/current.txt',
+                   'hal-hidl-hash/hardware/interfaces/current.txt',
+                   'hal-hidl-hash/system/hardware/interfaces/current.txt',
+                   'hal-hidl-hash/system/libhidl/transport/current.txt',
+                   'target_with_delim',
+                   'out/dir/target',
+                   'push_file1_target1',
+                   'push_file1_target2',
+                   'push_file2_target1',
+                   'push_file2_target2',
+                   'CtsDeviceInfo.apk',
+                   'DATA/app/DeviceHealthTests/DeviceHealthTests.apk',
+                   'DATA/app/sl4a/sl4a.apk'}
+VTS_PLAN_TARGETS = {os.path.join(uc.TEST_DATA_DIR, VTS_PLAN_DIR, 'vts-staging-default.xml'),
+                    os.path.join(uc.TEST_DATA_DIR, VTS_PLAN_DIR, 'vts-aa.xml'),
+                    os.path.join(uc.TEST_DATA_DIR, VTS_PLAN_DIR, 'vts-bb.xml'),
+                    os.path.join(uc.TEST_DATA_DIR, VTS_PLAN_DIR, 'vts-cc.xml'),
+                    os.path.join(uc.TEST_DATA_DIR, VTS_PLAN_DIR, 'vts-dd.xml')}
+XML_TARGETS = {'CtsJankDeviceTestCases', 'perf-setup.sh', 'cts-tradefed',
+               'GtsEmptyTestApp'}
+PATH_TO_MODULE_INFO_WITH_AUTOGEN = {
+    'foo/bar/jank' : [{'auto_test_config' : True}]}
+PATH_TO_MODULE_INFO_WITH_MULTI_AUTOGEN = {
+    'foo/bar/jank' : [{'auto_test_config' : True},
+                      {'auto_test_config' : True}]}
+PATH_TO_MODULE_INFO_WITH_MULTI_AUTOGEN_AND_ROBO = {
+    'foo/bar' : [{'auto_test_config' : True},
+                 {'auto_test_config' : True}],
+    'foo/bar/jank': [{constants.MODULE_CLASS : [constants.MODULE_CLASS_ROBOLECTRIC]}]}
+
+#pylint: disable=protected-access
+class TestFinderUtilsUnittests(unittest.TestCase):
+    """Unit tests for test_finder_utils.py"""
+
+    def test_split_methods(self):
+        """Test _split_methods method."""
+        # Class
+        unittest_utils.assert_strict_equal(
+            self,
+            test_finder_utils.split_methods('Class.Name'),
+            ('Class.Name', set()))
+        unittest_utils.assert_strict_equal(
+            self,
+            test_finder_utils.split_methods('Class.Name#Method'),
+            ('Class.Name', {'Method'}))
+        unittest_utils.assert_strict_equal(
+            self,
+            test_finder_utils.split_methods('Class.Name#Method,Method2'),
+            ('Class.Name', {'Method', 'Method2'}))
+        self.assertRaises(
+            atest_error.TooManyMethodsError, test_finder_utils.split_methods,
+            'class.name#Method,class.name.2#method')
+        # Path
+        unittest_utils.assert_strict_equal(
+            self,
+            test_finder_utils.split_methods('foo/bar/class.java'),
+            ('foo/bar/class.java', set()))
+        unittest_utils.assert_strict_equal(
+            self,
+            test_finder_utils.split_methods('foo/bar/class.java#Method'),
+            ('foo/bar/class.java', {'Method'}))
+
+    @mock.patch.object(test_finder_utils, 'has_method_in_file',
+                       return_value=False)
+    @mock.patch('__builtin__.raw_input', return_value='1')
+    def test_extract_test_path(self, _, has_method):
+        """Test extract_test_dir method."""
+        paths = [os.path.join(uc.ROOT, CLASS_DIR, uc.CLASS_NAME + '.java')]
+        unittest_utils.assert_strict_equal(
+            self, test_finder_utils.extract_test_path(uc.FIND_ONE), paths)
+        paths = [os.path.join(uc.ROOT, CLASS_DIR, uc.CLASS_NAME + '.java')]
+        unittest_utils.assert_strict_equal(
+            self, test_finder_utils.extract_test_path(FIND_TWO), paths)
+        has_method.return_value = True
+        paths = [os.path.join(uc.ROOT, CLASS_DIR, uc.CLASS_NAME + '.java')]
+        unittest_utils.assert_strict_equal(
+            self, test_finder_utils.extract_test_path(uc.FIND_ONE, 'method'), paths)
+
+    def test_has_method_in_file(self):
+        """Test has_method_in_file method."""
+        test_path = os.path.join(uc.TEST_DATA_DIR, 'class_file_path_testing',
+                                 'hello_world_test.cc')
+        self.assertTrue(test_finder_utils.has_method_in_file(
+            test_path, frozenset(['PrintHelloWorld'])))
+        self.assertFalse(test_finder_utils.has_method_in_file(
+            test_path, frozenset(['PrintHelloWorld1'])))
+        test_path = os.path.join(uc.TEST_DATA_DIR, 'class_file_path_testing',
+                                 'hello_world_test.java')
+        self.assertTrue(test_finder_utils.has_method_in_file(
+            test_path, frozenset(['testMethod1'])))
+        test_path = os.path.join(uc.TEST_DATA_DIR, 'class_file_path_testing',
+                                 'hello_world_test.java')
+        self.assertTrue(test_finder_utils.has_method_in_file(
+            test_path, frozenset(['testMethod', 'testMethod2'])))
+        test_path = os.path.join(uc.TEST_DATA_DIR, 'class_file_path_testing',
+                                 'hello_world_test.java')
+        self.assertFalse(test_finder_utils.has_method_in_file(
+            test_path, frozenset(['testMethod'])))
+
+    @mock.patch('__builtin__.raw_input', return_value='1')
+    def test_extract_test_from_tests(self, mock_input):
+        """Test method extract_test_from_tests method."""
+        tests = []
+        self.assertEqual(test_finder_utils.extract_test_from_tests(tests), None)
+        paths = [os.path.join(uc.ROOT, CLASS_DIR, uc.CLASS_NAME + '.java')]
+        unittest_utils.assert_strict_equal(
+            self, test_finder_utils.extract_test_path(uc.FIND_ONE), paths)
+        paths = [os.path.join(uc.ROOT, OTHER_DIR, OTHER_CLASS_NAME)]
+        mock_input.return_value = '0'
+        unittest_utils.assert_strict_equal(
+            self, test_finder_utils.extract_test_path(FIND_TWO), paths)
+        # Test inputting an out-of-range integer or a string
+        mock_input.return_value = '100'
+        self.assertEqual(test_finder_utils.extract_test_from_tests(
+            uc.CLASS_NAME), [])
+        mock_input.return_value = 'lOO'
+        self.assertEqual(test_finder_utils.extract_test_from_tests(
+            uc.CLASS_NAME), [])
+
+    @mock.patch('__builtin__.raw_input', return_value='1')
+    def test_extract_test_from_multiselect(self, mock_input):
+        """Test method extract_test_from_tests method."""
+        # selecting 'All'
+        paths = ['/a/b/c.java', '/d/e/f.java', '/g/h/i.java']
+        mock_input.return_value = '3'
+        unittest_utils.assert_strict_equal(
+            self, sorted(test_finder_utils.extract_test_from_tests(
+                FIND_THREE_LIST)), sorted(paths))
+        # multi-select
+        paths = ['/a/b/c.java', '/g/h/i.java']
+        mock_input.return_value = '0,2'
+        unittest_utils.assert_strict_equal(
+            self, sorted(test_finder_utils.extract_test_from_tests(
+                FIND_THREE_LIST)), sorted(paths))
+        # selecting a range
+        paths = ['/d/e/f.java', '/g/h/i.java']
+        mock_input.return_value = '1-2'
+        unittest_utils.assert_strict_equal(
+            self, test_finder_utils.extract_test_from_tests(FIND_THREE_LIST), paths)
+        # mixed formats
+        paths = ['/a/b/c.java', '/d/e/f.java', '/g/h/i.java']
+        mock_input.return_value = '0,1-2'
+        unittest_utils.assert_strict_equal(
+            self, sorted(test_finder_utils.extract_test_from_tests(
+                FIND_THREE_LIST)), sorted(paths))
+        # input unsupported formats, return empty
+        paths = []
+        mock_input.return_value = '?/#'
+        unittest_utils.assert_strict_equal(
+            self, test_finder_utils.extract_test_path(FIND_THREE), paths)
+
+    @mock.patch('os.path.isdir')
+    def test_is_equal_or_sub_dir(self, mock_isdir):
+        """Test is_equal_or_sub_dir method."""
+        self.assertTrue(test_finder_utils.is_equal_or_sub_dir('/a/b/c', '/'))
+        self.assertTrue(test_finder_utils.is_equal_or_sub_dir('/a/b/c', '/a'))
+        self.assertTrue(test_finder_utils.is_equal_or_sub_dir('/a/b/c',
+                                                              '/a/b/c'))
+        self.assertFalse(test_finder_utils.is_equal_or_sub_dir('/a/b',
+                                                               '/a/b/c'))
+        self.assertFalse(test_finder_utils.is_equal_or_sub_dir('/a', '/f'))
+        mock_isdir.return_value = False
+        self.assertFalse(test_finder_utils.is_equal_or_sub_dir('/a/b', '/a'))
+
+    @mock.patch('os.path.isdir', return_value=True)
+    @mock.patch('os.path.isfile',
+                side_effect=unittest_utils.isfile_side_effect)
+    def test_find_parent_module_dir(self, _isfile, _isdir):
+        """Test _find_parent_module_dir method."""
+        abs_class_dir = '/%s' % CLASS_DIR
+        mock_module_info = mock.Mock(spec=module_info.ModuleInfo)
+        mock_module_info.path_to_module_info = {}
+        unittest_utils.assert_strict_equal(
+            self,
+            test_finder_utils.find_parent_module_dir(uc.ROOT,
+                                                     abs_class_dir,
+                                                     mock_module_info),
+            uc.MODULE_DIR)
+
+    @mock.patch('os.path.isdir', return_value=True)
+    @mock.patch('os.path.isfile', return_value=False)
+    def test_find_parent_module_dir_with_autogen_config(self, _isfile, _isdir):
+        """Test _find_parent_module_dir method."""
+        abs_class_dir = '/%s' % CLASS_DIR
+        mock_module_info = mock.Mock(spec=module_info.ModuleInfo)
+        mock_module_info.path_to_module_info = PATH_TO_MODULE_INFO_WITH_AUTOGEN
+        unittest_utils.assert_strict_equal(
+            self,
+            test_finder_utils.find_parent_module_dir(uc.ROOT,
+                                                     abs_class_dir,
+                                                     mock_module_info),
+            uc.MODULE_DIR)
+
+    @mock.patch('os.path.isdir', return_value=True)
+    @mock.patch('os.path.isfile', side_effect=[False] * 5 + [True])
+    def test_find_parent_module_dir_with_autogen_subconfig(self, _isfile, _isdir):
+        """Test _find_parent_module_dir method.
+
+        This case is testing when the auto generated config is in a
+        sub-directory of a larger test that contains a test config in a parent
+        directory.
+        """
+        abs_class_dir = '/%s' % CLASS_DIR
+        mock_module_info = mock.Mock(spec=module_info.ModuleInfo)
+        mock_module_info.path_to_module_info = (
+            PATH_TO_MODULE_INFO_WITH_MULTI_AUTOGEN)
+        unittest_utils.assert_strict_equal(
+            self,
+            test_finder_utils.find_parent_module_dir(uc.ROOT,
+                                                     abs_class_dir,
+                                                     mock_module_info),
+            uc.MODULE_DIR)
+
+    @mock.patch('os.path.isdir', return_value=True)
+    @mock.patch('os.path.isfile', return_value=False)
+    def test_find_parent_module_dir_with_multi_autogens(self, _isfile, _isdir):
+        """Test _find_parent_module_dir method.
+
+        This case returns folders with multiple autogenerated configs defined.
+        """
+        abs_class_dir = '/%s' % CLASS_DIR
+        mock_module_info = mock.Mock(spec=module_info.ModuleInfo)
+        mock_module_info.path_to_module_info = (
+            PATH_TO_MODULE_INFO_WITH_MULTI_AUTOGEN)
+        unittest_utils.assert_strict_equal(
+            self,
+            test_finder_utils.find_parent_module_dir(uc.ROOT,
+                                                     abs_class_dir,
+                                                     mock_module_info),
+            uc.MODULE_DIR)
+
+    @mock.patch('os.path.isdir', return_value=True)
+    @mock.patch('os.path.isfile', return_value=False)
+    def test_find_parent_module_dir_with_robo_and_autogens(self, _isfile,
+                                                           _isdir):
+        """Test _find_parent_module_dir method.
+
+        This case returns folders with multiple autogenerated configs defined
+        with a Robo test above them, which is the expected result.
+        """
+        abs_class_dir = '/%s' % CLASS_DIR
+        mock_module_info = mock.Mock(spec=module_info.ModuleInfo)
+        mock_module_info.path_to_module_info = (
+            PATH_TO_MODULE_INFO_WITH_MULTI_AUTOGEN_AND_ROBO)
+        unittest_utils.assert_strict_equal(
+            self,
+            test_finder_utils.find_parent_module_dir(uc.ROOT,
+                                                     abs_class_dir,
+                                                     mock_module_info),
+            uc.MODULE_DIR)
+
+
+    @mock.patch('os.path.isdir', return_value=True)
+    @mock.patch('os.path.isfile', return_value=False)
+    def test_find_parent_module_dir_robo(self, _isfile, _isdir):
+        """Test _find_parent_module_dir method.
+
+        Make sure we behave as expected when we encounter a robo module path.
+        """
+        abs_class_dir = '/%s' % CLASS_DIR
+        mock_module_info = mock.Mock(spec=module_info.ModuleInfo)
+        mock_module_info.is_robolectric_module.return_value = True
+        rel_class_dir_path = os.path.relpath(abs_class_dir, uc.ROOT)
+        mock_module_info.path_to_module_info = {rel_class_dir_path: [{}]}
+        unittest_utils.assert_strict_equal(
+            self,
+            test_finder_utils.find_parent_module_dir(uc.ROOT,
+                                                     abs_class_dir,
+                                                     mock_module_info),
+            rel_class_dir_path)
+
+    def test_get_targets_from_xml(self):
+        """Test get_targets_from_xml method."""
+        # Mocking Etree is near impossible, so use a real file, but mocking
+        # ModuleInfo is still fine. Just have it return False when it finds a
+        # module that states it's not a module.
+        mock_module_info = mock.Mock(spec=module_info.ModuleInfo)
+        mock_module_info.is_module.side_effect = lambda module: (
+            not module == 'is_not_module')
+        xml_file = os.path.join(uc.TEST_DATA_DIR, constants.MODULE_CONFIG)
+        unittest_utils.assert_strict_equal(
+            self,
+            test_finder_utils.get_targets_from_xml(xml_file, mock_module_info),
+            XML_TARGETS)
+
+    @mock.patch.object(test_finder_utils, '_VTS_PUSH_DIR',
+                       os.path.join(uc.TEST_DATA_DIR, VTS_PUSH_DIR))
+    def test_get_targets_from_vts_xml(self):
+        """Test get_targets_from_xml method."""
+        # Mocking Etree is near impossible, so use a real file, but mock out
+        # ModuleInfo,
+        mock_module_info = mock.Mock(spec=module_info.ModuleInfo)
+        mock_module_info.is_module.return_value = True
+        xml_file = os.path.join(uc.TEST_DATA_DIR, VTS_XML)
+        unittest_utils.assert_strict_equal(
+            self,
+            test_finder_utils.get_targets_from_vts_xml(xml_file, '',
+                                                       mock_module_info),
+            VTS_XML_TARGETS)
+
+    @mock.patch('subprocess.check_output')
+    def test_get_ignored_dirs(self, _mock_check_output):
+        """Test _get_ignored_dirs method."""
+
+        # Clean cached value for test.
+        test_finder_utils._get_ignored_dirs.cached_ignore_dirs = []
+
+        build_top = '/a/b'
+        _mock_check_output.return_value = ('/a/b/c/.find-ignore\n'
+                                           '/a/b/out/.out-dir\n'
+                                           '/a/b/d/.out-dir\n\n')
+        # Case 1: $OUT_DIR = ''. No customized out dir.
+        os_environ_mock = {constants.ANDROID_BUILD_TOP: build_top,
+                           constants.ANDROID_OUT_DIR: ''}
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            correct_ignore_dirs = ['/a/b/c', '/a/b/out', '/a/b/d']
+            ignore_dirs = test_finder_utils._get_ignored_dirs()
+            self.assertEqual(ignore_dirs, correct_ignore_dirs)
+        # Case 2: $OUT_DIR = 'out2'
+        test_finder_utils._get_ignored_dirs.cached_ignore_dirs = []
+        os_environ_mock = {constants.ANDROID_BUILD_TOP: build_top,
+                           constants.ANDROID_OUT_DIR: 'out2'}
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            correct_ignore_dirs = ['/a/b/c', '/a/b/out', '/a/b/d', '/a/b/out2']
+            ignore_dirs = test_finder_utils._get_ignored_dirs()
+            self.assertEqual(ignore_dirs, correct_ignore_dirs)
+        # Case 3: The $OUT_DIR is abs dir but not under $ANDROID_BUILD_TOP
+        test_finder_utils._get_ignored_dirs.cached_ignore_dirs = []
+        os_environ_mock = {constants.ANDROID_BUILD_TOP: build_top,
+                           constants.ANDROID_OUT_DIR: '/x/y/e/g'}
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            correct_ignore_dirs = ['/a/b/c', '/a/b/out', '/a/b/d']
+            ignore_dirs = test_finder_utils._get_ignored_dirs()
+            self.assertEqual(ignore_dirs, correct_ignore_dirs)
+        # Case 4: The $OUT_DIR is abs dir and under $ANDROID_BUILD_TOP
+        test_finder_utils._get_ignored_dirs.cached_ignore_dirs = []
+        os_environ_mock = {constants.ANDROID_BUILD_TOP: build_top,
+                           constants.ANDROID_OUT_DIR: '/a/b/e/g'}
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            correct_ignore_dirs = ['/a/b/c', '/a/b/out', '/a/b/d', '/a/b/e/g']
+            ignore_dirs = test_finder_utils._get_ignored_dirs()
+            self.assertEqual(ignore_dirs, correct_ignore_dirs)
+        # Case 5: There is a file of '.out-dir' under $OUT_DIR.
+        test_finder_utils._get_ignored_dirs.cached_ignore_dirs = []
+        os_environ_mock = {constants.ANDROID_BUILD_TOP: build_top,
+                           constants.ANDROID_OUT_DIR: 'out'}
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            correct_ignore_dirs = ['/a/b/c', '/a/b/out', '/a/b/d']
+            ignore_dirs = test_finder_utils._get_ignored_dirs()
+            self.assertEqual(ignore_dirs, correct_ignore_dirs)
+        # Case 6: Testing cache. All of the changes are useless.
+        _mock_check_output.return_value = ('/a/b/X/.find-ignore\n'
+                                           '/a/b/YY/.out-dir\n'
+                                           '/a/b/d/.out-dir\n\n')
+        os_environ_mock = {constants.ANDROID_BUILD_TOP: build_top,
+                           constants.ANDROID_OUT_DIR: 'new'}
+        with mock.patch.dict('os.environ', os_environ_mock, clear=True):
+            cached_answer = ['/a/b/c', '/a/b/out', '/a/b/d']
+            none_cached_answer = ['/a/b/X', '/a/b/YY', '/a/b/d', 'a/b/new']
+            ignore_dirs = test_finder_utils._get_ignored_dirs()
+            self.assertEqual(ignore_dirs, cached_answer)
+            self.assertNotEqual(ignore_dirs, none_cached_answer)
+
+    @mock.patch('__builtin__.raw_input', return_value='0')
+    def test_search_integration_dirs(self, mock_input):
+        """Test search_integration_dirs."""
+        mock_input.return_value = '0'
+        paths = [os.path.join(uc.ROOT, INT_DIR1, INT_FILE_NAME+'.xml')]
+        int_dirs = [INT_DIR1]
+        test_result = test_finder_utils.search_integration_dirs(INT_FILE_NAME, int_dirs)
+        unittest_utils.assert_strict_equal(self, test_result, paths)
+        int_dirs = [INT_DIR1, INT_DIR2]
+        test_result = test_finder_utils.search_integration_dirs(INT_FILE_NAME, int_dirs)
+        unittest_utils.assert_strict_equal(self, test_result, paths)
+
+    @mock.patch('os.path.isfile', return_value=False)
+    @mock.patch('os.environ.get', return_value=uc.TEST_CONFIG_DATA_DIR)
+    @mock.patch('__builtin__.raw_input', return_value='0')
+    # pylint: disable=too-many-statements
+    def test_find_class_file(self, mock_input, _mock_env, _mock_isfile):
+        """Test find_class_file."""
+        # 1. Java class(find).
+        java_tmp_test_result = []
+        mock_input.return_value = '0'
+        java_class = os.path.join(uc.FIND_PATH, uc.FIND_PATH_TESTCASE_JAVA + '.java')
+        java_tmp_test_result.extend(test_finder_utils.find_class_file(uc.FIND_PATH,
+                                                                      uc.FIND_PATH_TESTCASE_JAVA))
+        mock_input.return_value = '1'
+        kt_class = os.path.join(uc.FIND_PATH, uc.FIND_PATH_TESTCASE_JAVA + '.kt')
+        java_tmp_test_result.extend(test_finder_utils.find_class_file(uc.FIND_PATH,
+                                                                      uc.FIND_PATH_TESTCASE_JAVA))
+        self.assertTrue(java_class in java_tmp_test_result)
+        self.assertTrue(kt_class in java_tmp_test_result)
+
+        # 2. Java class(read index).
+        del java_tmp_test_result[:]
+        mock_input.return_value = '0'
+        _mock_isfile.return_value = True
+        test_finder_utils.FIND_INDEXES['CLASS'] = uc.CLASS_INDEX
+        java_class = os.path.join(uc.FIND_PATH, uc.FIND_PATH_TESTCASE_JAVA + '.java')
+        java_tmp_test_result.extend(test_finder_utils.find_class_file(uc.FIND_PATH,
+                                                                      uc.FIND_PATH_TESTCASE_JAVA))
+        mock_input.return_value = '1'
+        kt_class = os.path.join(uc.FIND_PATH, uc.FIND_PATH_TESTCASE_JAVA + '.kt')
+        java_tmp_test_result.extend(test_finder_utils.find_class_file(uc.FIND_PATH,
+                                                                      uc.FIND_PATH_TESTCASE_JAVA))
+        self.assertTrue(java_class in java_tmp_test_result)
+        self.assertTrue(kt_class in java_tmp_test_result)
+
+        # 3. Qualified Java class(find).
+        del java_tmp_test_result[:]
+        mock_input.return_value = '0'
+        _mock_isfile.return_value = False
+        java_qualified_class = '{0}.{1}'.format(uc.FIND_PATH_FOLDER, uc.FIND_PATH_TESTCASE_JAVA)
+        java_tmp_test_result.extend(test_finder_utils.find_class_file(uc.FIND_PATH,
+                                                                      java_qualified_class))
+        mock_input.return_value = '1'
+        java_tmp_test_result.extend(test_finder_utils.find_class_file(uc.FIND_PATH,
+                                                                      java_qualified_class))
+        self.assertTrue(java_class in java_tmp_test_result)
+        self.assertTrue(kt_class in java_tmp_test_result)
+
+        # 4. Qualified Java class(read index).
+        del java_tmp_test_result[:]
+        mock_input.return_value = '0'
+        _mock_isfile.return_value = True
+        test_finder_utils.FIND_INDEXES['QUALIFIED_CLASS'] = uc.QCLASS_INDEX
+        java_qualified_class = '{0}.{1}'.format(uc.FIND_PATH_FOLDER, uc.FIND_PATH_TESTCASE_JAVA)
+        java_tmp_test_result.extend(test_finder_utils.find_class_file(uc.FIND_PATH,
+                                                                      java_qualified_class))
+        mock_input.return_value = '1'
+        java_tmp_test_result.extend(test_finder_utils.find_class_file(uc.FIND_PATH,
+                                                                      java_qualified_class))
+        self.assertTrue(java_class in java_tmp_test_result)
+        self.assertTrue(kt_class in java_tmp_test_result)
+
+        # 5. CC class(find).
+        cc_tmp_test_result = []
+        _mock_isfile.return_value = False
+        mock_input.return_value = '0'
+        cpp_class = os.path.join(uc.FIND_PATH, uc.FIND_PATH_FILENAME_CC + '.cpp')
+        cc_tmp_test_result.extend(test_finder_utils.find_class_file(uc.FIND_PATH,
+                                                                    uc.FIND_PATH_TESTCASE_CC,
+                                                                    True))
+        mock_input.return_value = '1'
+        cc_class = os.path.join(uc.FIND_PATH, uc.FIND_PATH_FILENAME_CC + '.cc')
+        cc_tmp_test_result.extend(test_finder_utils.find_class_file(uc.FIND_PATH,
+                                                                    uc.FIND_PATH_TESTCASE_CC,
+                                                                    True))
+        self.assertTrue(cpp_class in cc_tmp_test_result)
+        self.assertTrue(cc_class in cc_tmp_test_result)
+
+        # 6. CC class(read index).
+        del cc_tmp_test_result[:]
+        mock_input.return_value = '0'
+        _mock_isfile.return_value = True
+        test_finder_utils.FIND_INDEXES['CC_CLASS'] = uc.CC_CLASS_INDEX
+        cpp_class = os.path.join(uc.FIND_PATH, uc.FIND_PATH_FILENAME_CC + '.cpp')
+        cc_tmp_test_result.extend(test_finder_utils.find_class_file(uc.FIND_PATH,
+                                                                    uc.FIND_PATH_TESTCASE_CC,
+                                                                    True))
+        mock_input.return_value = '1'
+        cc_class = os.path.join(uc.FIND_PATH, uc.FIND_PATH_FILENAME_CC + '.cc')
+        cc_tmp_test_result.extend(test_finder_utils.find_class_file(uc.FIND_PATH,
+                                                                    uc.FIND_PATH_TESTCASE_CC,
+                                                                    True))
+        self.assertTrue(cpp_class in cc_tmp_test_result)
+        self.assertTrue(cc_class in cc_tmp_test_result)
+
+    @mock.patch('__builtin__.raw_input', return_value='0')
+    @mock.patch.object(test_finder_utils, 'get_dir_path_and_filename')
+    @mock.patch('os.path.exists', return_value=True)
+    def test_get_int_dir_from_path(self, _exists, _find, mock_input):
+        """Test get_int_dir_from_path."""
+        mock_input.return_value = '0'
+        int_dirs = [INT_DIR1]
+        path = os.path.join(uc.ROOT, INT_DIR1, INT_FILE_NAME+'.xml')
+        _find.return_value = (INT_DIR1, INT_FILE_NAME+'.xml')
+        test_result = test_finder_utils.get_int_dir_from_path(path, int_dirs)
+        unittest_utils.assert_strict_equal(self, test_result, INT_DIR1)
+        _find.return_value = (INT_DIR1, None)
+        test_result = test_finder_utils.get_int_dir_from_path(path, int_dirs)
+        unittest_utils.assert_strict_equal(self, test_result, None)
+        int_dirs = [INT_DIR1, INT_DIR2]
+        _find.return_value = (INT_DIR1, INT_FILE_NAME+'.xml')
+        test_result = test_finder_utils.get_int_dir_from_path(path, int_dirs)
+        unittest_utils.assert_strict_equal(self, test_result, INT_DIR1)
+
+    def test_get_install_locations(self):
+        """Test get_install_locations."""
+        host_installed_paths = ["out/host/a/b"]
+        host_expect = set(['host'])
+        self.assertEqual(test_finder_utils.get_install_locations(host_installed_paths),
+                         host_expect)
+        device_installed_paths = ["out/target/c/d"]
+        device_expect = set(['device'])
+        self.assertEqual(test_finder_utils.get_install_locations(device_installed_paths),
+                         device_expect)
+        both_installed_paths = ["out/host/e", "out/target/f"]
+        both_expect = set(['host', 'device'])
+        self.assertEqual(test_finder_utils.get_install_locations(both_installed_paths),
+                         both_expect)
+        no_installed_paths = []
+        no_expect = set()
+        self.assertEqual(test_finder_utils.get_install_locations(no_installed_paths),
+                         no_expect)
+
+    def test_get_plans_from_vts_xml(self):
+        """Test get_plans_from_vts_xml method."""
+        xml_path = os.path.join(uc.TEST_DATA_DIR, VTS_PLAN_DIR, 'vts-staging-default.xml')
+        self.assertEqual(
+            test_finder_utils.get_plans_from_vts_xml(xml_path),
+            VTS_PLAN_TARGETS)
+        xml_path = os.path.join(uc.TEST_DATA_DIR, VTS_PLAN_DIR, 'NotExist.xml')
+        self.assertRaises(atest_error.XmlNotExistError,
+                          test_finder_utils.get_plans_from_vts_xml, xml_path)
+
+    def test_get_levenshtein_distance(self):
+        """Test get_levenshetine distance module correctly returns distance."""
+        self.assertEqual(test_finder_utils.get_levenshtein_distance(uc.MOD1, uc.FUZZY_MOD1), 1)
+        self.assertEqual(test_finder_utils.get_levenshtein_distance(uc.MOD2, uc.FUZZY_MOD2,
+                                                                    dir_costs=(1, 2, 3)), 3)
+        self.assertEqual(test_finder_utils.get_levenshtein_distance(uc.MOD3, uc.FUZZY_MOD3,
+                                                                    dir_costs=(1, 2, 1)), 8)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/test_finders/test_info.py b/atest-py2/test_finders/test_info.py
new file mode 100644
index 0000000..707f49a
--- /dev/null
+++ b/atest-py2/test_finders/test_info.py
@@ -0,0 +1,120 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+TestInfo class.
+"""
+
+from collections import namedtuple
+
+# pylint: disable=import-error
+import constants
+
+
+TestFilterBase = namedtuple('TestFilter', ['class_name', 'methods'])
+
+
+class TestInfo(object):
+    """Information needed to identify and run a test."""
+
+    # pylint: disable=too-many-arguments
+    def __init__(self, test_name, test_runner, build_targets, data=None,
+                 suite=None, module_class=None, install_locations=None,
+                 test_finder='', compatibility_suites=None):
+        """Init for TestInfo.
+
+        Args:
+            test_name: String of test name.
+            test_runner: String of test runner.
+            build_targets: Set of build targets.
+            data: Dict of data for test runners to use.
+            suite: Suite for test runners to use.
+            module_class: A list of test classes. It's a snippet of class
+                        in module_info. e.g. ["EXECUTABLES",  "NATIVE_TESTS"]
+            install_locations: Set of install locations.
+                        e.g. set(['host', 'device'])
+            test_finder: String of test finder.
+            compatibility_suites: A list of compatibility_suites. It's a
+                        snippet of compatibility_suites in module_info. e.g.
+                        ["device-tests",  "vts10"]
+        """
+        self.test_name = test_name
+        self.test_runner = test_runner
+        self.build_targets = build_targets
+        self.data = data if data else {}
+        self.suite = suite
+        self.module_class = module_class if module_class else []
+        self.install_locations = (install_locations if install_locations
+                                  else set())
+        # True if the TestInfo is built from a test configured in TEST_MAPPING.
+        self.from_test_mapping = False
+        # True if the test should run on host and require no device. The
+        # attribute is only set through TEST_MAPPING file.
+        self.host = False
+        self.test_finder = test_finder
+        self.compatibility_suites = (map(str, compatibility_suites)
+                                     if compatibility_suites else [])
+
+    def __str__(self):
+        host_info = (' - runs on host without device required.' if self.host
+                     else '')
+        return ('test_name: %s - test_runner:%s - build_targets:%s - data:%s - '
+                'suite:%s - module_class: %s - install_locations:%s%s - '
+                'test_finder: %s - compatibility_suites:%s' % (
+                    self.test_name, self.test_runner, self.build_targets,
+                    self.data, self.suite, self.module_class,
+                    self.install_locations, host_info, self.test_finder,
+                    self.compatibility_suites))
+
+    def get_supported_exec_mode(self):
+        """Get the supported execution mode of the test.
+
+        Determine which execution mode the test supports, using this strategy:
+        Robolectric/JAVA_LIBRARIES --> 'both'
+        Not native tests or installed only in out/target --> 'device'
+        Installed only in out/host --> 'both'
+        Installed under host and target --> 'both'
+
+        Return:
+            String of execution mode.
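+            e.g. (illustrative) a NATIVE_TESTS module installed under both
+            out/host and out/target supports 'both'; one installed only
+            under out/target supports 'device'.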
+        """
+        install_path = self.install_locations
+        if not self.module_class:
+            return constants.DEVICE_TEST
+        # Let Robolectric test support both.
+        if constants.MODULE_CLASS_ROBOLECTRIC in self.module_class:
+            return constants.BOTH_TEST
+        # Let JAVA_LIBRARIES support both.
+        if constants.MODULE_CLASS_JAVA_LIBRARIES in self.module_class:
+            return constants.BOTH_TEST
+        if not install_path:
+            return constants.DEVICE_TEST
+        # Non-Native test runs on device-only.
+        if constants.MODULE_CLASS_NATIVE_TESTS not in self.module_class:
+            return constants.DEVICE_TEST
+        # A native test installed only on the device runs as a device test;
+        # if it is also (or only) installed on the host, it supports both.
+        if len(install_path) == 1 and constants.DEVICE_TEST in install_path:
+            return constants.DEVICE_TEST
+        return constants.BOTH_TEST
+
+
+class TestFilter(TestFilterBase):
+    """Information needed to filter a test in Tradefed"""
+
+    def to_set_of_tf_strings(self):
+        """Return TestFilter as set of strings in TradeFed filter format."""
+        if self.methods:
+            return {'%s#%s' % (self.class_name, m) for m in self.methods}
+        return {self.class_name}
diff --git a/atest-py2/test_finders/tf_integration_finder.py b/atest-py2/test_finders/tf_integration_finder.py
new file mode 100644
index 0000000..ed0a539
--- /dev/null
+++ b/atest-py2/test_finders/tf_integration_finder.py
@@ -0,0 +1,269 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Integration Finder class.
+"""
+
+import copy
+import logging
+import os
+import re
+import xml.etree.ElementTree as ElementTree
+
+# pylint: disable=import-error
+import atest_error
+import constants
+from test_finders import test_info
+from test_finders import test_finder_base
+from test_finders import test_finder_utils
+from test_runners import atest_tf_test_runner
+
+# Find integration name based on file path of integration config xml file.
+# Group matches "foo/bar" given "blah/res/config/blah/res/config/foo/bar.xml
+_INT_NAME_RE = re.compile(r'^.*\/res\/config\/(?P<int_name>.*).xml$')
+_TF_TARGETS = frozenset(['tradefed', 'tradefed-contrib'])
+_GTF_TARGETS = frozenset(['google-tradefed', 'google-tradefed-contrib'])
+_CONTRIB_TARGETS = frozenset(['google-tradefed-contrib'])
+_TF_RES_DIR = '../res/config'
+
+
+class TFIntegrationFinder(test_finder_base.TestFinderBase):
+    """Integration Finder class."""
+    NAME = 'INTEGRATION'
+    _TEST_RUNNER = atest_tf_test_runner.AtestTradefedTestRunner.NAME
+
+
+    def __init__(self, module_info=None):
+        super(TFIntegrationFinder, self).__init__()
+        self.root_dir = os.environ.get(constants.ANDROID_BUILD_TOP)
+        self.module_info = module_info
+        # TODO: Break this up into AOSP/google_tf integration finders.
+        self.tf_dirs, self.gtf_dirs = self._get_integration_dirs()
+        self.integration_dirs = self.tf_dirs + self.gtf_dirs
+
+    def _get_mod_paths(self, module_name):
+        """Return the paths of the given module name."""
+        if self.module_info:
+            # Since aosp/801774 merged, the paths of test configs have
+            # changed to ../res/config.
+            if module_name in _CONTRIB_TARGETS:
+                mod_paths = self.module_info.get_paths(module_name)
+                return [os.path.join(path, _TF_RES_DIR) for path in mod_paths]
+            return self.module_info.get_paths(module_name)
+        return []
+
+    def _get_integration_dirs(self):
+        """Get integration dirs from MODULE_INFO based on targets.
+
+        Returns:
+            A tuple of lists of strings of integration dir rel to repo root.
+        """
+        tf_dirs = filter(None, [d for x in _TF_TARGETS for d in self._get_mod_paths(x)])
+        gtf_dirs = filter(None, [d for x in _GTF_TARGETS for d in self._get_mod_paths(x)])
+        return tf_dirs, gtf_dirs
+
+    def _get_build_targets(self, rel_config):
+        config_file = os.path.join(self.root_dir, rel_config)
+        xml_root = self._load_xml_file(config_file)
+        targets = test_finder_utils.get_targets_from_xml_root(xml_root,
+                                                              self.module_info)
+        if self.gtf_dirs:
+            targets.add(constants.GTF_TARGET)
+        return frozenset(targets)
+
+    def _load_xml_file(self, path):
+        """Load an xml file with option to expand <include> tags
+
+        Args:
+            path: A string of path to xml file.
+
+        Returns:
+            An xml.etree.ElementTree.Element instance of the root of the tree.
+        """
+        tree = ElementTree.parse(path)
+        root = tree.getroot()
+        self._load_include_tags(root)
+        return root
+
+    #pylint: disable=invalid-name
+    def _load_include_tags(self, root):
+        """Recursively expand in-place the <include> tags in a given xml tree.
+
+        Python xml libraries don't support our type of <include> tags. The
+        logic below is a modified version of the built-in ElementInclude
+        logic found here:
+        https://github.com/python/cpython/blob/2.7/Lib/xml/etree/ElementInclude.py
+
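+        For example (illustrative), an element <include name="foo/bar" />
+        is replaced in place by the root element parsed from the first
+        foo/bar.xml found under the integration dirs.
+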
+        Args:
+            root: The root xml.etree.ElementTree.Element.
+
+        The tree rooted at root is modified in place; nothing is returned.
+        """
+        i = 0
+        while i < len(root):
+            elem = root[i]
+            if elem.tag == 'include':
+                # expand included xml file
+                integration_name = elem.get('name')
+                if not integration_name:
+                    logging.warning(
+                        'skipping <include> tag with no "name" value')
+                    i = i + 1
+                    continue
+                full_paths = self._search_integration_dirs(integration_name)
+                node = None
+                if full_paths:
+                    node = self._load_xml_file(full_paths[0])
+                if node is None:
+                    raise atest_error.FatalIncludeError("can't load %r" %
+                                                        integration_name)
+                node = copy.copy(node)
+                if elem.tail:
+                    node.tail = (node.tail or "") + elem.tail
+                root[i] = node
+            i = i + 1
+
+    def _search_integration_dirs(self, name):
+        """Search integration dirs for name and return full path.
+        Args:
+            name: A string of integration name as seen in tf's list configs.
+
+        Returns:
+            A list of test path.
+        """
+        test_files = []
+        for integration_dir in self.integration_dirs:
+            abs_path = os.path.join(self.root_dir, integration_dir)
+            found_test_files = test_finder_utils.run_find_cmd(
+                test_finder_utils.FIND_REFERENCE_TYPE.INTEGRATION,
+                abs_path, name)
+            if found_test_files:
+                test_files.extend(found_test_files)
+        return test_files
+
+    def find_test_by_integration_name(self, name):
+        """Find the test info matching the given integration name.
+
+        Args:
+            name: A string of integration name as seen in tf's list configs.
+
+        Returns:
+            A list of populated TestInfo namedtuples if found, else None.
+        """
+        class_name = None
+        if ':' in name:
+            name, class_name = name.split(':')
+        test_files = self._search_integration_dirs(name)
+        if not test_files:
+            return None
+        # Don't use names that simply match the path,
+        # must be the actual name used by TF to run the test.
+        t_infos = []
+        for test_file in test_files:
+            t_info = self._get_test_info(name, test_file, class_name)
+            if t_info:
+                t_infos.append(t_info)
+        return t_infos
+
+    def _get_test_info(self, name, test_file, class_name):
+        """Find the test info matching the given test_file and class_name.
+
+        Args:
+            name: A string of integration name as seen in tf's list configs.
+            test_file: A string of test_file full path.
+            class_name: A string of user's input.
+
+        Returns:
+            A populated TestInfo namedtuple if test found, else None.
+        """
+        match = _INT_NAME_RE.match(test_file)
+        if not match:
+            logging.error('Integration test outside config dir: %s',
+                          test_file)
+            return None
+        int_name = match.group('int_name')
+        if int_name != name:
+            logging.warning('Input (%s) not valid integration name, '
+                            'did you mean: %s?', name, int_name)
+            return None
+        rel_config = os.path.relpath(test_file, self.root_dir)
+        filters = frozenset()
+        if class_name:
+            class_name, methods = test_finder_utils.split_methods(class_name)
+            test_filters = []
+            if '.' in class_name:
+                test_filters.append(test_info.TestFilter(class_name, methods))
+            else:
+                logging.warn('Looking up fully qualified class name for: %s. '
+                             'Improve speed by using fully qualified names.',
+                             class_name)
+                paths = test_finder_utils.find_class_file(self.root_dir,
+                                                          class_name)
+                if not paths:
+                    return None
+                for path in paths:
+                    class_name = (
+                        test_finder_utils.get_fully_qualified_class_name(
+                            path))
+                    test_filters.append(test_info.TestFilter(
+                        class_name, methods))
+            filters = frozenset(test_filters)
+        return test_info.TestInfo(
+            test_name=name,
+            test_runner=self._TEST_RUNNER,
+            build_targets=self._get_build_targets(rel_config),
+            data={constants.TI_REL_CONFIG: rel_config,
+                  constants.TI_FILTER: filters})
+
+    def find_int_test_by_path(self, path):
+        """Find the first test info matching the given path.
+
+        Strategy:
+            path_to_integration_file --> Resolve to INTEGRATION
+            # If the path is a dir, we return nothing.
+            path_to_dir_with_integration_files --> Return None
+
+        Args:
+            path: A string of the test's path.
+
+        Returns:
+            A list of populated TestInfo namedtuples if the test is found,
+            else None.
+        """
+        path, _ = test_finder_utils.split_methods(path)
+
+        # Make sure we're looking for a config.
+        if not path.endswith('.xml'):
+            return None
+
+        # TODO: See if this can be generalized and shared with methods above
+        # create absolute path from cwd and remove symbolic links
+        path = os.path.realpath(path)
+        if not os.path.exists(path):
+            return None
+        int_dir = test_finder_utils.get_int_dir_from_path(path,
+                                                          self.integration_dirs)
+        if int_dir:
+            rel_config = os.path.relpath(path, self.root_dir)
+            match = _INT_NAME_RE.match(rel_config)
+            if not match:
+                logging.error('Integration test outside config dir: %s',
+                              rel_config)
+                return None
+            int_name = match.group('int_name')
+            return [test_info.TestInfo(
+                test_name=int_name,
+                test_runner=self._TEST_RUNNER,
+                build_targets=self._get_build_targets(rel_config),
+                data={constants.TI_REL_CONFIG: rel_config,
+                      constants.TI_FILTER: frozenset()})]
+        return None
diff --git a/atest-py2/test_finders/tf_integration_finder_unittest.py b/atest-py2/test_finders/tf_integration_finder_unittest.py
new file mode 100755
index 0000000..170da0c
--- /dev/null
+++ b/atest-py2/test_finders/tf_integration_finder_unittest.py
@@ -0,0 +1,139 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for tf_integration_finder."""
+
+import os
+import unittest
+import mock
+
+# pylint: disable=import-error
+import constants
+import unittest_constants as uc
+import unittest_utils
+from test_finders import test_finder_utils
+from test_finders import test_info
+from test_finders import tf_integration_finder
+from test_runners import atest_tf_test_runner as atf_tr
+
+
+INT_NAME_CLASS = uc.INT_NAME + ':' + uc.FULL_CLASS_NAME
+INT_NAME_METHOD = INT_NAME_CLASS + '#' + uc.METHOD_NAME
+GTF_INT_CONFIG = os.path.join(uc.GTF_INT_DIR, uc.GTF_INT_NAME + '.xml')
+INT_CLASS_INFO = test_info.TestInfo(
+    uc.INT_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    set(),
+    data={constants.TI_FILTER: frozenset([uc.CLASS_FILTER]),
+          constants.TI_REL_CONFIG: uc.INT_CONFIG})
+INT_METHOD_INFO = test_info.TestInfo(
+    uc.INT_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    set(),
+    data={constants.TI_FILTER: frozenset([uc.METHOD_FILTER]),
+          constants.TI_REL_CONFIG: uc.INT_CONFIG})
+
+
+class TFIntegrationFinderUnittests(unittest.TestCase):
+    """Unit tests for tf_integration_finder.py"""
+
+    def setUp(self):
+        """Set up for testing."""
+        self.tf_finder = tf_integration_finder.TFIntegrationFinder()
+        self.tf_finder.integration_dirs = [os.path.join(uc.ROOT, uc.INT_DIR),
+                                           os.path.join(uc.ROOT, uc.GTF_INT_DIR)]
+        self.tf_finder.root_dir = uc.ROOT
+
+    @mock.patch.object(tf_integration_finder.TFIntegrationFinder,
+                       '_get_build_targets', return_value=set())
+    @mock.patch.object(test_finder_utils, 'get_fully_qualified_class_name',
+                       return_value=uc.FULL_CLASS_NAME)
+    @mock.patch('subprocess.check_output')
+    @mock.patch('os.path.exists', return_value=True)
+    @mock.patch('os.path.isfile', return_value=False)
+    @mock.patch('os.path.isdir', return_value=False)
+    #pylint: disable=unused-argument
+    def test_find_test_by_integration_name(self, _isdir, _isfile, _path, mock_find,
+                                           _fcqn, _build):
+        """Test find_test_by_integration_name.
+
+        Note that _isfile is always False since we don't index integration tests.
+        """
+        mock_find.return_value = os.path.join(uc.ROOT, uc.INT_DIR, uc.INT_NAME + '.xml')
+        t_infos = self.tf_finder.find_test_by_integration_name(uc.INT_NAME)
+        self.assertEqual(len(t_infos), 0)
+        _isdir.return_value = True
+        t_infos = self.tf_finder.find_test_by_integration_name(uc.INT_NAME)
+        unittest_utils.assert_equal_testinfos(self, t_infos[0], uc.INT_INFO)
+        t_infos = self.tf_finder.find_test_by_integration_name(INT_NAME_CLASS)
+        unittest_utils.assert_equal_testinfos(self, t_infos[0], INT_CLASS_INFO)
+        t_infos = self.tf_finder.find_test_by_integration_name(INT_NAME_METHOD)
+        unittest_utils.assert_equal_testinfos(self, t_infos[0], INT_METHOD_INFO)
+        not_fully_qual = uc.INT_NAME + ':' + 'someClass'
+        t_infos = self.tf_finder.find_test_by_integration_name(not_fully_qual)
+        unittest_utils.assert_equal_testinfos(self, t_infos[0], INT_CLASS_INFO)
+        mock_find.return_value = os.path.join(uc.ROOT, uc.GTF_INT_DIR,
+                                              uc.GTF_INT_NAME + '.xml')
+        t_infos = self.tf_finder.find_test_by_integration_name(uc.GTF_INT_NAME)
+        unittest_utils.assert_equal_testinfos(
+            self,
+            t_infos[0],
+            uc.GTF_INT_INFO)
+        mock_find.return_value = ''
+        self.assertEqual(
+            self.tf_finder.find_test_by_integration_name('NotIntName'), [])
+
+    @mock.patch.object(tf_integration_finder.TFIntegrationFinder,
+                       '_get_build_targets', return_value=set())
+    @mock.patch('os.path.realpath',
+                side_effect=unittest_utils.realpath_side_effect)
+    @mock.patch('os.path.isdir', return_value=True)
+    @mock.patch('os.path.isfile', return_value=True)
+    @mock.patch.object(test_finder_utils, 'find_parent_module_dir')
+    @mock.patch('os.path.exists', return_value=True)
+    def test_find_int_test_by_path(self, _exists, _find, _isfile, _isdir, _real,
+                                   _build):
+        """Test find_int_test_by_path."""
+        path = os.path.join(uc.INT_DIR, uc.INT_NAME + '.xml')
+        t_infos = self.tf_finder.find_int_test_by_path(path)
+        unittest_utils.assert_equal_testinfos(
+            self, uc.INT_INFO, t_infos[0])
+        path = os.path.join(uc.GTF_INT_DIR, uc.GTF_INT_NAME + '.xml')
+        t_infos = self.tf_finder.find_int_test_by_path(path)
+        unittest_utils.assert_equal_testinfos(
+            self, uc.GTF_INT_INFO, t_infos[0])
+
+    #pylint: disable=protected-access
+    @mock.patch.object(tf_integration_finder.TFIntegrationFinder,
+                       '_search_integration_dirs')
+    def test_load_xml_file(self, search):
+        """Test _load_xml_file and _load_include_tags methods."""
+        search.return_value = [os.path.join(uc.TEST_DATA_DIR,
+                                            'CtsUiDeviceTestCases.xml')]
+        xml_file = os.path.join(uc.TEST_DATA_DIR, constants.MODULE_CONFIG)
+        xml_root = self.tf_finder._load_xml_file(xml_file)
+        include_tags = xml_root.findall('.//include')
+        self.assertEqual(0, len(include_tags))
+        option_tags = xml_root.findall('.//option')
+        included = False
+        for tag in option_tags:
+            if tag.attrib['value'].strip() == 'CtsUiDeviceTestCases.apk':
+                included = True
+        self.assertTrue(included)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/test_mapping.py b/atest-py2/test_mapping.py
new file mode 100644
index 0000000..02f8f31
--- /dev/null
+++ b/atest-py2/test_mapping.py
@@ -0,0 +1,160 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Classes for test mapping related objects
+"""
+
+
+import copy
+import fnmatch
+import os
+import re
+
+import atest_utils
+import constants
+
+TEST_MAPPING = 'TEST_MAPPING'
+
+
+class TestDetail(object):
+    """Stores the test details set in a TEST_MAPPING file."""
+
+    def __init__(self, details):
+        """TestDetail constructor
+
+        Parse test detail from a dictionary, e.g.,
+        {
+          "name": "SettingsUnitTests",
+          "host": true,
+          "options": [
+            {
+              "instrumentation-arg":
+                  "annotation=android.platform.test.annotations.Presubmit"
+            }
+          ],
+          "file_patterns": ["(/|^)Window[^/]*\\.java",
+                            "(/|^)Activity[^/]*\\.java"]
+        }
+
+        Args:
+            details: A dictionary of test detail.
+        """
+        self.name = details['name']
+        self.options = []
+        # True if the test should run on host and require no device.
+        self.host = details.get('host', False)
+        assert isinstance(self.host, bool), 'host can only have boolean value.'
+        options = details.get('options', [])
+        for option in options:
+            assert len(option) == 1, 'Each option can only have one key.'
+            self.options.append(copy.deepcopy(option).popitem())
+        self.options.sort(key=lambda o: o[0])
+        self.file_patterns = details.get('file_patterns', [])
+
+    def __str__(self):
+        """String value of the TestDetail object."""
+        host_info = (', runs on host without device required.' if self.host
+                     else '')
+        if not self.options:
+            return self.name + host_info
+        options = ''
+        for option in self.options:
+            options += '%s: %s, ' % option
+
+        return '%s (%s)%s' % (self.name, options.strip(', '), host_info)
+
+    def __hash__(self):
+        """Get the hash of TestDetail based on the details"""
+        return hash(str(self))
+
+    def __eq__(self, other):
+        return str(self) == str(other)
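+
+# Rendering sketch (hypothetical values): a TestDetail parsed from
+# {"name": "HelloWorldTests", "host": true} stringifies to
+# 'HelloWorldTests, runs on host without device required.'; options, when
+# present, are appended as '(key: value, ...)'.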
+
+
+class Import(object):
+    """Store test mapping import details."""
+
+    def __init__(self, test_mapping_file, details):
+        """Import constructor
+
+        Parse import details from a dictionary, e.g.,
+        {
+            "path": "..\folder1"
+        }
+        where path points to the directory whose TEST_MAPPING file should be
+        imported. The path can be relative to the TEST_MAPPING file that
+        contains the import, or relative to the source tree root.
+
+        Args:
+            test_mapping_file: Path to the TEST_MAPPING file that contains the
+                import.
+            details: A dictionary of details about importing another
+                TEST_MAPPING file.
+        """
+        self.test_mapping_file = test_mapping_file
+        self.path = details['path']
+
+    def __str__(self):
+        """String value of the Import object."""
+        return 'Source: %s, path: %s' % (self.test_mapping_file, self.path)
+
+    def get_path(self):
+        """Get the path to TEST_MAPPING import directory."""
+        path = os.path.realpath(os.path.join(
+            os.path.dirname(self.test_mapping_file), self.path))
+        if os.path.exists(path):
+            return path
+        root_dir = os.environ.get(constants.ANDROID_BUILD_TOP, os.sep)
+        path = os.path.realpath(os.path.join(root_dir, self.path))
+        if os.path.exists(path):
+            return path
+        # The import path can't be located.
+        return None
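+
+# Resolution sketch (hypothetical layout): for an import {"path": "dirA"}
+# declared in <root>/frameworks/base/TEST_MAPPING, get_path() first tries
+# <root>/frameworks/base/dirA, then <root>/dirA, and returns None when
+# neither directory exists.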
+
+
+def is_match_file_patterns(test_mapping_file, test_detail):
+    """Check if the changed file names match the regex pattern defined in
+    file_patterns of TEST_MAPPING files.
+
+    Args:
+        test_mapping_file: Path to a TEST_MAPPING file.
+        test_detail: A TestDetail object.
+
+    Returns:
+        True if the test's file_patterns setting is not set or contains a
+        pattern matches any of the modified files.
+    """
+    # Only check if the altered files are located in the same or sub directory
+    # of the TEST_MAPPING file. Extract the relative path of the modified files
+    # which match file patterns.
+    file_patterns = test_detail.get('file_patterns', [])
+    if not file_patterns:
+        return True
+    test_mapping_dir = os.path.dirname(test_mapping_file)
+    modified_files = atest_utils.get_modified_files(test_mapping_dir)
+    if not modified_files:
+        return False
+    modified_files_in_source_dir = [
+        os.path.relpath(filepath, test_mapping_dir)
+        for filepath in fnmatch.filter(modified_files,
+                                       os.path.join(test_mapping_dir, '*'))
+    ]
+    for modified_file in modified_files_in_source_dir:
+        # Force to run the test if it's in a TEST_MAPPING file included in the
+        # changesets.
+        if modified_file == constants.TEST_MAPPING:
+            return True
+        for pattern in file_patterns:
+            if re.search(pattern, modified_file):
+                return True
+    return False
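+
+# Matching sketch (hypothetical paths): with file_patterns
+# ["(/|^)Window[^/]*\\.java"] in /a/b/TEST_MAPPING and a modified file
+# /a/b/ui/WindowListener.java, the relative path 'ui/WindowListener.java'
+# matches, so is_match_file_patterns() returns True.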
diff --git a/atest-py2/test_mapping_unittest.py b/atest-py2/test_mapping_unittest.py
new file mode 100755
index 0000000..557e28d
--- /dev/null
+++ b/atest-py2/test_mapping_unittest.py
@@ -0,0 +1,83 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the 'License');
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an 'AS IS' BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for test_mapping"""
+
+import unittest
+import mock
+
+import test_mapping
+import unittest_constants as uc
+
+
+class TestMappingUnittests(unittest.TestCase):
+    """Unit tests for test_mapping.py"""
+
+    def test_parsing(self):
+        """Test creating TestDetail object"""
+        detail = test_mapping.TestDetail(uc.TEST_MAPPING_TEST)
+        self.assertEqual(uc.TEST_MAPPING_TEST['name'], detail.name)
+        self.assertTrue(detail.host)
+        self.assertEqual([], detail.options)
+
+    def test_parsing_with_option(self):
+        """Test creating TestDetail object with option configured"""
+        detail = test_mapping.TestDetail(uc.TEST_MAPPING_TEST_WITH_OPTION)
+        self.assertEqual(uc.TEST_MAPPING_TEST_WITH_OPTION['name'], detail.name)
+        self.assertEqual(uc.TEST_MAPPING_TEST_WITH_OPTION_STR, str(detail))
+
+    def test_parsing_with_bad_option(self):
+        """Test creating TestDetail object with bad option configured"""
+        with self.assertRaises(Exception) as context:
+            test_mapping.TestDetail(uc.TEST_MAPPING_TEST_WITH_BAD_OPTION)
+        self.assertEqual(
+            'Each option can only have one key.', str(context.exception))
+
+    def test_parsing_with_bad_host_value(self):
+        """Test creating TestDetail object with bad host value configured"""
+        with self.assertRaises(Exception) as context:
+            test_mapping.TestDetail(uc.TEST_MAPPING_TEST_WITH_BAD_HOST_VALUE)
+        self.assertEqual(
+            'host can only have boolean value.', str(context.exception))
+
+    @mock.patch("atest_utils.get_modified_files")
+    def test_is_match_file_patterns(self, mock_modified_files):
+        """Test mathod is_match_file_patterns."""
+        test_mapping_file = ''
+        test_detail = {
+            "name": "Test",
+            "file_patterns": ["(/|^)test_fp1[^/]*\\.java",
+                              "(/|^)test_fp2[^/]*\\.java"]
+        }
+        mock_modified_files.return_value = {'/a/b/test_fp122.java',
+                                            '/a/b/c/d/test_fp222.java'}
+        self.assertTrue(test_mapping.is_match_file_patterns(test_mapping_file,
+                                                            test_detail))
+        mock_modified_files.return_value = {}
+        self.assertFalse(test_mapping.is_match_file_patterns(test_mapping_file,
+                                                             test_detail))
+        mock_modified_files.return_value = {'/a/b/test_fp3.java'}
+        self.assertFalse(test_mapping.is_match_file_patterns(test_mapping_file,
+                                                             test_detail))
+        test_mapping_file = '/a/b/TEST_MAPPING'
+        mock_modified_files.return_value = {'/a/b/test_fp3.java',
+                                            '/a/b/TEST_MAPPING'}
+        self.assertTrue(test_mapping.is_match_file_patterns(test_mapping_file,
+                                                            test_detail))
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/test_runner_handler.py b/atest-py2/test_runner_handler.py
new file mode 100644
index 0000000..3c18119
--- /dev/null
+++ b/atest-py2/test_runner_handler.py
@@ -0,0 +1,146 @@
+# Copyright 2017, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Aggregates test runners, groups tests by test runners and kicks off tests.
+"""
+
+import itertools
+import time
+import traceback
+
+import atest_error
+import constants
+import result_reporter
+
+from metrics import metrics
+from metrics import metrics_utils
+from test_runners import atest_tf_test_runner
+from test_runners import robolectric_test_runner
+from test_runners import suite_plan_test_runner
+from test_runners import vts_tf_test_runner
+
+# pylint: disable=line-too-long
+_TEST_RUNNERS = {
+    atest_tf_test_runner.AtestTradefedTestRunner.NAME: atest_tf_test_runner.AtestTradefedTestRunner,
+    robolectric_test_runner.RobolectricTestRunner.NAME: robolectric_test_runner.RobolectricTestRunner,
+    suite_plan_test_runner.SuitePlanTestRunner.NAME: suite_plan_test_runner.SuitePlanTestRunner,
+    vts_tf_test_runner.VtsTradefedTestRunner.NAME: vts_tf_test_runner.VtsTradefedTestRunner,
+}
+
+
+def _get_test_runners():
+    """Returns the test runners.
+
+    Test runners defined outside atest can be imported here inside a
+    try-except block, as the example import below shows.
+
+    Returns:
+        Dict of test runner name to test runner class.
+    """
+    test_runners_dict = _TEST_RUNNERS
+    # Example import of example test runner:
+    try:
+        # pylint: disable=line-too-long
+        from test_runners import example_test_runner
+        test_runners_dict[example_test_runner.ExampleTestRunner.NAME] = example_test_runner.ExampleTestRunner
+    except ImportError:
+        pass
+    return test_runners_dict
+
+
+def group_tests_by_test_runners(test_infos):
+    """Group the test_infos by test runners
+
+    Args:
+        test_infos: List of TestInfo.
+
+    Returns:
+        A list of (test runner class, tests) tuples.
+    """
+    tests_by_test_runner = []
+    test_runner_dict = _get_test_runners()
+    key = lambda x: x.test_runner
+    sorted_test_infos = sorted(list(test_infos), key=key)
+    for test_runner, tests in itertools.groupby(sorted_test_infos, key):
+        # groupby returns a grouper object, we want to operate on a list.
+        tests = list(tests)
+        test_runner_class = test_runner_dict.get(test_runner)
+        if test_runner_class is None:
+            raise atest_error.UnknownTestRunnerError('Unknown Test Runner %s' %
+                                                     test_runner)
+        tests_by_test_runner.append((test_runner_class, tests))
+    return tests_by_test_runner
+
+
+def get_test_runner_reqs(module_info, test_infos):
+    """Returns the requirements for all test runners specified in the tests.
+
+    Args:
+        module_info: ModuleInfo object.
+        test_infos: List of TestInfo.
+
+    Returns:
+        Set of build targets required by the test runners.
+    """
+    unused_result_dir = ''
+    test_runner_build_req = set()
+    for test_runner, _ in group_tests_by_test_runners(test_infos):
+        test_runner_build_req |= test_runner(
+            unused_result_dir,
+            module_info=module_info).get_test_runner_build_reqs()
+    return test_runner_build_req
+
+
+def run_all_tests(results_dir, test_infos, extra_args,
+                  delay_print_summary=False):
+    """Run the given tests.
+
+    Args:
+        results_dir: String directory to store atest results.
+        test_infos: List of TestInfo.
+        extra_args: Dict of extra args for test runners to use.
+        delay_print_summary: True to skip printing the summary here and let
+            the caller print it later.
+
+    Returns:
+        A tuple of (exit code, reporter): 0 if all tests succeed, non-zero
+        otherwise, plus the ResultReporter used for the run.
+    """
+    reporter = result_reporter.ResultReporter()
+    reporter.print_starting_text()
+    tests_ret_code = constants.EXIT_CODE_SUCCESS
+    for test_runner, tests in group_tests_by_test_runners(test_infos):
+        test_name = ' '.join([test.test_name for test in tests])
+        test_start = time.time()
+        is_success = True
+        ret_code = constants.EXIT_CODE_TEST_FAILURE
+        stacktrace = ''
+        try:
+            test_runner = test_runner(results_dir)
+            ret_code = test_runner.run_tests(tests, extra_args, reporter)
+            tests_ret_code |= ret_code
+        # pylint: disable=broad-except
+        except Exception:
+            stacktrace = traceback.format_exc()
+            reporter.runner_failure(test_runner.NAME, stacktrace)
+            tests_ret_code = constants.EXIT_CODE_TEST_FAILURE
+            is_success = False
+        metrics.RunnerFinishEvent(
+            duration=metrics_utils.convert_duration(time.time() - test_start),
+            success=is_success,
+            runner_name=test_runner.NAME,
+            test=[{'name': test_name,
+                   'result': ret_code,
+                   'stacktrace': stacktrace}])
+    if delay_print_summary:
+        return tests_ret_code, reporter
+    return reporter.print_summary() or tests_ret_code, reporter
diff --git a/atest-py2/test_runner_handler_unittest.py b/atest-py2/test_runner_handler_unittest.py
new file mode 100755
index 0000000..b5a430e
--- /dev/null
+++ b/atest-py2/test_runner_handler_unittest.py
@@ -0,0 +1,144 @@
+#!/usr/bin/env python
+#
+# Copyright 2017, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for test_runner_handler."""
+
+# pylint: disable=protected-access
+
+import unittest
+import mock
+
+import atest_error
+import test_runner_handler
+from metrics import metrics
+from test_finders import test_info
+from test_runners import test_runner_base as tr_base
+
+FAKE_TR_NAME_A = 'FakeTestRunnerA'
+FAKE_TR_NAME_B = 'FakeTestRunnerB'
+MISSING_TR_NAME = 'MissingTestRunner'
+FAKE_TR_A_REQS = {'fake_tr_A_req1', 'fake_tr_A_req2'}
+FAKE_TR_B_REQS = {'fake_tr_B_req1', 'fake_tr_B_req2'}
+MODULE_NAME_A = 'ModuleNameA'
+MODULE_NAME_A_AGAIN = 'ModuleNameA_AGAIN'
+MODULE_NAME_B = 'ModuleNameB'
+MODULE_NAME_B_AGAIN = 'ModuleNameB_AGAIN'
+MODULE_INFO_A = test_info.TestInfo(MODULE_NAME_A, FAKE_TR_NAME_A, set())
+MODULE_INFO_A_AGAIN = test_info.TestInfo(MODULE_NAME_A_AGAIN, FAKE_TR_NAME_A,
+                                         set())
+MODULE_INFO_B = test_info.TestInfo(MODULE_NAME_B, FAKE_TR_NAME_B, set())
+MODULE_INFO_B_AGAIN = test_info.TestInfo(MODULE_NAME_B_AGAIN, FAKE_TR_NAME_B,
+                                         set())
+BAD_TESTINFO = test_info.TestInfo('bad_name', MISSING_TR_NAME, set())
+
+class FakeTestRunnerA(tr_base.TestRunnerBase):
+    """Fake test runner A."""
+
+    NAME = FAKE_TR_NAME_A
+    EXECUTABLE = 'echo'
+
+    def run_tests(self, test_infos, extra_args, reporter):
+        return 0
+
+    def host_env_check(self):
+        pass
+
+    def get_test_runner_build_reqs(self):
+        return FAKE_TR_A_REQS
+
+    def generate_run_commands(self, test_infos, extra_args, port=None):
+        return ['fake command']
+
+
+class FakeTestRunnerB(FakeTestRunnerA):
+    """Fake test runner B."""
+
+    NAME = FAKE_TR_NAME_B
+
+    def run_tests(self, test_infos, extra_args, reporter):
+        return 1
+
+    def get_test_runner_build_reqs(self):
+        return FAKE_TR_B_REQS
+
+
+class TestRunnerHandlerUnittests(unittest.TestCase):
+    """Unit tests for test_runner_handler.py"""
+
+    _TEST_RUNNERS = {
+        FakeTestRunnerA.NAME: FakeTestRunnerA,
+        FakeTestRunnerB.NAME: FakeTestRunnerB,
+    }
+
+    def setUp(self):
+        mock.patch('test_runner_handler._get_test_runners',
+                   return_value=self._TEST_RUNNERS).start()
+
+    def tearDown(self):
+        mock.patch.stopall()
+
+    def test_group_tests_by_test_runners(self):
+        """Test that we properly group tests by test runners."""
+        # Happy path testing.
+        test_infos = [MODULE_INFO_A, MODULE_INFO_A_AGAIN, MODULE_INFO_B,
+                      MODULE_INFO_B_AGAIN]
+        want_list = [(FakeTestRunnerA, [MODULE_INFO_A, MODULE_INFO_A_AGAIN]),
+                     (FakeTestRunnerB, [MODULE_INFO_B, MODULE_INFO_B_AGAIN])]
+        self.assertEqual(
+            want_list,
+            test_runner_handler.group_tests_by_test_runners(test_infos))
+
+        # Let's make sure we fail as expected.
+        self.assertRaises(
+            atest_error.UnknownTestRunnerError,
+            test_runner_handler.group_tests_by_test_runners, [BAD_TESTINFO])
+
+    def test_get_test_runner_reqs(self):
+        """Test that we get all the reqs from the test runners."""
+        test_infos = [MODULE_INFO_A, MODULE_INFO_B]
+        want_set = FAKE_TR_A_REQS | FAKE_TR_B_REQS
+        empty_module_info = None
+        self.assertEqual(
+            want_set,
+            test_runner_handler.get_test_runner_reqs(empty_module_info,
+                                                     test_infos))
+
+    @mock.patch.object(metrics, 'RunnerFinishEvent')
+    def test_run_all_tests(self, _mock_runner_finish):
+        """Test that the return value as we expected."""
+        results_dir = ""
+        extra_args = []
+        # Tests where both run_tests calls return 0.
+        test_infos = [MODULE_INFO_A, MODULE_INFO_A_AGAIN]
+        self.assertEqual(
+            0,
+            test_runner_handler.run_all_tests(
+                results_dir, test_infos, extra_args)[0])
+        # Tests where both run_tests calls return 1.
+        test_infos = [MODULE_INFO_B, MODULE_INFO_B_AGAIN]
+        self.assertEqual(
+            1,
+            test_runner_handler.run_all_tests(
+                results_dir, test_infos, extra_args)[0])
+        # Tests where one run_tests call returns 0 and the other returns 1.
+        test_infos = [MODULE_INFO_A, MODULE_INFO_B]
+        self.assertEqual(
+            1,
+            test_runner_handler.run_all_tests(
+                results_dir, test_infos, extra_args)[0])
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest/tools/tradefederation/core/proto/__init__.py b/atest-py2/test_runners/__init__.py
old mode 100755
new mode 100644
similarity index 100%
copy from atest/tools/tradefederation/core/proto/__init__.py
copy to atest-py2/test_runners/__init__.py
diff --git a/atest-py2/test_runners/atest_tf_test_runner.py b/atest-py2/test_runners/atest_tf_test_runner.py
new file mode 100644
index 0000000..a59707c
--- /dev/null
+++ b/atest-py2/test_runners/atest_tf_test_runner.py
@@ -0,0 +1,663 @@
+# Copyright 2017, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Atest Tradefed test runner class.
+"""
+
+from __future__ import print_function
+import json
+import logging
+import os
+import re
+import select
+import socket
+import subprocess
+
+from functools import partial
+
+# pylint: disable=import-error
+import atest_utils
+import constants
+import result_reporter
+from event_handler import EventHandler
+from test_finders import test_info
+from test_runners import test_runner_base
+
+POLL_FREQ_SECS = 10
+SOCKET_HOST = '127.0.0.1'
+SOCKET_QUEUE_MAX = 1
+SOCKET_BUFFER = 4096
+SELECT_TIMEOUT = 5
+
+# Socket Events of form FIRST_EVENT {JSON_DATA}\nSECOND_EVENT {JSON_DATA}
+# EVENT_RE has groups for the name and the data. "." does not match \n.
+EVENT_RE = re.compile(r'\n*(?P<event_name>[A-Z_]+) (?P<json_data>{.*})(?=\n|.)*')
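+# For example (illustrative buffer): given
+#     'TEST_STARTED {"name": "foo"}\nTEST_ENDED {"name": "foo"}'
+# EVENT_RE first matches event_name='TEST_STARTED' with
+# json_data='{"name": "foo"}'; the consumed prefix is then stripped and
+# matching repeats on the remainder.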
+
+EXEC_DEPENDENCIES = ('adb', 'aapt')
+
+TRADEFED_EXIT_MSG = 'TradeFed subprocess exited early with exit code=%s.'
+
+LOG_FOLDER_NAME = 'log'
+
+_INTEGRATION_FINDERS = frozenset(['', 'INTEGRATION', 'INTEGRATION_FILE_PATH'])
+
+class TradeFedExitError(Exception):
+    """Raised when TradeFed exists before test run has finished."""
+
+
+class AtestTradefedTestRunner(test_runner_base.TestRunnerBase):
+    """TradeFed Test Runner class."""
+    NAME = 'AtestTradefedTestRunner'
+    EXECUTABLE = 'atest_tradefed.sh'
+    _TF_TEMPLATE = 'template/atest_local_min'
+    # Use --no-enable-granular-attempts to control reporter replay behavior.
+    # TODO(b/142630648): Enable option enable-granular-attempts in sharding mode.
+    _LOG_ARGS = ('--logcat-on-failure --atest-log-file-path={log_path} '
+                 '--no-enable-granular-attempts')
+    _RUN_CMD = ('{exe} {template} --template:map '
+                'test=atest {tf_customize_template} {log_args} {args}')
+    _BUILD_REQ = {'tradefed-core'}
+    _RERUN_OPTION_GROUP = [constants.ITERATIONS,
+                           constants.RERUN_UNTIL_FAILURE,
+                           constants.RETRY_ANY_FAILURE]
+
+    def __init__(self, results_dir, module_info=None, **kwargs):
+        """Init stuff for base class."""
+        super(AtestTradefedTestRunner, self).__init__(results_dir, **kwargs)
+        self.module_info = module_info
+        self.log_path = os.path.join(results_dir, LOG_FOLDER_NAME)
+        if not os.path.exists(self.log_path):
+            os.makedirs(self.log_path)
+        log_args = {'log_path': self.log_path}
+        self.run_cmd_dict = {'exe': self.EXECUTABLE,
+                             'template': self._TF_TEMPLATE,
+                             'tf_customize_template': '',
+                             'args': '',
+                             'log_args': self._LOG_ARGS.format(**log_args)}
+        self.is_verbose = logging.getLogger().isEnabledFor(logging.DEBUG)
+        self.root_dir = os.environ.get(constants.ANDROID_BUILD_TOP)
+
+    def _try_set_gts_authentication_key(self):
+        """Set GTS authentication key if it is available or exists.
+
+        Strategy:
+            Get APE_API_KEY from os.environ:
+                - If APE_API_KEY is already set by user -> do nothing.
+            Get the APE_API_KEY from constants:
+                - If the key file exists -> set to env var.
+            If APE_API_KEY isn't set and the key file doesn't exist:
+                - Warn user some GTS tests may fail without authentication.
+        """
+        if os.environ.get('APE_API_KEY'):
+            logging.debug('APE_API_KEY is set by developer.')
+            return
+        ape_api_key = constants.GTS_GOOGLE_SERVICE_ACCOUNT
+        key_path = os.path.join(self.root_dir, ape_api_key)
+        if ape_api_key and os.path.exists(key_path):
+            logging.debug('Set APE_API_KEY: %s', ape_api_key)
+            os.environ['APE_API_KEY'] = key_path
+        else:
+            logging.debug('APE_API_KEY not set, some GTS tests may fail'
+                          ' without authentication.')
+
+    def run_tests(self, test_infos, extra_args, reporter):
+        """Run the list of test_infos. See base class for more.
+
+        Args:
+            test_infos: A list of TestInfos.
+            extra_args: Dict of extra args to add to test run.
+            reporter: An instance of result_report.ResultReporter.
+
+        Returns:
+            0 if tests succeed, non-zero otherwise.
+        """
+        reporter.log_path = self.log_path
+        reporter.rerun_options = self._extract_rerun_options(extra_args)
+        # Set the Google service key, if available, before running tests.
+        self._try_set_gts_authentication_key()
+        if os.getenv(test_runner_base.OLD_OUTPUT_ENV_VAR):
+            return self.run_tests_raw(test_infos, extra_args, reporter)
+        return self.run_tests_pretty(test_infos, extra_args, reporter)
+
+    def run_tests_raw(self, test_infos, extra_args, reporter):
+        """Run the list of test_infos. See base class for more.
+
+        Args:
+            test_infos: A list of TestInfos.
+            extra_args: Dict of extra args to add to test run.
+            reporter: An instance of result_report.ResultReporter.
+
+        Returns:
+            0 if tests succeed, non-zero otherwise.
+        """
+        iterations = self._generate_iterations(extra_args)
+        reporter.register_unsupported_runner(self.NAME)
+
+        ret_code = constants.EXIT_CODE_SUCCESS
+        for _ in range(iterations):
+            run_cmds = self.generate_run_commands(test_infos, extra_args)
+            subproc = self.run(run_cmds[0], output_to_stdout=True,
+                               env_vars=self.generate_env_vars(extra_args))
+            ret_code |= self.wait_for_subprocess(subproc)
+        return ret_code
+
+    def run_tests_pretty(self, test_infos, extra_args, reporter):
+        """Run the list of test_infos. See base class for more.
+
+        Args:
+            test_infos: A list of TestInfos.
+            extra_args: Dict of extra args to add to test run.
+            reporter: An instance of result_report.ResultReporter.
+
+        Returns:
+            0 if tests succeed, non-zero otherwise.
+        """
+        iterations = self._generate_iterations(extra_args)
+        ret_code = constants.EXIT_CODE_SUCCESS
+        for _ in range(iterations):
+            server = self._start_socket_server()
+            run_cmds = self.generate_run_commands(test_infos, extra_args,
+                                                  server.getsockname()[1])
+            subproc = self.run(run_cmds[0], output_to_stdout=self.is_verbose,
+                               env_vars=self.generate_env_vars(extra_args))
+            self.handle_subprocess(subproc, partial(self._start_monitor,
+                                                    server,
+                                                    subproc,
+                                                    reporter))
+            server.close()
+            ret_code |= self.wait_for_subprocess(subproc)
+        return ret_code
+
+    # pylint: disable=too-many-branches
+    def _start_monitor(self, server, tf_subproc, reporter):
+        """Polling and process event.
+
+        Args:
+            server: Socket server object.
+            tf_subproc: The tradefed subprocess to poll.
+            reporter: Result_Reporter object.
+        """
+        inputs = [server]
+        event_handlers = {}
+        data_map = {}
+        inv_socket = None
+        while inputs:
+            try:
+                readable, _, _ = select.select(inputs, [], [], SELECT_TIMEOUT)
+                for socket_object in readable:
+                    if socket_object is server:
+                        conn, addr = socket_object.accept()
+                        logging.debug('Accepted connection from %s', addr)
+                        conn.setblocking(False)
+                        inputs.append(conn)
+                        data_map[conn] = ''
+                        # The first connection should be the invocation-level
+                        # reporter.
+                        if not inv_socket:
+                            inv_socket = conn
+                    else:
+                        # Count invocation level reporter events
+                        # without showing real-time information.
+                        if inv_socket == socket_object:
+                            reporter.silent = True
+                            event_handler = event_handlers.setdefault(
+                                socket_object, EventHandler(reporter, self.NAME))
+                        else:
+                            event_handler = event_handlers.setdefault(
+                                socket_object, EventHandler(
+                                    result_reporter.ResultReporter(), self.NAME))
+                        recv_data = self._process_connection(data_map,
+                                                             socket_object,
+                                                             event_handler)
+                        if not recv_data:
+                            inputs.remove(socket_object)
+                            socket_object.close()
+            finally:
+                # Subprocess ended and all socket client closed.
+                if tf_subproc.poll() is not None and len(inputs) == 1:
+                    inputs.pop().close()
+                    if not data_map:
+                        raise TradeFedExitError(TRADEFED_EXIT_MSG
+                                                % tf_subproc.returncode)
+
+    def _process_connection(self, data_map, conn, event_handler):
+        """Process a socket connection between TF and ATest.
+
+        Expect data of the form EVENT_NAME {JSON_DATA}. Multiple events are
+        delimited by newlines. Data is buffered in case it exceeds the
+        socket buffer size.
+        E.g.
+            TEST_RUN_STARTED {"runName":"hello_world_test","runAttempt":0}\n
+            TEST_STARTED {"start_time":2172917, "testName":"PrintHelloWorld"}\n
+        Args:
+            data_map: The data map of all connections.
+            conn: Socket connection.
+            event_handler: EventHandler object.
+
+        Returns:
+            True if conn.recv() has data, False otherwise.
+        """
+        # Set connection into blocking mode.
+        conn.settimeout(None)
+        data = conn.recv(SOCKET_BUFFER)
+        logging.debug('received: %s', data)
+        if data:
+            data_map[conn] += data
+            while True:
+                match = EVENT_RE.match(data_map[conn])
+                if not match:
+                    break
+                try:
+                    event_data = json.loads(match.group('json_data'))
+                except ValueError:
+                    logging.debug('Json incomplete, wait for more data')
+                    break
+                event_name = match.group('event_name')
+                event_handler.process_event(event_name, event_data)
+                data_map[conn] = data_map[conn][match.end():]
+        return bool(data)
+
+    def _start_socket_server(self):
+        """Start a TCP server."""
+        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+        # Port 0 lets the OS pick an open port between 1024 and 65535.
+        server.bind((SOCKET_HOST, 0))
+        server.listen(SOCKET_QUEUE_MAX)
+        server.settimeout(POLL_FREQ_SECS)
+        logging.debug('Socket server started on port %s',
+                      server.getsockname()[1])
+        return server
+
+    def generate_env_vars(self, extra_args):
+        """Convert extra args into env vars."""
+        env_vars = os.environ.copy()
+        debug_port = extra_args.get(constants.TF_DEBUG, '')
+        if debug_port:
+            env_vars['TF_DEBUG'] = 'true'
+            env_vars['TF_DEBUG_PORT'] = str(debug_port)
+        return env_vars
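+
+    # Sketch (hypothetical input): extra_args of {constants.TF_DEBUG: 10888}
+    # yields a copy of os.environ with TF_DEBUG='true' and
+    # TF_DEBUG_PORT='10888' added.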
+
+    def host_env_check(self):
+        """Check that host env has everything we need.
+
+        We actually can assume the host env is fine because we have the same
+        requirements that atest has. Update this to check for android env vars
+        if that changes.
+        """
+        pass
+
+    @staticmethod
+    def _is_missing_exec(executable):
+        """Check if system build executable is available.
+
+        Args:
+            executable: Executable we are checking for.
+        Returns:
+            True if executable is missing, False otherwise.
+        """
+        try:
+            output = subprocess.check_output(['which', executable])
+        except subprocess.CalledProcessError:
+            return True
+        # TODO: Check if there is a clever way to determine if system adb is
+        # good enough.
+        root_dir = os.environ.get(constants.ANDROID_BUILD_TOP)
+        return os.path.commonprefix([output, root_dir]) != root_dir
+
+    def get_test_runner_build_reqs(self):
+        """Return the build requirements.
+
+        Returns:
+            Set of build targets.
+        """
+        build_req = self._BUILD_REQ
+        # Use different base build requirements if google-tf is around.
+        if self.module_info.is_module(constants.GTF_MODULE):
+            build_req = {constants.GTF_TARGET}
+        # Always add ATest's own TF target.
+        build_req.add(constants.ATEST_TF_MODULE)
+        # Add adb if we can't find it.
+        for executable in EXEC_DEPENDENCIES:
+            if self._is_missing_exec(executable):
+                build_req.add(executable)
+        return build_req
+
+    # pylint: disable=too-many-branches
+    # pylint: disable=too-many-statements
+    @staticmethod
+    def _parse_extra_args(extra_args):
+        """Convert the extra args into something tf can understand.
+
+        Args:
+            extra_args: Dict of args
+
+        Returns:
+            Tuple of args to append and args not supported.
+        """
+        args_to_append = []
+        args_not_supported = []
+        for arg in extra_args:
+            if constants.WAIT_FOR_DEBUGGER == arg:
+                args_to_append.append('--wait-for-debugger')
+                continue
+            if constants.DISABLE_INSTALL == arg:
+                args_to_append.append('--disable-target-preparers')
+                continue
+            if constants.SERIAL == arg:
+                args_to_append.append('--serial')
+                args_to_append.append(extra_args[arg])
+                continue
+            if constants.SHARDING == arg:
+                args_to_append.append('--shard-count')
+                args_to_append.append(str(extra_args[arg]))
+                continue
+            if constants.DISABLE_TEARDOWN == arg:
+                args_to_append.append('--disable-teardown')
+                continue
+            if constants.HOST == arg:
+                args_to_append.append('-n')
+                args_to_append.append('--prioritize-host-config')
+                args_to_append.append('--skip-host-arch-check')
+                continue
+            if constants.CUSTOM_ARGS == arg:
+                # We might need to sanitize it prior to appending but for now
+                # let's just treat it like a simple arg to pass on through.
+                args_to_append.extend(extra_args[arg])
+                continue
+            if constants.ALL_ABI == arg:
+                args_to_append.append('--all-abi')
+                continue
+            if constants.DRY_RUN == arg:
+                continue
+            if constants.INSTANT == arg:
+                args_to_append.append('--enable-parameterized-modules')
+                args_to_append.append('--module-parameter')
+                args_to_append.append('instant_app')
+                continue
+            if constants.USER_TYPE == arg:
+                args_to_append.append('--enable-parameterized-modules')
+                args_to_append.append('--enable-optional-parameterization')
+                args_to_append.append('--module-parameter')
+                args_to_append.append(extra_args[arg])
+                continue
+            if constants.ITERATIONS == arg:
+                args_to_append.append('--retry-strategy')
+                args_to_append.append(constants.ITERATIONS)
+                args_to_append.append('--max-testcase-run-count')
+                args_to_append.append(str(extra_args[arg]))
+                continue
+            if constants.RERUN_UNTIL_FAILURE == arg:
+                args_to_append.append('--retry-strategy')
+                args_to_append.append(constants.RERUN_UNTIL_FAILURE)
+                args_to_append.append('--max-testcase-run-count')
+                args_to_append.append(str(extra_args[arg]))
+                continue
+            if constants.RETRY_ANY_FAILURE == arg:
+                args_to_append.append('--retry-strategy')
+                args_to_append.append(constants.RETRY_ANY_FAILURE)
+                args_to_append.append('--max-testcase-run-count')
+                args_to_append.append(str(extra_args[arg]))
+                continue
+            if constants.COLLECT_TESTS_ONLY == arg:
+                args_to_append.append('--collect-tests-only')
+                continue
+            if constants.TF_DEBUG == arg:
+                print("Please attach process to your IDE...")
+                continue
+            args_not_supported.append(arg)
+        return args_to_append, args_not_supported
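+
+    # Sketch (hypothetical input): extra_args of
+    # {constants.SERIAL: 'emulator-5554'} returns
+    # (['--serial', 'emulator-5554'], []).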
+
+    def _generate_metrics_folder(self, extra_args):
+        """Generate metrics folder."""
+        metrics_folder = ''
+        if extra_args.get(constants.PRE_PATCH_ITERATIONS):
+            metrics_folder = os.path.join(self.results_dir, 'baseline-metrics')
+        elif extra_args.get(constants.POST_PATCH_ITERATIONS):
+            metrics_folder = os.path.join(self.results_dir, 'new-metrics')
+        return metrics_folder
+
+    def _generate_iterations(self, extra_args):
+        """Generate iterations."""
+        iterations = 1
+        if extra_args.get(constants.PRE_PATCH_ITERATIONS):
+            iterations = extra_args.pop(constants.PRE_PATCH_ITERATIONS)
+        elif extra_args.get(constants.POST_PATCH_ITERATIONS):
+            iterations = extra_args.pop(constants.POST_PATCH_ITERATIONS)
+        return iterations
+
+    def generate_run_commands(self, test_infos, extra_args, port=None):
+        """Generate a single run command from TestInfos.
+
+        Args:
+            test_infos: A set of TestInfo instances.
+            extra_args: A Dict of extra args to append.
+            port: Optional. An int of the port number to send events to. If
+                  None, then subprocess reporter in TF won't try to connect.
+
+        Returns:
+            A list that contains the string of atest tradefed run command.
+            Only one command is returned.
+        """
+        args = self._create_test_args(test_infos)
+        metrics_folder = self._generate_metrics_folder(extra_args)
+
+        # Create a copy of args as more args could be added to the list.
+        test_args = list(args)
+        if port:
+            test_args.extend(['--subprocess-report-port', str(port)])
+        if metrics_folder:
+            test_args.extend(['--metrics-folder', metrics_folder])
+            logging.info('Saved metrics in: %s', metrics_folder)
+        log_level = 'WARN'
+        if self.is_verbose:
+            log_level = 'VERBOSE'
+            test_args.extend(['--log-level-display', log_level])
+        test_args.extend(['--log-level', log_level])
+
+        args_to_add, args_not_supported = self._parse_extra_args(extra_args)
+
+        # TODO(b/122889707) Remove this after finding the root cause.
+        env_serial = os.environ.get(constants.ANDROID_SERIAL)
+        # Use the env variable ANDROID_SERIAL if it's set by user but only when
+        # the target tests are not deviceless tests.
+        if env_serial and '--serial' not in args_to_add and '-n' not in args_to_add:
+            args_to_add.append("--serial")
+            args_to_add.append(env_serial)
+
+        test_args.extend(args_to_add)
+        if args_not_supported:
+            logging.info('%s does not support the following args %s',
+                         self.EXECUTABLE, args_not_supported)
+
+        # Only need to check one TestInfo to determine if the tests are
+        # configured in TEST_MAPPING.
+        for_test_mapping = test_infos and test_infos[0].from_test_mapping
+        test_args.extend(atest_utils.get_result_server_args(for_test_mapping))
+        self.run_cmd_dict['args'] = ' '.join(test_args)
+        self.run_cmd_dict['tf_customize_template'] = (
+            self._extract_customize_tf_templates(extra_args))
+        return [self._RUN_CMD.format(**self.run_cmd_dict)]
+
+    def _flatten_test_infos(self, test_infos):
+        """Sort and group test_infos by module_name and sort and group filters
+        by class name.
+
+            Example of three test_infos in a set:
+                Module1, {(classA, {})}
+                Module1, {(classB, {Method1})}
+                Module1, {(classB, {Method2})}
+            Becomes a set with one element:
+                Module1, {(ClassA, {}), (ClassB, {Method1, Method2})}
+            Where:
+                  Each line is a test_info namedtuple
+                  {} = Frozenset
+                  () = TestFilter namedtuple
+
+        Args:
+            test_infos: A set of TestInfo namedtuples.
+
+        Returns:
+            A set of TestInfos flattened.
+        """
+        results = set()
+        key = lambda x: x.test_name
+        for module, group in atest_utils.sort_and_group(test_infos, key):
+            # module is a string, group is a generator of grouped TestInfos.
+            # Module Test, so flatten test_infos:
+            no_filters = False
+            filters = set()
+            test_runner = None
+            test_finder = None
+            build_targets = set()
+            data = {}
+            module_args = []
+            for test_info_i in group:
+                data.update(test_info_i.data)
+                # Extend data with constants.TI_MODULE_ARG instead of overwriting.
+                module_args.extend(test_info_i.data.get(constants.TI_MODULE_ARG, []))
+                test_runner = test_info_i.test_runner
+                test_finder = test_info_i.test_finder
+                build_targets |= test_info_i.build_targets
+                test_filters = test_info_i.data.get(constants.TI_FILTER)
+                if not test_filters or no_filters:
+                    # test_info wants whole module run, so hardcode no filters.
+                    no_filters = True
+                    filters = set()
+                    continue
+                filters |= test_filters
+            if module_args:
+                data[constants.TI_MODULE_ARG] = module_args
+            data[constants.TI_FILTER] = self._flatten_test_filters(filters)
+            results.add(
+                test_info.TestInfo(test_name=module,
+                                   test_runner=test_runner,
+                                   test_finder=test_finder,
+                                   build_targets=build_targets,
+                                   data=data))
+        return results
+
+    @staticmethod
+    def _flatten_test_filters(filters):
+        """Sort and group test_filters by class_name.
+
+            Example of three test_filters in a frozenset:
+                classA, {}
+                classB, {Method1}
+                classB, {Method2}
+            Becomes a frozenset with these elements:
+                classA, {}
+                classB, {Method1, Method2}
+            Where:
+                Each line is a TestFilter namedtuple
+                {} = Frozenset
+
+        Args:
+            filters: A frozenset of test_filters.
+
+        Returns:
+            A frozenset of test_filters flattened.
+        """
+        results = set()
+        key = lambda x: x.class_name
+        for class_name, group in atest_utils.sort_and_group(filters, key):
+            # class_name is a string, group is a generator of TestFilters
+            assert class_name is not None
+            methods = set()
+            for test_filter in group:
+                if not test_filter.methods:
+                    # Whole class should be run
+                    methods = set()
+                    break
+                methods |= test_filter.methods
+            results.add(test_info.TestFilter(class_name, frozenset(methods)))
+        return frozenset(results)
+
+    def _create_test_args(self, test_infos):
+        """Compile TF command line args based on the given test infos.
+
+        Args:
+            test_infos: A set of TestInfo instances.
+
+        Returns: A list of TF arguments to run the tests.
+        """
+        args = []
+        if not test_infos:
+            return []
+
+        test_infos = self._flatten_test_infos(test_infos)
+        # Sort the tests so that every run produces the same command,
+        # which makes dry-run verification possible.
+        test_infos = list(test_infos)
+        test_infos.sort()
+        has_integration_test = False
+        for info in test_infos:
+            # Integration tests live in TF's jar, so TF must keep loading
+            # configs from its jar when an integration finder is used.
+            if info.test_finder in _INTEGRATION_FINDERS:
+                has_integration_test = True
+            args.extend([constants.TF_INCLUDE_FILTER, info.test_name])
+            filters = set()
+            for test_filter in info.data.get(constants.TI_FILTER, []):
+                filters.update(test_filter.to_set_of_tf_strings())
+            for test_filter in filters:
+                filter_arg = constants.TF_ATEST_INCLUDE_FILTER_VALUE_FMT.format(
+                    test_name=info.test_name, test_filter=test_filter)
+                args.extend([constants.TF_ATEST_INCLUDE_FILTER, filter_arg])
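+            # Route each TI_MODULE_ARG pair to the matching TF option below:
+            # include/exclude filter options become suite filters scoped to
+            # this module, anything else is forwarded via TF_MODULE_ARG.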
+            for option in info.data.get(constants.TI_MODULE_ARG, []):
+                if constants.TF_INCLUDE_FILTER_OPTION == option[0]:
+                    suite_filter = (
+                        constants.TF_SUITE_FILTER_ARG_VALUE_FMT.format(
+                            test_name=info.test_name, option_value=option[1]))
+                    args.extend([constants.TF_INCLUDE_FILTER, suite_filter])
+                elif constants.TF_EXCLUDE_FILTER_OPTION == option[0]:
+                    suite_filter = (
+                        constants.TF_SUITE_FILTER_ARG_VALUE_FMT.format(
+                            test_name=info.test_name, option_value=option[1]))
+                    args.extend([constants.TF_EXCLUDE_FILTER, suite_filter])
+                else:
+                    module_arg = (
+                        constants.TF_MODULE_ARG_VALUE_FMT.format(
+                            test_name=info.test_name, option_name=option[0],
+                            option_value=option[1]))
+                    args.extend([constants.TF_MODULE_ARG, module_arg])
+        # TODO(b/141090547): Pass the config path to TF to load configs.
+        # Skip loading config jars in TF only when no test uses an integration
+        # finder (an unset finder counts as one, per the unittests).
+        if not has_integration_test:
+            args.append(constants.TF_SKIP_LOADING_CONFIG_JAR)
+        return args
+
+    def _extract_rerun_options(self, extra_args):
+        """Extract rerun options to a string for output.
+
+        Args:
+            extra_args: Dict of extra args for test runners to use.
+
+        Returns: A string of rerun options.
+        """
+        extracted_options = ['{} {}'.format(arg, extra_args[arg])
+                             for arg in extra_args
+                             if arg in self._RERUN_OPTION_GROUP]
+        return ' '.join(extracted_options)
+
+    def _extract_customize_tf_templates(self, extra_args):
+        """Extract tradefed template options to a string for output.
+
+        Args:
+            extra_args: Dict of extra args for test runners to use.
+
+        Returns: A string of tradefed template options.
+        """
+        return ''.join(['--template:map %s '
+                        % x for x in extra_args.get(constants.TF_TEMPLATE, [])])
diff --git a/atest-py2/test_runners/atest_tf_test_runner_unittest.py b/atest-py2/test_runners/atest_tf_test_runner_unittest.py
new file mode 100755
index 0000000..5344ba0
--- /dev/null
+++ b/atest-py2/test_runners/atest_tf_test_runner_unittest.py
@@ -0,0 +1,643 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for atest_tf_test_runner."""
+
+import os
+import sys
+import tempfile
+import unittest
+import json
+import mock
+
+# pylint: disable=import-error
+import constants
+import unittest_constants as uc
+import unittest_utils
+import atest_tf_test_runner as atf_tr
+import event_handler
+from test_finders import test_info
+
+if sys.version_info[0] == 2:
+    from StringIO import StringIO
+else:
+    from io import StringIO
+
+#pylint: disable=protected-access
+#pylint: disable=invalid-name
+TEST_INFO_DIR = '/tmp/atest_run_1510085893_pi_Nbi'
+METRICS_DIR = '%s/baseline-metrics' % TEST_INFO_DIR
+METRICS_DIR_ARG = '--metrics-folder %s ' % METRICS_DIR
+# TODO(147567606): Replace {serial} with {extra_args} for general extra
+# arguments testing.
+RUN_CMD_ARGS = '{metrics}--log-level WARN{serial}'
+LOG_ARGS = atf_tr.AtestTradefedTestRunner._LOG_ARGS.format(
+    log_path=os.path.join(TEST_INFO_DIR, atf_tr.LOG_FOLDER_NAME))
+RUN_CMD = atf_tr.AtestTradefedTestRunner._RUN_CMD.format(
+    exe=atf_tr.AtestTradefedTestRunner.EXECUTABLE,
+    template=atf_tr.AtestTradefedTestRunner._TF_TEMPLATE,
+    tf_customize_template='{tf_customize_template}',
+    args=RUN_CMD_ARGS,
+    log_args=LOG_ARGS)
+FULL_CLASS2_NAME = 'android.jank.cts.ui.SomeOtherClass'
+CLASS2_FILTER = test_info.TestFilter(FULL_CLASS2_NAME, frozenset())
+METHOD2_FILTER = test_info.TestFilter(uc.FULL_CLASS_NAME, frozenset([uc.METHOD2_NAME]))
+MODULE_ARG1 = [(constants.TF_INCLUDE_FILTER_OPTION, "A"),
+               (constants.TF_INCLUDE_FILTER_OPTION, "B")]
+MODULE_ARG2 = []
+CLASS2_METHOD_FILTER = test_info.TestFilter(FULL_CLASS2_NAME,
+                                            frozenset([uc.METHOD_NAME, uc.METHOD2_NAME]))
+MODULE2_INFO = test_info.TestInfo(uc.MODULE2_NAME,
+                                  atf_tr.AtestTradefedTestRunner.NAME,
+                                  set(),
+                                  data={constants.TI_REL_CONFIG: uc.CONFIG2_FILE,
+                                        constants.TI_FILTER: frozenset()})
+CLASS1_BUILD_TARGETS = {'class_1_build_target'}
+CLASS1_INFO = test_info.TestInfo(uc.MODULE_NAME,
+                                 atf_tr.AtestTradefedTestRunner.NAME,
+                                 CLASS1_BUILD_TARGETS,
+                                 data={constants.TI_REL_CONFIG: uc.CONFIG_FILE,
+                                       constants.TI_FILTER: frozenset([uc.CLASS_FILTER])})
+CLASS2_BUILD_TARGETS = {'class_2_build_target'}
+CLASS2_INFO = test_info.TestInfo(uc.MODULE_NAME,
+                                 atf_tr.AtestTradefedTestRunner.NAME,
+                                 CLASS2_BUILD_TARGETS,
+                                 data={constants.TI_REL_CONFIG: uc.CONFIG_FILE,
+                                       constants.TI_FILTER: frozenset([CLASS2_FILTER])})
+CLASS3_BUILD_TARGETS = {'class_3_build_target'}
+CLASS3_INFO = test_info.TestInfo(uc.MODULE_NAME,
+                                 atf_tr.AtestTradefedTestRunner.NAME,
+                                 CLASS3_BUILD_TARGETS,
+                                 data={constants.TI_REL_CONFIG: uc.CONFIG_FILE,
+                                       constants.TI_FILTER: frozenset(),
+                                       constants.TI_MODULE_ARG: MODULE_ARG1})
+CLASS4_BUILD_TARGETS = {'class_4_build_target'}
+CLASS4_INFO = test_info.TestInfo(uc.MODULE_NAME,
+                                 atf_tr.AtestTradefedTestRunner.NAME,
+                                 CLASS4_BUILD_TARGETS,
+                                 data={constants.TI_REL_CONFIG: uc.CONFIG_FILE,
+                                       constants.TI_FILTER: frozenset(),
+                                       constants.TI_MODULE_ARG: MODULE_ARG2})
+CLASS1_CLASS2_MODULE_INFO = test_info.TestInfo(
+    uc.MODULE_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    uc.MODULE_BUILD_TARGETS | CLASS1_BUILD_TARGETS | CLASS2_BUILD_TARGETS,
+    uc.MODULE_DATA)
+FLAT_CLASS_INFO = test_info.TestInfo(
+    uc.MODULE_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    CLASS1_BUILD_TARGETS | CLASS2_BUILD_TARGETS,
+    data={constants.TI_REL_CONFIG: uc.CONFIG_FILE,
+          constants.TI_FILTER: frozenset([uc.CLASS_FILTER, CLASS2_FILTER])})
+FLAT2_CLASS_INFO = test_info.TestInfo(
+    uc.MODULE_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    CLASS3_BUILD_TARGETS | CLASS4_BUILD_TARGETS,
+    data={constants.TI_REL_CONFIG: uc.CONFIG_FILE,
+          constants.TI_FILTER: frozenset(),
+          constants.TI_MODULE_ARG: MODULE_ARG1 + MODULE_ARG2})
+GTF_INT_CONFIG = os.path.join(uc.GTF_INT_DIR, uc.GTF_INT_NAME + '.xml')
+CLASS2_METHOD_INFO = test_info.TestInfo(
+    uc.MODULE_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    set(),
+    data={constants.TI_REL_CONFIG: uc.CONFIG_FILE,
+          constants.TI_FILTER:
+              frozenset([test_info.TestFilter(
+                  FULL_CLASS2_NAME, frozenset([uc.METHOD_NAME, uc.METHOD2_NAME]))])})
+METHOD_AND_CLASS2_METHOD = test_info.TestInfo(
+    uc.MODULE_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    uc.MODULE_BUILD_TARGETS,
+    data={constants.TI_REL_CONFIG: uc.CONFIG_FILE,
+          constants.TI_FILTER: frozenset([uc.METHOD_FILTER, CLASS2_METHOD_FILTER])})
+METHOD_METHOD2_AND_CLASS2_METHOD = test_info.TestInfo(
+    uc.MODULE_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    uc.MODULE_BUILD_TARGETS,
+    data={constants.TI_REL_CONFIG: uc.CONFIG_FILE,
+          constants.TI_FILTER: frozenset([uc.FLAT_METHOD_FILTER, CLASS2_METHOD_FILTER])})
+METHOD2_INFO = test_info.TestInfo(
+    uc.MODULE_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    set(),
+    data={constants.TI_REL_CONFIG: uc.CONFIG_FILE,
+          constants.TI_FILTER: frozenset([METHOD2_FILTER])})
+
+INT_INFO = test_info.TestInfo(
+    uc.INT_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    set(),
+    test_finder='INTEGRATION')
+
+MOD_INFO = test_info.TestInfo(
+    uc.MODULE_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    set(),
+    test_finder='MODULE')
+
+MOD_INFO_NO_TEST_FINDER = test_info.TestInfo(
+    uc.MODULE_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    set())
+
+EVENTS_NORMAL = [
+    ('TEST_MODULE_STARTED', {
+        'moduleContextFileName':'serial-util1146216{974}2772610436.ser',
+        'moduleName':'someTestModule'}),
+    ('TEST_RUN_STARTED', {'testCount': 2}),
+    ('TEST_STARTED', {'start_time':52, 'className':'someClassName',
+                      'testName':'someTestName'}),
+    ('TEST_ENDED', {'end_time':1048, 'className':'someClassName',
+                    'testName':'someTestName'}),
+    ('TEST_STARTED', {'start_time':48, 'className':'someClassName2',
+                      'testName':'someTestName2'}),
+    ('TEST_FAILED', {'className':'someClassName2', 'testName':'someTestName2',
+                     'trace': 'someTrace'}),
+    ('TEST_ENDED', {'end_time':9876450, 'className':'someClassName2',
+                    'testName':'someTestName2'}),
+    ('TEST_RUN_ENDED', {}),
+    ('TEST_MODULE_ENDED', {'foo': 'bar'}),
+]
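+# Each (name, data) pair above is encoded as one socket line of the form
+# '<EVENT_NAME> <json payload>' in the _process_connection tests below.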
+
+class AtestTradefedTestRunnerUnittests(unittest.TestCase):
+    """Unit tests for atest_tf_test_runner.py"""
+
+    def setUp(self):
+        self.tr = atf_tr.AtestTradefedTestRunner(results_dir=TEST_INFO_DIR)
+
+    def tearDown(self):
+        mock.patch.stopall()
+
+    @mock.patch.object(atf_tr.AtestTradefedTestRunner,
+                       '_start_socket_server')
+    @mock.patch.object(atf_tr.AtestTradefedTestRunner,
+                       'run')
+    @mock.patch.object(atf_tr.AtestTradefedTestRunner,
+                       '_create_test_args', return_value=['some_args'])
+    @mock.patch.object(atf_tr.AtestTradefedTestRunner,
+                       'generate_run_commands', return_value='some_cmd')
+    @mock.patch.object(atf_tr.AtestTradefedTestRunner,
+                       '_process_connection', return_value=None)
+    @mock.patch('select.select')
+    @mock.patch('os.killpg', return_value=None)
+    @mock.patch('os.getpgid', return_value=None)
+    @mock.patch('signal.signal', return_value=None)
+    def test_run_tests_pretty(self, _signal, _pgid, _killpg, mock_select,
+                              _process, _run_cmd, _test_args,
+                              mock_run, mock_start_socket_server):
+        """Test _run_tests_pretty method."""
+        mock_subproc = mock.Mock()
+        mock_run.return_value = mock_subproc
+        mock_subproc.returncode = 0
+        mock_subproc.poll.side_effect = [True, True, None]
+        mock_server = mock.Mock()
+        mock_server.getsockname.return_value = ('', '')
+        mock_start_socket_server.return_value = mock_server
+        mock_reporter = mock.Mock()
+
+        # Test no early TF exit
+        mock_conn = mock.Mock()
+        mock_server.accept.return_value = (mock_conn, 'some_addr')
+        mock_server.close.return_value = True
+        mock_select.side_effect = [([mock_server], None, None),
+                                   ([mock_conn], None, None)]
+        self.tr.run_tests_pretty([MODULE2_INFO], {}, mock_reporter)
+
+        # Test early TF exit
+        tmp_file = tempfile.NamedTemporaryFile()
+        with open(tmp_file.name, 'w') as f:
+            f.write("tf msg")
+        self.tr.test_log_file = tmp_file
+        mock_select.side_effect = [([], None, None)]
+        mock_subproc.poll.side_effect = None
+        capture_output = StringIO()
+        sys.stdout = capture_output
+        self.assertRaises(atf_tr.TradeFedExitError, self.tr.run_tests_pretty,
+                          [MODULE2_INFO], {}, mock_reporter)
+        sys.stdout = sys.__stdout__
+        self.assertTrue('tf msg' in capture_output.getvalue())
+
+    @mock.patch.object(atf_tr.AtestTradefedTestRunner, '_process_connection')
+    @mock.patch('select.select')
+    def test_start_monitor_2_connection(self, mock_select, mock_process):
+        """Test _start_monitor method."""
+        mock_server = mock.Mock()
+        mock_subproc = mock.Mock()
+        mock_reporter = mock.Mock()
+        mock_conn1 = mock.Mock()
+        mock_conn2 = mock.Mock()
+        mock_server.accept.side_effect = [(mock_conn1, 'addr 1'),
+                                          (mock_conn2, 'addr 2')]
+        mock_select.side_effect = [([mock_server], None, None),
+                                   ([mock_server], None, None),
+                                   ([mock_conn1], None, None),
+                                   ([mock_conn2], None, None),
+                                   ([mock_conn1], None, None),
+                                   ([mock_conn2], None, None)]
+        mock_process.side_effect = ['abc', 'def', False, False]
+        mock_subproc.poll.side_effect = [None, None, None, None,
+                                         None, True]
+        self.tr._start_monitor(mock_server, mock_subproc, mock_reporter)
+        self.assertEqual(mock_process.call_count, 4)
+        calls = [mock.call.accept(), mock.call.close()]
+        mock_server.assert_has_calls(calls)
+        mock_conn1.assert_has_calls([mock.call.close()])
+        mock_conn2.assert_has_calls([mock.call.close()])
+
+    @mock.patch.object(atf_tr.AtestTradefedTestRunner, '_process_connection')
+    @mock.patch('select.select')
+    def test_start_monitor_tf_exit_before_2nd_connection(self,
+                                                         mock_select,
+                                                         mock_process):
+        """Test _start_monitor method."""
+        mock_server = mock.Mock()
+        mock_subproc = mock.Mock()
+        mock_reporter = mock.Mock()
+        mock_conn1 = mock.Mock()
+        mock_conn2 = mock.Mock()
+        mock_server.accept.side_effect = [(mock_conn1, 'addr 1'),
+                                          (mock_conn2, 'addr 2')]
+        mock_select.side_effect = [([mock_server], None, None),
+                                   ([mock_server], None, None),
+                                   ([mock_conn1], None, None),
+                                   ([mock_conn2], None, None),
+                                   ([mock_conn1], None, None),
+                                   ([mock_conn2], None, None)]
+        mock_process.side_effect = ['abc', 'def', False, False]
+        # TF exits early, but data remaining in the socket buffer has not
+        # been processed yet.
+        mock_subproc.poll.side_effect = [None, None, True, True,
+                                         True, True]
+        self.tr._start_monitor(mock_server, mock_subproc, mock_reporter)
+        self.assertEqual(mock_process.call_count, 4)
+        calls = [mock.call.accept(), mock.call.close()]
+        mock_server.assert_has_calls(calls)
+        mock_conn1.assert_has_calls([mock.call.close()])
+        mock_conn2.assert_has_calls([mock.call.close()])
+
+
+    def test_start_socket_server(self):
+        """Test start_socket_server method."""
+        server = self.tr._start_socket_server()
+        host, port = server.getsockname()
+        self.assertEqual(host, atf_tr.SOCKET_HOST)
+        self.assertLessEqual(port, 65535)
+        self.assertGreaterEqual(port, 1024)
+        server.close()
+
+    @mock.patch('os.path.exists')
+    @mock.patch.dict('os.environ', {'APE_API_KEY':'/tmp/123.json'})
+    def test_try_set_gts_authentication_key_is_set_by_user(self, mock_exist):
+        """Test try_set_authentication_key_is_set_by_user method."""
+        # Test key is set by user.
+        self.tr._try_set_gts_authentication_key()
+        mock_exist.assert_not_called()
+
+    @mock.patch('os.path.join', return_value='/tmp/file_not_exist.json')
+    def test_try_set_gts_authentication_key_not_set(self, _):
+        """Test try_set_authentication_key_not_set method."""
+        # Delete the environment variable if it's set. This is fine for this
+        # method because it's for validating the APE_API_KEY isn't set.
+        if os.environ.get('APE_API_KEY'):
+            del os.environ['APE_API_KEY']
+        self.tr._try_set_gts_authentication_key()
+        self.assertEqual(os.environ.get('APE_API_KEY'), None)
+
+    @mock.patch.object(event_handler.EventHandler, 'process_event')
+    def test_process_connection(self, mock_pe):
+        """Test _process_connection method."""
+        mock_socket = mock.Mock()
+        for name, data in EVENTS_NORMAL:
+            datas = {mock_socket: ''}
+            socket_data = '%s %s' % (name, json.dumps(data))
+            mock_socket.recv.return_value = socket_data
+            self.tr._process_connection(datas, mock_socket, mock_pe)
+
+        calls = [mock.call.process_event(name, data) for name, data in EVENTS_NORMAL]
+        mock_pe.assert_has_calls(calls)
+        mock_socket.recv.return_value = ''
+        self.assertFalse(self.tr._process_connection(datas, mock_socket, mock_pe))
+
+    @mock.patch.object(event_handler.EventHandler, 'process_event')
+    def test_process_connection_multiple_lines_in_single_recv(self, mock_pe):
+        """Test _process_connection when recv reads multiple lines in one go."""
+        mock_socket = mock.Mock()
+        squashed_events = '\n'.join(['%s %s' % (name, json.dumps(data))
+                                     for name, data in EVENTS_NORMAL])
+        socket_data = [squashed_events, '']
+        mock_socket.recv.side_effect = socket_data
+        datas = {mock_socket: ''}
+        self.tr._process_connection(datas, mock_socket, mock_pe)
+        calls = [mock.call.process_event(name, data) for name, data in EVENTS_NORMAL]
+        mock_pe.assert_has_calls(calls)
+
+    @mock.patch.object(event_handler.EventHandler, 'process_event')
+    def test_process_connection_with_buffering(self, mock_pe):
+        """Test _process_connection when events overflow socket buffer size"""
+        mock_socket = mock.Mock()
+        module_events = [EVENTS_NORMAL[0], EVENTS_NORMAL[-1]]
+        socket_events = ['%s %s' % (name, json.dumps(data))
+                         for name, data in module_events]
+        # test try-block code by breaking apart first event after first }
+        index = socket_events[0].index('}') + 1
+        socket_data = [socket_events[0][:index], socket_events[0][index:]]
+        # test non-try block buffering with second event
+        socket_data.extend([socket_events[1][:-4], socket_events[1][-4:], ''])
+        mock_socket.recv.side_effect = socket_data
+        datas = {mock_socket: ''}
+        self.tr._process_connection(datas, mock_socket, mock_pe)
+        self.tr._process_connection(datas, mock_socket, mock_pe)
+        self.tr._process_connection(datas, mock_socket, mock_pe)
+        self.tr._process_connection(datas, mock_socket, mock_pe)
+        calls = [mock.call.process_event(name, data) for name, data in module_events]
+        mock_pe.assert_has_calls(calls)
+
+    @mock.patch.object(event_handler.EventHandler, 'process_event')
+    def test_process_connection_with_not_completed_event_data(self, mock_pe):
+        """Test _process_connection when event have \n prefix."""
+        mock_socket = mock.Mock()
+        mock_socket.recv.return_value = ('\n%s %s'
+                                         %(EVENTS_NORMAL[0][0],
+                                           json.dumps(EVENTS_NORMAL[0][1])))
+        datas = {mock_socket: ''}
+        self.tr._process_connection(datas, mock_socket, mock_pe)
+        calls = [mock.call.process_event(EVENTS_NORMAL[0][0],
+                                         EVENTS_NORMAL[0][1])]
+        mock_pe.assert_has_calls(calls)
+
+    @mock.patch('os.environ.get', return_value=None)
+    @mock.patch.object(atf_tr.AtestTradefedTestRunner, '_generate_metrics_folder')
+    @mock.patch('atest_utils.get_result_server_args')
+    def test_generate_run_commands_without_serial_env(
+            self, mock_resultargs, mock_metrics, _):
+        """Test generate_run_commands without a device serial in the env."""
+        # Basic run cmd.
+        mock_resultargs.return_value = []
+        mock_metrics.return_value = ''
+        unittest_utils.assert_strict_equal(
+            self,
+            self.tr.generate_run_commands([], {}),
+            [RUN_CMD.format(metrics='',
+                            serial='',
+                            tf_customize_template='')])
+        mock_metrics.return_value = METRICS_DIR
+        unittest_utils.assert_strict_equal(
+            self,
+            self.tr.generate_run_commands([], {}),
+            [RUN_CMD.format(metrics=METRICS_DIR_ARG,
+                            serial='',
+                            tf_customize_template='')])
+        # Run cmd with result server args.
+        result_arg = '--result_arg'
+        mock_resultargs.return_value = [result_arg]
+        mock_metrics.return_value = ''
+        unittest_utils.assert_strict_equal(
+            self,
+            self.tr.generate_run_commands([], {}),
+            [RUN_CMD.format(metrics='',
+                            serial='',
+                            tf_customize_template='') + ' ' + result_arg])
+
+    @mock.patch('os.environ.get')
+    @mock.patch.object(atf_tr.AtestTradefedTestRunner, '_generate_metrics_folder')
+    @mock.patch('atest_utils.get_result_server_args')
+    def test_generate_run_commands_with_serial_env(
+            self, mock_resultargs, mock_metrics, mock_env):
+        """Test generate_run_commands with a device serial in the env."""
+        # Basic run cmd.
+        env_device_serial = 'env-device-0'
+        mock_resultargs.return_value = []
+        mock_metrics.return_value = ''
+        mock_env.return_value = env_device_serial
+        env_serial_arg = ' --serial %s' % env_device_serial
+        # Serial env var is set and no --serial arg is given.
+        unittest_utils.assert_strict_equal(
+            self,
+            self.tr.generate_run_commands([], {}),
+            [RUN_CMD.format(metrics='',
+                            serial=env_serial_arg,
+                            tf_customize_template='')])
+        # Serial env var is set but a --serial arg overrides it.
+        arg_device_serial = 'arg-device-0'
+        arg_serial_arg = ' --serial %s' % arg_device_serial
+        unittest_utils.assert_strict_equal(
+            self,
+            self.tr.generate_run_commands([], {constants.SERIAL:arg_device_serial}),
+            [RUN_CMD.format(metrics='',
+                            serial=arg_serial_arg,
+                            tf_customize_template='')])
+        # Serial env var is set but the -n (host) arg drops the serial.
+        unittest_utils.assert_strict_equal(
+            self,
+            self.tr.generate_run_commands([], {constants.HOST: True}),
+            [RUN_CMD.format(metrics='',
+                            serial='',
+                            tf_customize_template='') +
+             ' -n --prioritize-host-config --skip-host-arch-check'])
+
+
+    def test_flatten_test_filters(self):
+        """Test _flatten_test_filters method."""
+        # No Flattening
+        filters = self.tr._flatten_test_filters({uc.CLASS_FILTER})
+        unittest_utils.assert_strict_equal(self, frozenset([uc.CLASS_FILTER]),
+                                           filters)
+        filters = self.tr._flatten_test_filters({CLASS2_FILTER})
+        unittest_utils.assert_strict_equal(
+            self, frozenset([CLASS2_FILTER]), filters)
+        filters = self.tr._flatten_test_filters({uc.METHOD_FILTER})
+        unittest_utils.assert_strict_equal(
+            self, frozenset([uc.METHOD_FILTER]), filters)
+        filters = self.tr._flatten_test_filters({uc.METHOD_FILTER,
+                                                 CLASS2_METHOD_FILTER})
+        unittest_utils.assert_strict_equal(
+            self, frozenset([uc.METHOD_FILTER, CLASS2_METHOD_FILTER]), filters)
+        # Flattening
+        filters = self.tr._flatten_test_filters({uc.METHOD_FILTER,
+                                                 METHOD2_FILTER})
+        unittest_utils.assert_strict_equal(
+            self, filters, frozenset([uc.FLAT_METHOD_FILTER]))
+        filters = self.tr._flatten_test_filters({uc.METHOD_FILTER,
+                                                 METHOD2_FILTER,
+                                                 CLASS2_METHOD_FILTER,})
+        unittest_utils.assert_strict_equal(
+            self, filters, frozenset([uc.FLAT_METHOD_FILTER,
+                                      CLASS2_METHOD_FILTER]))
+
+    def test_flatten_test_infos(self):
+        """Test _flatten_test_infos method."""
+        # No Flattening
+        test_infos = self.tr._flatten_test_infos({uc.MODULE_INFO})
+        unittest_utils.assert_equal_testinfo_sets(self, test_infos,
+                                                  {uc.MODULE_INFO})
+
+        test_infos = self.tr._flatten_test_infos([uc.MODULE_INFO, MODULE2_INFO])
+        unittest_utils.assert_equal_testinfo_sets(
+            self, test_infos, {uc.MODULE_INFO, MODULE2_INFO})
+
+        test_infos = self.tr._flatten_test_infos({CLASS1_INFO})
+        unittest_utils.assert_equal_testinfo_sets(self, test_infos,
+                                                  {CLASS1_INFO})
+
+        test_infos = self.tr._flatten_test_infos({uc.INT_INFO})
+        unittest_utils.assert_equal_testinfo_sets(self, test_infos,
+                                                  {uc.INT_INFO})
+
+        test_infos = self.tr._flatten_test_infos({uc.METHOD_INFO})
+        unittest_utils.assert_equal_testinfo_sets(self, test_infos,
+                                                  {uc.METHOD_INFO})
+
+        # Flattening
+        test_infos = self.tr._flatten_test_infos({CLASS1_INFO, CLASS2_INFO})
+        unittest_utils.assert_equal_testinfo_sets(self, test_infos,
+                                                  {FLAT_CLASS_INFO})
+
+        test_infos = self.tr._flatten_test_infos({CLASS1_INFO, uc.INT_INFO,
+                                                  CLASS2_INFO})
+        unittest_utils.assert_equal_testinfo_sets(self, test_infos,
+                                                  {uc.INT_INFO,
+                                                   FLAT_CLASS_INFO})
+
+        test_infos = self.tr._flatten_test_infos({CLASS1_INFO, uc.MODULE_INFO,
+                                                  CLASS2_INFO})
+        unittest_utils.assert_equal_testinfo_sets(self, test_infos,
+                                                  {CLASS1_CLASS2_MODULE_INFO})
+
+        test_infos = self.tr._flatten_test_infos({MODULE2_INFO, uc.INT_INFO,
+                                                  CLASS1_INFO, CLASS2_INFO,
+                                                  uc.GTF_INT_INFO})
+        unittest_utils.assert_equal_testinfo_sets(self, test_infos,
+                                                  {uc.INT_INFO, uc.GTF_INT_INFO,
+                                                   FLAT_CLASS_INFO,
+                                                   MODULE2_INFO})
+
+        test_infos = self.tr._flatten_test_infos({uc.METHOD_INFO,
+                                                  CLASS2_METHOD_INFO})
+        unittest_utils.assert_equal_testinfo_sets(self, test_infos,
+                                                  {METHOD_AND_CLASS2_METHOD})
+
+        test_infos = self.tr._flatten_test_infos({uc.METHOD_INFO, METHOD2_INFO,
+                                                  CLASS2_METHOD_INFO})
+        unittest_utils.assert_equal_testinfo_sets(
+            self, test_infos, {METHOD_METHOD2_AND_CLASS2_METHOD})
+        test_infos = self.tr._flatten_test_infos({uc.METHOD_INFO, METHOD2_INFO,
+                                                  CLASS2_METHOD_INFO,
+                                                  MODULE2_INFO,
+                                                  uc.INT_INFO})
+        unittest_utils.assert_equal_testinfo_sets(
+            self, test_infos, {uc.INT_INFO, MODULE2_INFO,
+                               METHOD_METHOD2_AND_CLASS2_METHOD})
+
+        test_infos = self.tr._flatten_test_infos({CLASS3_INFO, CLASS4_INFO})
+        unittest_utils.assert_equal_testinfo_sets(self, test_infos,
+                                                  {FLAT2_CLASS_INFO})
+
+    def test_create_test_args(self):
+        """Test _create_test_args method."""
+        # '--skip-loading-config-jar' is only added when no INTEGRATION
+        # finder is involved and every finder property is set.
+        args = self.tr._create_test_args([MOD_INFO])
+        self.assertTrue(constants.TF_SKIP_LOADING_CONFIG_JAR in args)
+
+        args = self.tr._create_test_args([INT_INFO])
+        self.assertFalse(constants.TF_SKIP_LOADING_CONFIG_JAR in args)
+
+        args = self.tr._create_test_args([MOD_INFO_NO_TEST_FINDER])
+        self.assertFalse(constants.TF_SKIP_LOADING_CONFIG_JAR in args)
+
+        args = self.tr._create_test_args([MOD_INFO_NO_TEST_FINDER, INT_INFO])
+        self.assertFalse(constants.TF_SKIP_LOADING_CONFIG_JAR in args)
+
+        args = self.tr._create_test_args([MOD_INFO_NO_TEST_FINDER, INT_INFO, MOD_INFO])
+        self.assertFalse(constants.TF_SKIP_LOADING_CONFIG_JAR in args)
+
+
+    @mock.patch('os.environ.get', return_value=None)
+    @mock.patch.object(atf_tr.AtestTradefedTestRunner, '_generate_metrics_folder')
+    @mock.patch('atest_utils.get_result_server_args')
+    def test_generate_run_commands_with_tf_template(
+            self, mock_resultargs, mock_metrics, _):
+        """Test generate_run_commands with TF template options."""
+        tf_template_key1 = 'tf_template_key1'
+        tf_template_val1 = 'tf_template_val1'
+        tf_template_key2 = 'tf_template_key2'
+        tf_template_val2 = 'tf_template_val2'
+        # Test with only one tradefed template command.
+        mock_resultargs.return_value = []
+        mock_metrics.return_value = ''
+        extra_args = {constants.TF_TEMPLATE:
+                          ['{}={}'.format(tf_template_key1,
+                                          tf_template_val1)]}
+        unittest_utils.assert_strict_equal(
+            self,
+            self.tr.generate_run_commands([], extra_args),
+            [RUN_CMD.format(
+                metrics='',
+                serial='',
+                tf_customize_template=
+                '--template:map {}={} ').format(tf_template_key1,
+                                                tf_template_val1)])
+        # Test with two tradefed template commands.
+        extra_args = {constants.TF_TEMPLATE:
+                          ['{}={}'.format(tf_template_key1,
+                                          tf_template_val1),
+                           '{}={}'.format(tf_template_key2,
+                                          tf_template_val2)]}
+        unittest_utils.assert_strict_equal(
+            self,
+            self.tr.generate_run_commands([], extra_args),
+            [RUN_CMD.format(
+                metrics='',
+                serial='',
+                tf_customize_template=
+                '--template:map {}={} --template:map {}={} ').format(
+                    tf_template_key1,
+                    tf_template_val1,
+                    tf_template_key2,
+                    tf_template_val2)])
+
+    @mock.patch('os.environ.get', return_value=None)
+    @mock.patch.object(atf_tr.AtestTradefedTestRunner, '_generate_metrics_folder')
+    @mock.patch('atest_utils.get_result_server_args')
+    def test_generate_run_commands_collect_tests_only(self,
+                                                      mock_resultargs,
+                                                      mock_metrics, _):
+        """Test generate_run_commands with and without collect-tests-only."""
+        # Test without collect-tests-only.
+        mock_resultargs.return_value = []
+        mock_metrics.return_value = ''
+        extra_args = {}
+        unittest_utils.assert_strict_equal(
+            self,
+            self.tr.generate_run_commands([], extra_args),
+            [RUN_CMD.format(
+                metrics='',
+                serial='',
+                tf_customize_template='')])
+        # Test with collect-tests-only.
+        mock_resultargs.return_value = []
+        mock_metrics.return_value = ''
+        extra_args = {constants.COLLECT_TESTS_ONLY: True}
+        unittest_utils.assert_strict_equal(
+            self,
+            self.tr.generate_run_commands([], extra_args),
+            [RUN_CMD.format(
+                metrics='',
+                serial=' --collect-tests-only',
+                tf_customize_template='')])
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/test_runners/event_handler.py b/atest-py2/test_runners/event_handler.py
new file mode 100644
index 0000000..efe0236
--- /dev/null
+++ b/atest-py2/test_runners/event_handler.py
@@ -0,0 +1,287 @@
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Atest test event handler class.
+"""
+
+from __future__ import print_function
+from collections import deque
+from datetime import timedelta
+import time
+import logging
+
+import atest_execution_info
+
+from test_runners import test_runner_base
+
+
+EVENT_NAMES = {'module_started': 'TEST_MODULE_STARTED',
+               'module_ended': 'TEST_MODULE_ENDED',
+               'run_started': 'TEST_RUN_STARTED',
+               'run_ended': 'TEST_RUN_ENDED',
+               # Next three are test-level events
+               'test_started': 'TEST_STARTED',
+               'test_failed': 'TEST_FAILED',
+               'test_ended': 'TEST_ENDED',
+               # The next two failures are runner-level, not test-level.
+               # Invocation failure is broader than run failure.
+               'run_failed': 'TEST_RUN_FAILED',
+               'invocation_failed': 'INVOCATION_FAILED',
+               'test_ignored': 'TEST_IGNORED',
+               'test_assumption_failure': 'TEST_ASSUMPTION_FAILURE',
+               'log_association': 'LOG_ASSOCIATION'}
+
+EVENT_PAIRS = {EVENT_NAMES['module_started']: EVENT_NAMES['module_ended'],
+               EVENT_NAMES['run_started']: EVENT_NAMES['run_ended'],
+               EVENT_NAMES['test_started']: EVENT_NAMES['test_ended']}
+START_EVENTS = list(EVENT_PAIRS.keys())
+END_EVENTS = list(EVENT_PAIRS.values())
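+# Start events are pushed onto EventHandler.event_stack and every end event
+# must pop its matching start event (e.g. TEST_MODULE_STARTED pairs with
+# TEST_MODULE_ENDED); see _check_events_are_balanced below.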
+TEST_NAME_TEMPLATE = '%s#%s'
+EVENTS_NOT_BALANCED = ('Error: Saw %s start event and %s end event. These '
+                       'should be balanced!')
+
+# Time constants, in milliseconds.
+ONE_SECOND = 1000
+ONE_MINUTE = 60000
+ONE_HOUR = 3600000
+
+CONNECTION_STATE = {
+    'current_test': None,
+    'test_run_name': None,
+    'last_failed': None,
+    'last_ignored': None,
+    'last_assumption_failed': None,
+    'current_group': None,
+    'current_group_total': None,
+    'test_count': 0,
+    'test_start_time': None}
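+# Each EventHandler copies this template in __init__, so separate handler
+# instances never share mutable state; a shallow copy is safe because every
+# value here is immutable.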
+
+class EventHandleError(Exception):
+    """Raised when handle event error."""
+
+class EventHandler(object):
+    """Test Event handle class."""
+
+    def __init__(self, reporter, name):
+        self.reporter = reporter
+        self.runner_name = name
+        self.state = CONNECTION_STATE.copy()
+        self.event_stack = deque()
+
+    def _module_started(self, event_data):
+        if atest_execution_info.PREPARE_END_TIME is None:
+            atest_execution_info.PREPARE_END_TIME = time.time()
+        self.state['current_group'] = event_data['moduleName']
+        self.state['last_failed'] = None
+        self.state['current_test'] = None
+
+    def _run_started(self, event_data):
+        # Technically there can be more than one run per module.
+        self.state['test_run_name'] = event_data.setdefault('runName', '')
+        self.state['current_group_total'] = event_data['testCount']
+        self.state['test_count'] = 0
+        self.state['last_failed'] = None
+        self.state['current_test'] = None
+
+    def _test_started(self, event_data):
+        name = TEST_NAME_TEMPLATE % (event_data['className'],
+                                     event_data['testName'])
+        self.state['current_test'] = name
+        self.state['test_count'] += 1
+        self.state['test_start_time'] = event_data['start_time']
+
+    def _test_failed(self, event_data):
+        name = TEST_NAME_TEMPLATE % (event_data['className'],
+                                     event_data['testName'])
+        self.state['last_failed'] = {'name': name,
+                                     'trace': event_data['trace']}
+
+    def _test_ignored(self, event_data):
+        name = TEST_NAME_TEMPLATE % (event_data['className'],
+                                     event_data['testName'])
+        self.state['last_ignored'] = name
+
+    def _test_assumption_failure(self, event_data):
+        name = TEST_NAME_TEMPLATE % (event_data['className'],
+                                     event_data['testName'])
+        self.state['last_assumption_failed'] = name
+
+    def _run_failed(self, event_data):
+        # Module and Test Run probably started, but failure occurred.
+        self.reporter.process_test_result(test_runner_base.TestResult(
+            runner_name=self.runner_name,
+            group_name=self.state['current_group'],
+            test_name=self.state['current_test'],
+            status=test_runner_base.ERROR_STATUS,
+            details=event_data['reason'],
+            test_count=self.state['test_count'],
+            test_time='',
+            runner_total=None,
+            group_total=self.state['current_group_total'],
+            additional_info={},
+            test_run_name=self.state['test_run_name']))
+
+    def _invocation_failed(self, event_data):
+        # Broadest possible failure. May not even start the module/test run.
+        self.reporter.process_test_result(test_runner_base.TestResult(
+            runner_name=self.runner_name,
+            group_name=self.state['current_group'],
+            test_name=self.state['current_test'],
+            status=test_runner_base.ERROR_STATUS,
+            details=event_data['cause'],
+            test_count=self.state['test_count'],
+            test_time='',
+            runner_total=None,
+            group_total=self.state['current_group_total'],
+            additional_info={},
+            test_run_name=self.state['test_run_name']))
+
+    def _run_ended(self, event_data):
+        pass
+
+    def _module_ended(self, event_data):
+        pass
+
+    def _test_ended(self, event_data):
+        name = TEST_NAME_TEMPLATE % (event_data['className'],
+                                     event_data['testName'])
+        test_time = ''
+        if self.state['test_start_time']:
+            test_time = self._calc_duration(event_data['end_time'] -
+                                            self.state['test_start_time'])
+        if self.state['last_failed'] and name == self.state['last_failed']['name']:
+            status = test_runner_base.FAILED_STATUS
+            trace = self.state['last_failed']['trace']
+            self.state['last_failed'] = None
+        elif (self.state['last_assumption_failed'] and
+              name == self.state['last_assumption_failed']):
+            status = test_runner_base.ASSUMPTION_FAILED
+            self.state['last_assumption_failed'] = None
+            trace = None
+        elif self.state['last_ignored'] and name == self.state['last_ignored']:
+            status = test_runner_base.IGNORED_STATUS
+            self.state['last_ignored'] = None
+            trace = None
+        else:
+            status = test_runner_base.PASSED_STATUS
+            trace = None
+
+        default_event_keys = ['className', 'end_time', 'testName']
+        additional_info = {}
+        for event_key in event_data.keys():
+            if event_key not in default_event_keys:
+                additional_info[event_key] = event_data.get(event_key, None)
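+        # Keys beyond the defaults (e.g. perf data such as 'cpu_time' or
+        # 'iterations') are forwarded to the reporter untouched.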
+
+        self.reporter.process_test_result(test_runner_base.TestResult(
+            runner_name=self.runner_name,
+            group_name=self.state['current_group'],
+            test_name=name,
+            status=status,
+            details=trace,
+            test_count=self.state['test_count'],
+            test_time=test_time,
+            runner_total=None,
+            additional_info=additional_info,
+            group_total=self.state['current_group_total'],
+            test_run_name=self.state['test_run_name']))
+
+    def _log_association(self, event_data):
+        pass
+
+    switch_handler = {EVENT_NAMES['module_started']: _module_started,
+                      EVENT_NAMES['run_started']: _run_started,
+                      EVENT_NAMES['test_started']: _test_started,
+                      EVENT_NAMES['test_failed']: _test_failed,
+                      EVENT_NAMES['test_ignored']: _test_ignored,
+                      EVENT_NAMES['test_assumption_failure']: _test_assumption_failure,
+                      EVENT_NAMES['run_failed']: _run_failed,
+                      EVENT_NAMES['invocation_failed']: _invocation_failed,
+                      EVENT_NAMES['test_ended']: _test_ended,
+                      EVENT_NAMES['run_ended']: _run_ended,
+                      EVENT_NAMES['module_ended']: _module_ended,
+                      EVENT_NAMES['log_association']: _log_association}
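+    # The handlers above are stored as plain (unbound) functions at class
+    # level, so process_event passes `self` explicitly when dispatching.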
+
+    def process_event(self, event_name, event_data):
+        """Process the events of the test run and call reporter with results.
+
+        Args:
+            event_name: A string of the event name.
+            event_data: A dict of event data.
+        """
+        logging.debug('Processing %s %s', event_name, event_data)
+        if event_name in START_EVENTS:
+            self.event_stack.append(event_name)
+        elif event_name in END_EVENTS:
+            self._check_events_are_balanced(event_name, self.reporter)
+        if event_name in self.switch_handler:
+            self.switch_handler[event_name](self, event_data)
+        else:
+            # TODO(b/128875503): Implement a mechanism to report unhandled TF
+            # events.
+            logging.debug('Event[%s] is not processable.', event_name)
+
+    def _check_events_are_balanced(self, event_name, reporter):
+        """Check Start events and End events. They should be balanced.
+
+        If they are not balanced, print the error message in
+        state['last_failed'], then raise TradeFedExitError.
+
+        Args:
+            event_name: A string of the event name.
+            reporter: A ResultReporter instance.
+        Raises:
+            TradeFedExitError if we doesn't have a balance of START/END events.
+        """
+        start_event = self.event_stack.pop() if self.event_stack else None
+        if not start_event or EVENT_PAIRS[start_event] != event_name:
+            # Bubble up the failure trace when a TEST_FAILED event was seen
+            # but its TEST_ENDED never arrived.
+            if self.state['last_failed'] and (start_event ==
+                                              EVENT_NAMES['test_started']):
+                reporter.process_test_result(test_runner_base.TestResult(
+                    runner_name=self.runner_name,
+                    group_name=self.state['current_group'],
+                    test_name=self.state['last_failed']['name'],
+                    status=test_runner_base.FAILED_STATUS,
+                    details=self.state['last_failed']['trace'],
+                    test_count=self.state['test_count'],
+                    test_time='',
+                    runner_total=None,
+                    group_total=self.state['current_group_total'],
+                    additional_info={},
+                    test_run_name=self.state['test_run_name']))
+            raise EventHandleError(EVENTS_NOT_BALANCED % (start_event,
+                                                          event_name))
+
+    @staticmethod
+    def _calc_duration(duration):
+        """Convert duration from ms to 3h2m43.034s.
+
+        Args:
+            duration: millisecond
+
+        Returns:
+            string in h:m:s, m:s, s or millis, depends on the duration.
+        """
+        delta = timedelta(milliseconds=duration)
+        timestamp = str(delta).split(':')  # h:mm:ss.ssssss
+
+        if duration < ONE_SECOND:
+            return "({}ms)".format(duration)
+        elif duration < ONE_MINUTE:
+            return "({:.3f}s)".format(float(timestamp[2]))
+        elif duration < ONE_HOUR:
+            return "({0}m{1:.3f}s)".format(timestamp[1], float(timestamp[2]))
+        return "({0}h{1}m{2:.3f}s)".format(timestamp[0],
+                                           timestamp[1], float(timestamp[2]))
diff --git a/atest-py2/test_runners/event_handler_unittest.py b/atest-py2/test_runners/event_handler_unittest.py
new file mode 100755
index 0000000..09069b2
--- /dev/null
+++ b/atest-py2/test_runners/event_handler_unittest.py
@@ -0,0 +1,348 @@
+#!/usr/bin/env python
+#
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for event_handler."""
+
+import unittest
+import mock
+
+import atest_tf_test_runner as atf_tr
+import event_handler as e_h
+from test_runners import test_runner_base
+
+
+EVENTS_NORMAL = [
+    ('TEST_MODULE_STARTED', {
+        'moduleContextFileName':'serial-util1146216{974}2772610436.ser',
+        'moduleName':'someTestModule'}),
+    ('TEST_RUN_STARTED', {'testCount': 2, 'runName': 'com.android.UnitTests'}),
+    ('TEST_STARTED', {'start_time':52, 'className':'someClassName',
+                      'testName':'someTestName'}),
+    ('TEST_ENDED', {'end_time':1048, 'className':'someClassName',
+                    'testName':'someTestName'}),
+    ('TEST_STARTED', {'start_time':48, 'className':'someClassName2',
+                      'testName':'someTestName2'}),
+    ('TEST_FAILED', {'className':'someClassName2', 'testName':'someTestName2',
+                     'trace': 'someTrace'}),
+    ('TEST_ENDED', {'end_time':9876450, 'className':'someClassName2',
+                    'testName':'someTestName2'}),
+    ('TEST_RUN_ENDED', {}),
+    ('TEST_MODULE_ENDED', {'foo': 'bar'}),
+]
+
+EVENTS_RUN_FAILURE = [
+    ('TEST_MODULE_STARTED', {
+        'moduleContextFileName': 'serial-util11462169742772610436.ser',
+        'moduleName': 'someTestModule'}),
+    ('TEST_RUN_STARTED', {'testCount': 2, 'runName': 'com.android.UnitTests'}),
+    ('TEST_STARTED', {'start_time':10, 'className': 'someClassName',
+                      'testName':'someTestName'}),
+    ('TEST_RUN_FAILED', {'reason': 'someRunFailureReason'})
+]
+
+
+EVENTS_INVOCATION_FAILURE = [
+    ('TEST_RUN_STARTED', {'testCount': None, 'runName': 'com.android.UnitTests'}),
+    ('INVOCATION_FAILED', {'cause': 'someInvocationFailureReason'})
+]
+
+EVENTS_MISSING_TEST_RUN_STARTED_EVENT = [
+    ('TEST_STARTED', {'start_time':52, 'className':'someClassName',
+                      'testName':'someTestName'}),
+    ('TEST_ENDED', {'end_time':1048, 'className':'someClassName',
+                    'testName':'someTestName'}),
+]
+
+EVENTS_NOT_BALANCED_BEFORE_RAISE = [
+    ('TEST_MODULE_STARTED', {
+        'moduleContextFileName':'serial-util1146216{974}2772610436.ser',
+        'moduleName':'someTestModule'}),
+    ('TEST_RUN_STARTED', {'testCount': 2, 'runName': 'com.android.UnitTests'}),
+    ('TEST_STARTED', {'start_time':10, 'className':'someClassName',
+                      'testName':'someTestName'}),
+    ('TEST_ENDED', {'end_time':18, 'className':'someClassName',
+                    'testName':'someTestName'}),
+    ('TEST_STARTED', {'start_time':19, 'className':'someClassName',
+                      'testName':'someTestName'}),
+    ('TEST_FAILED', {'className':'someClassName2', 'testName':'someTestName2',
+                     'trace': 'someTrace'}),
+]
+
+EVENTS_IGNORE = [
+    ('TEST_MODULE_STARTED', {
+        'moduleContextFileName':'serial-util1146216{974}2772610436.ser',
+        'moduleName':'someTestModule'}),
+    ('TEST_RUN_STARTED', {'testCount': 2, 'runName': 'com.android.UnitTests'}),
+    ('TEST_STARTED', {'start_time':8, 'className':'someClassName',
+                      'testName':'someTestName'}),
+    ('TEST_ENDED', {'end_time':18, 'className':'someClassName',
+                    'testName':'someTestName'}),
+    ('TEST_STARTED', {'start_time':28, 'className':'someClassName2',
+                      'testName':'someTestName2'}),
+    ('TEST_IGNORED', {'className':'someClassName2', 'testName':'someTestName2',
+                      'trace': 'someTrace'}),
+    ('TEST_ENDED', {'end_time':90, 'className':'someClassName2',
+                    'testName':'someTestName2'}),
+    ('TEST_RUN_ENDED', {}),
+    ('TEST_MODULE_ENDED', {'foo': 'bar'}),
+]
+
+EVENTS_WITH_PERF_INFO = [
+    ('TEST_MODULE_STARTED', {
+        'moduleContextFileName':'serial-util1146216{974}2772610436.ser',
+        'moduleName':'someTestModule'}),
+    ('TEST_RUN_STARTED', {'testCount': 2, 'runName': 'com.android.UnitTests'}),
+    ('TEST_STARTED', {'start_time':52, 'className':'someClassName',
+                      'testName':'someTestName'}),
+    ('TEST_ENDED', {'end_time':1048, 'className':'someClassName',
+                    'testName':'someTestName'}),
+    ('TEST_STARTED', {'start_time':48, 'className':'someClassName2',
+                      'testName':'someTestName2'}),
+    ('TEST_FAILED', {'className':'someClassName2', 'testName':'someTestName2',
+                     'trace': 'someTrace'}),
+    ('TEST_ENDED', {'end_time':9876450, 'className':'someClassName2',
+                    'testName':'someTestName2', 'cpu_time':'1234.1234(ns)',
+                    'real_time':'5678.5678(ns)', 'iterations':'6666'}),
+    ('TEST_STARTED', {'start_time':10, 'className':'someClassName3',
+                      'testName':'someTestName3'}),
+    ('TEST_ENDED', {'end_time':70, 'className':'someClassName3',
+                    'testName':'someTestName3', 'additional_info_min':'102773',
+                    'additional_info_mean':'105973', 'additional_info_median':'103778'}),
+    ('TEST_RUN_ENDED', {}),
+    ('TEST_MODULE_ENDED', {'foo': 'bar'}),
+]
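+# The extra keys in the TEST_ENDED payloads above (cpu_time, real_time,
+# iterations, additional_info_*) must surface unchanged in
+# TestResult.additional_info; see test_process_event_with_additional_info.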
+
+class EventHandlerUnittests(unittest.TestCase):
+    """Unit tests for event_handler.py"""
+
+    def setUp(self):
+        reload(e_h)
+        self.mock_reporter = mock.Mock()
+        self.fake_eh = e_h.EventHandler(self.mock_reporter,
+                                        atf_tr.AtestTradefedTestRunner.NAME)
+
+    def tearDown(self):
+        mock.patch.stopall()
+
+    def test_process_event_normal_results(self):
+        """Test process_event method for normal test results."""
+        for name, data in EVENTS_NORMAL:
+            self.fake_eh.process_event(name, data)
+        call1 = mock.call(test_runner_base.TestResult(
+            runner_name=atf_tr.AtestTradefedTestRunner.NAME,
+            group_name='someTestModule',
+            test_name='someClassName#someTestName',
+            status=test_runner_base.PASSED_STATUS,
+            details=None,
+            test_count=1,
+            test_time='(996ms)',
+            runner_total=None,
+            group_total=2,
+            additional_info={},
+            test_run_name='com.android.UnitTests'
+        ))
+        call2 = mock.call(test_runner_base.TestResult(
+            runner_name=atf_tr.AtestTradefedTestRunner.NAME,
+            group_name='someTestModule',
+            test_name='someClassName2#someTestName2',
+            status=test_runner_base.FAILED_STATUS,
+            details='someTrace',
+            test_count=2,
+            test_time='(2h44m36.402s)',
+            runner_total=None,
+            group_total=2,
+            additional_info={},
+            test_run_name='com.android.UnitTests'
+        ))
+        self.mock_reporter.process_test_result.assert_has_calls([call1, call2])
+
+    def test_process_event_run_failure(self):
+        """Test process_event method run failure."""
+        for name, data in EVENTS_RUN_FAILURE:
+            self.fake_eh.process_event(name, data)
+        call = mock.call(test_runner_base.TestResult(
+            runner_name=atf_tr.AtestTradefedTestRunner.NAME,
+            group_name='someTestModule',
+            test_name='someClassName#someTestName',
+            status=test_runner_base.ERROR_STATUS,
+            details='someRunFailureReason',
+            test_count=1,
+            test_time='',
+            runner_total=None,
+            group_total=2,
+            additional_info={},
+            test_run_name='com.android.UnitTests'
+        ))
+        self.mock_reporter.process_test_result.assert_has_calls([call])
+
+    def test_process_event_invocation_failure(self):
+        """Test process_event method with invocation failure."""
+        for name, data in EVENTS_INVOCATION_FAILURE:
+            self.fake_eh.process_event(name, data)
+        call = mock.call(test_runner_base.TestResult(
+            runner_name=atf_tr.AtestTradefedTestRunner.NAME,
+            group_name=None,
+            test_name=None,
+            status=test_runner_base.ERROR_STATUS,
+            details='someInvocationFailureReason',
+            test_count=0,
+            test_time='',
+            runner_total=None,
+            group_total=None,
+            additional_info={},
+            test_run_name='com.android.UnitTests'
+        ))
+        self.mock_reporter.process_test_result.assert_has_calls([call])
+
+    def test_process_event_missing_test_run_started_event(self):
+        """Test process_event method for normal test results."""
+        for name, data in EVENTS_MISSING_TEST_RUN_STARTED_EVENT:
+            self.fake_eh.process_event(name, data)
+        call = mock.call(test_runner_base.TestResult(
+            runner_name=atf_tr.AtestTradefedTestRunner.NAME,
+            group_name=None,
+            test_name='someClassName#someTestName',
+            status=test_runner_base.PASSED_STATUS,
+            details=None,
+            test_count=1,
+            test_time='(996ms)',
+            runner_total=None,
+            group_total=None,
+            additional_info={},
+            test_run_name=None
+        ))
+        self.mock_reporter.process_test_result.assert_has_calls([call])
+
+    # pylint: disable=protected-access
+    def test_process_event_not_balanced(self):
+        """Test process_event method with start/end event name not balanced."""
+        for name, data in EVENTS_NOT_BALANCED_BEFORE_RAISE:
+            self.fake_eh.process_event(name, data)
+        call = mock.call(test_runner_base.TestResult(
+            runner_name=atf_tr.AtestTradefedTestRunner.NAME,
+            group_name='someTestModule',
+            test_name='someClassName#someTestName',
+            status=test_runner_base.PASSED_STATUS,
+            details=None,
+            test_count=1,
+            test_time='(8ms)',
+            runner_total=None,
+            group_total=2,
+            additional_info={},
+            test_run_name='com.android.UnitTests'
+        ))
+        self.mock_reporter.process_test_result.assert_has_calls([call])
+        # Event pair: TEST_STARTED -> TEST_RUN_ENDED
+        # It should raise EventHandleError in _check_events_are_balanced()
+        name = 'TEST_RUN_ENDED'
+        data = {}
+        self.assertRaises(e_h.EventHandleError,
+                          self.fake_eh._check_events_are_balanced,
+                          name, self.mock_reporter)
+        # Event pair: TEST_RUN_STARTED -> TEST_MODULE_ENDED
+        # It should raise EventHandleError in _check_events_are_balanced()
+        name = 'TEST_MODULE_ENDED'
+        data = {'foo': 'bar'}
+        self.assertRaises(e_h.EventHandleError,
+                          self.fake_eh._check_events_are_balanced,
+                          name, self.mock_reporter)
+
+    def test_process_event_ignore(self):
+        """Test _process_event method for normal test results."""
+        for name, data in EVENTS_IGNORE:
+            self.fake_eh.process_event(name, data)
+        call1 = mock.call(test_runner_base.TestResult(
+            runner_name=atf_tr.AtestTradefedTestRunner.NAME,
+            group_name='someTestModule',
+            test_name='someClassName#someTestName',
+            status=test_runner_base.PASSED_STATUS,
+            details=None,
+            test_count=1,
+            test_time='(10ms)',
+            runner_total=None,
+            group_total=2,
+            additional_info={},
+            test_run_name='com.android.UnitTests'
+        ))
+        call2 = mock.call(test_runner_base.TestResult(
+            runner_name=atf_tr.AtestTradefedTestRunner.NAME,
+            group_name='someTestModule',
+            test_name='someClassName2#someTestName2',
+            status=test_runner_base.IGNORED_STATUS,
+            details=None,
+            test_count=2,
+            test_time='(62ms)',
+            runner_total=None,
+            group_total=2,
+            additional_info={},
+            test_run_name='com.android.UnitTests'
+        ))
+        self.mock_reporter.process_test_result.assert_has_calls([call1, call2])
+
+    def test_process_event_with_additional_info(self):
+        """Test process_event method with perf information."""
+        for name, data in EVENTS_WITH_PERF_INFO:
+            self.fake_eh.process_event(name, data)
+        call1 = mock.call(test_runner_base.TestResult(
+            runner_name=atf_tr.AtestTradefedTestRunner.NAME,
+            group_name='someTestModule',
+            test_name='someClassName#someTestName',
+            status=test_runner_base.PASSED_STATUS,
+            details=None,
+            test_count=1,
+            test_time='(996ms)',
+            runner_total=None,
+            group_total=2,
+            additional_info={},
+            test_run_name='com.android.UnitTests'
+        ))
+
+        test_additional_info = {'cpu_time':'1234.1234(ns)', 'real_time':'5678.5678(ns)',
+                                'iterations':'6666'}
+        call2 = mock.call(test_runner_base.TestResult(
+            runner_name=atf_tr.AtestTradefedTestRunner.NAME,
+            group_name='someTestModule',
+            test_name='someClassName2#someTestName2',
+            status=test_runner_base.FAILED_STATUS,
+            details='someTrace',
+            test_count=2,
+            test_time='(2h44m36.402s)',
+            runner_total=None,
+            group_total=2,
+            additional_info=test_additional_info,
+            test_run_name='com.android.UnitTests'
+        ))
+
+        test_additional_info2 = {'additional_info_min':'102773',
+                                 'additional_info_mean':'105973',
+                                 'additional_info_median':'103778'}
+        call3 = mock.call(test_runner_base.TestResult(
+            runner_name=atf_tr.AtestTradefedTestRunner.NAME,
+            group_name='someTestModule',
+            test_name='someClassName3#someTestName3',
+            status=test_runner_base.PASSED_STATUS,
+            details=None,
+            test_count=3,
+            test_time='(60ms)',
+            runner_total=None,
+            group_total=2,
+            additional_info=test_additional_info2,
+            test_run_name='com.android.UnitTests'
+        ))
+        self.mock_reporter.process_test_result.assert_has_calls([call1, call2, call3])
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/test_runners/example_test_runner.py b/atest-py2/test_runners/example_test_runner.py
new file mode 100644
index 0000000..dc18112
--- /dev/null
+++ b/atest-py2/test_runners/example_test_runner.py
@@ -0,0 +1,77 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Example test runner class.
+"""
+
+# pylint: disable=import-error
+import test_runner_base
+
+
+class ExampleTestRunner(test_runner_base.TestRunnerBase):
+    """Base Test Runner class."""
+    NAME = 'ExampleTestRunner'
+    EXECUTABLE = 'echo'
+    _RUN_CMD = '{exe} ExampleTestRunner - test:{test}'
+    _BUILD_REQ = set()
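+    # With the class vars above, a hypothetical test named 'hello_world_test'
+    # would yield the run command: 'echo ExampleTestRunner - test:hello_world_test'.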
+
+    def run_tests(self, test_infos, extra_args, reporter):
+        """Run the list of test_infos.
+
+        Args:
+            test_infos: List of TestInfo.
+            extra_args: Dict of extra args to add to test run.
+            reporter: An instance of result_report.ResultReporter
+        """
+        run_cmds = self.generate_run_commands(test_infos, extra_args)
+        for run_cmd in run_cmds:
+            super(ExampleTestRunner, self).run(run_cmd)
+
+    def host_env_check(self):
+        """Check that host env has everything we need.
+
+        We actually can assume the host env is fine because we have the same
+        requirements that atest has. Update this to check for android env vars
+        if that changes.
+        """
+        pass
+
+    def get_test_runner_build_reqs(self):
+        """Return the build requirements.
+
+        Returns:
+            Set of build targets.
+        """
+        return set()
+
+    # pylint: disable=unused-argument
+    def generate_run_commands(self, test_infos, extra_args, port=None):
+        """Generate a list of run commands from TestInfos.
+
+        Args:
+            test_infos: A set of TestInfo instances.
+            extra_args: A Dict of extra args to append.
+            port: Optional. An int of the port number to send events to.
+                  Subprocess reporter in TF won't try to connect if it's None.
+
+        Returns:
+            A list of run commands to run the tests.
+        """
+        run_cmds = []
+        for test_info in test_infos:
+            run_cmd_dict = {'exe': self.EXECUTABLE,
+                            'test': test_info.test_name}
+            run_cmds.append(self._RUN_CMD.format(**run_cmd_dict))
+        return run_cmds
diff --git a/atest-py2/test_runners/regression_test_runner.py b/atest-py2/test_runners/regression_test_runner.py
new file mode 100644
index 0000000..078040a
--- /dev/null
+++ b/atest-py2/test_runners/regression_test_runner.py
@@ -0,0 +1,91 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Regression Detection test runner class.
+"""
+
+# pylint: disable=import-error
+import constants
+from test_runners import test_runner_base
+
+
+class RegressionTestRunner(test_runner_base.TestRunnerBase):
+    """Regression Test Runner class."""
+    NAME = 'RegressionTestRunner'
+    EXECUTABLE = 'tradefed.sh'
+    _RUN_CMD = '{exe} run commandAndExit regression -n {args}'
+    _BUILD_REQ = {'tradefed-core', constants.ATEST_TF_MODULE}
+
+    def __init__(self, results_dir):
+        """Init stuff for base class."""
+        super(RegressionTestRunner, self).__init__(results_dir)
+        self.run_cmd_dict = {'exe': self.EXECUTABLE,
+                             'args': ''}
+
+    # pylint: disable=unused-argument
+    def run_tests(self, test_infos, extra_args, reporter):
+        """Run the list of test_infos.
+
+        Args:
+            test_infos: List of TestInfo.
+            extra_args: Dict of args to add to regression detection test run.
+            reporter: A ResultReporter instance.
+
+        Returns:
+            Return code of the process for running tests.
+        """
+        run_cmds = self.generate_run_commands(test_infos, extra_args)
+        proc = super(RegressionTestRunner, self).run(run_cmds[0],
+                                                     output_to_stdout=True)
+        proc.wait()
+        return proc.returncode
+
+    def host_env_check(self):
+        """Check that host env has everything we need.
+
+        We actually can assume the host env is fine because we have the same
+        requirements that atest has. Update this to check for android env vars
+        if that changes.
+        """
+        pass
+
+    def get_test_runner_build_reqs(self):
+        """Return the build requirements.
+
+        Returns:
+            Set of build targets.
+        """
+        return self._BUILD_REQ
+
+    # pylint: disable=unused-argument
+    def generate_run_commands(self, test_infos, extra_args, port=None):
+        """Generate a list of run commands from TestInfos.
+
+        Args:
+            test_infos: A set of TestInfo instances.
+            extra_args: A Dict of extra args to append.
+            port: Optional. An int of the port number to send events to.
+                  Subprocess reporter in TF won't try to connect if it's None.
+
+        Returns:
+            A list that contains the string of atest tradefed run command.
+            Only one command is returned.
+        """
+        pre = extra_args.pop(constants.PRE_PATCH_FOLDER)
+        post = extra_args.pop(constants.POST_PATCH_FOLDER)
+        args = ['--pre-patch-metrics', pre, '--post-patch-metrics', post]
+        self.run_cmd_dict['args'] = ' '.join(args)
+        run_cmd = self._RUN_CMD.format(**self.run_cmd_dict)
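+        # e.g. with hypothetical metrics folders /tmp/pre and /tmp/post, this
+        # formats to: 'tradefed.sh run commandAndExit regression -n
+        # --pre-patch-metrics /tmp/pre --post-patch-metrics /tmp/post'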
+        return [run_cmd]
diff --git a/atest-py2/test_runners/robolectric_test_runner.py b/atest-py2/test_runners/robolectric_test_runner.py
new file mode 100644
index 0000000..fa34149
--- /dev/null
+++ b/atest-py2/test_runners/robolectric_test_runner.py
@@ -0,0 +1,256 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Robolectric test runner class.
+
+This test runner will be short-lived; once robolectric v2 support lands,
+robolectric tests will be invoked through AtestTFTestRunner.
+"""
+
+import json
+import logging
+import os
+import re
+import tempfile
+import time
+
+from functools import partial
+
+# pylint: disable=import-error
+import atest_utils
+import constants
+
+from event_handler import EventHandler
+from test_runners import test_runner_base
+
+POLL_FREQ_SECS = 0.1
+# A pattern to match an event like the one below:
+# TEST_FAILED {'className':'SomeClass', 'testName':'SomeTestName',
+#             'trace':'{"trace":"AssertionError: <true> is equal to <false>\n
+#                 at FailureStrategy.fail(FailureStrategy.java:24)\n
+#                 at FailureStrategy.fail(FailureStrategy.java:20)\n"}\n\n
+EVENT_RE = re.compile(r'^(?P<event_name>[A-Z_]+) (?P<json_data>{(.\r*|\n)*})(?:\n|$)')
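+# e.g. the illustrative line below matches with event_name='TEST_STARTED' and
+# json_data='{"className":"SomeClass","testName":"someTest"}':
+# TEST_STARTED {"className":"SomeClass","testName":"someTest"}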
+
+
+class RobolectricTestRunner(test_runner_base.TestRunnerBase):
+    """Robolectric Test Runner class."""
+    NAME = 'RobolectricTestRunner'
+    # We don't actually use EXECUTABLE because we're going to use
+    # atest_utils.build to kick off the test but if we don't set it, the base
+    # class will raise an exception.
+    EXECUTABLE = 'make'
+
+    # pylint: disable=useless-super-delegation
+    def __init__(self, results_dir, **kwargs):
+        """Init stuff for robolectric runner class."""
+        super(RobolectricTestRunner, self).__init__(results_dir, **kwargs)
+        self.is_verbose = logging.getLogger().isEnabledFor(logging.DEBUG)
+
+    def run_tests(self, test_infos, extra_args, reporter):
+        """Run the list of test_infos. See base class for more.
+
+        Args:
+            test_infos: A list of TestInfos.
+            extra_args: Dict of extra args to add to test run.
+            reporter: An instance of result_report.ResultReporter.
+
+        Returns:
+            0 if tests succeed, non-zero otherwise.
+        """
+        if os.getenv(test_runner_base.OLD_OUTPUT_ENV_VAR):
+            return self.run_tests_raw(test_infos, extra_args, reporter)
+        return self.run_tests_pretty(test_infos, extra_args, reporter)
+
+    def run_tests_raw(self, test_infos, extra_args, reporter):
+        """Run the list of test_infos with raw output.
+
+        Args:
+            test_infos: List of TestInfo.
+            extra_args: Dict of extra args to add to test run.
+            reporter: A ResultReporter Instance.
+
+        Returns:
+            0 if tests succeed, non-zero otherwise.
+        """
+        reporter.register_unsupported_runner(self.NAME)
+        ret_code = constants.EXIT_CODE_SUCCESS
+        for test_info in test_infos:
+            full_env_vars = self._get_full_build_environ(test_info,
+                                                         extra_args)
+            run_cmd = self.generate_run_commands([test_info], extra_args)[0]
+            subproc = self.run(run_cmd,
+                               output_to_stdout=self.is_verbose,
+                               env_vars=full_env_vars)
+            ret_code |= self.wait_for_subprocess(subproc)
+        return ret_code
+
+    def run_tests_pretty(self, test_infos, extra_args, reporter):
+        """Run the list of test_infos with pretty output mode.
+
+        Args:
+            test_infos: List of TestInfo.
+            extra_args: Dict of extra args to add to test run.
+            reporter: A ResultReporter Instance.
+
+        Returns:
+            0 if tests succeed, non-zero otherwise.
+        """
+        ret_code = constants.EXIT_CODE_SUCCESS
+        for test_info in test_infos:
+            # Create a temp communication file.
+            with tempfile.NamedTemporaryFile(mode='w+r',
+                                             dir=self.results_dir) as event_file:
+                # Prepare build environment parameter.
+                full_env_vars = self._get_full_build_environ(test_info,
+                                                             extra_args,
+                                                             event_file)
+                run_cmd = self.generate_run_commands([test_info], extra_args)[0]
+                subproc = self.run(run_cmd,
+                                   output_to_stdout=self.is_verbose,
+                                   env_vars=full_env_vars)
+                event_handler = EventHandler(reporter, self.NAME)
+                # Start polling.
+                self.handle_subprocess(subproc, partial(self._exec_with_robo_polling,
+                                                        event_file,
+                                                        subproc,
+                                                        event_handler))
+                ret_code |= self.wait_for_subprocess(subproc)
+        return ret_code
+
+    def _get_full_build_environ(self, test_info=None, extra_args=None, event_file=None):
+        """Helper to get the full build environment.
+
+        Args:
+            test_info: TestInfo object.
+            extra_args: Dict of extra args to add to the test run.
+            event_file: A file-like object used as a temporary storage area.
+
+        Returns:
+            A dict of the environment variables for the build, including the
+            robolectric-specific ones.
+        """
+        full_env_vars = os.environ.copy()
+        env_vars = self.generate_env_vars(test_info,
+                                          extra_args,
+                                          event_file)
+        full_env_vars.update(env_vars)
+        return full_env_vars
+
+    def _exec_with_robo_polling(self, communication_file, robo_proc, event_handler):
+        """Poll data from the communication file.
+
+        Exit when the communication file is empty and the subprocess has
+        ended.
+
+        Args:
+            communication_file: A monitored communication file.
+            robo_proc: The subprocess that runs the robolectric tests.
+            event_handler: An EventHandler instance that processes the test
+                           events.
+        """
+        buf = ''
+        while True:
+            # Make sure that ATest gets content from current position.
+            communication_file.seek(0, 1)
+            data = communication_file.read()
+            buf += data
+            reg = re.compile(r'(.|\n)*}\n\n')
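+            # Only hand off buf once it contains at least one complete event,
+            # i.e. text ending in '}\n\n' (e.g. 'TEST_RUN_ENDED {}\n\n');
+            # otherwise keep polling until the subprocess ends.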
+            if not reg.match(buf) or data == '':
+                if robo_proc.poll() is not None:
+                    logging.debug('Build process exited early')
+                    return
+                time.sleep(POLL_FREQ_SECS)
+            else:
+                # Read all new data and handle it at one time.
+                for event in re.split(r'\n\n', buf):
+                    match = EVENT_RE.match(event)
+                    if match:
+                        try:
+                            event_data = json.loads(match.group('json_data'),
+                                                    strict=False)
+                        except ValueError:
+                            # Parse event fail, continue to parse next one.
+                            logging.debug('"%s" is not valid json format.',
+                                          match.group('json_data'))
+                            continue
+                        event_name = match.group('event_name')
+                        event_handler.process_event(event_name, event_data)
+                buf = ''
+
+    @staticmethod
+    def generate_env_vars(test_info, extra_args, event_file=None):
+        """Turn the args into env vars.
+
+        Robolectric tests specify args through env vars, so look for class
+        filters and debug args to apply to the env.
+
+        Args:
+            test_info: TestInfo class that holds the class filter info.
+            extra_args: Dict of extra args to apply for test run.
+            event_file: A file-like object storing the events of robolectric tests.
+
+        Returns:
+            Dict of env vars to pass into invocation.
+        """
+        env_var = {}
+        for arg in extra_args:
+            if constants.WAIT_FOR_DEBUGGER == arg:
+                env_var['DEBUG_ROBOLECTRIC'] = 'true'
+                continue
+        filters = test_info.data.get(constants.TI_FILTER)
+        if filters:
+            robo_filter = next(iter(filters))
+            env_var['ROBOTEST_FILTER'] = robo_filter.class_name
+            if robo_filter.methods:
+                logging.debug('method filtering not supported for robolectric '
+                              'tests yet.')
+        if event_file:
+            env_var['EVENT_FILE_ROBOLECTRIC'] = event_file.name
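+        # Illustrative result: {'ROBOTEST_FILTER': 'SomeClass',
+        #                       'EVENT_FILE_ROBOLECTRIC': '/tmp/tmpXXXXXX'}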
+        return env_var
+
+    def host_env_check(self):
+        """Check that host env has everything we need.
+
+        We actually can assume the host env is fine because we have the same
+        requirements that atest has. Update this to check for android env vars
+        if that changes.
+        """
+        pass
+
+    def get_test_runner_build_reqs(self):
+        """Return the build requirements.
+
+        Returns:
+            Set of build targets.
+        """
+        return set()
+
+    # pylint: disable=unused-argument
+    def generate_run_commands(self, test_infos, extra_args, port=None):
+        """Generate a list of run commands from TestInfos.
+
+        Args:
+            test_infos: A set of TestInfo instances.
+            extra_args: A Dict of extra args to append.
+            port: Optional. An int of the port number to send events to.
+                  Subprocess reporter in TF won't try to connect if it's None.
+
+        Returns:
+            A list of run commands to run the tests.
+        """
+        run_cmds = []
+        for test_info in test_infos:
+            robo_command = atest_utils.get_build_cmd() + [str(test_info.test_name)]
+            run_cmd = ' '.join(robo_command)
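+            # run_cmd is the build command plus the module name, e.g.
+            # 'make SomeRoboTests' (illustrative; the exact prefix depends on
+            # what atest_utils.get_build_cmd() returns).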
+            if constants.DRY_RUN in extra_args:
+                run_cmd = run_cmd.replace(
+                    os.environ.get(constants.ANDROID_BUILD_TOP) + os.sep, '')
+            run_cmds.append(run_cmd)
+        return run_cmds
diff --git a/atest-py2/test_runners/robolectric_test_runner_unittest.py b/atest-py2/test_runners/robolectric_test_runner_unittest.py
new file mode 100755
index 0000000..46164f0
--- /dev/null
+++ b/atest-py2/test_runners/robolectric_test_runner_unittest.py
@@ -0,0 +1,144 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Unittests for robolectric_test_runner."""
+
+import json
+import unittest
+import subprocess
+import tempfile
+import mock
+
+import event_handler
+# pylint: disable=import-error
+from test_finders import test_info
+from test_runners import robolectric_test_runner
+
+# pylint: disable=protected-access
+class RobolectricTestRunnerUnittests(unittest.TestCase):
+    """Unit tests for robolectric_test_runner.py"""
+
+    def setUp(self):
+        self.polling_time = robolectric_test_runner.POLL_FREQ_SECS
+        self.suite_tr = robolectric_test_runner.RobolectricTestRunner(results_dir='')
+
+    def tearDown(self):
+        mock.patch.stopall()
+
+    @mock.patch.object(robolectric_test_runner.RobolectricTestRunner, 'run')
+    def test_run_tests_raw(self, mock_run):
+        """Test run_tests_raw method."""
+        test_infos = [test_info.TestInfo("Robo1",
+                                         "RobolectricTestRunner",
+                                         ["RoboTest"])]
+        extra_args = []
+        mock_subproc = mock.Mock()
+        mock_run.return_value = mock_subproc
+        mock_subproc.returncode = 0
+        mock_reporter = mock.Mock()
+        # Test Build Pass
+        self.assertEqual(
+            0,
+            self.suite_tr.run_tests_raw(test_infos, extra_args, mock_reporter))
+        # Test Build Fail
+        mock_subproc.returncode = 1
+        self.assertNotEqual(
+            0,
+            self.suite_tr.run_tests_raw(test_infos, extra_args, mock_reporter))
+
+    @mock.patch.object(event_handler.EventHandler, 'process_event')
+    def test_exec_with_robo_polling_complete_information(self, mock_pe):
+        """Test _exec_with_robo_polling method."""
+        event_name = 'TEST_STARTED'
+        event_data = {'className':'SomeClass', 'testName':'SomeTestName'}
+
+        json_event_data = json.dumps(event_data)
+        data = '%s %s\n\n' % (event_name, json_event_data)
+        event_file = tempfile.NamedTemporaryFile(mode='w+r', delete=True)
+        subprocess.call("echo -n '%s' >> %s" % (data, event_file.name), shell=True)
+        robo_proc = subprocess.Popen("sleep %s" % str(self.polling_time * 2), shell=True)
+        self.suite_tr._exec_with_robo_polling(event_file, robo_proc, mock_pe)
+        calls = [mock.call.process_event(event_name, event_data)]
+        mock_pe.assert_has_calls(calls)
+
+    @mock.patch.object(event_handler.EventHandler, 'process_event')
+    def test_exec_with_robo_polling_with_partial_info(self, mock_pe):
+        """Test _exec_with_robo_polling method."""
+        event_name = 'TEST_STARTED'
+        event1 = '{"className":"SomeClass","test'
+        event2 = 'Name":"SomeTestName"}\n\n'
+        data1 = '%s %s'%(event_name, event1)
+        data2 = event2
+        event_file = tempfile.NamedTemporaryFile(mode='w+r', delete=True)
+        subprocess.Popen("echo -n '%s' >> %s" %(data1, event_file.name), shell=True)
+        robo_proc = subprocess.Popen("echo '%s' >> %s && sleep %s"
+                                     %(data2,
+                                       event_file.name,
+                                       str(self.polling_time*5)),
+                                     shell=True)
+        self.suite_tr._exec_with_robo_polling(event_file, robo_proc, mock_pe)
+        calls = [mock.call.process_event(event_name,
+                                         json.loads(event1 + event2))]
+        mock_pe.assert_has_calls(calls)
+
+    @mock.patch.object(event_handler.EventHandler, 'process_event')
+    def test_exec_with_robo_polling_with_fail_stacktrace(self, mock_pe):
+        """Test _exec_with_robo_polling method."""
+        event_name = 'TEST_FAILED'
+        event_data = {'className':'SomeClass', 'testName':'SomeTestName',
+                      'trace':'{"trace":"AssertionError: <true> is equal to <false>\n'
+                              'at FailureStrategy.fail(FailureStrategy.java:24)\n'
+                              'at FailureStrategy.fail(FailureStrategy.java:20)\n'}
+        data = '%s %s\n\n' % (event_name, json.dumps(event_data))
+        event_file = tempfile.NamedTemporaryFile(mode='w+r', delete=True)
+        subprocess.call("echo -n '%s' >> %s" % (data, event_file.name), shell=True)
+        robo_proc = subprocess.Popen("sleep %s" % str(self.polling_time * 2), shell=True)
+        self.suite_tr._exec_with_robo_polling(event_file, robo_proc, mock_pe)
+        calls = [mock.call.process_event(event_name, event_data)]
+        mock_pe.assert_has_calls(calls)
+
+    @mock.patch.object(event_handler.EventHandler, 'process_event')
+    def test_exec_with_robo_polling_with_multi_event(self, mock_pe):
+        """Test _exec_with_robo_polling method."""
+        event_file = tempfile.NamedTemporaryFile(mode='w+r', delete=True)
+        events = [
+            ('TEST_MODULE_STARTED', {
+                'moduleContextFileName':'serial-util1146216{974}2772610436.ser',
+                'moduleName':'someTestModule'}),
+            ('TEST_RUN_STARTED', {'testCount': 2}),
+            ('TEST_STARTED', {'start_time':52, 'className':'someClassName',
+                              'testName':'someTestName'}),
+            ('TEST_ENDED', {'end_time':1048, 'className':'someClassName',
+                            'testName':'someTestName'}),
+            ('TEST_STARTED', {'start_time':48, 'className':'someClassName2',
+                              'testName':'someTestName2'}),
+            ('TEST_FAILED', {'className':'someClassName2', 'testName':'someTestName2',
+                             'trace': 'someTrace'}),
+            ('TEST_ENDED', {'end_time':9876450, 'className':'someClassName2',
+                            'testName':'someTestName2'}),
+            ('TEST_RUN_ENDED', {}),
+            ('TEST_MODULE_ENDED', {'foo': 'bar'}),]
+        data = ''
+        for event in events:
+            data += '%s %s\n\n' % (event[0], json.dumps(event[1]))
+
+        subprocess.call("echo -n '%s' >> %s" % (data, event_file.name), shell=True)
+        robo_proc = subprocess.Popen("sleep %s" % str(self.polling_time * 2), shell=True)
+        self.suite_tr._exec_with_robo_polling(event_file, robo_proc, mock_pe)
+        calls = [mock.call.process_event(name, data) for name, data in events]
+        mock_pe.assert_has_calls(calls)
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/test_runners/suite_plan_test_runner.py b/atest-py2/test_runners/suite_plan_test_runner.py
new file mode 100644
index 0000000..9ba8233
--- /dev/null
+++ b/atest-py2/test_runners/suite_plan_test_runner.py
@@ -0,0 +1,125 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+SUITE Tradefed test runner class.
+"""
+
+import copy
+import logging
+
+# pylint: disable=import-error
+from test_runners import atest_tf_test_runner
+import atest_utils
+import constants
+
+
+class SuitePlanTestRunner(atest_tf_test_runner.AtestTradefedTestRunner):
+    """Suite Plan Test Runner class."""
+    NAME = 'SuitePlanTestRunner'
+    EXECUTABLE = '%s-tradefed'
+    _RUN_CMD = ('{exe} run commandAndExit {test} {args}')
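+    # e.g. for the 'cts' suite this formats to something like:
+    # 'cts-tradefed run commandAndExit cts --serial <SN>' (illustrative args).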
+
+    def __init__(self, results_dir, **kwargs):
+        """Init stuff for suite tradefed runner class."""
+        super(SuitePlanTestRunner, self).__init__(results_dir, **kwargs)
+        self.run_cmd_dict = {'exe': '',
+                             'test': '',
+                             'args': ''}
+
+    def get_test_runner_build_reqs(self):
+        """Return the build requirements.
+
+        Returns:
+            Set of build targets.
+        """
+        build_req = set()
+        build_req |= super(SuitePlanTestRunner,
+                           self).get_test_runner_build_reqs()
+        return build_req
+
+    def run_tests(self, test_infos, extra_args, reporter):
+        """Run the list of test_infos.
+        Args:
+            test_infos: List of TestInfo.
+            extra_args: Dict of extra args to add to test run.
+            reporter: An instance of result_report.ResultReporter.
+
+        Returns:
+            Return code of the process for running tests.
+        """
+        reporter.register_unsupported_runner(self.NAME)
+        run_cmds = self.generate_run_commands(test_infos, extra_args)
+        ret_code = constants.EXIT_CODE_SUCCESS
+        for run_cmd in run_cmds:
+            proc = super(SuitePlanTestRunner, self).run(run_cmd,
+                                                        output_to_stdout=True)
+            ret_code |= self.wait_for_subprocess(proc)
+        return ret_code
+
+    def _parse_extra_args(self, extra_args):
+        """Convert the extra args into something *ts-tf can understand.
+
+        We want to transform the top-level args from atest into specific args
+        that *ts-tradefed supports. The only arg we take as-is is
+        EXTRA_ARG, since that is what the user intentionally wants to pass to
+        the test runner.
+
+        Args:
+            extra_args: Dict of args
+
+        Returns:
+            List of args to append.
+        """
+        args_to_append = []
+        args_not_supported = []
+        for arg in extra_args:
+            if constants.SERIAL == arg:
+                args_to_append.append('--serial')
+                args_to_append.append(extra_args[arg])
+                continue
+            if constants.CUSTOM_ARGS == arg:
+                args_to_append.extend(extra_args[arg])
+                continue
+            if constants.DRY_RUN == arg:
+                continue
+            args_not_supported.append(arg)
+        if args_not_supported:
+            logging.info('%s does not support the following args: %s',
+                         self.EXECUTABLE, args_not_supported)
+        return args_to_append
+
+    # pylint: disable=arguments-differ
+    def generate_run_commands(self, test_infos, extra_args):
+        """Generate a list of run commands from TestInfos.
+
+        Args:
+            test_infos: List of TestInfo tests to run.
+            extra_args: Dict of extra args to add to test run.
+
+        Returns:
+            A List of strings that contains the run command
+            which *ts-tradefed supports.
+        """
+        cmds = []
+        args = []
+        args.extend(self._parse_extra_args(extra_args))
+        args.extend(atest_utils.get_result_server_args())
+        for test_info in test_infos:
+            cmd_dict = copy.deepcopy(self.run_cmd_dict)
+            cmd_dict['test'] = test_info.test_name
+            cmd_dict['args'] = ' '.join(args)
+            cmd_dict['exe'] = self.EXECUTABLE % test_info.suite
+            cmds.append(self._RUN_CMD.format(**cmd_dict))
+        return cmds
diff --git a/atest-py2/test_runners/suite_plan_test_runner_unittest.py b/atest-py2/test_runners/suite_plan_test_runner_unittest.py
new file mode 100755
index 0000000..857452e
--- /dev/null
+++ b/atest-py2/test_runners/suite_plan_test_runner_unittest.py
@@ -0,0 +1,133 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Unittests for test_suite_test_runner."""
+
+import unittest
+import mock
+
+# pylint: disable=import-error
+import suite_plan_test_runner
+import unittest_utils
+from test_finders import test_info
+
+
+# pylint: disable=protected-access
+class SuitePlanTestRunnerUnittests(unittest.TestCase):
+    """Unit tests for test_suite_test_runner.py"""
+
+    def setUp(self):
+        self.suite_tr = suite_plan_test_runner.SuitePlanTestRunner(results_dir='')
+
+    def tearDown(self):
+        mock.patch.stopall()
+
+    @mock.patch('atest_utils.get_result_server_args')
+    def test_generate_run_commands(self, mock_resultargs):
+        """Test _generate_run_command method.
+        Strategy:
+            suite_name: cts --> run_cmd: cts-tradefed run commandAndExit cts
+            suite_name: cts-common --> run_cmd:
+                                cts-tradefed run commandAndExit cts-common
+        """
+        test_infos = set()
+        suite_name = 'cts'
+        t_info = test_info.TestInfo(suite_name,
+                                    suite_plan_test_runner.SuitePlanTestRunner.NAME,
+                                    {suite_name},
+                                    suite=suite_name)
+        test_infos.add(t_info)
+
+        # Basic Run Cmd
+        run_cmd = []
+        exe_cmd = suite_plan_test_runner.SuitePlanTestRunner.EXECUTABLE % suite_name
+        run_cmd.append(suite_plan_test_runner.SuitePlanTestRunner._RUN_CMD.format(
+            exe=exe_cmd,
+            test=suite_name,
+            args=''))
+        mock_resultargs.return_value = []
+        unittest_utils.assert_strict_equal(
+            self,
+            self.suite_tr.generate_run_commands(test_infos, ''),
+            run_cmd)
+
+        # Run cmd with --serial LG123456789.
+        run_cmd = []
+        run_cmd.append(suite_plan_test_runner.SuitePlanTestRunner._RUN_CMD.format(
+            exe=exe_cmd,
+            test=suite_name,
+            args='--serial LG123456789'))
+        unittest_utils.assert_strict_equal(
+            self,
+            self.suite_tr.generate_run_commands(test_infos, {'SERIAL':'LG123456789'}),
+            run_cmd)
+
+        test_infos = set()
+        suite_name = 'cts-common'
+        suite = 'cts'
+        t_info = test_info.TestInfo(suite_name,
+                                    suite_plan_test_runner.SuitePlanTestRunner.NAME,
+                                    {suite_name},
+                                    suite=suite)
+        test_infos.add(t_info)
+
+        # Basic Run Cmd
+        run_cmd = []
+        exe_cmd = suite_plan_test_runner.SuitePlanTestRunner.EXECUTABLE % suite
+        run_cmd.append(suite_plan_test_runner.SuitePlanTestRunner._RUN_CMD.format(
+            exe=exe_cmd,
+            test=suite_name,
+            args=''))
+        mock_resultargs.return_value = []
+        unittest_utils.assert_strict_equal(
+            self,
+            self.suite_tr.generate_run_commands(test_infos, ''),
+            run_cmd)
+
+        # Run cmd with --serial LG123456789.
+        run_cmd = []
+        run_cmd.append(suite_plan_test_runner.SuitePlanTestRunner._RUN_CMD.format(
+            exe=exe_cmd,
+            test=suite_name,
+            args='--serial LG123456789'))
+        unittest_utils.assert_strict_equal(
+            self,
+            self.suite_tr.generate_run_commands(test_infos, {'SERIAL':'LG123456789'}),
+            run_cmd)
+
+    @mock.patch('subprocess.Popen')
+    @mock.patch.object(suite_plan_test_runner.SuitePlanTestRunner, 'run')
+    @mock.patch.object(suite_plan_test_runner.SuitePlanTestRunner,
+                       'generate_run_commands')
+    def test_run_tests(self, _mock_gen_cmd, _mock_run, _mock_popen):
+        """Test run_tests method."""
+        test_infos = []
+        extra_args = []
+        mock_reporter = mock.Mock()
+        _mock_gen_cmd.return_value = ["cmd1", "cmd2"]
+        # Test Build Pass
+        _mock_popen.return_value.returncode = 0
+        self.assertEqual(
+            0,
+            self.suite_tr.run_tests(test_infos, extra_args, mock_reporter))
+
+        # Test Build Fail
+        _mock_popen.return_value.returncode = 1
+        self.assertNotEqual(
+            0,
+            self.suite_tr.run_tests(test_infos, extra_args, mock_reporter))
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest-py2/test_runners/test_runner_base.py b/atest-py2/test_runners/test_runner_base.py
new file mode 100644
index 0000000..22994e3
--- /dev/null
+++ b/atest-py2/test_runners/test_runner_base.py
@@ -0,0 +1,205 @@
+# Copyright 2017, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Base test runner class.
+
+Base class that concrete test runners inherit from.
+"""
+
+from __future__ import print_function
+import errno
+import logging
+import signal
+import subprocess
+import tempfile
+import os
+import sys
+
+from collections import namedtuple
+
+# pylint: disable=import-error
+import atest_error
+import atest_utils
+import constants
+
+OLD_OUTPUT_ENV_VAR = 'ATEST_OLD_OUTPUT'
+
+# TestResult contains information of individual tests during a test run.
+TestResult = namedtuple('TestResult', ['runner_name', 'group_name',
+                                       'test_name', 'status', 'details',
+                                       'test_count', 'test_time',
+                                       'runner_total', 'group_total',
+                                       'additional_info', 'test_run_name'])
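+# Illustrative example:
+# TestResult(runner_name='SomeRunner', group_name='SomeModule',
+#            test_name='SomeClass#someTest', status=PASSED_STATUS,
+#            details=None, test_count=1, test_time='(10ms)',
+#            runner_total=None, group_total=2, additional_info={},
+#            test_run_name='SomeTestRun')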
+ASSUMPTION_FAILED = 'ASSUMPTION_FAILED'
+FAILED_STATUS = 'FAILED'
+PASSED_STATUS = 'PASSED'
+IGNORED_STATUS = 'IGNORED'
+ERROR_STATUS = 'ERROR'
+
+class TestRunnerBase(object):
+    """Base Test Runner class."""
+    NAME = ''
+    EXECUTABLE = ''
+
+    def __init__(self, results_dir, **kwargs):
+        """Init stuff for base class."""
+        self.results_dir = results_dir
+        self.test_log_file = None
+        if not self.NAME:
+            raise atest_error.NoTestRunnerName('Class var NAME is not defined.')
+        if not self.EXECUTABLE:
+            raise atest_error.NoTestRunnerExecutable('Class var EXECUTABLE is '
+                                                     'not defined.')
+        if kwargs:
+            logging.debug('ignoring the following args: %s', kwargs)
+
+    def run(self, cmd, output_to_stdout=False, env_vars=None):
+        """Shell out and execute command.
+
+        Args:
+            cmd: A string of the command to execute.
+            output_to_stdout: A boolean. If False, the raw output of the run
+                              command will not be seen in the terminal. This
+                              is the default behavior, since the test_runner's
+                              run_tests() method should use atest's
+                              result reporter to print the test results.
+
+                              Set to True to see the output of the cmd. This
+                              would be appropriate for verbose runs.
+            env_vars: Environment variables passed to the subprocess.
+        """
+        if not output_to_stdout:
+            self.test_log_file = tempfile.NamedTemporaryFile(mode='w',
+                                                             dir=self.results_dir,
+                                                             delete=True)
+        logging.debug('Executing command: %s', cmd)
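+        # preexec_fn=os.setsid runs the child in a new session (and process
+        # group), so _signal_passer() can later SIGINT the whole group via
+        # os.killpg().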
+        return subprocess.Popen(cmd, preexec_fn=os.setsid, shell=True,
+                                stderr=subprocess.STDOUT, stdout=self.test_log_file,
+                                env=env_vars)
+
+    # pylint: disable=broad-except
+    def handle_subprocess(self, subproc, func):
+        """Execute the function. Interrupt the subproc when exception occurs.
+
+        Args:
+            subproc: A subprocess to be terminated.
+            func: A function to be run.
+        """
+        try:
+            signal.signal(signal.SIGINT, self._signal_passer(subproc))
+            func()
+        except Exception as error:
+            # exc_info=1 tells logging to log the stacktrace
+            logging.debug('Caught exception:', exc_info=1)
+            # Remember our current exception scope, before new try block
+            # Python3 will make this easier, the error itself stores
+            # the scope via error.__traceback__ and it provides a
+            # "raise from error" pattern.
+            # https://docs.python.org/3.5/reference/simple_stmts.html#raise
+            exc_type, exc_msg, traceback_obj = sys.exc_info()
+            # If atest crashes, try to kill subproc group as well.
+            try:
+                logging.debug('Killing subproc: %s', subproc.pid)
+                os.killpg(os.getpgid(subproc.pid), signal.SIGINT)
+            except OSError:
+                # this wipes our previous stack context, which is why
+                # we have to save it above.
+                logging.debug('Subproc already terminated, skipping')
+            finally:
+                if self.test_log_file:
+                    with open(self.test_log_file.name, 'r') as f:
+                        intro_msg = "Unexpected Issue. Raw Output:"
+                        print(atest_utils.colorize(intro_msg, constants.RED))
+                        print(f.read())
+                # Ignore socket.recv() raising due to ctrl-c
+                if not error.args or error.args[0] != errno.EINTR:
+                    raise exc_type, exc_msg, traceback_obj
+
+    def wait_for_subprocess(self, proc):
+        """Check the process status. Interrupt the TF subporcess if user
+        hits Ctrl-C.
+
+        Args:
+            proc: The tradefed subprocess.
+
+        Returns:
+            Return code of the subprocess for running tests.
+        """
+        try:
+            logging.debug('Runner Name: %s, Process ID: %s', self.NAME, proc.pid)
+            signal.signal(signal.SIGINT, self._signal_passer(proc))
+            proc.wait()
+            return proc.returncode
+        except:
+            # If atest crashes, kill TF subproc group as well.
+            os.killpg(os.getpgid(proc.pid), signal.SIGINT)
+            raise
+
+    def _signal_passer(self, proc):
+        """Return the signal_handler func bound to proc.
+
+        Args:
+            proc: The tradefed subprocess.
+
+        Returns:
+            signal_handler function.
+        """
+        def signal_handler(_signal_number, _frame):
+            """Pass SIGINT to proc.
+
+            If user hits ctrl-c during atest run, the TradeFed subprocess
+            won't stop unless we also send it a SIGINT. The TradeFed process
+            is started in a process group, so this SIGINT is sufficient to
+            kill all the child processes TradeFed spawns as well.
+            """
+            logging.info('Ctrl-C received. Killing subprocess group')
+            os.killpg(os.getpgid(proc.pid), signal.SIGINT)
+        return signal_handler
+
+    def run_tests(self, test_infos, extra_args, reporter):
+        """Run the list of test_infos.
+
+        Should contain code for kicking off the test runs using
+        test_runner_base.run(). Results should be processed and printed
+        via the reporter passed in.
+
+        Args:
+            test_infos: List of TestInfo.
+            extra_args: Dict of extra args to add to test run.
+            reporter: An instance of result_report.ResultReporter.
+        """
+        raise NotImplementedError
+
+    def host_env_check(self):
+        """Checks that host env has met requirements."""
+        raise NotImplementedError
+
+    def get_test_runner_build_reqs(self):
+        """Returns a list of build targets required by the test runner."""
+        raise NotImplementedError
+
+    def generate_run_commands(self, test_infos, extra_args, port=None):
+        """Generate a list of run commands from TestInfos.
+
+        Args:
+            test_infos: A set of TestInfo instances.
+            extra_args: A Dict of extra args to append.
+            port: Optional. An int of the port number to send events to.
+                  Subprocess reporter in TF won't try to connect if it's None.
+
+        Returns:
+            A list of run commands to run the tests.
+        """
+        raise NotImplementedError
diff --git a/atest-py2/test_runners/vts_tf_test_runner.py b/atest-py2/test_runners/vts_tf_test_runner.py
new file mode 100644
index 0000000..c1f53e0
--- /dev/null
+++ b/atest-py2/test_runners/vts_tf_test_runner.py
@@ -0,0 +1,129 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+VTS Tradefed test runner class.
+"""
+
+import copy
+import logging
+
+# pylint: disable=import-error
+from test_runners import atest_tf_test_runner
+import atest_utils
+import constants
+
+
+class VtsTradefedTestRunner(atest_tf_test_runner.AtestTradefedTestRunner):
+    """TradeFed Test Runner class."""
+    NAME = 'VtsTradefedTestRunner'
+    EXECUTABLE = 'vts10-tradefed'
+    _RUN_CMD = ('{exe} run commandAndExit {plan} -m {test} {args}')
+    _BUILD_REQ = {'vts10-tradefed-standalone'}
+    _DEFAULT_ARGS = ['--skip-all-system-status-check',
+                     '--skip-preconditions',
+                     '--primary-abi-only']
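+    # Illustrative run command, assuming the staging plan is named
+    # 'vts-staging-default':
+    # 'vts10-tradefed run commandAndExit vts-staging-default -m SomeVtsTest
+    #  --skip-all-system-status-check --skip-preconditions --primary-abi-only'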
+
+    def __init__(self, results_dir, **kwargs):
+        """Init stuff for vts10 tradefed runner class."""
+        super(VtsTradefedTestRunner, self).__init__(results_dir, **kwargs)
+        self.run_cmd_dict = {'exe': self.EXECUTABLE,
+                             'test': '',
+                             'args': ''}
+
+    def get_test_runner_build_reqs(self):
+        """Return the build requirements.
+
+        Returns:
+            Set of build targets.
+        """
+        build_req = self._BUILD_REQ
+        build_req |= super(VtsTradefedTestRunner,
+                           self).get_test_runner_build_reqs()
+        return build_req
+
+    def run_tests(self, test_infos, extra_args, reporter):
+        """Run the list of test_infos.
+
+        Args:
+            test_infos: List of TestInfo.
+            extra_args: Dict of extra args to add to test run.
+            reporter: An instance of result_report.ResultReporter.
+
+        Returns:
+            Return code of the process for running tests.
+        """
+        ret_code = constants.EXIT_CODE_SUCCESS
+        reporter.register_unsupported_runner(self.NAME)
+        run_cmds = self.generate_run_commands(test_infos, extra_args)
+        for run_cmd in run_cmds:
+            proc = super(VtsTradefedTestRunner, self).run(run_cmd,
+                                                          output_to_stdout=True)
+            ret_code |= self.wait_for_subprocess(proc)
+        return ret_code
+
+    def _parse_extra_args(self, extra_args):
+        """Convert the extra args into something vts10-tf can understand.
+
+        We want to transform the top-level args from atest into specific args
+        that vts10-tradefed supports. The only arg we take as-is is EXTRA_ARG
+        since that is what the user intentionally wants to pass to the test
+        runner.
+
+        Args:
+            extra_args: Dict of args
+
+        Returns:
+            List of args to append.
+        """
+        args_to_append = []
+        args_not_supported = []
+        for arg in extra_args:
+            if constants.SERIAL == arg:
+                args_to_append.append('--serial')
+                args_to_append.append(extra_args[arg])
+                continue
+            if constants.CUSTOM_ARGS == arg:
+                args_to_append.extend(extra_args[arg])
+                continue
+            if constants.DRY_RUN == arg:
+                continue
+            args_not_supported.append(arg)
+        if args_not_supported:
+            logging.info('%s does not support the following args: %s',
+                         self.EXECUTABLE, args_not_supported)
+        return args_to_append
+
+    # pylint: disable=arguments-differ
+    def generate_run_commands(self, test_infos, extra_args):
+        """Generate a list of run commands from TestInfos.
+
+        Args:
+            test_infos: List of TestInfo tests to run.
+            extra_args: Dict of extra args to add to test run.
+
+        Returns:
+            A List of strings that contains the vts10-tradefed run command.
+        """
+        cmds = []
+        # Copy to avoid mutating the shared class-level list via extend().
+        args = list(self._DEFAULT_ARGS)
+        args.extend(self._parse_extra_args(extra_args))
+        args.extend(atest_utils.get_result_server_args())
+        for test_info in test_infos:
+            cmd_dict = copy.deepcopy(self.run_cmd_dict)
+            cmd_dict['plan'] = constants.VTS_STAGING_PLAN
+            cmd_dict['test'] = test_info.test_name
+            cmd_dict['args'] = ' '.join(args)
+            cmds.append(self._RUN_CMD.format(**cmd_dict))
+        return cmds
diff --git a/atest-py2/test_runners/vts_tf_test_runner_unittest.py b/atest-py2/test_runners/vts_tf_test_runner_unittest.py
new file mode 100755
index 0000000..7e8b408
--- /dev/null
+++ b/atest-py2/test_runners/vts_tf_test_runner_unittest.py
@@ -0,0 +1,58 @@
+#!/usr/bin/env python
+#
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Unittests for vts_tf_test_runner."""
+
+import unittest
+import mock
+
+# pylint: disable=import-error
+from test_runners import vts_tf_test_runner
+
+# pylint: disable=protected-access
+class VtsTradefedTestRunnerUnittests(unittest.TestCase):
+    """Unit tests for vts_tf_test_runner.py"""
+
+    def setUp(self):
+        self.vts_tr = vts_tf_test_runner.VtsTradefedTestRunner(results_dir='')
+
+    def tearDown(self):
+        mock.patch.stopall()
+
+    @mock.patch('subprocess.Popen')
+    @mock.patch.object(vts_tf_test_runner.VtsTradefedTestRunner, 'run')
+    @mock.patch.object(vts_tf_test_runner.VtsTradefedTestRunner,
+                       'generate_run_commands')
+    def test_run_tests(self, _mock_gen_cmd, _mock_run, _mock_popen):
+        """Test run_tests method."""
+        test_infos = []
+        extra_args = []
+        mock_reporter = mock.Mock()
+        _mock_gen_cmd.return_value = ["cmd1", "cmd2"]
+        # Test Build Pass
+        _mock_popen.return_value.returncode = 0
+        self.assertEqual(
+            0,
+            self.vts_tr.run_tests(test_infos, extra_args, mock_reporter))
+
+        # Test Build Fail
+        _mock_popen.return_value.returncode = 1
+        self.assertNotEqual(
+            0,
+            self.vts_tr.run_tests(test_infos, extra_args, mock_reporter))
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest/tools/tradefederation/core/proto/__init__.py b/atest-py2/tools/__init__.py
old mode 100755
new mode 100644
similarity index 100%
copy from atest/tools/tradefederation/core/proto/__init__.py
copy to atest-py2/tools/__init__.py
diff --git a/atest-py2/tools/atest_tools.py b/atest-py2/tools/atest_tools.py
new file mode 100755
index 0000000..3cf189e
--- /dev/null
+++ b/atest-py2/tools/atest_tools.py
@@ -0,0 +1,354 @@
+#!/usr/bin/env python
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Atest tool functions.
+"""
+
+from __future__ import print_function
+
+import logging
+import os
+import pickle
+import shutil
+import subprocess
+import sys
+
+import constants
+import module_info
+
+from metrics import metrics_utils
+
+MAC_UPDB_SRC = os.path.join(os.path.dirname(__file__), 'updatedb_darwin.sh')
+MAC_UPDB_DST = os.path.join(os.getenv(constants.ANDROID_HOST_OUT, ''), 'bin')
+UPDATEDB = 'updatedb'
+LOCATE = 'locate'
+SEARCH_TOP = os.getenv(constants.ANDROID_BUILD_TOP, '')
+MACOSX = 'Darwin'
+OSNAME = os.uname()[0]
+# When adding new index, remember to append constants to below tuple.
+INDEXES = (constants.CC_CLASS_INDEX,
+           constants.CLASS_INDEX,
+           constants.LOCATE_CACHE,
+           constants.MODULE_INDEX,
+           constants.PACKAGE_INDEX,
+           constants.QCLASS_INDEX)
+
+# The list was generated by the command:
+# find `gettop` -type d -wholename `gettop`/out -prune -o -type d -name '.*'
+# -print | awk -F/ '{print $NF}' | sort -u
+PRUNENAMES = ['.abc', '.appveyor', '.azure-pipelines',
+              '.bazelci', '.buildscript',
+              '.ci', '.circleci', '.conan', '.config',
+              '.externalToolBuilders',
+              '.git', '.github', '.github-ci', '.google', '.gradle',
+              '.idea', '.intermediates',
+              '.jenkins',
+              '.kokoro',
+              '.libs_cffi_backend',
+              '.mvn',
+              '.prebuilt_info', '.private', '__pycache__',
+              '.repo',
+              '.semaphore', '.settings', '.static', '.svn',
+              '.test', '.travis', '.tx',
+              '.vscode']
+
+def _mkdir_when_inexists(dirname):
+    """Create dirname (and its parents) if it does not already exist."""
+    if not os.path.isdir(dirname):
+        os.makedirs(dirname)
+
+def _install_updatedb():
+    """Install a customized updatedb for MacOS and ensure it is executable."""
+    _mkdir_when_inexists(MAC_UPDB_DST)
+    _mkdir_when_inexists(constants.INDEX_DIR)
+    if OSNAME == MACOSX:
+        shutil.copy2(MAC_UPDB_SRC, os.path.join(MAC_UPDB_DST, UPDATEDB))
+        os.chmod(os.path.join(MAC_UPDB_DST, UPDATEDB), 0755)
+
+def _delete_indexes():
+    """Delete all available index files."""
+    for index in INDEXES:
+        if os.path.isfile(index):
+            os.remove(index)
+
+def has_command(cmd):
+    """Detect if the command is available in PATH.
+
+    shutil.which() is only available in Py3, so we implement the check here.
+
+    Args:
+        cmd: A string of the tested command.
+
+    Returns:
+        True if found, False otherwise."""
+    paths = os.getenv('PATH', '').split(':')
+    for path in paths:
+        if os.path.isfile(os.path.join(path, cmd)):
+            return True
+    return False
+
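+# Example (illustrative sketch): guard optional host tooling before use.
+#   if has_command('locate'):
+#       subprocess.check_call(['locate', '-d', constants.LOCATE_CACHE, 'foo'])
+# Note this only checks that a file of that name exists somewhere in PATH;
+# it does not test the executable bit.
+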
+def run_updatedb(search_root=SEARCH_TOP, output_cache=constants.LOCATE_CACHE,
+                 **kwargs):
+    """Run updatedb and generate cache in $ANDROID_HOST_OUT/indexes/mlocate.db
+
+    Args:
+        search_root: The path of the search root (-U).
+        output_cache: The filename of the updatedb cache (-o).
+        kwargs: (optional)
+            prunepaths: A string of space-separated paths to exclude from the
+                        search (-e).
+            prunenames: A string of space-separated directory names that won't
+                        be cached (-n).
+    """
+    prunenames = kwargs.pop('prunenames', ' '.join(PRUNENAMES))
+    prunepaths = kwargs.pop('prunepaths', os.path.join(search_root, 'out'))
+    if kwargs:
+        raise TypeError('Unexpected **kwargs: %r' % kwargs)
+    updatedb_cmd = [UPDATEDB, '-l0']
+    updatedb_cmd.append('-U%s' % search_root)
+    updatedb_cmd.append('-e%s' % prunepaths)
+    updatedb_cmd.append('-n%s' % prunenames)
+    updatedb_cmd.append('-o%s' % output_cache)
+    try:
+        _install_updatedb()
+    except IOError as e:
+        logging.error('Error installing updatedb: %s', e)
+
+    if not has_command(UPDATEDB):
+        return
+    logging.debug('Running updatedb... ')
+    try:
+        full_env_vars = os.environ.copy()
+        logging.debug('Executing: %s', updatedb_cmd)
+        subprocess.check_call(updatedb_cmd, env=full_env_vars)
+    except (KeyboardInterrupt, SystemExit):
+        logging.error('Process interrupted or failure.')
+
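+# A typical invocation of run_updatedb() (illustrative; paths hypothetical):
+#   run_updatedb('/src/aosp', '/tmp/mlocate.db', prunepaths='/src/aosp/out')
+# roughly executes:
+#   updatedb -l0 -U/src/aosp -e/src/aosp/out -n<PRUNENAMES> -o/tmp/mlocate.db
+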
+def _dump_index(dump_file, output, output_re, key, value):
+    """Dump indexed data with pickle.
+
+    Args:
+        dump_file: A string of absolute path of the index file.
+        output: A string generated by locate and grep.
+        output_re: A regex that parses each line into named groups.
+        key: The group name used as the dictionary key, e.g. classname,
+             package, cc_class, etc.
+        value: The group name whose matches are collected into a set of paths.
+
+    The data structure will be like:
+    {
+      'Foo': {'/path/to/Foo.java', '/path2/to/Foo.kt'},
+      'Boo': {'/path3/to/Boo.java'}
+    }
+    """
+    _dict = {}
+    with open(dump_file, 'wb') as cache_file:
+        for entry in output.splitlines():
+            match = output_re.match(entry)
+            if match:
+                _dict.setdefault(match.group(key), set()).add(match.group(value))
+        try:
+            pickle.dump(_dict, cache_file, protocol=2)
+        except IOError:
+            os.remove(dump_file)
+            logging.error('Failed in dumping %s', dump_file)
+
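+# Reading an index back is the inverse (illustrative sketch):
+#   with open(constants.CLASS_INDEX, 'rb') as f:
+#       class_index = pickle.load(f)
+#   class_index.get('Foo')  # -> a set of paths, or None if never indexed.
+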
+def _get_cc_result(locatedb=None):
+    """Search all testable cc/cpp and grep TEST(), TEST_F() or TEST_P().
+
+    Returns:
+        A string object generated by subprocess.
+    """
+    if not locatedb:
+        locatedb = constants.LOCATE_CACHE
+    cc_grep_re = r'^\s*TEST(_P|_F)?\s*\(\w+,'
+    if OSNAME == MACOSX:
+        find_cmd = (r"locate -d {0} '*.cpp' '*.cc' | grep -i test "
+                    "| xargs egrep -sH '{1}' || true")
+    else:
+        find_cmd = (r"locate -d {0} / | egrep -i '/*.test.*\.(cc|cpp)$' "
+                    "| xargs egrep -sH '{1}' || true")
+    find_cc_cmd = find_cmd.format(locatedb, cc_grep_re)
+    logging.debug('Probing CC classes:\n %s', find_cc_cmd)
+    return subprocess.check_output(find_cc_cmd, shell=True)
+
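+# Each matched line from the pipeline above looks like (illustrative):
+#   /src/system/foo/test/foo_test.cc:TEST_F(FooTest, Bar) {
+# and constants.CC_OUTPUT_RE later splits it into the 'file_path' and
+# 'test_name' groups consumed by _index_cc_classes().
+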
+def _get_java_result(locatedb=None):
+    """Search all testable java/kt and grep package.
+
+    Returns:
+        A string object generated by subprocess.
+    """
+    if not locatedb:
+        locatedb = constants.LOCATE_CACHE
+    package_grep_re = r'^\s*package\s+[a-z][[:alnum:]]+[^{]'
+    if OSNAME == MACOSX:
+        find_cmd = r"locate -d%s '*.java' '*.kt'|grep -i test" % locatedb
+    else:
+        find_cmd = r"locate -d%s / | egrep -i '/*.test.*\.(java|kt)$'" % locatedb
+    find_java_cmd = find_cmd + '| xargs egrep -sH \'%s\' || true' % package_grep_re
+    logging.debug('Probing Java classes:\n %s', find_java_cmd)
+    return subprocess.check_output(find_java_cmd, shell=True)
+
+def _index_testable_modules(index):
+    """Dump testable modules read by tab completion.
+
+    Args:
+        index: A string path of the index file.
+    """
+    logging.debug('indexing testable modules.')
+    testable_modules = module_info.ModuleInfo().get_testable_modules()
+    with open(index, 'wb') as cache:
+        try:
+            pickle.dump(testable_modules, cache, protocol=2)
+        except IOError:
+            os.remove(index)
+            logging.error('Failed in dumping %s', index)
+
+def _index_cc_classes(output, index):
+    """Index CC classes.
+
+    The data structure is like:
+    {
+      'FooTestCase': {'/path1/to/the/FooTestCase.cpp',
+                      '/path2/to/the/FooTestCase.cc'}
+    }
+
+    Args:
+        output: A string object generated by _get_cc_result().
+        index: A string path of the index file.
+    """
+    logging.debug('indexing CC classes.')
+    _dump_index(dump_file=index, output=output,
+                output_re=constants.CC_OUTPUT_RE,
+                key='test_name', value='file_path')
+
+def _index_java_classes(output, index):
+    """Index Java classes.
+    The data structure is like:
+    {
+        'FooTestCase': {'/path1/to/the/FooTestCase.java',
+                        '/path2/to/the/FooTestCase.kt'}
+    }
+
+    Args:
+        output: A string object generated by _get_java_result().
+        index: A string path of the index file.
+    """
+    logging.debug('indexing Java classes.')
+    _dump_index(dump_file=index, output=output,
+                output_re=constants.CLASS_OUTPUT_RE,
+                key='class', value='java_path')
+
+def _index_packages(output, index):
+    """Index Java packages.
+    The data structure is like:
+    {
+        'a.b.c.d': {'/path1/to/a/b/c/d/',
+                    '/path2/to/a/b/c/d/'}
+    }
+
+    Args:
+        output: A string object generated by _get_java_result().
+        index: A string path of the index file.
+    """
+    logging.debug('indexing packages.')
+    _dump_index(dump_file=index,
+                output=output, output_re=constants.PACKAGE_OUTPUT_RE,
+                key='package', value='java_dir')
+
+def _index_qualified_classes(output, index):
+    """Index Fully Qualified Java Classes(FQCN).
+    The data structure is like:
+    {
+        'a.b.c.d.FooTestCase': {'/path1/to/a/b/c/d/FooTestCase.java',
+                                '/path2/to/a/b/c/d/FooTestCase.kt'}
+    }
+
+    Args:
+        output: A string object generated by _get_java_result().
+        index: A string path of the index file.
+    """
+    logging.debug('indexing qualified classes.')
+    _dict = {}
+    with open(index, 'wb') as cache_file:
+        for entry in output.split('\n'):
+            match = constants.QCLASS_OUTPUT_RE.match(entry)
+            if match:
+                fqcn = match.group('package') + '.' + match.group('class')
+                _dict.setdefault(fqcn, set()).add(match.group('java_path'))
+        try:
+            pickle.dump(_dict, cache_file, protocol=2)
+        except (KeyboardInterrupt, SystemExit):
+            logging.error('Process interrupted or failure.')
+            os.remove(index)
+        except IOError:
+            logging.error('Failed in dumping %s', index)
+
+def index_targets(output_cache=constants.LOCATE_CACHE, **kwargs):
+    """The entrypoint of indexing targets.
+
+    Utilise mlocate database to index reference types of CLASS, CC_CLASS,
+    PACKAGE and QUALIFIED_CLASS. Testable module for tab completion is also
+    generated in this method.
+
+    Args:
+        output_cache: A file path of the updatedb cache (e.g. /path/to/mlocate.db).
+        kwargs: (optional)
+            class_index: A path string of the Java class index.
+            qclass_index: A path string of the qualified class index.
+            package_index: A path string of the package index.
+            cc_class_index: A path string of the CC class index.
+            module_index: A path string of the testable module index.
+            integration_index: A path string of the integration index.
+    """
+    class_index = kwargs.pop('class_index', constants.CLASS_INDEX)
+    qclass_index = kwargs.pop('qclass_index', constants.QCLASS_INDEX)
+    package_index = kwargs.pop('package_index', constants.PACKAGE_INDEX)
+    cc_class_index = kwargs.pop('cc_class_index', constants.CC_CLASS_INDEX)
+    module_index = kwargs.pop('module_index', constants.MODULE_INDEX)
+    # Uncomment below if we decide to support INTEGRATION.
+    #integration_index = kwargs.pop('integration_index', constants.INT_INDEX)
+    if kwargs:
+        raise TypeError('Unexpected **kwargs: %r' % kwargs)
+
+    try:
+        # Step 0: generate mlocate database prior to indexing targets.
+        run_updatedb(SEARCH_TOP, constants.LOCATE_CACHE)
+        if not has_command(LOCATE):
+            return
+        # Step 1: generate output string for indexing targets.
+        logging.debug('Indexing targets... ')
+        cc_result = _get_cc_result(output_cache)
+        java_result = _get_java_result(output_cache)
+        # Step 2: index Java and CC classes.
+        _index_cc_classes(cc_result, cc_class_index)
+        _index_java_classes(java_result, class_index)
+        _index_qualified_classes(java_result, qclass_index)
+        _index_packages(java_result, package_index)
+        # Step 3: index testable mods and TEST_MAPPING files.
+        _index_testable_modules(module_index)
+
+    # Delete indexes when mlocate.db is locked or on other CalledProcessError.
+    # (b/141588997)
+    except subprocess.CalledProcessError as err:
+        logging.error('Executing %s error.', UPDATEDB)
+        metrics_utils.handle_exc_and_send_exit_event(
+            constants.MLOCATEDB_LOCKED)
+        if err.output:
+            logging.error(err.output)
+        _delete_indexes()
+
+
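+# Example (illustrative): running this module directly rebuilds every index
+# with the default paths, provided $ANDROID_HOST_OUT is set:
+#   $ python atest_tools.py
+# index_targets() also accepts per-index path overrides (class_index=...,
+# module_index=..., etc.), as exercised in atest_tools_unittest.py.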
+if __name__ == '__main__':
+    if not os.getenv(constants.ANDROID_HOST_OUT, ''):
+        sys.exit()
+    index_targets()
diff --git a/atest-py2/tools/atest_tools_unittest.py b/atest-py2/tools/atest_tools_unittest.py
new file mode 100755
index 0000000..34bdfb2
--- /dev/null
+++ b/atest-py2/tools/atest_tools_unittest.py
@@ -0,0 +1,111 @@
+#!/usr/bin/env python
+#
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittest for atest_tools."""
+
+# pylint: disable=line-too-long
+
+import os
+import pickle
+import platform
+import subprocess
+import unittest
+import mock
+
+from tools import atest_tools
+
+import unittest_constants as uc
+
+SEARCH_ROOT = uc.TEST_DATA_DIR
+PRUNEPATH = uc.TEST_CONFIG_DATA_DIR
+LOCATE = atest_tools.LOCATE
+UPDATEDB = atest_tools.UPDATEDB
+
+class AtestToolsUnittests(unittest.TestCase):
+    """"Unittest Class for atest_tools.py."""
+
+    @mock.patch('constants.LOCATE_CACHE', uc.LOCATE_CACHE)
+    @mock.patch('tools.atest_tools.SEARCH_TOP', uc.TEST_DATA_DIR)
+    @mock.patch('module_info.ModuleInfo.get_testable_modules')
+    @mock.patch('module_info.ModuleInfo.__init__')
+    def test_index_targets(self, mock_mod_info, mock_testable_mod):
+        """Test method index_targets."""
+        mock_mod_info.return_value = None
+        mock_testable_mod.return_value = {uc.MODULE_NAME, uc.MODULE2_NAME}
+        if atest_tools.has_command(UPDATEDB) and atest_tools.has_command(LOCATE):
+            # 1. Test run_updatedb() is functional.
+            atest_tools.run_updatedb(SEARCH_ROOT, uc.LOCATE_CACHE,
+                                     prunepaths=PRUNEPATH)
+            # test_config/ is excluded so that a.xml won't be found.
+            locate_cmd1 = [LOCATE, '-d', uc.LOCATE_CACHE, '/a.xml']
+            # locate always returns 0 on Darwin even when nothing is found,
+            # so check for empty output on Darwin and the exit code on Linux.
+            if platform.system() == 'Darwin':
+                self.assertEqual(subprocess.check_output(locate_cmd1), "")
+            else:
+                self.assertEqual(subprocess.call(locate_cmd1), 1)
+            # module-info.json can be found in the search_root.
+            locate_cmd2 = [LOCATE, '-d', uc.LOCATE_CACHE, 'module-info.json']
+            self.assertEqual(subprocess.call(locate_cmd2), 0)
+
+            # 2. Test index_targets() is functional.
+            atest_tools.index_targets(uc.LOCATE_CACHE,
+                                      class_index=uc.CLASS_INDEX,
+                                      cc_class_index=uc.CC_CLASS_INDEX,
+                                      module_index=uc.MODULE_INDEX,
+                                      package_index=uc.PACKAGE_INDEX,
+                                      qclass_index=uc.QCLASS_INDEX)
+            _cache = {}
+            # Test finding a Java class.
+            with open(uc.CLASS_INDEX, 'rb') as cache:
+                _cache = pickle.load(cache)
+            self.assertIsNotNone(_cache.get('PathTesting'))
+            # Test finding a CC class.
+            with open(uc.CC_CLASS_INDEX, 'rb') as cache:
+                _cache = pickle.load(cache)
+            self.assertIsNotNone(_cache.get('HelloWorldTest'))
+            # Test finding a package.
+            with open(uc.PACKAGE_INDEX, 'rb') as cache:
+                _cache = pickle.load(cache)
+            self.assertIsNotNone(_cache.get(uc.PACKAGE))
+            # Test finding a fully qualified class name.
+            with open(uc.QCLASS_INDEX, 'rb') as cache:
+                _cache = pickle.load(cache)
+            self.assertIsNotNone(_cache.get('android.jank.cts.ui.PathTesting'))
+            _cache = set()
+            # Test finding a module name.
+            with open(uc.MODULE_INDEX, 'rb') as cache:
+                _cache = pickle.load(cache)
+            self.assertTrue(uc.MODULE_NAME in _cache)
+            self.assertFalse(uc.CLASS_NAME in _cache)
+            # Clean up.
+            targets_to_delete = (uc.CC_CLASS_INDEX,
+                                 uc.CLASS_INDEX,
+                                 uc.LOCATE_CACHE,
+                                 uc.MODULE_INDEX,
+                                 uc.PACKAGE_INDEX,
+                                 uc.QCLASS_INDEX)
+            for idx in targets_to_delete:
+                os.remove(idx)
+        else:
+            self.assertEqual(atest_tools.has_command(UPDATEDB), False)
+            self.assertEqual(atest_tools.has_command(LOCATE), False)
+
+
+if __name__ == "__main__":
+    unittest.main()
diff --git a/atest-py2/tools/updatedb_darwin.sh b/atest-py2/tools/updatedb_darwin.sh
new file mode 100755
index 0000000..d0b2339
--- /dev/null
+++ b/atest-py2/tools/updatedb_darwin.sh
@@ -0,0 +1,111 @@
+#!/usr/bin/env bash
+#
+# Copyright 2019, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Warning and exit when failed to meet the requirements.
+[ "$(uname -s)" != "Darwin" ] && { echo "This program runs on Darwin only."; exit 0; }
+[ "$UID" -eq 0 ] && { echo "Running with root user is not supported."; exit 0; }
+
+function usage() {
+    echo "###########################################"
+    echo "Usage: $prog [-U|-e|-n|-o||-l|-f|-h]"
+    echo "  -U: The PATH of the search root."
+    echo "  -e: The PATH that unwanted to be searched."
+    echo "  -n: The name of directories that won't be cached."
+    echo "  -o: The PATH of the generated database."
+    echo "  -l: No effect. For compatible with Linux mlocate."
+    echo "  -f: Filesystems which should not search for."
+    echo "  -h: This usage helper."
+    echo
+    echo "################ [EXAMPLE] ################"
+    echo "$prog -U \$ANDROID_BUILD_TOP -n .git -l 0 \\"
+    echo " -e \"\$ANDROID_BUILD_TOP/out \$ANDROID_BUILD_TOP/.repo\" \\"
+    echo " -o \"\$ANDROID_HOST_OUT/locate.database\""
+    echo
+    echo "locate -d \$ANDROID_HOST_OUT/locate.database atest.py"
+    echo "locate -d \$ANDROID_HOST_OUT/locate.database contrib/res/config"
+}
+
+function mktempdir() {
+    TMPDIR=/tmp
+    if ! TMPDIR=`mktemp -d $TMPDIR/locateXXXXXXXXXX`; then
+        exit 1
+    fi
+    temp=$TMPDIR/_updatedb$$
+}
+
+function _updatedb_main() {
+    # 0. Disable default features of bash.
+    set -o noglob   # Disable * expansion before passing arguments to find.
+    set -o errtrace # Sub-shells inherit error trap.
+
+    # 1. Get positional arguments and set variables.
+    prog=$(basename $0)
+    while getopts 'U:n:e:o:l:f:h' option; do
+        case $option in
+            U) SEARCHROOT="$OPTARG";; # Search root.
+            e) PRUNEPATHS="$OPTARG";; # Paths to be excluded.
+            n) PRUNENAMES="$OPTARG";; # Dirnames to be pruned.
+            o) DATABASE="$OPTARG";;   # the output of the DB.
+            l) ;;                     # No effect.
+            f) PRUNEFS="$OPTARG";;    # Disallow network filesystems.
+            *) usage; exit 0;;
+        esac
+    done
+
+    : ${SEARCHROOT:="$ANDROID_BUILD_TOP"}
+    if [ -z "$SEARCHROOT" ]; then
+        echo 'Either $SEARCHROOT or $ANDROID_BUILD_TOP is required.'
+        exit 0
+    fi
+
+    if [ -n "$ANDROID_BUILD_TOP" ]; then
+        PRUNEPATHS="$PRUNEPATHS $ANDROID_BUILD_TOP/out"
+    fi
+
+    PRUNENAMES="$PRUNENAMES *.class *.pyc .gitignore"
+    : ${DATABASE:=/tmp/locate.database}
+    : ${PRUNEFS:="nfs afp smb"}
+
+    # 2. Assemble excludes strings.
+    excludes=""
+    or=""
+    sortarg="-presort"
+    for fs in $PRUNEFS; do
+        excludes="$excludes $or -fstype $fs -prune"
+        or="-o"
+    done
+    for path in $PRUNEPATHS; do
+        excludes="$excludes $or -path $path -prune"
+    done
+    for file in $PRUNENAMES; do
+        excludes="$excludes $or -name $file -prune"
+    done
+
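+    # For illustration, with PRUNEFS="nfs" and PRUNEPATHS="/src/out" the
+    # assembled excludes are roughly:
+    #   -fstype nfs -prune -o -path /src/out -prune -o -name <name> -prune ...
+    # so find(1) only prints paths that escape every -prune branch.
+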
+    # 3. Find and create locate database.
+    # Delete $temp when trapping specified return values.
+    mktempdir
+    trap 'rm -rf $temp $TMPDIR; exit' 0 1 2 3 5 10 15
+    if find -s $SEARCHROOT $excludes $or -print 2>/dev/null -true |
+        /usr/libexec/locate.mklocatedb $sortarg > $temp 2>/dev/null; then
+            case x"`find $temp -size 257c -print`" in
+                x) cat $temp > $DATABASE;;
+                *) echo "$prog: database $temp is found empty."
+                   exit 1;;
+            esac
+    fi
+}
+
+_updatedb_main "$@"
diff --git a/atest-py2/unittest_constants.py b/atest-py2/unittest_constants.py
new file mode 100644
index 0000000..c757936
--- /dev/null
+++ b/atest-py2/unittest_constants.py
@@ -0,0 +1,247 @@
+# Copyright 2018, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Unittest constants.
+
+Unittest constants get their own file since they're used purely for testing and
+should not be combined with constants_defaults as part of normal atest
+operation. These constants are commonly shared as test data, so update them
+with care and run all unittests to make sure nothing breaks.
+"""
+
+import os
+
+import constants
+from test_finders import test_info
+from test_runners import atest_tf_test_runner as atf_tr
+
+ROOT = '/'
+MODULE_DIR = 'foo/bar/jank'
+MODULE2_DIR = 'foo/bar/hello'
+MODULE_NAME = 'CtsJankDeviceTestCases'
+TYPO_MODULE_NAME = 'CtsJankDeviceTestCase'
+MODULE2_NAME = 'HelloWorldTests'
+CLASS_NAME = 'CtsDeviceJankUi'
+FULL_CLASS_NAME = 'android.jank.cts.ui.CtsDeviceJankUi'
+PACKAGE = 'android.jank.cts.ui'
+FIND_ONE = ROOT + 'foo/bar/jank/src/android/jank/cts/ui/CtsDeviceJankUi.java\n'
+FIND_TWO = ROOT + 'other/dir/test.java\n' + FIND_ONE
+FIND_PKG = ROOT + 'foo/bar/jank/src/android/jank/cts/ui\n'
+INT_NAME = 'example/reboot'
+GTF_INT_NAME = 'some/gtf_int_test'
+TEST_DATA_DIR = os.path.join(os.path.dirname(__file__), 'unittest_data')
+TEST_CONFIG_DATA_DIR = os.path.join(TEST_DATA_DIR, 'test_config')
+
+INT_DIR = 'tf/contrib/res/config'
+GTF_INT_DIR = 'gtf/core/res/config'
+
+CONFIG_FILE = os.path.join(MODULE_DIR, constants.MODULE_CONFIG)
+CONFIG2_FILE = os.path.join(MODULE2_DIR, constants.MODULE_CONFIG)
+JSON_FILE = 'module-info.json'
+MODULE_INFO_TARGET = '/out/%s' % JSON_FILE
+MODULE_BUILD_TARGETS = {'tradefed-core', MODULE_INFO_TARGET,
+                        'MODULES-IN-%s' % MODULE_DIR.replace('/', '-'),
+                        'module-specific-target'}
+MODULE_BUILD_TARGETS2 = {'build-target2'}
+MODULE_DATA = {constants.TI_REL_CONFIG: CONFIG_FILE,
+               constants.TI_FILTER: frozenset()}
+MODULE_DATA2 = {constants.TI_REL_CONFIG: CONFIG_FILE,
+                constants.TI_FILTER: frozenset()}
+MODULE_INFO = test_info.TestInfo(MODULE_NAME,
+                                 atf_tr.AtestTradefedTestRunner.NAME,
+                                 MODULE_BUILD_TARGETS,
+                                 MODULE_DATA)
+MODULE_INFO2 = test_info.TestInfo(MODULE2_NAME,
+                                  atf_tr.AtestTradefedTestRunner.NAME,
+                                  MODULE_BUILD_TARGETS2,
+                                  MODULE_DATA2)
+MODULE_INFOS = [MODULE_INFO]
+MODULE_INFOS2 = [MODULE_INFO, MODULE_INFO2]
+CLASS_FILTER = test_info.TestFilter(FULL_CLASS_NAME, frozenset())
+CLASS_DATA = {constants.TI_REL_CONFIG: CONFIG_FILE,
+              constants.TI_FILTER: frozenset([CLASS_FILTER])}
+PACKAGE_FILTER = test_info.TestFilter(PACKAGE, frozenset())
+PACKAGE_DATA = {constants.TI_REL_CONFIG: CONFIG_FILE,
+                constants.TI_FILTER: frozenset([PACKAGE_FILTER])}
+TEST_DATA_CONFIG = os.path.relpath(os.path.join(TEST_DATA_DIR,
+                                                constants.MODULE_CONFIG), ROOT)
+PATH_DATA = {
+    constants.TI_REL_CONFIG: TEST_DATA_CONFIG,
+    constants.TI_FILTER: frozenset([PACKAGE_FILTER])}
+EMPTY_PATH_DATA = {
+    constants.TI_REL_CONFIG: TEST_DATA_CONFIG,
+    constants.TI_FILTER: frozenset()}
+
+CLASS_BUILD_TARGETS = {'class-specific-target'}
+CLASS_INFO = test_info.TestInfo(MODULE_NAME,
+                                atf_tr.AtestTradefedTestRunner.NAME,
+                                CLASS_BUILD_TARGETS,
+                                CLASS_DATA)
+CLASS_INFOS = [CLASS_INFO]
+
+CLASS_BUILD_TARGETS2 = {'class-specific-target2'}
+CLASS_DATA2 = {constants.TI_REL_CONFIG: CONFIG_FILE,
+               constants.TI_FILTER: frozenset([CLASS_FILTER])}
+CLASS_INFO2 = test_info.TestInfo(MODULE2_NAME,
+                                 atf_tr.AtestTradefedTestRunner.NAME,
+                                 CLASS_BUILD_TARGETS2,
+                                 CLASS_DATA2)
+CLASS_INFOS = [CLASS_INFO]
+CLASS_INFOS2 = [CLASS_INFO, CLASS_INFO2]
+PACKAGE_INFO = test_info.TestInfo(MODULE_NAME,
+                                  atf_tr.AtestTradefedTestRunner.NAME,
+                                  CLASS_BUILD_TARGETS,
+                                  PACKAGE_DATA)
+PATH_INFO = test_info.TestInfo(MODULE_NAME,
+                               atf_tr.AtestTradefedTestRunner.NAME,
+                               MODULE_BUILD_TARGETS,
+                               PATH_DATA)
+EMPTY_PATH_INFO = test_info.TestInfo(MODULE_NAME,
+                                     atf_tr.AtestTradefedTestRunner.NAME,
+                                     MODULE_BUILD_TARGETS,
+                                     EMPTY_PATH_DATA)
+MODULE_CLASS_COMBINED_BUILD_TARGETS = MODULE_BUILD_TARGETS | CLASS_BUILD_TARGETS
+INT_CONFIG = os.path.join(INT_DIR, INT_NAME + '.xml')
+GTF_INT_CONFIG = os.path.join(GTF_INT_DIR, GTF_INT_NAME + '.xml')
+METHOD_NAME = 'method1'
+METHOD_FILTER = test_info.TestFilter(FULL_CLASS_NAME, frozenset([METHOD_NAME]))
+METHOD_INFO = test_info.TestInfo(
+    MODULE_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    MODULE_BUILD_TARGETS,
+    data={constants.TI_FILTER: frozenset([METHOD_FILTER]),
+          constants.TI_REL_CONFIG: CONFIG_FILE})
+METHOD2_NAME = 'method2'
+FLAT_METHOD_FILTER = test_info.TestFilter(
+    FULL_CLASS_NAME, frozenset([METHOD_NAME, METHOD2_NAME]))
+INT_INFO = test_info.TestInfo(INT_NAME,
+                              atf_tr.AtestTradefedTestRunner.NAME,
+                              set(),
+                              data={constants.TI_REL_CONFIG: INT_CONFIG,
+                                    constants.TI_FILTER: frozenset()})
+GTF_INT_INFO = test_info.TestInfo(
+    GTF_INT_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    set(),
+    data={constants.TI_FILTER: frozenset(),
+          constants.TI_REL_CONFIG: GTF_INT_CONFIG})
+
+# Sample test configurations in TEST_MAPPING file.
+TEST_MAPPING_TEST = {'name': MODULE_NAME, 'host': True}
+TEST_MAPPING_TEST_WITH_OPTION = {
+    'name': CLASS_NAME,
+    'options': [
+        {
+            'arg1': 'val1'
+        },
+        {
+            'arg2': ''
+        }
+    ]
+}
+TEST_MAPPING_TEST_WITH_OPTION_STR = '%s (arg1: val1, arg2:)' % CLASS_NAME
+TEST_MAPPING_TEST_WITH_BAD_OPTION = {
+    'name': CLASS_NAME,
+    'options': [
+        {
+            'arg1': 'val1',
+            'arg2': ''
+        }
+    ]
+}
+TEST_MAPPING_TEST_WITH_BAD_HOST_VALUE = {
+    'name': CLASS_NAME,
+    'host': 'true'
+}
+# Constants for cc test unittests
+FIND_CC_ONE = ROOT + 'foo/bt/hci/test/pf_test.cc\n'
+CC_MODULE_NAME = 'net_test_hci'
+CC_CLASS_NAME = 'PFTest'
+CC_MODULE_DIR = 'system/bt/hci'
+CC_CLASS_FILTER = test_info.TestFilter(CC_CLASS_NAME+".*", frozenset())
+CC_CONFIG_FILE = os.path.join(CC_MODULE_DIR, constants.MODULE_CONFIG)
+CC_MODULE_CLASS_DATA = {constants.TI_REL_CONFIG: CC_CONFIG_FILE,
+                        constants.TI_FILTER: frozenset([CC_CLASS_FILTER])}
+CC_MODULE_CLASS_INFO = test_info.TestInfo(CC_MODULE_NAME,
+                                          atf_tr.AtestTradefedTestRunner.NAME,
+                                          CLASS_BUILD_TARGETS, CC_MODULE_CLASS_DATA)
+CC_MODULE2_DIR = 'foo/bar/hello'
+CC_MODULE2_NAME = 'hello_world_test'
+CC_PATH = 'pf_test.cc'
+CC_FIND_ONE = ROOT + 'system/bt/hci/test/pf_test.cc:TEST_F(PFTest, test1) {\n' + \
+              ROOT + 'system/bt/hci/test/pf_test.cc:TEST_F(PFTest, test2) {\n'
+CC_FIND_TWO = ROOT + 'other/dir/test.cpp:TEST(PFTest, test_f) {\n' + \
+              ROOT + 'other/dir/test.cpp:TEST(PFTest, test_p) {\n'
+CC_CONFIG2_FILE = os.path.join(CC_MODULE2_DIR, constants.MODULE_CONFIG)
+CC_CLASS_FILTER = test_info.TestFilter(CC_CLASS_NAME+".*", frozenset())
+CC_CLASS_DATA = {constants.TI_REL_CONFIG: CC_CONFIG_FILE,
+                 constants.TI_FILTER: frozenset([CC_CLASS_FILTER])}
+CC_CLASS_INFO = test_info.TestInfo(CC_MODULE_NAME,
+                                   atf_tr.AtestTradefedTestRunner.NAME,
+                                   CLASS_BUILD_TARGETS, CC_CLASS_DATA)
+CC_METHOD_NAME = 'test1'
+CC_METHOD2_NAME = 'test2'
+CC_METHOD_FILTER = test_info.TestFilter(CC_CLASS_NAME+"."+CC_METHOD_NAME,
+                                        frozenset())
+CC_METHOD2_FILTER = test_info.TestFilter(CC_CLASS_NAME+"."+CC_METHOD_NAME+ \
+                                         ":"+CC_CLASS_NAME+"."+CC_METHOD2_NAME,
+                                         frozenset())
+CC_METHOD_INFO = test_info.TestInfo(
+    CC_MODULE_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    MODULE_BUILD_TARGETS,
+    data={constants.TI_REL_CONFIG: CC_CONFIG_FILE,
+          constants.TI_FILTER: frozenset([CC_METHOD_FILTER])})
+CC_METHOD2_INFO = test_info.TestInfo(
+    CC_MODULE_NAME,
+    atf_tr.AtestTradefedTestRunner.NAME,
+    MODULE_BUILD_TARGETS,
+    data={constants.TI_REL_CONFIG: CC_CONFIG_FILE,
+          constants.TI_FILTER: frozenset([CC_METHOD2_FILTER])})
+CC_PATH_DATA = {
+    constants.TI_REL_CONFIG: TEST_DATA_CONFIG,
+    constants.TI_FILTER: frozenset()}
+CC_PATH_INFO = test_info.TestInfo(CC_MODULE_NAME,
+                                  atf_tr.AtestTradefedTestRunner.NAME,
+                                  MODULE_BUILD_TARGETS,
+                                  CC_PATH_DATA)
+CC_PATH_DATA2 = {constants.TI_REL_CONFIG: CC_CONFIG_FILE,
+                 constants.TI_FILTER: frozenset()}
+CC_PATH_INFO2 = test_info.TestInfo(CC_MODULE_NAME,
+                                   atf_tr.AtestTradefedTestRunner.NAME,
+                                   CLASS_BUILD_TARGETS, CC_PATH_DATA2)
+CTS_INT_DIR = 'test/suite_harness/tools/cts-tradefed/res/config'
+# Constants for java, kt, cc, cpp test_find_class_file() unittests
+FIND_PATH_TESTCASE_JAVA = 'hello_world_test'
+FIND_PATH_FILENAME_CC = 'hello_world_test'
+FIND_PATH_TESTCASE_CC = 'HelloWorldTest'
+FIND_PATH_FOLDER = 'class_file_path_testing'
+FIND_PATH = os.path.join(TEST_DATA_DIR, FIND_PATH_FOLDER)
+
+DEFAULT_INSTALL_PATH = ['/path/to/install']
+# Module names
+MOD1 = 'mod1'
+MOD2 = 'mod2'
+MOD3 = 'mod3'
+FUZZY_MOD1 = 'Mod1'
+FUZZY_MOD2 = 'nod2'
+FUZZY_MOD3 = 'mod3mod3'
+
+LOCATE_CACHE = '/tmp/mlocate.db'
+CLASS_INDEX = '/tmp/classes.idx'
+QCLASS_INDEX = '/tmp/fqcn.idx'
+CC_CLASS_INDEX = '/tmp/cc_classes.idx'
+PACKAGE_INDEX = '/tmp/packages.idx'
+MODULE_INDEX = '/tmp/modules.idx'
diff --git a/atest-py2/unittest_data/AndroidTest.xml b/atest-py2/unittest_data/AndroidTest.xml
new file mode 100644
index 0000000..431eafc
--- /dev/null
+++ b/atest-py2/unittest_data/AndroidTest.xml
@@ -0,0 +1,18 @@
+<configuration description="Config for CTS Jank test cases">
+  <option name="test-suite-tag" value="cts" />
+  <option name="not-shardable" value="true" />
+  <option name="config-descriptor:metadata" key="component" value="graphics" />
+  <target_preparer class="com.android.tradefed.targetprep.suite.SuiteApkInstaller">
+    <option name="cleanup-apks" value="true" />
+    <option name="test-file-name" value="CtsJankDeviceTestCases.apk" />
+    <option name="test-file-name" value="is_not_module.apk" />
+    <option name="push" value="GtsEmptyTestApp.apk->/data/local/tmp/gts/packageinstaller/GtsEmptyTestApp.apk" />
+  </target_preparer>
+  <include name="CtsUiDeviceTestCases"/>
+  <test class="com.android.tradefed.testtype.AndroidJUnitTest" >
+    <option name="package" value="android.jank.cts" />
+    <option name="runtime-hint" value="11m20s" />
+  </test>
+  <option name="perf_arg" value="perf-setup.sh" />
+  <test class="com.android.compatibility.class.for.test" />
+</configuration>
diff --git a/atest-py2/unittest_data/CtsUiDeviceTestCases.xml b/atest-py2/unittest_data/CtsUiDeviceTestCases.xml
new file mode 100644
index 0000000..2dd30f9
--- /dev/null
+++ b/atest-py2/unittest_data/CtsUiDeviceTestCases.xml
@@ -0,0 +1,3 @@
+<target_preparer class="com.android.tradefed.targetprep.suite.SuiteApkInstaller">
+  <option name="test-file-name" value="CtsUiDeviceTestCases.apk" />
+</target_preparer>
diff --git a/atest-py2/unittest_data/KernelTest.xml b/atest-py2/unittest_data/KernelTest.xml
new file mode 100644
index 0000000..a2a110f
--- /dev/null
+++ b/atest-py2/unittest_data/KernelTest.xml
@@ -0,0 +1,21 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Copyright (C) 2020 The Android Open Source Project
+     Licensed under the Apache License, Version 2.0 (the "License");
+     you may not use this file except in compliance with the License.
+     You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+     Unless required by applicable law or agreed to in writing, software
+     distributed under the License is distributed on an "AS IS" BASIS,
+     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+     See the License for the specific language governing permissions and
+     limitations under the License.
+-->
+<configuration description="Runs kernel_test.">
+    <test class="com.android.tradefed.testtype.binary.KernelTargetTest" >
+        <option name="ignore-binary-check" value="true" />
+        <option name="per-binary-timeout" value="360000" />
+        <option name="test-command-line" key="test_class_1" value="command 1" />
+        <option name="test-command-line" key="test_class_2" value="command 2" />
+        <option name="test-command-line" key="test_class_3" value="command 3" />
+    </test>
+</configuration>
diff --git a/atest-py2/unittest_data/VtsAndroidTest.xml b/atest-py2/unittest_data/VtsAndroidTest.xml
new file mode 100644
index 0000000..35c2f4b
--- /dev/null
+++ b/atest-py2/unittest_data/VtsAndroidTest.xml
@@ -0,0 +1,30 @@
+<configuration description="Config for VTS target parsing">
+    <option name="config-descriptor:metadata" key="plan" value="vts-treble" />
+    <target_preparer class="com.android.compatibility.common.tradefed.targetprep.VtsFilePusher">
+        <option name="abort-on-push-failure" value="false"/>
+        <option name="push-group" value="push_file1.push"/>
+        <option name="push" value="DATA/lib/libhidl-gen-hash.so->/data/local/tmp/32/libhidl-gen-hash.so"/>
+        <option name="push" value="DATA/lib64/libhidl-gen-hash.so->/data/local/tmp/64/libhidl-gen-hash.so"/>
+        <option name="push" value="hal-hidl-hash/frameworks/hardware/interfaces/current.txt->/data/local/tmp/frameworks/hardware/interfaces/current.txt"/>
+        <option name="push" value="hal-hidl-hash/hardware/interfaces/current.txt->/data/local/tmp/hardware/interfaces/current.txt"/>
+        <option name="push" value="hal-hidl-hash/system/hardware/interfaces/current.txt->/data/local/tmp/system/hardware/interfaces/current.txt"/>
+        <option name="push" value="hal-hidl-hash/system/libhidl/transport/current.txt->/data/local/tmp/system/libhidl/transport/current.txt"/>
+    </target_preparer>
+    <multi_target_preparer class="com.android.tradefed.targetprep.VtsPythonVirtualenvPreparer" />
+    <test class="com.android.tradefed.testtype.VtsMultiDeviceTest">
+        <option name="test-module-name" value="VtsTestName"/>
+        <option name="binary-test-working-directory" value="_32bit::/data/nativetest/" />
+        <option name="binary-test-working-directory" value="_64bit::/data/nativetest64/" />
+        <option name="binary-test-source" value="_32bit::DATA/nativetest/vts_treble_vintf_test/vts_treble_vintf_test" />
+        <option name="binary-test-source" value="_64bit::DATA/nativetest64/vts_treble_vintf_test/vts_treble_vintf_test" />
+        <option name="binary-test-source" value="target_with_delim->/path/to/target_with_delim" />
+        <option name="binary-test-source" value="out/dir/target" />
+        <option name="binary-test-type" value="gtest"/>
+        <option name="test-timeout" value="5m"/>
+    </test>
+    <target_preparer class="com.android.compatibility.common.tradefed.targetprep.DeviceInfoCollector">
+        <option name="apk" value="CtsDeviceInfo.apk"/>
+        <option name="test-file-name" value="DeviceHealthTests.apk" />
+        <option name="test-file-name" value="DATA/app/sl4a/sl4a.apk" />
+    </target_preparer>
+</configuration>
diff --git a/atest-py2/unittest_data/cache_root/78ea54ef315f5613f7c11dd1a87f10c7.cache b/atest-py2/unittest_data/cache_root/78ea54ef315f5613f7c11dd1a87f10c7.cache
new file mode 100644
index 0000000..3b384c7
--- /dev/null
+++ b/atest-py2/unittest_data/cache_root/78ea54ef315f5613f7c11dd1a87f10c7.cache
@@ -0,0 +1,81 @@
+c__builtin__
+set
+p0
+((lp1
+ccopy_reg
+_reconstructor
+p2
+(ctest_finders.test_info
+TestInfo
+p3
+c__builtin__
+object
+p4
+Ntp5
+Rp6
+(dp7
+S'compatibility_suites'
+p8
+(lp9
+S'device-tests'
+p10
+asS'install_locations'
+p11
+g0
+((lp12
+S'device'
+p13
+aS'host'
+p14
+atp15
+Rp16
+sS'test_runner'
+p17
+S'AtestTradefedTestRunner'
+p18
+sS'test_finder'
+p19
+S'MODULE'
+p20
+sS'module_class'
+p21
+(lp22
+VNATIVE_TESTS
+p23
+asS'from_test_mapping'
+p24
+I00
+sS'build_targets'
+p25
+g0
+((lp26
+VMODULES-IN-platform_testing-tests-example-native
+p27
+atp28
+Rp29
+sg14
+I00
+sS'test_name'
+p30
+S'hello_world_test'
+p31
+sS'suite'
+p32
+NsS'data'
+p33
+(dp34
+S'rel_config'
+p35
+Vplatform_testing/tests/example/native/AndroidTest.xml
+p36
+sS'filter'
+p37
+c__builtin__
+frozenset
+p38
+((lp39
+tp40
+Rp41
+ssbatp42
+Rp43
+.
\ No newline at end of file
diff --git a/atest-py2/unittest_data/cache_root/cd66f9f5ad63b42d0d77a9334de6bb73.cache b/atest-py2/unittest_data/cache_root/cd66f9f5ad63b42d0d77a9334de6bb73.cache
new file mode 100644
index 0000000..451a51e
--- /dev/null
+++ b/atest-py2/unittest_data/cache_root/cd66f9f5ad63b42d0d77a9334de6bb73.cache
@@ -0,0 +1,72 @@
+c__builtin__
+set
+p0
+((lp1
+ccopy_reg
+_reconstructor
+p2
+(ctest_finders.test_info
+TestInfo
+p3
+c__builtin__
+object
+p4
+Ntp5
+Rp6
+(dp7
+S'install_locations'
+p8
+g0
+((lp9
+S'device'
+p10
+aS'host'
+p11
+atp12
+Rp13
+sS'test_runner'
+p14
+S'AtestTradefedTestRunner'
+p15
+sS'module_class'
+p16
+(lp17
+VNATIVE_TESTS
+p18
+asS'from_test_mapping'
+p19
+I00
+sS'build_targets'
+p20
+g0
+((lp21
+VMODULES-IN-platform_testing-tests-example-native
+p22
+atp23
+Rp24
+sg11
+I00
+sS'test_name'
+p25
+S'hello_world_test'
+p26
+sS'suite'
+p27
+NsS'data'
+p28
+(dp29
+S'rel_config'
+p30
+Vplatform_testing/tests/example/native/AndroidTest.xml
+p31
+sS'filter'
+p32
+c__builtin__
+frozenset
+p33
+((lp34
+tp35
+Rp36
+ssbatp37
+Rp38
+.
\ No newline at end of file
diff --git a/atest-py2/unittest_data/cc_path_testing/PathTesting.cpp b/atest-py2/unittest_data/cc_path_testing/PathTesting.cpp
new file mode 100644
index 0000000..cf29370
--- /dev/null
+++ b/atest-py2/unittest_data/cc_path_testing/PathTesting.cpp
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2018 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <gtest/gtest.h>
+
+#include <stdio.h>
+
+TEST(HelloWorldTest, PrintHelloWorld) {
+    printf("Hello, World!");
+}
+
diff --git a/atest-py2/unittest_data/class_file_path_testing/hello_world_test.cc b/atest-py2/unittest_data/class_file_path_testing/hello_world_test.cc
new file mode 100644
index 0000000..8062618
--- /dev/null
+++ b/atest-py2/unittest_data/class_file_path_testing/hello_world_test.cc
@@ -0,0 +1,23 @@
+/*
+ * Copyright (C) 2018 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <gtest/gtest.h>
+
+#include <stdio.h>
+
+TEST_F(HelloWorldTest, PrintHelloWorld) {
+    printf("Hello, World!");
+}
\ No newline at end of file
diff --git a/atest-py2/unittest_data/class_file_path_testing/hello_world_test.cpp b/atest-py2/unittest_data/class_file_path_testing/hello_world_test.cpp
new file mode 100644
index 0000000..8062618
--- /dev/null
+++ b/atest-py2/unittest_data/class_file_path_testing/hello_world_test.cpp
@@ -0,0 +1,23 @@
+/*
+ * Copyright (C) 2018 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <gtest/gtest.h>
+
+#include <stdio.h>
+
+TEST_F(HelloWorldTest, PrintHelloWorld) {
+    printf("Hello, World!");
+}
\ No newline at end of file
diff --git a/atest-py2/unittest_data/class_file_path_testing/hello_world_test.java b/atest-py2/unittest_data/class_file_path_testing/hello_world_test.java
new file mode 100644
index 0000000..8e0a999
--- /dev/null
+++ b/atest-py2/unittest_data/class_file_path_testing/hello_world_test.java
@@ -0,0 +1,9 @@
+package com.test.hello_world_test;
+
+public class HelloWorldTest {
+    @Test
+    public void testMethod1() throws Exception {}
+
+    @Test
+    public void testMethod2() throws Exception {}
+}
diff --git a/atest-py2/unittest_data/class_file_path_testing/hello_world_test.kt b/atest-py2/unittest_data/class_file_path_testing/hello_world_test.kt
new file mode 100644
index 0000000..623b4a2
--- /dev/null
+++ b/atest-py2/unittest_data/class_file_path_testing/hello_world_test.kt
@@ -0,0 +1 @@
+package com.test.hello_world_test
\ No newline at end of file
diff --git a/atest/tools/tradefederation/core/proto/__init__.py b/atest-py2/unittest_data/class_file_path_testing/hello_world_test.other
old mode 100755
new mode 100644
similarity index 100%
copy from atest/tools/tradefederation/core/proto/__init__.py
copy to atest-py2/unittest_data/class_file_path_testing/hello_world_test.other
diff --git a/atest-py2/unittest_data/gts_auth_key.json b/atest-py2/unittest_data/gts_auth_key.json
new file mode 100644
index 0000000..0e48d55
--- /dev/null
+++ b/atest-py2/unittest_data/gts_auth_key.json
@@ -0,0 +1,8 @@
+{
+  "type": "service_account",
+  "project_id": "test",
+  "private_key_id": "test",
+  "private_key": "test",
+  "client_email": "test",
+  "client_id": "test"
+}
diff --git a/atest-py2/unittest_data/integration_dir_testing/int_dir1/int_dir_testing.xml b/atest-py2/unittest_data/integration_dir_testing/int_dir1/int_dir_testing.xml
new file mode 100644
index 0000000..2dd30f9
--- /dev/null
+++ b/atest-py2/unittest_data/integration_dir_testing/int_dir1/int_dir_testing.xml
@@ -0,0 +1,3 @@
+<target_preparer class="com.android.tradefed.targetprep.suite.SuiteApkInstaller">
+  <option name="test-file-name" value="CtsUiDeviceTestCases.apk" />
+</target_preparer>
diff --git a/atest-py2/unittest_data/integration_dir_testing/int_dir2/int_dir_testing.xml b/atest-py2/unittest_data/integration_dir_testing/int_dir2/int_dir_testing.xml
new file mode 100644
index 0000000..2dd30f9
--- /dev/null
+++ b/atest-py2/unittest_data/integration_dir_testing/int_dir2/int_dir_testing.xml
@@ -0,0 +1,3 @@
+<target_preparer class="com.android.tradefed.targetprep.suite.SuiteApkInstaller">
+  <option name="test-file-name" value="CtsUiDeviceTestCases.apk" />
+</target_preparer>
diff --git a/atest-py2/unittest_data/module-info.json b/atest-py2/unittest_data/module-info.json
new file mode 100644
index 0000000..0959fad
--- /dev/null
+++ b/atest-py2/unittest_data/module-info.json
@@ -0,0 +1,19 @@
+{
+  "AmSlam": { "class": ["APPS"],  "path": ["foo/bar/AmSlam"],  "tags": ["tests"],  "installed": ["out/target/product/generic/data/app/AmSlam/AmSlam.apk"], "module_name": "AmSlam" },
+  "CtsJankDeviceTestCases": { "class": ["APPS"],  "path": ["foo/bar/jank"],  "tags": ["optional"],  "installed": ["out/target/product/generic/data/app/CtsJankDeviceTestCases/CtsJankDeviceTestCases.apk"], "module_name": "CtsJankDeviceTestCases" },
+  "CtsUiDeviceTestCases": { "class": ["APPS"],  "path": ["tf/core/CtsUiDeviceTestCases"],  "tags": ["optional"],  "installed": ["out/target/product/generic/data/app/CtsUiDeviceTestCases/CtsUiDeviceTestCases.apk"], "module_name": "CtsJankDeviceTestCases" },
+  "VtsTarget": { "class": ["FAKE"],  "path": ["foo/bar/jank"],  "tags": ["optional"],  "installed": ["out/target/product/generic/VtsTarget"], "module_name": "VtsTarget" },
+  "google-tradefed": { "class": ["JAVA_LIBRARIES"],  "path": ["gtf/core"],  "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/google-tradefed.jar"], "module_name": "google-tradefed" },
+  "google-tradefed-contrib": { "class": ["JAVA_LIBRARIES"],  "path": ["gtf/contrib"],  "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/google-tradefed-contrib.jar"], "module_name": "google-tradefed-contrib" },
+  "tradefed": { "class": ["EXECUTABLES",  "JAVA_LIBRARIES"],  "path": ["tf/core"],  "tags": ["optional"],  "installed": ["out/host/linux-x86/bin/tradefed.sh",  "out/host/linux-x86/framework/tradefed.jar"], "module_name": "tradefed" },
+  "tradefed-contrib": { "class": ["JAVA_LIBRARIES"],  "path": ["tf/contrib"],  "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "tradefed-contrib" },
+  "module-no-path": { "class": ["JAVA_LIBRARIES"],  "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": ["module-no-path"] },
+  "module1": { "class": ["JAVA_LIBRARIES"],  "path": ["shared/path/to/be/used"], "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "module1" },
+  "module2": { "class": ["JAVA_LIBRARIES"],  "path": ["shared/path/to/be/used"], "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "module2" },
+  "multiarch1": { "class": ["JAVA_LIBRARIES"],  "path": ["shared/path/to/be/used2"], "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "multiarch1" },
+  "multiarch1_32": { "class": ["JAVA_LIBRARIES"],  "path": ["shared/path/to/be/used2"], "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "multiarch1" },
+  "multiarch2": { "class": ["JAVA_LIBRARIES"],  "path": ["shared/path/to/be/used2"], "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "multiarch2" },
+  "multiarch2_32": { "class": ["JAVA_LIBRARIES"],  "path": ["shared/path/to/be/used2"], "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "multiarch2" },
+  "multiarch3": { "class": ["JAVA_LIBRARIES"],  "path": ["shared/path/to/be/used2"], "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "multiarch3" },
+  "multiarch3_32": { "class": ["JAVA_LIBRARIES"],  "path": ["shared/path/to/be/used2"], "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "multiarch3_32" }
+}
diff --git a/atest-py2/unittest_data/path_testing/PathTesting.java b/atest-py2/unittest_data/path_testing/PathTesting.java
new file mode 100644
index 0000000..2245c67
--- /dev/null
+++ b/atest-py2/unittest_data/path_testing/PathTesting.java
@@ -0,0 +1,22 @@
+/*
+ * Copyright (C) 2017 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package android.jank.cts.ui;
+
+/** UNUSED Class file for unit tests. */
+public class SomeClassForTesting {
+    private static final String SOME_UNUSED_VAR = "For testing purposes";
+}
diff --git a/atest/tools/tradefederation/core/proto/__init__.py b/atest-py2/unittest_data/path_testing_empty/.empty_file
old mode 100755
new mode 100644
similarity index 100%
copy from atest/tools/tradefederation/core/proto/__init__.py
copy to atest-py2/unittest_data/path_testing_empty/.empty_file
diff --git a/atest/tools/tradefederation/core/proto/__init__.py b/atest-py2/unittest_data/test_config/a.xml
old mode 100755
new mode 100644
similarity index 100%
copy from atest/tools/tradefederation/core/proto/__init__.py
copy to atest-py2/unittest_data/test_config/a.xml
diff --git a/atest-py2/unittest_data/test_mapping/folder1/test_mapping_sample b/atest-py2/unittest_data/test_mapping/folder1/test_mapping_sample
new file mode 100644
index 0000000..05cea61
--- /dev/null
+++ b/atest-py2/unittest_data/test_mapping/folder1/test_mapping_sample
@@ -0,0 +1,22 @@
+{
+  "presubmit": [
+    {
+      "name": "test2"
+    }
+  ],
+  "postsubmit": [
+    {
+      "name": "test3"
+    }
+  ],
+  "other_group": [
+    {
+      "name": "test4"
+    }
+  ],
+  "imports": [
+    {
+      "path": "../folder2"
+    }
+  ]
+}
diff --git a/atest-py2/unittest_data/test_mapping/folder2/test_mapping_sample b/atest-py2/unittest_data/test_mapping/folder2/test_mapping_sample
new file mode 100644
index 0000000..7517cd5
--- /dev/null
+++ b/atest-py2/unittest_data/test_mapping/folder2/test_mapping_sample
@@ -0,0 +1,23 @@
+{
+  "presubmit": [
+    {
+      "name": "test5"
+    }
+  ],
+  "postsubmit": [
+    {
+      "name": "test6"
+    }
+  ],
+  "imports": [
+    {
+      "path": "../folder1"
+    },
+    {
+      "path": "../folder3/folder4"
+    },
+    {
+      "path": "../folder3/non-existing"
+    }
+  ]
+}
diff --git a/atest-py2/unittest_data/test_mapping/folder3/folder4/test_mapping_sample b/atest-py2/unittest_data/test_mapping/folder3/folder4/test_mapping_sample
new file mode 100644
index 0000000..6310055
--- /dev/null
+++ b/atest-py2/unittest_data/test_mapping/folder3/folder4/test_mapping_sample
@@ -0,0 +1,7 @@
+{
+  "imports": [
+    {
+      "path": "../../folder5"
+    }
+  ]
+}
diff --git a/atest-py2/unittest_data/test_mapping/folder3/test_mapping_sample b/atest-py2/unittest_data/test_mapping/folder3/test_mapping_sample
new file mode 100644
index 0000000..ecd5b7d
--- /dev/null
+++ b/atest-py2/unittest_data/test_mapping/folder3/test_mapping_sample
@@ -0,0 +1,17 @@
+{
+  "presubmit": [
+    {
+      "name": "test7"
+    }
+  ],
+  "postsubmit": [
+    {
+      "name": "test8"
+    }
+  ],
+  "imports": [
+    {
+      "path": "../folder1"
+    }
+  ]
+}
diff --git a/atest-py2/unittest_data/test_mapping/folder5/test_mapping_sample b/atest-py2/unittest_data/test_mapping/folder5/test_mapping_sample
new file mode 100644
index 0000000..c449a0a
--- /dev/null
+++ b/atest-py2/unittest_data/test_mapping/folder5/test_mapping_sample
@@ -0,0 +1,12 @@
+{
+  "presubmit": [
+    {
+      "name": "test9"
+    }
+  ],
+  "postsubmit": [
+    {
+      "name": "test10"
+    }
+  ]
+}
diff --git a/atest-py2/unittest_data/test_mapping/folder6/test_mapping_sample_golden b/atest-py2/unittest_data/test_mapping/folder6/test_mapping_sample_golden
new file mode 100644
index 0000000..db3998d
--- /dev/null
+++ b/atest-py2/unittest_data/test_mapping/folder6/test_mapping_sample_golden
@@ -0,0 +1,14 @@
+{
+  "presubmit": [
+    {
+      "name": "test1",
+      "host": true,
+      "include-filter": "testClass#testMethod"
+    }
+  ],
+  "imports": [
+    {
+      "path": "path1//path2//path3"
+    }
+  ]
+}
diff --git a/atest-py2/unittest_data/test_mapping/folder6/test_mapping_sample_with_comments b/atest-py2/unittest_data/test_mapping/folder6/test_mapping_sample_with_comments
new file mode 100644
index 0000000..3f4083f
--- /dev/null
+++ b/atest-py2/unittest_data/test_mapping/folder6/test_mapping_sample_with_comments
@@ -0,0 +1,16 @@
+{#comments1
+  "presubmit": [//comments2 // comments3 # comment4
+  #comments3
+    { #comments4
+      "name": "test1",#comments5
+//comments6
+      "host": true,//comments7
+      "include-filter": "testClass#testMethod" #comment11 // another comments
+    }#comments8
+  ],#comments9 // another comments
+  "imports": [
+    {
+      "path": "path1//path2//path3"#comment12
+    }
+  ]
+}#comments10
diff --git a/atest-py2/unittest_data/test_mapping/test_mapping_sample b/atest-py2/unittest_data/test_mapping/test_mapping_sample
new file mode 100644
index 0000000..a4edd9c
--- /dev/null
+++ b/atest-py2/unittest_data/test_mapping/test_mapping_sample
@@ -0,0 +1,8 @@
+{
+  "presubmit": [
+    {
+      "name": "test1",
+      "host": true
+    }
+  ]
+}
diff --git a/atest-py2/unittest_data/vts_plan_files/vts-aa.xml b/atest-py2/unittest_data/vts_plan_files/vts-aa.xml
new file mode 100644
index 0000000..629005c
--- /dev/null
+++ b/atest-py2/unittest_data/vts_plan_files/vts-aa.xml
@@ -0,0 +1,4 @@
+<configuration description="VTS Serving Plan for Staging(new) tests">
+  <include name="vts-bb" />
+  <include name="vts-dd" />
+</configuration>
diff --git a/atest-py2/unittest_data/vts_plan_files/vts-bb.xml b/atest-py2/unittest_data/vts_plan_files/vts-bb.xml
new file mode 100644
index 0000000..87c7588
--- /dev/null
+++ b/atest-py2/unittest_data/vts_plan_files/vts-bb.xml
@@ -0,0 +1,3 @@
+<configuration description="VTS Serving Plan for Staging(new) tests">
+  <include name="vts-cc" />
+</configuration>
diff --git a/atest-py2/unittest_data/vts_plan_files/vts-cc.xml b/atest-py2/unittest_data/vts_plan_files/vts-cc.xml
new file mode 100644
index 0000000..14125c0
--- /dev/null
+++ b/atest-py2/unittest_data/vts_plan_files/vts-cc.xml
@@ -0,0 +1,2 @@
+<configuration description="Common preparer">
+</configuration>
diff --git a/atest-py2/unittest_data/vts_plan_files/vts-dd.xml b/atest-py2/unittest_data/vts_plan_files/vts-dd.xml
new file mode 100644
index 0000000..a56597b
--- /dev/null
+++ b/atest-py2/unittest_data/vts_plan_files/vts-dd.xml
@@ -0,0 +1,2 @@
+<configuration description="VTS Serving Plan for Staging(new) tests">
+</configuration>
diff --git a/atest-py2/unittest_data/vts_plan_files/vts-staging-default.xml b/atest-py2/unittest_data/vts_plan_files/vts-staging-default.xml
new file mode 100644
index 0000000..34cccce
--- /dev/null
+++ b/atest-py2/unittest_data/vts_plan_files/vts-staging-default.xml
@@ -0,0 +1,4 @@
+<?xml version="1.0" encoding="utf-8"?>
+<configuration description="VTS Serving Plan for Staging(new) tests">
+  <include name="vts-aa" />
+</configuration>
diff --git a/atest-py2/unittest_data/vts_push_files/push_file1.push b/atest-py2/unittest_data/vts_push_files/push_file1.push
new file mode 100644
index 0000000..b55f453
--- /dev/null
+++ b/atest-py2/unittest_data/vts_push_files/push_file1.push
@@ -0,0 +1,4 @@
+push_file1_target1->/path/to/push/push_file1_target1
+push_file1_target2->/path/to/push/push_file1_target2
+
+push_file2.push
diff --git a/atest-py2/unittest_data/vts_push_files/push_file2.push b/atest-py2/unittest_data/vts_push_files/push_file2.push
new file mode 100644
index 0000000..3c5ae78
--- /dev/null
+++ b/atest-py2/unittest_data/vts_push_files/push_file2.push
@@ -0,0 +1,2 @@
+push_file2_target1->/path/to/push_file2_target1
+push_file2_target2->/path/to/push_file2_target2
diff --git a/atest-py2/unittest_utils.py b/atest-py2/unittest_utils.py
new file mode 100644
index 0000000..a57afac
--- /dev/null
+++ b/atest-py2/unittest_utils.py
@@ -0,0 +1,104 @@
+#!/usr/bin/env python
+#
+# Copyright 2017, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Utility functions for unit tests."""
+
+import os
+
+import constants
+import unittest_constants as uc
+
+def assert_strict_equal(test_class, first, second):
+    """Check for strict equality and strict equality of nametuple elements.
+
+    assertEqual considers types equal to their subtypes, but we want to
+    not consider set() and frozenset() equal for testing.
+    """
+    test_class.assertEqual(first, second)
+    # allow byte and unicode string equality.
+    if not (isinstance(first, basestring) and
+            isinstance(second, basestring)):
+        test_class.assertIsInstance(first, type(second))
+        test_class.assertIsInstance(second, type(first))
+    # Recursively check elements of namedtuples for strict equals.
+    if isinstance(first, tuple) and hasattr(first, '_fields'):
+        # pylint: disable=invalid-name
+        for f in first._fields:
+            assert_strict_equal(test_class, getattr(first, f),
+                                getattr(second, f))
+
+def assert_equal_testinfos(test_class, test_info_a, test_info_b):
+    """Check that the passed in TestInfos are equal."""
+    # Use unittest.assertEqual to do checks when None is involved.
+    if test_info_a is None or test_info_b is None:
+        test_class.assertEqual(test_info_a, test_info_b)
+        return
+
+    for attr in test_info_a.__dict__:
+        test_info_a_attr = getattr(test_info_a, attr)
+        test_info_b_attr = getattr(test_info_b, attr)
+        test_class.assertEqual(test_info_a_attr, test_info_b_attr,
+                               msg=('TestInfo.%s mismatch: %s != %s' %
+                                    (attr, test_info_a_attr, test_info_b_attr)))
+
+def assert_equal_testinfo_sets(test_class, test_info_set_a, test_info_set_b):
+    """Check that the sets of TestInfos are equal."""
+    test_class.assertEqual(len(test_info_set_a), len(test_info_set_b),
+                           msg=('mismatch # of TestInfos: %d != %d' %
+                                (len(test_info_set_a), len(test_info_set_b))))
+    # Iterate over a set and pop them out as you compare them.
+    while test_info_set_a:
+        test_info_a = test_info_set_a.pop()
+        test_info_b_to_remove = None
+        for test_info_b in test_info_set_b:
+            try:
+                assert_equal_testinfos(test_class, test_info_a, test_info_b)
+                test_info_b_to_remove = test_info_b
+                break
+            except AssertionError:
+                pass
+        if test_info_b_to_remove:
+            test_info_set_b.remove(test_info_b_to_remove)
+        else:
+            # We haven't found a match, raise an assertion error.
+            raise AssertionError('No matching TestInfo (%s) in [%s]' %
+                                 (test_info_a, ';'.join([str(t) for t in test_info_set_b])))
+
+
+def isfile_side_effect(value):
+    """Mock return values for os.path.isfile."""
+    if value == '/%s/%s' % (uc.CC_MODULE_DIR, constants.MODULE_CONFIG):
+        return True
+    if value == '/%s/%s' % (uc.MODULE_DIR, constants.MODULE_CONFIG):
+        return True
+    if value.endswith('.cc'):
+        return True
+    if value.endswith('.cpp'):
+        return True
+    if value.endswith('.java'):
+        return True
+    if value.endswith('.kt'):
+        return True
+    if value.endswith(uc.INT_NAME + '.xml'):
+        return True
+    if value.endswith(uc.GTF_INT_NAME + '.xml'):
+        return True
+    return False
+
+
+def realpath_side_effect(path):
+    """Mock return values for os.path.realpath."""
+    return os.path.join(uc.ROOT, path)
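+
+
+# Illustrative (assumed) usage of the side-effect helpers above in a unit
+# test, using the standard mock library:
+#   @mock.patch('os.path.realpath', side_effect=realpath_side_effect)
+#   @mock.patch('os.path.isfile', side_effect=isfile_side_effect)
+#   def test_something(self, _mock_isfile, _mock_realpath):
+#       ...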
diff --git a/atest/Android.bp b/atest/Android.bp
index 413dc32..35636c1 100644
--- a/atest/Android.bp
+++ b/atest/Android.bp
@@ -52,8 +52,8 @@
         "asuite_lib_test/*.py",
         "proto/*_pb2.py",
         "proto/__init__.py",
-        "tools/tradefederation/core/proto/__init__.py",
-        "tools/tradefederation/core/proto/*_pb2.py",
+        "tf_proto/__init__.py",
+        "tf_proto/*_pb2.py",
     ],
     libs: [
         "atest_py3_proto",
@@ -94,6 +94,9 @@
     srcs: [
         "**/*.py",
     ],
+    test_options: {
+        unit_test: true,
+    },
     data: [
         "tools/updatedb_darwin.sh",
         "unittest_data/**/*",
@@ -104,8 +107,8 @@
         "proto/*_pb2.py",
         "proto/__init__.py",
         "tools/atest_updatedb_unittest.py",
-        "tools/tradefederation/core/proto/__init__.py",
-        "tools/tradefederation/core/proto/*_pb2.py",
+        "tf_proto/__init__.py",
+        "tf_proto/*_pb2.py",
     ],
     libs: [
         "atest_py3_proto",
diff --git a/atest/INTEGRATION_TESTS b/atest/INTEGRATION_TESTS
index 4867f50..268e52a 100644
--- a/atest/INTEGRATION_TESTS
+++ b/atest/INTEGRATION_TESTS
@@ -49,11 +49,8 @@
 
 ###[Test Finder: CC_CLASS, Test Runner:AtestTradefedTestRunner]###
 ###Purpose: Test with finder: CC_CLASS and runner: AtestTradefedTestRunner###
-### TODO: (b/168681581): Add PacketFragmenterTest and
-### PacketFragmenterTest#test_no_fragment_necessary,test_ble_fragment_necessary
-### back after following-up enhancements done.
-ProxyResolverV8Test
-ProxyResolverV8Test#Direct,Direct_C_API,ReturnEmptyString
+PacketFragmenterTest
+PacketFragmenterTest#test_no_fragment_necessary,test_ble_fragment_necessary
 
 ###[Test Finder: INTEGRATION, Test Runner:AtestTradefedTestRunner]###
 ###Purpose: Test with finder: INTEGRATION and runner: AtestTradefedTestRunner###
@@ -91,8 +88,8 @@
 ###[Paramatize GTest + AtestTradefedTestRunner]###
 ###Purpose: Test with Paramatize GTest testcases###
 # Mark this due to multiple selection not support in integration test.
-# Run/HeapprofdEndToEnd
-perfetto_integrationtests:Run/HeapprofdEndToEnd
+# PerInstance/CameraHidlTest.startStopPreview/0_internal_0
+VtsHalCameraProviderV2_4TargetTest:PerInstance/CameraHidlTest#startStopPreview/0_internal_0
 
 ###[Paramatize Java Test + AtestTradefedTestRunner]###
 ###Purpose: Test with Paramatize Java testcases###
diff --git a/atest/asuite_metrics.py b/atest/asuite_metrics.py
index a8dbb94..3eba00f 100644
--- a/atest/asuite_metrics.py
+++ b/atest/asuite_metrics.py
@@ -93,7 +93,7 @@
 def _get_old_key():
     """Get key from old meta data file if exists, else return None."""
     old_file = os.path.join(os.environ[_ANDROID_BUILD_TOP],
-                            'tools/tradefederation/core/atest', '.metadata')
+                            'tools/asuite/atest', '.metadata')
     key = None
     if os.path.isfile(old_file):
         with open(old_file) as f:
diff --git a/atest/atest.py b/atest/atest.py
index c973723..443c4c8 100755
--- a/atest/atest.py
+++ b/atest/atest.py
@@ -35,15 +35,6 @@
 import time
 import platform
 
-# This is a workaround of b/144743252, where the http.client failed to loaded
-# because the googleapiclient was found before the built-in libs; enabling embedded
-# launcher(b/135639220) has not been reliable and other issue will raise.
-# The workaround is repositioning the built-in libs before other 3rd libs in PYTHONPATH(sys.path)
-# to eliminate the symptom of failed loading http.client.
-import sysconfig
-sys.path.insert(0, os.path.dirname(sysconfig.get_paths()['purelib']))
-
-#pylint: disable=wrong-import-position
 from multiprocessing import Process
 
 import atest_arg_parser
@@ -122,7 +113,9 @@
     args = parser.parse_args(pruned_argv)
     args.custom_args = []
     if custom_args_index is not None:
-        args.custom_args = argv[custom_args_index+1:]
+        for arg in argv[custom_args_index+1:]:
+            logging.debug('Quoting regex argument %s', arg)
+            args.custom_args.append(atest_utils.quote(arg))
     return args
 
 
@@ -720,9 +713,6 @@
     if args.list_modules:
         _print_testable_modules(mod_info, args.list_modules)
         return constants.EXIT_CODE_SUCCESS
-    # Clear cache if user pass -c option
-    if args.clear_cache:
-        atest_utils.clean_test_info_caches(args.tests)
     build_targets = set()
     test_infos = set()
     if _will_run_tests(args):
@@ -838,9 +828,10 @@
 
         EXIT_CODE = main(sys.argv[1:], RESULTS_DIR, atest_configs.GLOBAL_ARGS)
         DETECTOR = bug_detector.BugDetector(sys.argv[1:], EXIT_CODE)
-        metrics.LocalDetectEvent(
-            detect_type=constants.DETECT_TYPE_BUG_DETECTED,
-            result=DETECTOR.caught_result)
-        if result_file:
-            print("Run 'atest --history' to review test result history.")
+        if EXIT_CODE not in constants.EXIT_CODES_BEFORE_TEST:
+            metrics.LocalDetectEvent(
+                detect_type=constants.DETECT_TYPE_BUG_DETECTED,
+                result=DETECTOR.caught_result)
+            if result_file:
+                print("Run 'atest --history' to review test result history.")
     sys.exit(EXIT_CODE)
diff --git a/atest/atest_arg_parser.py b/atest/atest_arg_parser.py
index e827744..bdacfd4 100644
--- a/atest/atest_arg_parser.py
+++ b/atest/atest_arg_parser.py
@@ -77,7 +77,7 @@
 SHARDING = 'Option to specify sharding count. The default value is 2'
 UPDATE_CMD_MAPPING = ('Update the test command of input tests. Warning: result '
                       'will be saved under '
-                      'tools/tradefederation/core/atest/test_data.')
+                      'tools/asuite/atest/test_data.')
 USER_TYPE = ('Run test with specific user type, e.g. atest <test> --user-type '
              'secondary_user')
 VERBOSE = 'Display DEBUG level logging.'
@@ -108,8 +108,8 @@
         if converted_value < 1:
             raise argparse.ArgumentTypeError(err_msg)
         return converted_value
-    except ValueError:
-        raise argparse.ArgumentTypeError(err_msg)
+    except ValueError as value_err:
+        raise argparse.ArgumentTypeError(err_msg) from value_err
 
 
 class AtestArgParser(argparse.ArgumentParser):
@@ -117,8 +117,7 @@
 
     def __init__(self):
         """Initialise an ArgumentParser instance."""
-        super(AtestArgParser, self).__init__(
-            description=HELP_DESC, add_help=False)
+        super().__init__(description=HELP_DESC, add_help=False)
 
     def add_atest_args(self):
         """A function that does ArgumentParser.add_argument()"""
@@ -350,9 +349,6 @@
         -D --tf-debug
             {TF_DEBUG}
 
-        --history
-            {HISTORY}
-
         --host
             {HOST}
 
@@ -401,6 +397,9 @@
         --collect-tests-only
             {COLLECT_TESTS_ONLY}
 
+        --history
+            {HISTORY}
+
         --info
             {INFO}
 
@@ -777,5 +776,5 @@
         atest -v <test> -- <custom_args1> <custom_args2>
 
 
-                                                     2020-06-04
+                                                     2020-12-09
 '''
diff --git a/atest/atest_unittest.py b/atest/atest_unittest.py
index a56b78f..bcac7c7 100755
--- a/atest/atest_unittest.py
+++ b/atest/atest_unittest.py
@@ -28,6 +28,7 @@
 from io import StringIO
 from unittest import mock
 
+# pylint: disable=wrong-import-order
 import atest
 import constants
 import module_info
@@ -93,6 +94,7 @@
                     atest._has_valid_test_mapping_args(parsed_args),
                     'Failed to validate: %s' % args)
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     @mock.patch('json.load', return_value={})
     @mock.patch('builtins.open', new_callable=mock.mock_open)
     @mock.patch('os.path.isfile', return_value=True)
@@ -145,6 +147,7 @@
         # Check if no module_info, then nothing printed to screen.
         self.assertEqual(capture_output.getvalue(), null_output)
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     @mock.patch('json.load', return_value={})
     @mock.patch('builtins.open', new_callable=mock.mock_open)
     @mock.patch('os.path.isfile', return_value=True)
diff --git a/atest/atest_utils.py b/atest/atest_utils.py
index 91b068e..b217e5a 100644
--- a/atest/atest_utils.py
+++ b/atest/atest_utils.py
@@ -18,6 +18,7 @@
 
 
 # pylint: disable=import-outside-toplevel
+# pylint: disable=too-many-lines
 
 from __future__ import print_function
 
@@ -33,9 +34,20 @@
 import shutil
 import subprocess
 import sys
+import sysconfig
 import time
 import zipfile
 
+# This is a workaround for b/144743252, where http.client failed to load
+# because the googleapiclient was found before the built-in libs; enabling
+# the embedded launcher (b/135639220) has not been reliable and other issues
+# would arise.
+# The workaround repositions the built-in libs before other 3rd-party libs in
+# PYTHONPATH (sys.path) to eliminate the failure to load http.client.
+sys.path.insert(0, os.path.dirname(sysconfig.get_paths()['purelib']))
+sys.path.insert(0, os.path.dirname(sysconfig.get_paths()['stdlib']))
+
+#pylint: disable=wrong-import-position
 import atest_decorator
 import atest_error
 import constants
@@ -43,7 +55,7 @@
 # This proto related module will be auto generated in build time.
 # pylint: disable=no-name-in-module
 # pylint: disable=import-error
-from tools.tradefederation.core.proto import test_record_pb2
+from tools.asuite.atest.tf_proto import test_record_pb2
 
 # b/147562331 only occurs when running atest in source code. We don't encourge
 # the users to manually "pip3 install protobuf", therefore when the exception
@@ -57,7 +69,8 @@
     print("You shouldn't see this message unless you ran 'atest-src'."
           "To resolve the issue, please run:\n\t{}\n"
           "and try again.".format('pip3 install protobuf'))
-    logging.debug('Import error, %s', err)
+    print('Import error: %s' % err)
+    print('sys.path: %s' % sys.path)
     sys.exit(constants.IMPORT_FAILURE)
 
 _BASH_RESET_CODE = '\033[0m\n'
@@ -90,8 +103,7 @@
     "| awk '{{print $1}}');"
     # Get the list of modified files from HEAD to previous $ahead generation.
     "git diff HEAD~$ahead --name-only")
-_TEST_WITH_MAINLINE_MODULES_RE = re.compile(
-    r'(?P<test>.*)\[(?P<mainline_modules>.*)\]')
+_ANDROID_BUILD_EXT = ('.bp', '.mk')
 
 def get_build_cmd():
     """Compose build command with no-absolute path and flag "--make-mode".
@@ -945,9 +957,55 @@
         A string of test without mainline modules,
         A string of mainline modules.
     """
-    result = _TEST_WITH_MAINLINE_MODULES_RE.match(test)
+    result = constants.TEST_WITH_MAINLINE_MODULES_RE.match(test)
     if not result:
         return test, ""
     test_wo_mainline_modules = result.group('test')
     mainline_modules = result.group('mainline_modules')
     return test_wo_mainline_modules, mainline_modules
+
+def has_wildcard(test_name):
+    """ Tell whether the test_name(either a list or string) contains wildcard
+    symbols.
+
+    Args:
+        test_name: A list or a str.
+
+    Return:
+        True if test_name contains wildcard, False otherwise.
+    """
+    if isinstance(test_name, str):
+        return any(char in test_name for char in ('*', '?'))
+    if isinstance(test_name, list):
+        for name in test_name:
+            if has_wildcard(name):
+                return True
+    return False
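+
+# For example (mirroring atest_utils_unittest.test_has_wildcard):
+#   has_wildcard('test1') -> False
+#   has_wildcard(['test1', 'b*', 'a?b*']) -> True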
+
+def is_build_file(path):
+    """ If input file is one of an android build file.
+
+    Args:
+        path: A string of file path.
+
+    Return:
+        True if path is android build file, False otherwise.
+    """
+    return bool(os.path.splitext(path)[-1] in _ANDROID_BUILD_EXT)
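+
+# For example, given the extensions in _ANDROID_BUILD_EXT:
+#   is_build_file('frameworks/base/Android.bp') -> True
+#   is_build_file('frameworks/base/Foo.java')   -> False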
+
+def quote(input_str):
+    """ If the input string -- especially in custom args -- contains shell-aware
+    characters, insert a blackslash "\" before the char.
+
+    e.g. unit(test|testing|testing) -> 'unit(test|testing|testing)'
+
+    Args:
+        input_str: A string from user input.
+
+    Returns: A string with single quotes if regex chars were detected.
+    """
+    special_chars = {'[', '(', '{', '|', '\\', '*', '?', '+', '^'}
+    for char in special_chars:
+        if char in input_str:
+            return "\'" + input_str + "\'"
+    return input_str
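+
+# For example (mirroring atest_utils_unittest.test_quote):
+#   quote(r'TEST_(F|P)[0-9].*') -> "'TEST_(F|P)[0-9].*'"
+#   quote('TEST_P224')          -> 'TEST_P224' (returned unchanged)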
diff --git a/atest/atest_utils_unittest.py b/atest/atest_utils_unittest.py
index b4b80bd..2b33118 100755
--- a/atest/atest_utils_unittest.py
+++ b/atest/atest_utils_unittest.py
@@ -538,6 +538,20 @@
         mock_check_output.return_value = REPO_INFO_OUTPUT
         self.assertEqual(None, atest_utils.get_manifest_branch())
 
+    def test_has_wildcard(self):
+        """Test method of has_wildcard"""
+        self.assertFalse(atest_utils.has_wildcard('test1'))
+        self.assertFalse(atest_utils.has_wildcard(['test1']))
+        self.assertTrue(atest_utils.has_wildcard('test1?'))
+        self.assertTrue(atest_utils.has_wildcard(['test1', 'b*', 'a?b*']))
+
+    # pylint: disable=anomalous-backslash-in-string
+    def test_quote(self):
+        """Test method of quote()"""
+        target_str = r'TEST_(F|P)[0-9].*\w$'
+        expected_str = '\'TEST_(F|P)[0-9].*\w$\''
+        self.assertEqual(atest_utils.quote(target_str), expected_str)
+        self.assertEqual(atest_utils.quote('TEST_P224'), 'TEST_P224')
 
 if __name__ == "__main__":
     unittest.main()
diff --git a/atest/cli_translator.py b/atest/cli_translator.py
index 777d95b..d7dcd8e 100644
--- a/atest/cli_translator.py
+++ b/atest/cli_translator.py
@@ -19,6 +19,7 @@
 
 from __future__ import print_function
 
+import fnmatch
 import json
 import logging
 import os
@@ -80,6 +81,7 @@
 
     # pylint: disable=too-many-locals
     # pylint: disable=too-many-branches
+    # pylint: disable=too-many-statements
     def _find_test_infos(self, test, tm_test_detail,
                          is_rebuild_module_info=False):
         """Return set of TestInfos based on a given test.
@@ -128,6 +130,12 @@
             if found_test_infos:
                 finder_info = finder.finder_info
                 for test_info in found_test_infos:
+                    test_deps = set()
+                    if self.mod_info:
+                        test_deps = self.mod_info.get_module_dependency(
+                            test_info.test_name)
+                        logging.debug('(%s) Test dependencies: %s',
+                                      test_info.test_name, test_deps)
                     if tm_test_detail:
                         test_info.data[constants.TI_MODULE_ARG] = (
                             tm_test_detail.options)
@@ -140,6 +148,12 @@
                         x for x in test_info.build_targets
                         if x not in test_modules_to_build}
                     test_info.build_targets.update(mm_build_targets)
+                    # Only add dependencies to build_targets when they are in
+                    # module info
+                    test_deps_in_mod_info = [
+                        test_dep for test_dep in test_deps
+                        if self.mod_info.is_module(test_dep)]
+                    test_info.build_targets.update(test_deps_in_mod_info)
                     test_infos.add(test_info)
                 test_found = True
                 print("Found '%s' as %s" % (
@@ -343,6 +357,13 @@
                 grouped_tests = all_tests.setdefault(test_group_name, set())
                 tests = []
                 for test in test_list:
+                    # TODO: Remove this skip once atest supports testing
+                    # mainline modules in TEST_MAPPING files.
+                    if constants.TEST_WITH_MAINLINE_MODULES_RE.match(test['name']):
+                        logging.debug('Skipping mainline module: %s',
+                                      atest_utils.colorize(test['name'],
+                                                           constants.RED))
+                        continue
                     if (self.enable_file_patterns and
                             not test_mapping.is_match_file_patterns(
                                 test_mapping_file, test)):
@@ -537,6 +558,31 @@
         test_names = [detail.name for detail in test_details_list]
         return test_names, test_details_list
 
+    def _extract_testable_modules_by_wildcard(self, user_input):
+        """Extract the given string with wildcard symbols to testable
+        module names.
+
+        Assume the available testable modules is:
+            ['Google', 'google', 'G00gle', 'g00gle']
+        and the user_input is:
+            ['*oo*', 'g00gle']
+        This method will return:
+            ['Google', 'google', 'g00gle']
+
+        Args:
+            user_input: A list of input.
+
+        Returns:
+            A list of testable modules.
+        """
+        testable_mods = self.mod_info.get_testable_modules()
+        extracted_tests = []
+        for test in user_input:
+            if atest_utils.has_wildcard(test):
+                extracted_tests.extend(fnmatch.filter(testable_mods, test))
+            else:
+                extracted_tests.append(test)
+        return extracted_tests
 
     def translate(self, args):
         """Translate atest command line into build targets and run commands.
@@ -557,6 +603,12 @@
         atest_utils.colorful_print("\nFinding Tests...", constants.CYAN)
         logging.debug('Finding Tests: %s', tests)
         start = time.time()
+        # Clear cache if user pass -c option
+        if args.clear_cache:
+            atest_utils.clean_test_info_caches(tests)
+        # Process tests which might contain wildcard symbols in advance.
+        if atest_utils.has_wildcard(tests):
+            tests = self._extract_testable_modules_by_wildcard(tests)
         test_infos = self._get_test_infos(tests, test_details_list,
                                           args.rebuild_module_info)
         logging.debug('Found tests in %ss', time.time() - start)
diff --git a/atest/cli_translator_unittest.py b/atest/cli_translator_unittest.py
index 05c7a1a..20df4e5 100755
--- a/atest/cli_translator_unittest.py
+++ b/atest/cli_translator_unittest.py
@@ -103,8 +103,9 @@
     @mock.patch.object(metrics, 'FindTestFinishEvent')
     @mock.patch.object(test_finder_handler, 'get_find_methods_for_test')
     # pylint: disable=too-many-locals
-    def test_get_test_infos(self, mock_getfindmethods, _metrics, mock_getfuzzyresults,
-                            mock_findtestbymodule, mock_input):
+    def test_get_test_infos(self, mock_getfindmethods, _metrics,
+                            mock_getfuzzyresults, mock_findtestbymodule,
+                            mock_input):
         """Test _get_test_infos method."""
         ctr = cli_t.CLITranslator()
         find_method_return_module_info = lambda x, y: uc.MODULE_INFOS
@@ -217,6 +218,7 @@
                     test_detail2.options,
                     test_info.data[constants.TI_MODULE_ARG])
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     @mock.patch.object(module_finder.ModuleFinder, 'get_fuzzy_searching_results')
     @mock.patch.object(metrics, 'FindTestFinishEvent')
     @mock.patch.object(test_finder_handler, 'get_find_methods_for_test')
@@ -389,6 +391,30 @@
 
         self.assertEqual(test_mapping_dict, test_mapping_dict_gloden)
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
+    @mock.patch.object(module_info.ModuleInfo, 'get_testable_modules')
+    def test_extract_testable_modules_by_wildcard(self, mock_mods):
+        """Test _extract_testable_modules_by_wildcard method."""
+        mod_info = module_info.ModuleInfo(
+            module_file=os.path.join(uc.TEST_DATA_DIR, uc.JSON_FILE))
+        ctr = cli_t.CLITranslator(module_info=mod_info)
+        mock_mods.return_value = ['test1', 'test2', 'test3', 'test11',
+                                  'Test22', 'Test100', 'aTest101']
+        # test '*'
+        expr1 = ['test*']
+        result1 = ['test1', 'test2', 'test3', 'test11']
+        self.assertEqual(ctr._extract_testable_modules_by_wildcard(expr1),
+                         result1)
+        # test '?'
+        expr2 = ['test?']
+        result2 = ['test1', 'test2', 'test3']
+        self.assertEqual(ctr._extract_testable_modules_by_wildcard(expr2),
+                         result2)
+        # test '*' and '?'
+        expr3 = ['*Test???']
+        result3 = ['Test100', 'aTest101']
+        self.assertEqual(ctr._extract_testable_modules_by_wildcard(expr3),
+                         result3)
 
 if __name__ == '__main__':
     unittest.main()
diff --git a/atest/constants_default.py b/atest/constants_default.py
index ff29981..130c9aa 100644
--- a/atest/constants_default.py
+++ b/atest/constants_default.py
@@ -71,6 +71,12 @@
 EXIT_CODE_OUTSIDE_ROOT = 7
 EXIT_CODE_AVD_CREATE_FAILURE = 8
 EXIT_CODE_AVD_INVALID_ARGS = 9
+# Exit codes for which atest exits without sending results to metrics.
+EXIT_CODES_BEFORE_TEST = [EXIT_CODE_ENV_NOT_SETUP,
+                          EXIT_CODE_TEST_NOT_FOUND,
+                          EXIT_CODE_OUTSIDE_ROOT,
+                          EXIT_CODE_AVD_CREATE_FAILURE,
+                          EXIT_CODE_AVD_INVALID_ARGS]
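+# For example, atest.py consults this list before reporting bug-detection
+# metrics:
+#   if EXIT_CODE not in constants.EXIT_CODES_BEFORE_TEST:
+#       metrics.LocalDetectEvent(...)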
 
 # Codes of specific events. These are exceptions that don't stop anything
 # but sending metrics.
@@ -91,6 +97,8 @@
 MODULE_CLASS_JAVA_LIBRARIES = 'JAVA_LIBRARIES'
 MODULE_TEST_CONFIG = 'test_config'
 MODULE_MAINLINE_MODULES = 'test_mainline_modules'
+MODULE_DEPENDENCIES = 'dependencies'
+MODULE_SRCS = 'srcs'
 
 # Env constants
 ANDROID_BUILD_TOP = 'ANDROID_BUILD_TOP'
@@ -250,6 +258,8 @@
 ATEST_TEST_RECORD_PROTO = 'test_record.proto'
 LATEST_RESULT_FILE = os.path.join(ATEST_RESULT_ROOT, 'LATEST', 'test_result')
 ACLOUD_REPORT_FILE_RE = re.compile(r'.*--report[_-]file(=|\s+)(?P<report_file>[\w/.]+)')
+TEST_WITH_MAINLINE_MODULES_RE = re.compile(r'(?P<test>.*)\[(?P<mainline_modules>.*'
+                                           r'[.](apk|apks|apex))\]$')
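+# An illustrative (hypothetical) match: 'MyTests[com.android.foo.apex]' yields
+# test='MyTests' and mainline_modules='com.android.foo.apex'.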
 
 # Tests list which need vts_kernel_tests as test dependency
 REQUIRED_KERNEL_TEST_MODULES = [
diff --git a/atest/docs/developer_workflow.md b/atest/docs/developer_workflow.md
index d3c2a32..cdf7eb6 100644
--- a/atest/docs/developer_workflow.md
+++ b/atest/docs/developer_workflow.md
@@ -52,7 +52,7 @@
 
 ##### Where does the Python code live?
 
-The python code lives here: `tools/tradefederation/core/atest/`
+The python code lives here: `tools/asuite/atest/`
 (path relative to android repo root)
 
 ##### Writing tests
diff --git a/atest/module_info.py b/atest/module_info.py
index 56586f7..6431e95 100644
--- a/atest/module_info.py
+++ b/atest/module_info.py
@@ -21,13 +21,17 @@
 import json
 import logging
 import os
+import sys
 
 import atest_utils
 import constants
 
 # JSON file generated by build system that lists all buildable targets.
 _MODULE_INFO = 'module-info.json'
-
+# JSON file generated by build system that lists dependencies for java.
+_JAVA_DEP_INFO = 'module_bp_java_deps.json'
+# JSON file generated by build system that lists dependencies for cc.
+_CC_DEP_INFO = 'module_bp_cc_deps.json'
 
 class ModuleInfo:
     """Class that offers fast/easy lookup for Module related details."""
@@ -44,7 +48,7 @@
         """
         module_info_target, name_to_module_info = self._load_module_info_file(
             force_build, module_file)
-        self.name_to_module_info = name_to_module_info
+        self.name_to_module_info = self._merge_build_system_infos(
+            name_to_module_info)
         self.module_info_target = module_info_target
         self.path_to_module_info = self._get_path_to_module_info(
             self.name_to_module_info)
@@ -83,9 +87,10 @@
             logging.debug('Generating %s - this is required for '
                           'initial runs.', _MODULE_INFO)
             build_env = dict(constants.ATEST_BUILD_ENV)
-            atest_utils.build([module_info_target],
-                              verbose=logging.getLogger().isEnabledFor(
-                                  logging.DEBUG), env_vars=build_env)
+            if not atest_utils.build([module_info_target],
+                                     verbose=logging.getLogger().isEnabledFor(
+                                         logging.DEBUG), env_vars=build_env):
+                sys.exit(constants.EXIT_CODE_BUILD_FAILURE)
         return module_info_target, module_file_path
 
     def _load_module_info_file(self, force_build, module_file):
@@ -357,3 +362,114 @@
                                             []):
             return True
         return False
+
+    def _merge_build_system_infos(self, name_to_module_info,
+        java_bp_info_path=None, cc_bp_info_path=None):
+        """Merge the full build system's info to name_to_module_info.
+
+        Args:
+            name_to_module_info: Dict of module name to module info dict.
+            java_bp_info_path: String of path to java dep file to load up.
+                               Used for testing.
+            cc_bp_info_path: String of path to cc dep file to load up.
+                             Used for testing.
+
+        Returns:
+            Dict of name_to_module_info merged with the build system's
+            dependency info.
+        """
+        # Merge _JAVA_DEP_INFO
+        if not java_bp_info_path:
+            java_bp_info_path = os.path.join(atest_utils.get_build_out_dir(),
+                                             'soong', _JAVA_DEP_INFO)
+        if os.path.isfile(java_bp_info_path):
+            try:
+                with open(java_bp_info_path) as json_file:
+                    java_bp_infos = json.load(json_file)
+                name_to_module_info = self._merge_soong_info(
+                    name_to_module_info, java_bp_infos)
+            except json.JSONDecodeError:
+                logging.debug('Failed loading %s', java_bp_info_path)
+        # Merge _CC_DEP_INFO
+        if not cc_bp_info_path:
+            cc_bp_info_path = os.path.join(atest_utils.get_build_out_dir(),
+                                           'soong', _CC_DEP_INFO)
+        if os.path.isfile(cc_bp_info_path):
+            try:
+                with open(cc_bp_info_path) as json_file:
+                    cc_bp_infos = json.load(json_file)
+                # CC's dep json format is different from Java's.
+                # Below is the example content:
+                # {
+                #   "clang": "${ANDROID_ROOT}/bin/clang",
+                #   "clang++": "${ANDROID_ROOT}/bin/clang++",
+                #   "modules": {
+                #       "ACameraNdkVendorTest": {
+                #           "path": [
+                #                   "frameworks/av/camera/ndk"
+                #           ],
+                #           "srcs": [
+                #                   "frameworks/tests/AImageVendorTest.cpp",
+                #                   "frameworks/tests/ACameraManagerTest.cpp"
+                #           ],
+                #       },
+                #       ...
+                #   }
+                # }
+                name_to_module_info = self._merge_soong_info(
+                    name_to_module_info, cc_bp_infos.get('modules', {}))
+            except json.JSONDecodeError:
+                logging.debug('Failed loading %s', cc_bp_info_path)
+        return name_to_module_info
+
+    def _merge_soong_info(self, name_to_module_info, mod_bp_infos):
+        """Merge the dependency and srcs in mod_bp_infos to name_to_module_info.
+
+        Args:
+            name_to_module_info: Dict of module name to module info dict.
+            mod_bp_infos: Dict of module name to bp's module info dict.
+
+        Returns:
+            Dict of name_to_module_info merged with mod_bp_infos.
+        """
+        merge_items = [constants.MODULE_DEPENDENCIES, constants.MODULE_SRCS]
+        for module_name, dep_info in mod_bp_infos.items():
+            if name_to_module_info.get(module_name, None):
+                mod_info = name_to_module_info.get(module_name)
+                for merge_item in merge_items:
+                    dep_info_values = dep_info.get(merge_item, [])
+                    mod_info_values = mod_info.get(merge_item, [])
+                    for dep_info_value in dep_info_values:
+                        if dep_info_value not in mod_info_values:
+                            mod_info_values.append(dep_info_value)
+                    mod_info_values.sort()
+                    name_to_module_info[
+                        module_name][merge_item] = mod_info_values
+        output_file = os.path.join(atest_utils.get_build_out_dir(),
+                                   'soong', 'atest_merged_dep.json')
+        if os.path.isdir(os.path.dirname(output_file)):
+            with open(output_file, 'w') as file_out:
+                json.dump(name_to_module_info, file_out, indent=0)
+        return name_to_module_info
+
+    def get_module_dependency(self, module_name, parent_dependencies=None):
+        """Get the dependency sets for input module.
+
+        Recursively find all the dependencies of the input module.
+
+        Args:
+            module_name: String of module to check.
+            parent_dependencies: The list of parent dependencies.
+
+        Returns:
+            Set of dependency modules.
+        """
+        if not parent_dependencies:
+            parent_dependencies = set()
+        deps = set()
+        mod_info = self.get_module_info(module_name)
+        if not mod_info:
+            return deps
+        mod_deps = set(mod_info.get(constants.MODULE_DEPENDENCIES, []))
+        # Remove items from mod_deps that are already in parent_dependencies:
+        mod_deps = mod_deps - parent_dependencies
+        deps = deps.union(mod_deps)
+        for mod_dep in mod_deps:
+            deps = deps.union(set(self.get_module_dependency(
+                mod_dep, parent_dependencies=parent_dependencies.union(deps))))
+        return deps
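+
+    # Illustrative behavior under an assumed module graph: if module_a depends
+    # on module_b and module_b depends on module_c, then
+    # get_module_dependency('module_a') returns {'module_b', 'module_c'}.
+    # parent_dependencies keeps the recursion from looping forever on cyclic
+    # graphs (see module_bp_java_loop_deps.json in the unit tests).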
diff --git a/atest/module_info_unittest.py b/atest/module_info_unittest.py
index 1f73624..31d5d03 100755
--- a/atest/module_info_unittest.py
+++ b/atest/module_info_unittest.py
@@ -104,6 +104,7 @@
             self.assertEqual(custom_abs_out_dir_mod_targ,
                              mod_info.module_info_target)
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     @mock.patch.object(module_info.ModuleInfo, '_load_module_info_file',)
     def test_get_path_to_module_info(self, mock_load_module):
         """Test that we correctly create the path to module info dict."""
@@ -124,6 +125,7 @@
         self.assertDictEqual(path_to_mod_info,
                              mod_info._get_path_to_module_info(mod_info_dict))
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     def test_is_module(self):
         """Test that we get the module when it's properly loaded."""
         # Load up the test json file and check that module is in it
@@ -131,6 +133,7 @@
         self.assertTrue(mod_info.is_module(EXPECTED_MOD_TARGET))
         self.assertFalse(mod_info.is_module(UNEXPECTED_MOD_TARGET))
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     def test_get_path(self):
         """Test that we get the module path when it's properly loaded."""
         # Load up the test json file and check that module is in it
@@ -139,6 +142,7 @@
                          EXPECTED_MOD_TARGET_PATH)
         self.assertEqual(mod_info.get_paths(MOD_NO_PATH), [])
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     def test_get_module_names(self):
         """test that we get the module name properly."""
         mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
@@ -148,6 +152,7 @@
             self, mod_info.get_module_names(PATH_TO_MULT_MODULES),
             MULT_MOODULES_WITH_SHARED_PATH)
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     def test_path_to_mod_info(self):
         """test that we get the module name properly."""
         mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
@@ -158,6 +163,7 @@
         TESTABLE_MODULES_WITH_SHARED_PATH.sort()
         self.assertEqual(module_list, TESTABLE_MODULES_WITH_SHARED_PATH)
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     def test_is_suite_in_compatibility_suites(self):
         """Test is_suite_in_compatibility_suites."""
         mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
@@ -171,6 +177,7 @@
         self.assertTrue(mod_info.is_suite_in_compatibility_suites("vts10", info3))
         self.assertFalse(mod_info.is_suite_in_compatibility_suites("ats", info3))
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     @mock.patch.object(module_info.ModuleInfo, 'is_testable_module')
     @mock.patch.object(module_info.ModuleInfo, 'is_suite_in_compatibility_suites')
     def test_get_testable_modules(self, mock_is_suite_exist, mock_is_testable):
@@ -186,6 +193,7 @@
         self.assertEqual(0, len(mod_info.get_testable_modules('test_suite')))
         self.assertEqual(1, len(mod_info.get_testable_modules()))
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     @mock.patch.object(module_info.ModuleInfo, 'has_test_config')
     @mock.patch.object(module_info.ModuleInfo, 'is_robolectric_test')
     def test_is_testable_module(self, mock_is_robo_test, mock_has_test_config):
@@ -227,6 +235,7 @@
                      uc.TEST_CONFIG_DATA_DIR, "a.xml.data")]}
         self.assertTrue(mod_info.has_test_config(info2))
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     @mock.patch.object(module_info.ModuleInfo, 'get_module_names')
     def test_get_robolectric_test_name(self, mock_get_module_names):
         """Test get_robolectric_test_name."""
@@ -243,6 +252,7 @@
         self.assertEqual(mod_info.get_robolectric_test_name(
             NON_RUN_ROBO_MOD_NAME), None)
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     @mock.patch.object(module_info.ModuleInfo, 'get_module_info')
     @mock.patch.object(module_info.ModuleInfo, 'get_module_names')
     def test_is_robolectric_test(self, mock_get_module_names, mock_get_module_info):
@@ -261,6 +271,7 @@
         mock_get_module_info.return_value = None
         self.assertFalse(mod_info.is_robolectric_test('rand_mod'))
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     @mock.patch.object(module_info.ModuleInfo, 'is_module')
     def test_is_auto_gen_test_config(self, mock_is_module):
         """Test is_auto_gen_test_config correctly detects the module."""
@@ -279,6 +290,7 @@
         self.assertFalse(mod_info.is_auto_gen_test_config(MOD_NAME3))
         self.assertFalse(mod_info.is_auto_gen_test_config(MOD_NAME4))
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
     def test_is_robolectric_module(self):
         """Test is_robolectric_module correctly detects the module."""
         mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
@@ -290,6 +302,82 @@
         self.assertTrue(mod_info.is_robolectric_module(MOD_INFO_DICT[MOD_NAME1]))
         self.assertFalse(mod_info.is_robolectric_module(MOD_INFO_DICT[MOD_NAME2]))
 
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
+    def test_merge_build_system_infos(self):
+        """Test _merge_build_system_infos."""
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        java_dep_file = os.path.join(uc.TEST_DATA_DIR,
+                                     'module_bp_java_deps.json')
+        mod_info_1 = {constants.MODULE_NAME: 'module_1',
+                      constants.MODULE_DEPENDENCIES: []}
+        name_to_mod_info = {'module_1' : mod_info_1}
+        expect_deps = ['test_dep_level_1_1', 'test_dep_level_1_2']
+        name_to_mod_info = mod_info._merge_build_system_infos(
+            name_to_mod_info, java_bp_info_path=java_dep_file)
+        self.assertEqual(
+            name_to_mod_info['module_1'].get(constants.MODULE_DEPENDENCIES),
+            expect_deps)
+
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
+    def test_merge_dependency_with_ori_dependency(self):
+        """Test _merge_dependency."""
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        java_dep_file = os.path.join(uc.TEST_DATA_DIR,
+                                     'module_bp_java_deps.json')
+        mod_info_1 = {constants.MODULE_NAME: 'module_1',
+                      constants.MODULE_DEPENDENCIES: ['ori_dep_1']}
+        name_to_mod_info = {'module_1' : mod_info_1}
+        expect_deps = ['ori_dep_1', 'test_dep_level_1_1', 'test_dep_level_1_2']
+        name_to_mod_info = mod_info._merge_build_system_infos(
+            name_to_mod_info, java_bp_info_path=java_dep_file)
+        self.assertEqual(
+            name_to_mod_info['module_1'].get(constants.MODULE_DEPENDENCIES),
+            expect_deps)
+
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
+    def test_get_module_dependency(self):
+        """Test get_module_dependency."""
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        java_dep_file = os.path.join(uc.TEST_DATA_DIR,
+                                     'module_bp_java_deps.json')
+        expect_deps = {'test_dep_level_1_1', 'module_1', 'test_dep_level_1_2',
+                       'test_dep_level_2_2', 'test_dep_level_2_1', 'module_2'}
+        mod_info._merge_build_system_infos(mod_info.name_to_module_info,
+                                   java_bp_info_path=java_dep_file)
+        self.assertEqual(
+            mod_info.get_module_dependency('dep_test_module'),
+            expect_deps)
+
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
+    def test_get_module_dependency_w_loop(self):
+        """Test get_module_dependency with problem dep file."""
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        # Java dependency file that defines an endless dependency loop.
+        java_dep_file = os.path.join(uc.TEST_DATA_DIR,
+                                     'module_bp_java_loop_deps.json')
+        expect_deps = {'test_dep_level_1_1', 'module_1', 'test_dep_level_1_2',
+                       'test_dep_level_2_2', 'test_dep_level_2_1', 'module_2'}
+        mod_info._merge_build_system_infos(mod_info.name_to_module_info,
+                                   java_bp_info_path=java_dep_file)
+        self.assertEqual(
+            mod_info.get_module_dependency('dep_test_module'),
+            expect_deps)
+
+    @mock.patch.dict('os.environ', {constants.ANDROID_BUILD_TOP:'/'})
+    def test_cc_merge_build_system_infos(self):
+        """Test _merge_build_system_infos for cc."""
+        mod_info = module_info.ModuleInfo(module_file=JSON_FILE_PATH)
+        cc_dep_file = os.path.join(uc.TEST_DATA_DIR,
+                                   'module_bp_cc_deps.json')
+        mod_info_1 = {constants.MODULE_NAME: 'module_cc_1',
+                      constants.MODULE_DEPENDENCIES: []}
+        name_to_mod_info = {'module_cc_1' : mod_info_1}
+        expect_deps = ['test_cc_dep_level_1_1', 'test_cc_dep_level_1_2']
+        name_to_mod_info = mod_info._merge_build_system_infos(
+            name_to_mod_info, cc_bp_info_path=cc_dep_file)
+        self.assertEqual(
+            name_to_mod_info['module_cc_1'].get(constants.MODULE_DEPENDENCIES),
+            expect_deps)
 
 if __name__ == '__main__':
     unittest.main()
diff --git a/atest/test_data/test_commands.json b/atest/test_data/test_commands.json
index a299426..bacf300 100644
--- a/atest/test_data/test_commands.json
+++ b/atest/test_data/test_commands.json
@@ -146,42 +146,6 @@
 "template/atest_local_min",
 "test=atest"
 ],
-"ProxyResolverV8Test": [
-"--atest-include-filter",
-"--include-filter",
-"--log-level",
-"--log-level-display",
-"--logcat-on-failure",
-"--no-early-device-release",
-"--no-enable-granular-attempts",
-"--skip-loading-config-jar",
-"--template:map",
-"VERBOSE",
-"VERBOSE",
-"atest_tradefed.sh",
-"proxy_resolver_v8_unittest",
-"proxy_resolver_v8_unittest:ProxyResolverV8Test.*",
-"template/atest_local_min",
-"test=atest"
-],
-"ProxyResolverV8Test#Direct,Direct_C_API,ReturnEmptyString": [
-"--atest-include-filter",
-"--include-filter",
-"--log-level",
-"--log-level-display",
-"--logcat-on-failure",
-"--no-early-device-release",
-"--no-enable-granular-attempts",
-"--skip-loading-config-jar",
-"--template:map",
-"VERBOSE",
-"VERBOSE",
-"atest_tradefed.sh",
-"proxy_resolver_v8_unittest",
-"proxy_resolver_v8_unittest:ProxyResolverV8Test.Direct:ProxyResolverV8Test.Direct_C_API:ProxyResolverV8Test.ReturnEmptyString",
-"template/atest_local_min",
-"test=atest"
-],
 "android.animation.cts": [
 "--atest-include-filter",
 "--include-filter",
@@ -254,24 +218,6 @@
 "./build/soong/soong_ui.bash",
 "RunCarMessengerRoboTests"
 ],
-"perfetto_integrationtests:Run/HeapprofdEndToEnd": [
-"--atest-include-filter",
-"--include-filter",
-"--log-level",
-"--log-level-display",
-"--logcat-on-failure",
-"--no-early-device-release",
-"--no-enable-granular-attempts",
-"--skip-loading-config-jar",
-"--template:map",
-"VERBOSE",
-"VERBOSE",
-"atest_tradefed.sh",
-"perfetto_integrationtests",
-"perfetto_integrationtests:Run/HeapprofdEndToEnd.*",
-"template/atest_local_min",
-"test=atest"
-],
 "platform_testing/tests/example/native": [
 "--include-filter",
 "--log-level",
@@ -318,5 +264,59 @@
 "native-benchmark",
 "template/atest_local_min",
 "test=atest"
+],
+"PacketFragmenterTest": [
+"--atest-include-filter",
+"--include-filter",
+"--log-level",
+"--log-level-display",
+"--logcat-on-failure",
+"--no-early-device-release",
+"--no-enable-granular-attempts",
+"--skip-loading-config-jar",
+"--template:map",
+"VERBOSE",
+"VERBOSE",
+"atest_tradefed.sh",
+"net_test_hci",
+"net_test_hci:PacketFragmenterTest.*",
+"template/atest_local_min",
+"test=atest"
+],
+"PacketFragmenterTest#test_no_fragment_necessary,test_ble_fragment_necessary": [
+"--atest-include-filter",
+"--include-filter",
+"--log-level",
+"--log-level-display",
+"--logcat-on-failure",
+"--no-early-device-release",
+"--no-enable-granular-attempts",
+"--skip-loading-config-jar",
+"--template:map",
+"VERBOSE",
+"VERBOSE",
+"atest_tradefed.sh",
+"net_test_hci",
+"net_test_hci:PacketFragmenterTest.test_ble_fragment_necessary:PacketFragmenterTest.test_no_fragment_necessary",
+"template/atest_local_min",
+"test=atest"
+],
+"VtsHalCameraProviderV2_4TargetTest:PerInstance/CameraHidlTest#startStopPreview/0_internal_0": [
+"--atest-include-filter",
+"--include-filter",
+"--log-level",
+"--log-level-display",
+"--logcat-on-failure",
+"--no-early-device-release",
+"--no-enable-granular-attempts",
+"--skip-loading-config-jar",
+"--template:map",
+"VERBOSE",
+"VERBOSE",
+"VtsHalCameraProviderV2_4TargetTest",
+"VtsHalCameraProviderV2_4TargetTest:PerInstance/CameraHidlTest.startStopPreview/0_internal_0",
+"atest_tradefed.sh",
+"template/atest_local_min",
+"test=atest"
 ]
-}
\ No newline at end of file
+}
diff --git a/atest/test_finders/cache_finder.py b/atest/test_finders/cache_finder.py
index 7e0765c..9272da6 100644
--- a/atest/test_finders/cache_finder.py
+++ b/atest/test_finders/cache_finder.py
@@ -16,7 +16,10 @@
 Cache Finder class.
 """
 
+import logging
+
 import atest_utils
+import constants
 
 from test_finders import test_finder_base
 from test_finders import test_info
@@ -25,8 +28,9 @@
     """Cache Finder class."""
     NAME = 'CACHE'
 
-    def __init__(self, **kwargs):
-        super(CacheFinder, self).__init__()
+    def __init__(self, module_info=None):
+        super().__init__()
+        self.module_info = module_info
 
     def _is_latest_testinfos(self, test_infos):
         """Check whether test_infos are up-to-date.
@@ -43,6 +47,7 @@
         for cached_test_info in test_infos:
             sorted_cache_ti = sorted(vars(cached_test_info).keys())
             if not sorted_cache_ti == sorted_base_ti:
+                logging.debug('test_info is not up-to-date.')
                 return False
         return True
 
@@ -57,6 +62,118 @@
             TestInfo format, else None.
         """
         test_infos = atest_utils.load_test_info_cache(test_reference)
-        if test_infos and self._is_latest_testinfos(test_infos):
+        if test_infos and self._is_test_infos_valid(test_infos):
             return test_infos
         return None
+
+    def _is_test_infos_valid(self, test_infos):
+        """Check if the given test_infos are valid.
+
+        Args:
+            test_infos: A list of TestInfo.
+
+        Returns:
+            True if test_infos are all valid. Otherwise, False.
+        """
+        if not self._is_latest_testinfos(test_infos):
+            return False
+        for t_info in test_infos:
+            if not self._is_test_path_valid(t_info):
+                return False
+            if not self._is_test_build_target_valid(t_info):
+                return False
+            if not self._is_test_filter_valid(t_info):
+                return False
+        return True
+
+    def _is_test_path_valid(self, t_info):
+        """Check if test path is valid.
+
+        Args:
+            t_info: TestInfo that has been filled out by a find method.
+
+        Returns:
+            True if test path is valid. Otherwise, False.
+        """
+        # A RoboTest won't have 'MODULES-IN-' as its build target. Treat the
+        # test path as valid if cached_test_paths is None.
+        cached_test_paths = t_info.get_test_paths()
+        if cached_test_paths is None:
+            return True
+        current_test_paths = self.module_info.get_paths(t_info.test_name)
+        if not current_test_paths:
+            return False
+        if sorted(cached_test_paths) != sorted(current_test_paths):
+            logging.debug('Not a valid test path.')
+            return False
+        return True
+
+    def _is_test_build_target_valid(self, t_info):
+        """Check if test build targets are valid.
+
+        Args:
+            t_info: TestInfo that has been filled out by a find method.
+
+        Returns:
+            True if test's build target is valid. Otherwise, False.
+        """
+        # If the cached build target can be found in the current module-info,
+        # it is a valid build target of the test. 'MODULES-IN-' pseudo
+        # targets are not modules, so skip them.
+        for build_target in t_info.build_targets:
+            if str(build_target).startswith('MODULES-IN-'):
+                continue
+            if not self.module_info.is_module(build_target):
+                logging.debug('%s is not a valid build target.', build_target)
+                return False
+        return True
+
+    def _is_test_filter_valid(self, t_info):
+        """Check if test filter is valid.
+
+        Args:
+            t_info: TestInfo that has been filled out by a find method.
+
+        Returns:
+            True if test filter is valid. Otherwise, False.
+        """
+        test_filters = t_info.data.get(constants.TI_FILTER, [])
+        if not test_filters:
+            return True
+        for test_filter in test_filters:
+            # Check if the class filter is under the current module.
+            # TODO: (b/172260100) The test_name may not necessarily equal the
+            #  module_name.
+            if self._is_java_filter_in_module(t_info.test_name,
+                                              test_filter.class_name):
+                return True
+            # TODO: (b/172260100) Also check for CC.
+        logging.debug('Not a valid test filter.')
+        return False
+
+    def _is_java_filter_in_module(self, module_name, filter_class):
+        """Check if input class is part of input module.
+
+        Args:
+            module_name: A string of the module name of the test.
+            filter_class: A string of the class name field of TI_FILTER.
+
+        Returns:
+            True if input filter_class is in the input module. Otherwise, False.
+        """
+        mod_info = self.module_info.get_module_info(module_name)
+        if not mod_info:
+            return False
+        module_srcs = mod_info.get(constants.MODULE_SRCS, [])
+        # If the module has no src information, treat the cached filter as
+        # still valid. Remove this once all Java srcs can be found in
+        # module-info.
+        if not module_srcs:
+            return True
+        ref_end = filter_class.rsplit('.', 1)[-1]
+        if '.' in filter_class:
+            file_path = str(filter_class).replace('.', '/')
+            # A Java class file always starts with a capital letter.
+            if ref_end[0].isupper():
+                file_path = file_path + '.'
+            for src_path in module_srcs:
+                # If java class, check if class file in module's src.
+                if src_path.find(file_path) >= 0:
+                    return True
+        return False
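+
+    # Illustrative check under an assumed module layout: a filter class
+    # 'a.b.c.MyTestClass1' maps to the path fragment 'a/b/c/MyTestClass1.',
+    # which is treated as valid when a src entry such as
+    # 'src/a/b/c/MyTestClass1.java' contains that fragment.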
diff --git a/atest/test_finders/cache_finder_unittest.py b/atest/test_finders/cache_finder_unittest.py
index fcb3e54..dde9996 100755
--- a/atest/test_finders/cache_finder_unittest.py
+++ b/atest/test_finders/cache_finder_unittest.py
@@ -24,6 +24,8 @@
 from unittest import mock
 
 import atest_utils
+import constants
+import module_info
 import unittest_constants as uc
 
 from test_finders import cache_finder
@@ -35,9 +37,15 @@
     def setUp(self):
         """Set up stuff for testing."""
         self.cache_finder = cache_finder.CacheFinder()
+        self.cache_finder.module_info = mock.Mock(spec=module_info.ModuleInfo)
 
+    @mock.patch.object(cache_finder.CacheFinder, '_is_test_filter_valid',
+                       return_value=True)
+    @mock.patch.object(cache_finder.CacheFinder, '_is_test_build_target_valid',
+                       return_value=True)
     @mock.patch.object(atest_utils, 'get_test_info_cache_path')
-    def test_find_test_by_cache(self, mock_get_cache_path):
+    def test_find_test_by_cache(self, mock_get_cache_path,
+            _mock_build_target_valid, _mock_filter_valid):
         """Test find_test_by_cache method."""
         uncached_test = 'mytest1'
         cached_test = 'hello_world_test'
@@ -51,6 +59,8 @@
         self.assertIsNone(self.cache_finder.find_test_by_cache(uncached_test))
         # Hit matched cache file and original_finder is in it,
         # should return cached test infos.
+        self.cache_finder.module_info.get_paths.return_value = [
+            'platform_testing/tests/example/native']
         mock_get_cache_path.return_value = os.path.join(
             test_cache_root,
             '78ea54ef315f5613f7c11dd1a87f10c7.cache')
@@ -61,5 +71,86 @@
             '39488b7ac83c56d5a7d285519fe3e3fd.cache')
         self.assertIsNone(self.cache_finder.find_test_by_cache(uncached_test2))
 
+    @mock.patch.object(cache_finder.CacheFinder, '_is_test_build_target_valid',
+                       return_value=True)
+    @mock.patch.object(atest_utils, 'get_test_info_cache_path')
+    def test_find_test_by_cache_wo_valid_path(self, mock_get_cache_path,
+            _mock_build_target_valid):
+        """Test find_test_by_cache method."""
+        cached_test = 'hello_world_test'
+        test_cache_root = os.path.join(uc.TEST_DATA_DIR, 'cache_root')
+        # Return None when the actual test_path is not identical to that in the
+        # existing cache.
+        self.cache_finder.module_info.get_paths.return_value = [
+            'not/matched/test/path']
+        mock_get_cache_path.return_value = os.path.join(
+            test_cache_root,
+            '78ea54ef315f5613f7c11dd1a87f10c7.cache')
+        self.assertIsNone(self.cache_finder.find_test_by_cache(cached_test))
+
+    @mock.patch.object(cache_finder.CacheFinder, '_is_test_build_target_valid',
+                       return_value=False)
+    @mock.patch.object(cache_finder.CacheFinder, '_is_test_path_valid',
+                       return_value=True)
+    @mock.patch.object(atest_utils, 'get_test_info_cache_path')
+    def test_find_test_by_cache_wo_valid_build_target(self, mock_get_cache_path,
+            _mock_path_valid, _mock_build_target_valid):
+        """Test find_test_by_cache method."""
+        cached_test = 'hello_world_test'
+        test_cache_root = os.path.join(uc.TEST_DATA_DIR, 'cache_root')
+        # Return None when the build target does not exist in module-info.
+        mock_get_cache_path.return_value = os.path.join(
+            test_cache_root,
+            '78ea54ef315f5613f7c11dd1a87f10c7.cache')
+        self.assertIsNone(self.cache_finder.find_test_by_cache(cached_test))
+
+    @mock.patch.object(cache_finder.CacheFinder, '_is_test_filter_valid',
+                       return_value=False)
+    @mock.patch.object(cache_finder.CacheFinder, '_is_test_build_target_valid',
+                       return_value=True)
+    @mock.patch.object(cache_finder.CacheFinder, '_is_test_path_valid',
+                       return_value=True)
+    @mock.patch.object(atest_utils, 'get_test_info_cache_path')
+    def test_find_test_by_cache_wo_valid_java_filter(self, mock_get_cache_path,
+        _mock_path_valid, _mock_build_target_valid, _mock_filter_valid):
+        """Test _is_test_filter_valid method."""
+        cached_test = 'hello_world_test'
+        test_cache_root = os.path.join(uc.TEST_DATA_DIR, 'cache_root')
+        # Return None if the cached test filter is not valid.
+        mock_get_cache_path.return_value = os.path.join(
+            test_cache_root,
+            '78ea54ef315f5613f7c11dd1a87f10c7.cache')
+        self.assertIsNone(self.cache_finder.find_test_by_cache(cached_test))
+
+    def test_is_java_filter_in_module_for_java_class(self):
+        """Test _is_java_filter_in_module method if input is java class."""
+        mock_mod = {constants.MODULE_SRCS:
+                        ['src/a/b/c/MyTestClass1.java']}
+        self.cache_finder.module_info.get_module_info.return_value = mock_mod
+        # Should not match if class name does not exist.
+        self.assertFalse(
+            self.cache_finder._is_java_filter_in_module(
+                'MyModule', 'a.b.c.MyTestClass'))
+        # Should match if class name exists.
+        self.assertTrue(
+            self.cache_finder._is_java_filter_in_module(
+                'MyModule', 'a.b.c.MyTestClass1'))
+
+    def test_is_java_filter_in_module_for_java_package(self):
+        """Test _is_java_filter_in_module method if input is java package."""
+        mock_mod = {constants.MODULE_SRCS:
+                        ['src/a/b/c/MyTestClass1.java']}
+        self.cache_finder.module_info.get_module_info.return_value = mock_mod
+        # Should not match if package name does not match the src.
+        self.assertFalse(
+            self.cache_finder._is_java_filter_in_module(
+                'MyModule', 'a.b.c.d'))
+        # Should match if package name matches the src.
+        self.assertTrue(
+            self.cache_finder._is_java_filter_in_module(
+                'MyModule', 'a.b.c'))
+
 if __name__ == '__main__':
     unittest.main()
diff --git a/atest/test_finders/module_finder.py b/atest/test_finders/module_finder.py
index 8184ee2..64f7f4d 100644
--- a/atest/test_finders/module_finder.py
+++ b/atest/test_finders/module_finder.py
@@ -48,11 +48,11 @@
     _VTS_TEST_RUNNER = vts_tf_test_runner.VtsTradefedTestRunner.NAME
 
     def __init__(self, module_info=None):
-        super(ModuleFinder, self).__init__()
+        super().__init__()
         self.root_dir = os.environ.get(constants.ANDROID_BUILD_TOP)
         self.module_info = module_info
 
-    def _determine_testable_module(self, path):
+    def _determine_testable_module(self, path, file_path=None):
         """Determine which module the user is trying to test.
 
         Returns the module to test. If there are multiple possibilities, will
@@ -60,11 +60,14 @@
 
         Args:
             path: String path of module to look for.
+            file_path: String path of input file.
 
         Returns:
             A list of the module names.
         """
         testable_modules = []
+        # A list of testable modules whose srcs information is empty.
+        testable_modules_no_srcs = []
         for mod in self.module_info.get_module_names(path):
             mod_info = self.module_info.get_module_info(mod)
             # Robolectric tests always exist in pairs of 2, one module to build
@@ -74,7 +77,22 @@
                 # return a list with one module name if it is robolectric.
                 return [mod]
             if self.module_info.is_testable_module(mod_info):
+                # If the test module defines srcs, the input file_path should
+                # be listed in the module's srcs.
+                module_srcs = mod_info.get(constants.MODULE_SRCS, [])
+                if file_path and os.path.relpath(
+                    file_path, self.root_dir) not in module_srcs:
+                    logging.debug('Skip module: %s for %s', mod, file_path)
+                    # Collect modules that have no srcs information in
+                    # module-info; fall back to this list if no other module
+                    # matches by src.
+                    if not module_srcs:
+                        testable_modules_no_srcs.append(
+                            mod_info.get(constants.MODULE_NAME))
+                    continue
                 testable_modules.append(mod_info.get(constants.MODULE_NAME))
+        if not testable_modules:
+            testable_modules.extend(testable_modules_no_srcs)
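+        # e.g. (illustrative) for 'a/b/Test.java' owned by module M1 (srcs
+        # contain the file) and module M2 (no srcs info), only M1 is returned;
+        # if no module lists the file in srcs, fall back to the no-srcs modules.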
         return test_finder_utils.extract_test_from_tests(testable_modules)
 
     def _is_vts_module(self, module_name):
@@ -246,6 +264,7 @@
         return [rel_config] if rel_config else []
 
     # pylint: disable=too-many-branches
+    # pylint: disable=too-many-locals
     def _get_test_info_filter(self, path, methods, **kwargs):
         """Get test info filter.
 
@@ -283,13 +302,25 @@
                 [test_info.TestFilter(full_class_name, methods)])
         # Path to cc file.
         elif file_name and constants.CC_EXT_RE.match(file_name):
+            # TODO (b/173019813) Should set up the correct filter for an
+            # input file.
             if not test_finder_utils.has_cc_class(path):
                 raise atest_error.MissingCCTestCaseError(
                     "Can't find CC class in %s" % path)
-            if methods:
-                ti_filter = frozenset(
-                    [test_info.TestFilter(test_finder_utils.get_cc_filter(
-                        kwargs.get('class_name', '*'), methods), frozenset())])
+            # Extract the test classes and parameterized classes from the
+            # given cc path; the methods are not needed here.
+            file_classes, _, file_para_classes = (
+                test_finder_utils.get_cc_test_classes_methods(path))
+            cc_filters = []
+            # When parameterized tests are found, recompose the class name
+            # as $(InstantiationName)/$(ClassName).
+            for file_class in file_classes:
+                if file_class in file_para_classes:
+                    file_class = '*/%s' % file_class
+                cc_filters.append(
+                    test_info.TestFilter(
+                        test_finder_utils.get_cc_filter(file_class, methods),
+                        frozenset()))
+            ti_filter = frozenset(cc_filters)
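+            # e.g. (illustrative) with no method filter, a class 'MyClass'
+            # yields 'MyClass.*'; a parameterized class yields '*/MyClass.*'.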
         # If input path is a folder and have class_name information.
         elif (not file_name and kwargs.get('class_name', None)):
             ti_filter = frozenset(
@@ -311,6 +342,7 @@
                         ti_filter = frozenset(
                             [test_info.TestFilter(package_name, methods)])
                         break
+        logging.debug('_get_test_info_filter() ti_filter: %s', ti_filter)
         return ti_filter
 
     def _get_rel_config(self, test_path):
@@ -349,7 +381,8 @@
             module_names = [module_name]
         else:
             module_names = self._determine_testable_module(
-                os.path.dirname(rel_config))
+                os.path.dirname(rel_config),
+                test_path if self._is_comparted_src(test_path) else None)
         test_infos = []
         if module_names:
             for mname in module_names:
@@ -755,3 +788,21 @@
                         # name in source tree.
                         return [tinfo]
         return None
+
+    def _is_comparted_src(self, path):
+        """Check if the input path need to match srcs information in module.
+
+        If path is a folder or android build file, we don't need to compart
+        with module's srcs.
+
+        Args:
+            path: A string of the test's path.
+
+        Returns:
+            True if input path need to match with module's src info, else False.
+        """
+        if os.path.isdir(path):
+            return False
+        if atest_utils.is_build_file(path):
+            return False
+        return True
diff --git a/atest/test_finders/module_finder_unittest.py b/atest/test_finders/module_finder_unittest.py
index 9445848..9ce7543 100755
--- a/atest/test_finders/module_finder_unittest.py
+++ b/atest/test_finders/module_finder_unittest.py
@@ -27,6 +27,7 @@
 
 import atest_error
 import atest_configs
+import atest_utils
 import constants
 import module_info
 import unittest_constants as uc
@@ -184,6 +185,7 @@
         unittest_utils.assert_equal_testinfos(
             self, t_infos[1], uc.MODULE_INFO_W_CONFIG)
 
+    @mock.patch.object(atest_utils, 'is_build_file', return_value=True)
     @mock.patch.object(test_finder_utils, 'is_parameterized_java_class',
                        return_value=False)
     @mock.patch.object(test_finder_utils, 'has_method_in_file',
@@ -200,7 +202,7 @@
     def test_find_test_by_class_name(self, _isdir, _isfile, _fqcn,
                                      mock_checkoutput, mock_build,
                                      _vts, _has_method_in_file,
-                                     _is_parameterized):
+                                     _is_parameterized, _is_build_file):
         """Test find_test_by_class_name."""
         mock_build.return_value = uc.CLASS_BUILD_TARGETS
         self.mod_finder.module_info.is_auto_gen_test_config.return_value = False
@@ -441,6 +443,9 @@
         self.mod_finder.module_info.get_module_info.return_value = mod_info
         self.assertIsNone(self.mod_finder.find_test_by_module_and_package(bad_pkg))
 
+    @mock.patch.object(test_finder_utils, 'get_cc_test_classes_methods',
+                       return_value=(set(), set(), set()))
+    @mock.patch.object(atest_utils, 'is_build_file', return_value=True)
     @mock.patch.object(test_finder_utils, 'is_parameterized_java_class',
                        return_value=False)
     @mock.patch.object(test_finder_utils, 'has_method_in_file',
@@ -460,7 +465,8 @@
     #pylint: disable=unused-argument
     def test_find_test_by_path(self, mock_pathexists, mock_dir, _isfile, _real,
                                _fqcn, _vts, mock_build, _has_cc_class,
-                               _has_method_in_file, _is_parameterized):
+                               _has_method_in_file, _is_parameterized,
+                               _is_build_file, _get_cc_test_classes):
         """Test find_test_by_path."""
         self.mod_finder.module_info.is_robolectric_test.return_value = False
         self.mod_finder.module_info.has_test_config.return_value = True
@@ -563,6 +569,7 @@
         unittest_utils.assert_equal_testinfos(
             self, uc.CC_PATH_INFO, t_infos[0])
 
+    @mock.patch.object(atest_utils, 'is_build_file', return_value=True)
     @mock.patch.object(test_finder_utils, 'has_method_in_file',
                        return_value=True)
     @mock.patch.object(module_finder.ModuleFinder, '_is_vts_module',
@@ -574,7 +581,7 @@
     #pylint: disable=unused-argument
     def test_find_test_by_cc_class_name(self, _isdir, _isfile,
                                         mock_checkoutput, mock_build,
-                                        _vts, _has_method):
+                                        _vts, _has_method, _is_build_file):
         """Test find_test_by_cc_class_name."""
         mock_build.return_value = uc.CLASS_BUILD_TARGETS
         self.mod_finder.module_info.is_auto_gen_test_config.return_value = False
@@ -713,6 +720,7 @@
             self, t_infos[0],
             uc.PACKAGE_INFO)
 
+    @mock.patch.object(atest_utils, 'is_build_file', return_value=True)
     @mock.patch.object(test_finder_utils, 'is_parameterized_java_class',
                        return_value=True)
     @mock.patch.object(test_finder_utils, 'has_method_in_file',
@@ -732,7 +740,8 @@
     #pylint: disable=unused-argument
     def test_find_test_by_path_is_parameterized_java(
             self, mock_pathexists, mock_dir, _isfile, _real, _fqcn, _vts,
-            mock_build, _has_cc_class, _has_method_in_file, _is_parameterized):
+            mock_build, _has_cc_class, _has_method_in_file, _is_parameterized,
+            _is_build_file):
         """Test find_test_by_path and input path is parameterized class."""
         self.mod_finder.module_info.is_robolectric_test.return_value = False
         self.mod_finder.module_info.has_test_config.return_value = True
@@ -760,6 +769,7 @@
         unittest_utils.assert_equal_testinfos(
             self, t_infos[0], uc.PARAMETERIZED_FLAT_METHOD_INFO)
 
+    @mock.patch.object(atest_utils, 'is_build_file', return_value=True)
     @mock.patch.object(test_finder_utils, 'is_parameterized_java_class',
                        return_value=True)
     @mock.patch.object(test_finder_utils, 'has_method_in_file',
@@ -775,7 +785,7 @@
     #pylint: disable=unused-argument
     def test_find_test_by_class_name_is_parameterized(
             self, _isdir, _isfile, _fqcn, mock_checkoutput, mock_build, _vts,
-            _has_method_in_file, _is_parameterized):
+            _has_method_in_file, _is_parameterized, _is_build_file):
         """Test find_test_by_class_name and the class is parameterized java."""
         mock_build.return_value = uc.CLASS_BUILD_TARGETS
         self.mod_finder.module_info.is_auto_gen_test_config.return_value = False
@@ -822,6 +832,137 @@
             t_infos[0],
             uc.TEST_CONFIG_MODULE_INFO)
 
+    @mock.patch.object(test_finder_utils, 'is_parameterized_java_class',
+                       return_value=False)
+    @mock.patch.object(test_finder_utils, 'has_method_in_file',
+                       return_value=True)
+    @mock.patch.object(test_finder_utils, 'has_cc_class',
+                       return_value=True)
+    @mock.patch.object(module_finder.ModuleFinder, '_get_build_targets')
+    @mock.patch.object(module_finder.ModuleFinder, '_is_vts_module',
+                       return_value=False)
+    @mock.patch.object(test_finder_utils, 'get_fully_qualified_class_name',
+                       return_value=uc.FULL_CLASS_NAME)
+    @mock.patch('os.path.realpath',
+                side_effect=unittest_utils.realpath_side_effect)
+    @mock.patch('os.path.isfile', side_effect=unittest_utils.isfile_side_effect)
+    @mock.patch.object(test_finder_utils, 'find_parent_module_dir')
+    @mock.patch('os.path.exists')
+    #pylint: disable=unused-argument
+    def test_find_test_by_path_w_src_verify(
+            self, mock_pathexists, mock_dir, _isfile, _real, _fqcn, _vts,
+            mock_build, _has_cc_class, _has_method_in_file, _is_parameterized):
+        """Test find_test_by_path with src information."""
+        self.mod_finder.module_info.is_robolectric_test.return_value = False
+        self.mod_finder.module_info.has_test_config.return_value = True
+        self.mod_finder.module_info.get_module_names.return_value = [uc.MODULE_NAME]
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+
+        # Happy path testing.
+        mock_dir.return_value = uc.MODULE_DIR
+        # Test path not in module's src list.
+        class_path = '%s.java' % uc.CLASS_NAME
+        self.mod_finder.module_info.get_module_info.return_value = {
+            constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+            constants.MODULE_NAME: uc.MODULE_NAME,
+            constants.MODULE_CLASS: [],
+            constants.MODULE_COMPATIBILITY_SUITES: [],
+            constants.MODULE_SRCS: ['not_matched_%s' % class_path]}
+        t_infos = self.mod_finder.find_test_by_path(class_path)
+        self.assertEqual(0, len(t_infos))
+
+        # Test input file is in module's src list.
+        class_path = '%s.java' % uc.CLASS_NAME
+        self.mod_finder.module_info.get_module_info.return_value = {
+            constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+            constants.MODULE_NAME: uc.MODULE_NAME,
+            constants.MODULE_CLASS: [],
+            constants.MODULE_COMPATIBILITY_SUITES: [],
+            constants.MODULE_SRCS: [class_path]}
+        t_infos = self.mod_finder.find_test_by_path(class_path)
+        unittest_utils.assert_equal_testinfos(self, uc.CLASS_INFO, t_infos[0])
+
+    @mock.patch.object(test_finder_utils, 'get_cc_test_classes_methods')
+    @mock.patch.object(atest_utils, 'is_build_file', return_value=True)
+    @mock.patch.object(test_finder_utils, 'is_parameterized_java_class',
+                       return_value=False)
+    @mock.patch.object(test_finder_utils, 'has_method_in_file',
+                       return_value=True)
+    @mock.patch.object(test_finder_utils, 'has_cc_class',
+                       return_value=True)
+    @mock.patch.object(module_finder.ModuleFinder, '_get_build_targets')
+    @mock.patch.object(module_finder.ModuleFinder, '_is_vts_module',
+                       return_value=False)
+    @mock.patch.object(test_finder_utils, 'get_fully_qualified_class_name',
+                       return_value=uc.FULL_CLASS_NAME)
+    @mock.patch('os.path.realpath',
+                side_effect=unittest_utils.realpath_side_effect)
+    @mock.patch('os.path.isfile', side_effect=unittest_utils.isfile_side_effect)
+    @mock.patch.object(test_finder_utils, 'find_parent_module_dir')
+    @mock.patch('os.path.exists')
+    #pylint: disable=unused-argument
+    def test_find_test_by_path_for_cc_file(self, mock_pathexists, mock_dir,
+        _isfile, _real, _fqcn, _vts, mock_build, _has_cc_class,
+        _has_method_in_file, _is_parameterized, _is_build_file,
+        _mock_get_cc_test_class):
+        """Test find_test_by_path for handling correct CC filter."""
+        self.mod_finder.module_info.is_robolectric_test.return_value = False
+        self.mod_finder.module_info.has_test_config.return_value = True
+        mock_pathexists.return_value = True
+        # CC path testing when get_cc_test_classes_methods finds the class
+        # information.
+        self.mod_finder.module_info.get_module_names.return_value = [uc.CC_MODULE_NAME]
+        self.mod_finder.module_info.get_module_info.return_value = {
+            constants.MODULE_INSTALLED: DEFAULT_INSTALL_PATH,
+            constants.MODULE_NAME: uc.CC_MODULE_NAME,
+            constants.MODULE_CLASS: [],
+            constants.MODULE_COMPATIBILITY_SUITES: []}
+        mock_dir.return_value = uc.CC_MODULE_DIR
+        class_path = '%s' % uc.CC_PATH
+        mock_build.return_value = uc.CLASS_BUILD_TARGETS
+        # Test without a parameterized test.
+        found_classes = {'class1'}
+        found_methods = {'method1'}
+        found_para_classes = set()
+        _mock_get_cc_test_class.return_value = (found_classes,
+                                                found_methods,
+                                                found_para_classes)
+        cc_path_data = {constants.TI_REL_CONFIG: uc.CC_CONFIG_FILE,
+                        constants.TI_FILTER: frozenset(
+                            {test_info.TestFilter(class_name='class1.*',
+                                                  methods=frozenset())})}
+        cc_path_info = test_info.TestInfo(uc.CC_MODULE_NAME,
+                                          atf_tr.AtestTradefedTestRunner.NAME,
+                                          uc.CLASS_BUILD_TARGETS, cc_path_data)
+        t_infos = self.mod_finder.find_test_by_path(class_path)
+        unittest_utils.assert_equal_testinfos(self, cc_path_info, t_infos[0])
+        # Test with a parameterized test defined in the input path.
+        found_classes = {'class1'}
+        found_methods = {'method1'}
+        found_para_classes = {'class1'}
+        _mock_get_cc_test_class.return_value = (found_classes,
+                                                found_methods,
+                                                found_para_classes)
+        cc_path_data = {constants.TI_REL_CONFIG: uc.CC_CONFIG_FILE,
+                        constants.TI_FILTER: frozenset(
+                            {test_info.TestFilter(class_name='*/class1.*',
+                                                  methods=frozenset())})}
+        cc_path_info = test_info.TestInfo(uc.CC_MODULE_NAME,
+                                          atf_tr.AtestTradefedTestRunner.NAME,
+                                          uc.CLASS_BUILD_TARGETS, cc_path_data)
+        t_infos = self.mod_finder.find_test_by_path(class_path)
+        unittest_utils.assert_equal_testinfos(self, cc_path_info, t_infos[0])
 
 if __name__ == '__main__':
     unittest.main()
diff --git a/atest/test_finders/test_finder_utils.py b/atest/test_finders/test_finder_utils.py
index f0230ab..b089401 100644
--- a/atest/test_finders/test_finder_utils.py
+++ b/atest/test_finders/test_finder_utils.py
@@ -42,8 +42,13 @@
 # We want to make sure we don't grab apks with paths in their name since we
 # assume the apk name is the build target.
 _APK_RE = re.compile(r'^[^/]+\.apk$', re.I)
-# RE for checking if TEST or TEST_F is in a cc file or not.
-_CC_CLASS_RE = re.compile(r'^[ ]*TEST(_F|_P)?[ ]*\(', re.I)
+# Group matches "class" of line "TEST_F(class, "
+_CC_CLASS_METHOD_RE = re.compile(
+    r'^\s*TEST(_F|_P)?\s*\(\s*(?P<class>\w+)\s*,\s*(?P<method>\w+)\s*\)', re.M)
+# Group matches parameterized "class" of line "INSTANTIATE_TEST_CASE_P( ,class "
+_PARA_CC_CLASS_RE = re.compile(
+    r'^\s*INSTANTIATE[_TYPED]*_TEST_(SUITE|CASE)_P\s*\(\s*(?P<instantiate>\w+)\s*,'
+    r'\s*(?P<class>\w+)\s*\,', re.M)
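+# e.g. (illustrative) 'TEST_F(MyClass, MyMethod)' captures class='MyClass' and
+# method='MyMethod'; 'INSTANTIATE_TEST_SUITE_P(MyInstant, MyParaClass, ...)'
+# captures instantiate='MyInstant' and class='MyParaClass'.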
 # RE for checking if there exists one of the methods in java file.
 _JAVA_METHODS_PATTERN = r'.*[ ]+({0})\(.*'
 # RE for checking if there exists one of the methods in cc file.
@@ -212,7 +217,7 @@
     """
     with open(test_path) as class_file:
         for line in class_file:
-            match = _CC_CLASS_RE.match(line)
+            match = _CC_CLASS_METHOD_RE.match(line)
             if match:
                 return True
     return False
@@ -1019,3 +1024,35 @@
             if match:
                 return True
     return False
+
+
+def get_cc_test_classes_methods(test_path):
+    """Find out the cc test class of input test_path.
+
+    Args:
+        test_path: A string of absolute path to the cc file.
+
+    Returns:
+        A tuple of sets: classes, methods and para_classes.
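+
+    Example (illustrative, assumed file content):
+        A file containing TEST(MyClass1, Method1) and
+        INSTANTIATE_TEST_SUITE_P(MyInstantiation, MyParaClass, ...)
+        yields classes={'MyClass1'}, methods={'Method1'} and
+        para_classes={'MyParaClass'}.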
+    """
+    classes = set()
+    methods = set()
+    para_classes = set()
+    with open(test_path) as class_file:
+        content = class_file.read()
+        # Search matched CC CLASS/METHOD
+        matches = re.findall(_CC_CLASS_METHOD_RE, content)
+        logging.debug('Found cc classes: %s', matches)
+        for match in matches:
+            # The elements of `matches` will be "Group 1"(_F),
+            # "Group class"(MyClass1) and "Group method"(MyMethod1)
+            classes.update([match[1]])
+            methods.update([match[2]])
+        # Search matched parameterized CC CLASS.
+        matches = re.findall(_PARA_CC_CLASS_RE, content)
+        logging.debug('Found parameterized classes: %s', matches)
+        for match in matches:
+            # The elements of `matches` will be "Group 1"(SUITE or CASE),
+            # "Group instantiate"(the instantiation name) and
+            # "Group class"(the parameterized class name).
+            para_classes.update([match[2]])
+    return classes, methods, para_classes
diff --git a/atest/test_finders/test_finder_utils_unittest.py b/atest/test_finders/test_finder_utils_unittest.py
index 8b308ac..17d681b 100755
--- a/atest/test_finders/test_finder_utils_unittest.py
+++ b/atest/test_finders/test_finder_utils_unittest.py
@@ -615,5 +615,23 @@
             finally:
                 tmp_file.close()
 
+    def test_get_cc_test_classes_methods(self):
+        """Test get_cc_test_classes_methods method."""
+        expect_classes = ('MyClass1', 'MyClass2', 'MyClass3', 'MyClass4',
+                          'MyClass5')
+        expect_methods = ('Method1', 'Method2', 'Method3', 'Method5')
+        expect_para_classes = ('MyInstantClass1', 'MyInstantClass2',
+                               'MyInstantClass3', 'MyInstantTypeClass1',
+                               'MyInstantTypeClass2')
+        expected_result = [sorted(expect_classes), sorted(expect_methods),
+                           sorted(expect_para_classes)]
+        file_path = os.path.join(uc.TEST_DATA_DIR, 'my_cc_test.cc')
+        classes, methods, para_classes = (
+            test_finder_utils.get_cc_test_classes_methods(file_path))
+        self.assertEqual(expected_result,
+                         [sorted(classes),
+                          sorted(methods),
+                          sorted(para_classes)])
+
 if __name__ == '__main__':
     unittest.main()
diff --git a/atest/test_finders/test_info.py b/atest/test_finders/test_info.py
index 71b092a..f2b0b93 100644
--- a/atest/test_finders/test_info.py
+++ b/atest/test_finders/test_info.py
@@ -114,6 +114,22 @@
             return constants.DEVICE_TEST
         return constants.BOTH_TEST
 
+    def get_test_paths(self):
+        """Get the relative path of test_info.
+
+        Search build target's MODULE-IN as the test path.
+
+        Return:
+            A list of string of the relative path for test, None if test
+            path information not found.
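+
+        Example (illustrative, assumed build target): a target named
+        'MODULES-IN-platform_testing-tests-example' yields the path
+        'platform_testing/tests/example'.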
+        """
+        test_paths = []
+        for build_target in self.build_targets:
+            if str(build_target).startswith('MODULES-IN-'):
+                test_paths.append(
+                    str(build_target).replace(
+                        'MODULES-IN-', '').replace('-', '/'))
+        return test_paths if test_paths else None
 
 class TestFilter(TestFilterBase):
     """Information needed to filter a test in Tradefed"""
diff --git a/atest/test_finders/test_info_unittest.py b/atest/test_finders/test_info_unittest.py
new file mode 100755
index 0000000..25a56c5
--- /dev/null
+++ b/atest/test_finders/test_info_unittest.py
@@ -0,0 +1,39 @@
+#!/usr/bin/env python3
+#
+# Copyright 2020, The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Unittests for cache_finder."""
+
+
+import unittest
+
+from test_finders import test_info
+
+
+#pylint: disable=protected-access
+class TestInfoUnittests(unittest.TestCase):
+    """Unit tests for cache_finder.py"""
+
+    def test_get_test_path(self):
+        """Test test_get_test_paths method."""
+        build_targets = set()
+        exp_rel_paths = ['a/b/c', 'd/e/f']
+        for exp_rel_path in exp_rel_paths:
+            build_targets.add('MODULES-IN-%s' % exp_rel_path.replace('/', '-'))
+        t_info = test_info.TestInfo('mock_name', 'mock_runner', build_targets)
+        self.assertEqual(sorted(t_info.get_test_paths()), sorted(exp_rel_paths))
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/atest/test_finders/tf_integration_finder.py b/atest/test_finders/tf_integration_finder.py
index 7c4d496..51a203e 100644
--- a/atest/test_finders/tf_integration_finder.py
+++ b/atest/test_finders/tf_integration_finder.py
@@ -38,7 +38,7 @@
 _TF_TARGETS = frozenset(['tradefed', 'tradefed-contrib'])
 _GTF_TARGETS = frozenset(['google-tradefed', 'google-tradefed-contrib'])
 _CONTRIB_TARGETS = frozenset(['google-tradefed-contrib'])
-_TF_RES_DIR = '../res/config'
+_TF_RES_DIRS = frozenset(['../res/config', 'res/config'])
 
 
 class TFIntegrationFinder(test_finder_base.TestFinderBase):
@@ -48,7 +48,7 @@
 
 
     def __init__(self, module_info=None):
-        super(TFIntegrationFinder, self).__init__()
+        super().__init__()
         self.root_dir = os.environ.get(constants.ANDROID_BUILD_TOP)
         self.module_info = module_info
         # TODO: Break this up into AOSP/google_tf integration finders.
@@ -62,7 +62,8 @@
             # changed to ../res/config.
             if module_name in _CONTRIB_TARGETS:
                 mod_paths = self.module_info.get_paths(module_name)
-                return [os.path.join(path, _TF_RES_DIR) for path in mod_paths]
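+                # e.g. (illustrative) the path 'tools/tradefederation/contrib'
+                # yields both 'tools/tradefederation/contrib/../res/config'
+                # and 'tools/tradefederation/contrib/res/config'.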
+                return [os.path.join(path, res_path) for path in mod_paths
+                        for res_path in _TF_RES_DIRS]
             return self.module_info.get_paths(module_name)
         return []
 
diff --git a/atest/test_runners/robolectric_test_runner_unittest.py b/atest/test_runners/robolectric_test_runner_unittest.py
index e036aa4..0edd061 100755
--- a/atest/test_runners/robolectric_test_runner_unittest.py
+++ b/atest/test_runners/robolectric_test_runner_unittest.py
@@ -19,9 +19,10 @@
 # pylint: disable=line-too-long
 
 import json
-import unittest
+import platform
 import subprocess
 import tempfile
+import unittest
 
 from unittest import mock
 
@@ -94,7 +95,10 @@
         self.suite_tr. _exec_with_robo_polling(event_file, robo_proc, mock_pe)
         calls = [mock.call.process_event(event_name,
                                          json.loads(event1 + event2))]
-        mock_pe.assert_has_calls(calls)
+        # (b/147569951) Subprocessing 'echo' behaves differently between
+        # Linux and Darwin; only assert the calls on Linux.
+        if platform.system() == 'Linux':
+            mock_pe.assert_has_calls(calls)
 
     @mock.patch.object(event_handler.EventHandler, 'process_event')
     def test_exec_with_robo_polling_with_fail_stacktrace(self, mock_pe):
diff --git a/atest/tf_proto/Android.bp b/atest/tf_proto/Android.bp
new file mode 100644
index 0000000..3756212
--- /dev/null
+++ b/atest/tf_proto/Android.bp
@@ -0,0 +1,37 @@
+// Copyright 2020 The Android Open Source Project
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// This is a copy of the proto from Tradefed at tools/tradefederation/core/proto
+python_library_host {
+    name: "tradefed-protos-py",
+    pkg_path: "atest",
+    srcs: ["*.proto"],
+    visibility: [
+        "//tools/asuite/atest",
+    ],
+    libs: [
+        "libprotobuf-python",
+    ],
+    proto: {
+        include_dirs: ["external/protobuf/src"],
+    },
+    version: {
+        py2: {
+            enabled: true,
+        },
+        py3: {
+            enabled: true,
+        },
+    },
+}
diff --git a/atest/tf_proto/build_info.proto b/atest/tf_proto/build_info.proto
new file mode 100644
index 0000000..79be6ae
--- /dev/null
+++ b/atest/tf_proto/build_info.proto
@@ -0,0 +1,57 @@
+/*
+ * Copyright (C) 2018 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+syntax = "proto3";
+
+package tradefed.build;
+
+option java_package = "com.android.tradefed.build.proto";
+option java_outer_classname = "BuildInformation";
+
+// The representation of a versioned build file
+message BuildFile {
+  // The version of the file
+  string version = 1;
+  // The local path of the file.
+  string local_path = 2;
+}
+
+// The representation of a map of key to a list of versioned files. Similar to
+// Tradefed MultiMap structure.
+message KeyBuildFilePair {
+  // The Key indexing a list of BuildFile that are tracked by the BuildInfo.
+  string build_file_key = 1;
+  // List of BuildFile that are tracked by the BuildInfo.
+  repeated BuildFile file = 2;
+}
+
+// Proto representation of IBuildInfo
+message BuildInfo {
+  // The build identifier of the represented build.
+  string build_id = 1;
+  // The build flavor. For example: sailfish-userdebug
+  string build_flavor = 2;
+  // The branch where the build comes from. For example: git_main
+  string branch = 3;
+  // The build attributes, as a key value
+  map<string, string> attributes = 4;
+  // The versioned files part of the build that are tracked.
+  repeated KeyBuildFilePair versioned_file = 5;
+  // The type of the IBuildInfo instance.
+  // For example: com.android.tradefed.build.BuildInfo
+  string build_info_class = 6;
+  // Deprecated: Whether or not the build info represents a test resource.
+  bool is_test_resource = 7;
+}
diff --git a/atest/tf_proto/configuration_description.proto b/atest/tf_proto/configuration_description.proto
new file mode 100644
index 0000000..361218b
--- /dev/null
+++ b/atest/tf_proto/configuration_description.proto
@@ -0,0 +1,57 @@
+/*
+ * Copyright (C) 2018 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+syntax = "proto3";
+
+package tradefed.config;
+
+option java_package = "com.android.tradefed.config.proto";
+option java_outer_classname = "ConfigurationDescription";
+
+// Representation of the metadata attributes in a similar way as MultiMap in
+// Tradefed. One key associated to a list of values.
+message Metadata {
+  // Key of the pair to identify the metadata.
+  string key = 1;
+  // List of values associated to the key.
+  repeated string value = 2;
+}
+
+// Representation of abi
+message Abi {
+  // Name of the abi.
+  // For example: arm64-v8a, armeabi-v7a, x86_64, x86
+  string name = 1;
+  // The bitness of the abi. Can be 32 or 64.
+  string bitness = 2;
+}
+
+// Representation of a Tradefed Configuration Descriptor in proto format.
+message Descriptor {
+  // The suite names that the configuration belong to.
+  repeated string test_suite_tag = 1;
+  // A set of metadata representing some configuration attributes
+  repeated Metadata metadata = 2;
+  // Whether the configuration is shardable or not.
+  bool shardable = 3;
+  // Whether the configuration is strict shardable or not.
+  bool strict_shardable = 4;
+  // Whether we are currently running the configuration in sandbox mode or not.
+  bool use_sandboxing = 5;
+  // The module name if running in a suite.
+  string module_name = 6;
+  // The Abi of the module.
+  Abi abi = 7;
+}
\ No newline at end of file
diff --git a/atest/tf_proto/invocation_context.proto b/atest/tf_proto/invocation_context.proto
new file mode 100644
index 0000000..bbb8545
--- /dev/null
+++ b/atest/tf_proto/invocation_context.proto
@@ -0,0 +1,39 @@
+/*
+ * Copyright (C) 2018 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+syntax = "proto3";
+
+package tradefed.invoker;
+
+option java_package = "com.android.tradefed.invoker.proto";
+option java_outer_classname = "InvocationContext";
+
+import public "tools/asuite/atest/tf_proto/configuration_description.proto";
+import public "tools/asuite/atest/tf_proto/build_info.proto";
+
+// Representation of a Tradefed Invocation Context in proto.
+message Context {
+  // The invocation test tag to identify it.
+  string test_tag = 1;
+  // Map of the configured device name to the build info associated. The device
+  // name can be found in the Tradefed configuration xml of the test.
+  map<string, tradefed.build.BuildInfo> name_build_info = 2;
+  // A list of Metadata representing the invocation attributes
+  repeated tradefed.config.Metadata metadata = 3;
+  // The configuration description associated with the test configuration.
+  tradefed.config.Descriptor configuration_description = 4;
+  // Optional, a context under the invocation representing a module.
+  Context module_context = 5;
+}
diff --git a/atest/tf_proto/log_file.proto b/atest/tf_proto/log_file.proto
new file mode 100644
index 0000000..ea05bb0
--- /dev/null
+++ b/atest/tf_proto/log_file.proto
@@ -0,0 +1,37 @@
+/*
+ * Copyright (C) 2018 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+syntax = "proto3";
+
+option java_package = "com.android.tradefed.result.proto";
+option java_outer_classname = "LogFileProto";
+
+// Represents a single log file
+message LogFileInfo {
+  // The local path of the log file
+  string path = 1;
+  // The remote path of the log file once logged
+  string url = 2;
+  // The type of the log file (For example: Logcat, screenshot)
+  string log_type = 3;
+  // Whether the file is a text file
+  bool is_text = 4;
+  // Whether the file is compressed or not
+  bool is_compressed = 5;
+  // Size of the file in bytes
+  int64 size = 6;
+  // The actual mime content type of the file
+  string content_type = 7;
+}
diff --git a/atest/tf_proto/metric_measurement.proto b/atest/tf_proto/metric_measurement.proto
new file mode 100644
index 0000000..234ee8a
--- /dev/null
+++ b/atest/tf_proto/metric_measurement.proto
@@ -0,0 +1,83 @@
+/*
+ * Copyright (C) 2018 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+syntax = "proto3";
+
+package tradefed.metric;
+
+option java_package = "com.android.tradefed.metrics.proto";
+option java_outer_classname = "MetricMeasurement";
+
+// Represents the expected directionality of the measurements.
+// For example: if we are measuring how fast a device is charging, the
+// directionality of UP_BETTER would describe it best. Overall, if the trend
+// of the list of measurements has a desired pattern, we can refer to it to
+// better understand the expectation.
+enum Directionality {
+  DIRECTIONALITY_UNSPECIFIED = 0;
+  UP_BETTER = 1;
+  DOWN_BETTER = 2;
+  CLOSER_BETTER = 3; // If the values should be as close as possible
+}
+
+// Represents whether the data was already processed or is raw data.
+enum DataType {
+  RAW = 0;
+  PROCESSED = 1;
+}
+
+// Represents the actual measurement values
+message Measurements {
+  // All the types a measurement can take, use the oneOf to find which type was
+  // used.
+  oneof measurement {
+    string single_string = 1;
+    int64 single_int = 2;
+    double single_double = 3;
+    StringValues string_values = 4;
+    NumericValues numeric_values = 5;
+    DoubleValues double_values = 6;
+  }
+}
+
+// Represents a list of string measurements
+message StringValues {
+  repeated string string_value = 1;
+}
+
+// Represents a list of numeric measurements
+message NumericValues {
+  repeated int64 numeric_value = 1;
+}
+
+// Represents a list of float measurements
+message DoubleValues {
+  repeated double double_value = 1;
+}
+
+// Represents the full metric: The measurements and its metadata
+message Metric {
+  // The measurements
+  Measurements measurements = 1;
+
+  // The Unit of the measurements.
+  string unit = 2;
+
+  // The Directionality of the measurements
+  Directionality direction = 3;
+
+  // Whether the measurements is raw data or processed.
+  DataType type = 4;
+}
diff --git a/atest/tf_proto/test_record.proto b/atest/tf_proto/test_record.proto
new file mode 100644
index 0000000..ae41ab4
--- /dev/null
+++ b/atest/tf_proto/test_record.proto
@@ -0,0 +1,150 @@
+/*
+ * Copyright (C) 2018 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+syntax = "proto3";
+
+import "google/protobuf/any.proto";
+import "google/protobuf/timestamp.proto";
+import "tools/asuite/atest/tf_proto/metric_measurement.proto";
+
+option java_package = "com.android.tradefed.result.proto";
+option java_outer_classname = "TestRecordProto";
+
+package android_test_record;
+
+// A record containing the status, logs, and other information associated with a
+// particular test execution.
+message TestRecord {
+  // The UUID of this TestRecord.
+  string test_record_id = 1;
+
+  // The UUID of this TestRecord's parent. Unset if this is a top-level record.
+  string parent_test_record_id = 2;
+
+  // References to any finer-grained TestRecords that were generated as part of
+  // this test.
+  repeated ChildReference children = 3;
+
+  // The number of children this TestRecord was expected to have. Unset if not
+  // known in advance.
+  int64 num_expected_children = 4;
+
+  // The result status (Pass, Fail, etc) of this test unit.
+  TestStatus status = 5;
+
+  // Extra debugging information.
+  DebugInfo debug_info = 6;
+
+  // The time at which this test started executing.
+  google.protobuf.Timestamp start_time = 7;
+
+  // The time at which this test finished executing.
+  google.protobuf.Timestamp end_time = 8;
+
+  // Any artifact files associated with this test.
+  map<string, google.protobuf.Any> artifacts = 9;
+
+  // Any metrics or measurements associated with this test.
+  map<string, tradefed.metric.Metric> metrics = 10;
+
+  // Metadata describing the test that was run.
+  google.protobuf.Any description = 11;
+
+  // The attempt number of a target if the target ran several times. First
+  // attempt is 0 (Default value).
+  int64 attempt_id = 12;
+}
+
+// A reference to a finer-grained TestRecord.
+message ChildReference {
+  oneof reference {
+    // The UUID of the TestRecord.
+    string test_record_id = 1;
+
+    // An inlined TestRecord.
+    TestRecord inline_test_record = 2;
+  }
+}
+
+// The overall pass / fail status for a particular TestRecord.
+enum TestStatus {
+  UNKNOWN = 0;
+  PASS = 1;
+  FAIL = 2;
+  IGNORED = 3;
+  ASSUMPTION_FAILURE = 4;
+}
+
+// Associated debugging information to accompany a TestStatus.
+message DebugInfo {
+  // An error message.
+  string error_message = 1;
+
+  // A stacktrace.
+  string trace = 2;
+
+  // A more detailed failure status description.
+  FailureStatus failure_status = 3;
+
+  // Optional context to the failure
+  DebugInfoContext debug_info_context = 4;
+}
+
+// A Fail TestStatus can be associated with a more granular failure status
+// that helps in understanding the context.
+enum FailureStatus {
+  UNSET = 0;
+  // The test in progress was the reason for the failure.
+  TEST_FAILURE = 1;
+  // A timeout condition on the operation in progress occurred.
+  TIMED_OUT = 2;
+  // The test in progress was cancelled.
+  CANCELLED = 3;
+  // A failure attributed to something not functioning properly.
+  INFRA_FAILURE = 10;
+  // System under test crashed and caused the test to fail.
+  SYSTEM_UNDER_TEST_CRASHED = 20;
+  // The test was expected to run but did not.
+  NOT_EXECUTED = 30;
+  // System under test became unavailable and never came back available again.
+  LOST_SYSTEM_UNDER_TEST = 35;
+  // Represent an error caused by an unmet dependency that the current infra
+  // depends on. For example: Unfound resources, Device error, Hardware issue
+  // (lab host, device wear), Underlying tools
+  DEPENDENCY_ISSUE = 40;
+  // Represent an error caused by the input from the end user. For example:
+  // Unexpected option combination, Configuration error, Bad flags
+  CUSTOMER_ISSUE = 41;
+}
+
+// A context to DebugInfo that allows to optionally specify some debugging context.
+message DebugInfoContext {
+  // Category of the action that was in progress during the failure
+  string action_in_progress = 1;
+
+  // A free-formed text that can help debugging the issue at hand.
+  string debug_help_message = 10;
+
+  // The fully-qualified name of the exception class associated with the error.
+  string error_type = 20;
+
+  // Error Identifiers
+  // The name identifying the error
+  string error_name = 30;
+  // The class that raised the error
+  string origin = 31;
+  // The error code associated with the error_name
+  int64 error_code = 32;
+}
diff --git a/atest/tools/tradefederation/core/proto/__init__.py b/atest/tools/asuite/atest/tf_proto/__init__.py
similarity index 100%
rename from atest/tools/tradefederation/core/proto/__init__.py
rename to atest/tools/asuite/atest/tf_proto/__init__.py
diff --git a/atest/tools/tradefederation/core/proto/metric_measurement_pb2.py b/atest/tools/asuite/atest/tf_proto/metric_measurement_pb2.py
similarity index 84%
rename from atest/tools/tradefederation/core/proto/metric_measurement_pb2.py
rename to atest/tools/asuite/atest/tf_proto/metric_measurement_pb2.py
index 6a4ab39..9938102 100644
--- a/atest/tools/tradefederation/core/proto/metric_measurement_pb2.py
+++ b/atest/tools/asuite/atest/tf_proto/metric_measurement_pb2.py
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 # Generated by the protocol buffer compiler.  DO NOT EDIT!
-# source: tools/tradefederation/core/proto/metric_measurement.proto
+# source: tools/asuite/atest/tf_proto/metric_measurement.proto
 
 import sys
 _b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
@@ -17,11 +17,11 @@
 
 
 DESCRIPTOR = _descriptor.FileDescriptor(
-  name='tools/tradefederation/core/proto/metric_measurement.proto',
+  name='tools/asuite/atest/tf_proto/metric_measurement.proto',
   package='tradefed.metric',
   syntax='proto3',
   serialized_options=_b('\n\"com.android.tradefed.metrics.protoB\021MetricMeasurement'),
-  serialized_pb=_b('\n9tools/tradefederation/core/proto/metric_measurement.proto\x12\x0ftradefed.metric\"\x8f\x02\n\x0cMeasurements\x12\x17\n\rsingle_string\x18\x01 \x01(\tH\x00\x12\x14\n\nsingle_int\x18\x02 \x01(\x03H\x00\x12\x17\n\rsingle_double\x18\x03 \x01(\x01H\x00\x12\x36\n\rstring_values\x18\x04 \x01(\x0b\x32\x1d.tradefed.metric.StringValuesH\x00\x12\x38\n\x0enumeric_values\x18\x05 \x01(\x0b\x32\x1e.tradefed.metric.NumericValuesH\x00\x12\x36\n\rdouble_values\x18\x06 \x01(\x0b\x32\x1d.tradefed.metric.DoubleValuesH\x00\x42\r\n\x0bmeasurement\"$\n\x0cStringValues\x12\x14\n\x0cstring_value\x18\x01 \x03(\t\"&\n\rNumericValues\x12\x15\n\rnumeric_value\x18\x01 \x03(\x03\"$\n\x0c\x44oubleValues\x12\x14\n\x0c\x64ouble_value\x18\x01 \x03(\x01\"\xa8\x01\n\x06Metric\x12\x33\n\x0cmeasurements\x18\x01 \x01(\x0b\x32\x1d.tradefed.metric.Measurements\x12\x0c\n\x04unit\x18\x02 \x01(\t\x12\x32\n\tdirection\x18\x03 \x01(\x0e\x32\x1f.tradefed.metric.Directionality\x12\'\n\x04type\x18\x04 \x01(\x0e\x32\x19.tradefed.metric.DataType*c\n\x0e\x44irectionality\x12\x1e\n\x1a\x44IRECTIONALITY_UNSPECIFIED\x10\x00\x12\r\n\tUP_BETTER\x10\x01\x12\x0f\n\x0b\x44OWN_BETTER\x10\x02\x12\x11\n\rCLOSER_BETTER\x10\x03*\"\n\x08\x44\x61taType\x12\x07\n\x03RAW\x10\x00\x12\r\n\tPROCESSED\x10\x01\x42\x37\n\"com.android.tradefed.metrics.protoB\x11MetricMeasurementb\x06proto3')
+  serialized_pb=_b('\n4tools/asuite/atest/tf_proto/metric_measurement.proto\x12\x0ftradefed.metric\"\x8f\x02\n\x0cMeasurements\x12\x17\n\rsingle_string\x18\x01 \x01(\tH\x00\x12\x14\n\nsingle_int\x18\x02 \x01(\x03H\x00\x12\x17\n\rsingle_double\x18\x03 \x01(\x01H\x00\x12\x36\n\rstring_values\x18\x04 \x01(\x0b\x32\x1d.tradefed.metric.StringValuesH\x00\x12\x38\n\x0enumeric_values\x18\x05 \x01(\x0b\x32\x1e.tradefed.metric.NumericValuesH\x00\x12\x36\n\rdouble_values\x18\x06 \x01(\x0b\x32\x1d.tradefed.metric.DoubleValuesH\x00\x42\r\n\x0bmeasurement\"$\n\x0cStringValues\x12\x14\n\x0cstring_value\x18\x01 \x03(\t\"&\n\rNumericValues\x12\x15\n\rnumeric_value\x18\x01 \x03(\x03\"$\n\x0c\x44oubleValues\x12\x14\n\x0c\x64ouble_value\x18\x01 \x03(\x01\"\xa8\x01\n\x06Metric\x12\x33\n\x0cmeasurements\x18\x01 \x01(\x0b\x32\x1d.tradefed.metric.Measurements\x12\x0c\n\x04unit\x18\x02 \x01(\t\x12\x32\n\tdirection\x18\x03 \x01(\x0e\x32\x1f.tradefed.metric.Directionality\x12\'\n\x04type\x18\x04 \x01(\x0e\x32\x19.tradefed.metric.DataType*c\n\x0e\x44irectionality\x12\x1e\n\x1a\x44IRECTIONALITY_UNSPECIFIED\x10\x00\x12\r\n\tUP_BETTER\x10\x01\x12\x0f\n\x0b\x44OWN_BETTER\x10\x02\x12\x11\n\rCLOSER_BETTER\x10\x03*\"\n\x08\x44\x61taType\x12\x07\n\x03RAW\x10\x00\x12\r\n\tPROCESSED\x10\x01\x42\x37\n\"com.android.tradefed.metrics.protoB\x11MetricMeasurementb\x06proto3')
 )
 
 _DIRECTIONALITY = _descriptor.EnumDescriptor(
@@ -49,8 +49,8 @@
   ],
   containing_type=None,
   serialized_options=None,
-  serialized_start=639,
-  serialized_end=738,
+  serialized_start=634,
+  serialized_end=733,
 )
 _sym_db.RegisterEnumDescriptor(_DIRECTIONALITY)
 
@@ -72,8 +72,8 @@
   ],
   containing_type=None,
   serialized_options=None,
-  serialized_start=740,
-  serialized_end=774,
+  serialized_start=735,
+  serialized_end=769,
 )
 _sym_db.RegisterEnumDescriptor(_DATATYPE)
 
@@ -151,8 +151,8 @@
       name='measurement', full_name='tradefed.metric.Measurements.measurement',
       index=0, containing_type=None, fields=[]),
   ],
-  serialized_start=79,
-  serialized_end=350,
+  serialized_start=74,
+  serialized_end=345,
 )
 
 
@@ -182,8 +182,8 @@
   extension_ranges=[],
   oneofs=[
   ],
-  serialized_start=352,
-  serialized_end=388,
+  serialized_start=347,
+  serialized_end=383,
 )
 
 
@@ -213,8 +213,8 @@
   extension_ranges=[],
   oneofs=[
   ],
-  serialized_start=390,
-  serialized_end=428,
+  serialized_start=385,
+  serialized_end=423,
 )
 
 
@@ -244,8 +244,8 @@
   extension_ranges=[],
   oneofs=[
   ],
-  serialized_start=430,
-  serialized_end=466,
+  serialized_start=425,
+  serialized_end=461,
 )
 
 
@@ -296,8 +296,8 @@
   extension_ranges=[],
   oneofs=[
   ],
-  serialized_start=469,
-  serialized_end=637,
+  serialized_start=464,
+  serialized_end=632,
 )
 
 _MEASUREMENTS.fields_by_name['string_values'].message_type = _STRINGVALUES
@@ -335,35 +335,35 @@
 
 Measurements = _reflection.GeneratedProtocolMessageType('Measurements', (_message.Message,), {
   'DESCRIPTOR' : _MEASUREMENTS,
-  '__module__' : 'tools.tradefederation.core.proto.metric_measurement_pb2'
+  '__module__' : 'tools.asuite.atest.tf_proto.metric_measurement_pb2'
   # @@protoc_insertion_point(class_scope:tradefed.metric.Measurements)
   })
 _sym_db.RegisterMessage(Measurements)
 
 StringValues = _reflection.GeneratedProtocolMessageType('StringValues', (_message.Message,), {
   'DESCRIPTOR' : _STRINGVALUES,
-  '__module__' : 'tools.tradefederation.core.proto.metric_measurement_pb2'
+  '__module__' : 'tools.asuite.atest.tf_proto.metric_measurement_pb2'
   # @@protoc_insertion_point(class_scope:tradefed.metric.StringValues)
   })
 _sym_db.RegisterMessage(StringValues)
 
 NumericValues = _reflection.GeneratedProtocolMessageType('NumericValues', (_message.Message,), {
   'DESCRIPTOR' : _NUMERICVALUES,
-  '__module__' : 'tools.tradefederation.core.proto.metric_measurement_pb2'
+  '__module__' : 'tools.asuite.atest.tf_proto.metric_measurement_pb2'
   # @@protoc_insertion_point(class_scope:tradefed.metric.NumericValues)
   })
 _sym_db.RegisterMessage(NumericValues)
 
 DoubleValues = _reflection.GeneratedProtocolMessageType('DoubleValues', (_message.Message,), {
   'DESCRIPTOR' : _DOUBLEVALUES,
-  '__module__' : 'tools.tradefederation.core.proto.metric_measurement_pb2'
+  '__module__' : 'tools.asuite.atest.tf_proto.metric_measurement_pb2'
   # @@protoc_insertion_point(class_scope:tradefed.metric.DoubleValues)
   })
 _sym_db.RegisterMessage(DoubleValues)
 
 Metric = _reflection.GeneratedProtocolMessageType('Metric', (_message.Message,), {
   'DESCRIPTOR' : _METRIC,
-  '__module__' : 'tools.tradefederation.core.proto.metric_measurement_pb2'
+  '__module__' : 'tools.asuite.atest.tf_proto.metric_measurement_pb2'
   # @@protoc_insertion_point(class_scope:tradefed.metric.Metric)
   })
 _sym_db.RegisterMessage(Metric)
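
The metric_measurement hunks above only relocate the generated module and shift the serialized descriptor offsets; the tradefed.metric messages themselves are unchanged. A minimal usage sketch of the relocated import path (assumed usage, not part of this change):

```
from tools.asuite.atest.tf_proto import metric_measurement_pb2

# Build a tradefed.metric.Metric by hand; single_double is one arm of
# the Measurements "measurement" oneof.
metric = metric_measurement_pb2.Metric()
metric.measurements.single_double = 42.5
metric.unit = 'ms'
metric.direction = metric_measurement_pb2.DOWN_BETTER
payload = metric.SerializeToString()
```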
diff --git a/atest/tools/tradefederation/core/proto/test_record_pb2.py b/atest/tools/asuite/atest/tf_proto/test_record_pb2.py
similarity index 78%
rename from atest/tools/tradefederation/core/proto/test_record_pb2.py
rename to atest/tools/asuite/atest/tf_proto/test_record_pb2.py
index a1c84c6..beb5cf1 100644
--- a/atest/tools/tradefederation/core/proto/test_record_pb2.py
+++ b/atest/tools/asuite/atest/tf_proto/test_record_pb2.py
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 # Generated by the protocol buffer compiler.  DO NOT EDIT!
-# source: tools/tradefederation/core/proto/test_record.proto
+# source: tools/asuite/atest/tf_proto/test_record.proto
 
 import sys
 _b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
@@ -16,17 +16,17 @@
 
 from google.protobuf import any_pb2 as google_dot_protobuf_dot_any__pb2
 from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2
-from tools.tradefederation.core.proto import metric_measurement_pb2 as tools_dot_tradefederation_dot_core_dot_proto_dot_metric__measurement__pb2
+from tools.asuite.atest.tf_proto import metric_measurement_pb2 as tools_dot_asuite_dot_atest_dot_tf__proto_dot_metric__measurement__pb2
 
 
 DESCRIPTOR = _descriptor.FileDescriptor(
-  name='tools/tradefederation/core/proto/test_record.proto',
+  name='tools/asuite/atest/tf_proto/test_record.proto',
   package='android_test_record',
   syntax='proto3',
   serialized_options=_b('\n!com.android.tradefed.result.protoB\017TestRecordProto'),
-  serialized_pb=_b('\n2tools/tradefederation/core/proto/test_record.proto\x12\x13\x61ndroid_test_record\x1a\x19google/protobuf/any.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x39tools/tradefederation/core/proto/metric_measurement.proto\"\xae\x05\n\nTestRecord\x12\x16\n\x0etest_record_id\x18\x01 \x01(\t\x12\x1d\n\x15parent_test_record_id\x18\x02 \x01(\t\x12\x35\n\x08\x63hildren\x18\x03 \x03(\x0b\x32#.android_test_record.ChildReference\x12\x1d\n\x15num_expected_children\x18\x04 \x01(\x03\x12/\n\x06status\x18\x05 \x01(\x0e\x32\x1f.android_test_record.TestStatus\x12\x32\n\ndebug_info\x18\x06 \x01(\x0b\x32\x1e.android_test_record.DebugInfo\x12.\n\nstart_time\x18\x07 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12,\n\x08\x65nd_time\x18\x08 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x41\n\tartifacts\x18\t \x03(\x0b\x32..android_test_record.TestRecord.ArtifactsEntry\x12=\n\x07metrics\x18\n \x03(\x0b\x32,.android_test_record.TestRecord.MetricsEntry\x12)\n\x0b\x64\x65scription\x18\x0b \x01(\x0b\x32\x14.google.protobuf.Any\x12\x12\n\nattempt_id\x18\x0c \x01(\x03\x1a\x46\n\x0e\x41rtifactsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12#\n\x05value\x18\x02 \x01(\x0b\x32\x14.google.protobuf.Any:\x02\x38\x01\x1aG\n\x0cMetricsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12&\n\x05value\x18\x02 \x01(\x0b\x32\x17.tradefed.metric.Metric:\x02\x38\x01\"v\n\x0e\x43hildReference\x12\x18\n\x0etest_record_id\x18\x01 \x01(\tH\x00\x12=\n\x12inline_test_record\x18\x02 \x01(\x0b\x32\x1f.android_test_record.TestRecordH\x00\x42\x0b\n\treference\"\xb0\x01\n\tDebugInfo\x12\x15\n\rerror_message\x18\x01 \x01(\t\x12\r\n\x05trace\x18\x02 \x01(\t\x12:\n\x0e\x66\x61ilure_status\x18\x03 \x01(\x0e\x32\".android_test_record.FailureStatus\x12\x41\n\x12\x64\x65\x62ug_info_context\x18\x04 \x01(\x0b\x32%.android_test_record.DebugInfoContext\"^\n\x10\x44\x65\x62ugInfoContext\x12\x1a\n\x12\x61\x63tion_in_progress\x18\x01 \x01(\t\x12\x1a\n\x12\x64\x65\x62ug_help_message\x18\n \x01(\t\x12\x12\n\nerror_type\x18\x14 \x01(\t*R\n\nTestStatus\x12\x0b\n\x07UNKNOWN\x10\x00\x12\x08\n\x04PASS\x10\x01\x12\x08\n\x04\x46\x41IL\x10\x02\x12\x0b\n\x07IGNORED\x10\x03\x12\x16\n\x12\x41SSUMPTION_FAILURE\x10\x04*\xaa\x01\n\rFailureStatus\x12\t\n\x05UNSET\x10\x00\x12\x10\n\x0cTEST_FAILURE\x10\x01\x12\r\n\tTIMED_OUT\x10\x02\x12\r\n\tCANCELLED\x10\x03\x12\x11\n\rINFRA_FAILURE\x10\n\x12\x1d\n\x19SYSTEM_UNDER_TEST_CRASHED\x10\x14\x12\x10\n\x0cNOT_EXECUTED\x10\x1e\x12\x1a\n\x16LOST_SYSTEM_UNDER_TEST\x10#B4\n!com.android.tradefed.result.protoB\x0fTestRecordProtob\x06proto3')
+  serialized_pb=_b('\n-tools/asuite/atest/tf_proto/test_record.proto\x12\x13\x61ndroid_test_record\x1a\x19google/protobuf/any.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x34tools/asuite/atest/tf_proto/metric_measurement.proto\"\xae\x05\n\nTestRecord\x12\x16\n\x0etest_record_id\x18\x01 \x01(\t\x12\x1d\n\x15parent_test_record_id\x18\x02 \x01(\t\x12\x35\n\x08\x63hildren\x18\x03 \x03(\x0b\x32#.android_test_record.ChildReference\x12\x1d\n\x15num_expected_children\x18\x04 \x01(\x03\x12/\n\x06status\x18\x05 \x01(\x0e\x32\x1f.android_test_record.TestStatus\x12\x32\n\ndebug_info\x18\x06 \x01(\x0b\x32\x1e.android_test_record.DebugInfo\x12.\n\nstart_time\x18\x07 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12,\n\x08\x65nd_time\x18\x08 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x41\n\tartifacts\x18\t \x03(\x0b\x32..android_test_record.TestRecord.ArtifactsEntry\x12=\n\x07metrics\x18\n \x03(\x0b\x32,.android_test_record.TestRecord.MetricsEntry\x12)\n\x0b\x64\x65scription\x18\x0b \x01(\x0b\x32\x14.google.protobuf.Any\x12\x12\n\nattempt_id\x18\x0c \x01(\x03\x1a\x46\n\x0e\x41rtifactsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12#\n\x05value\x18\x02 \x01(\x0b\x32\x14.google.protobuf.Any:\x02\x38\x01\x1aG\n\x0cMetricsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12&\n\x05value\x18\x02 \x01(\x0b\x32\x17.tradefed.metric.Metric:\x02\x38\x01\"v\n\x0e\x43hildReference\x12\x18\n\x0etest_record_id\x18\x01 \x01(\tH\x00\x12=\n\x12inline_test_record\x18\x02 \x01(\x0b\x32\x1f.android_test_record.TestRecordH\x00\x42\x0b\n\treference\"\xb0\x01\n\tDebugInfo\x12\x15\n\rerror_message\x18\x01 \x01(\t\x12\r\n\x05trace\x18\x02 \x01(\t\x12:\n\x0e\x66\x61ilure_status\x18\x03 \x01(\x0e\x32\".android_test_record.FailureStatus\x12\x41\n\x12\x64\x65\x62ug_info_context\x18\x04 \x01(\x0b\x32%.android_test_record.DebugInfoContext\"\x96\x01\n\x10\x44\x65\x62ugInfoContext\x12\x1a\n\x12\x61\x63tion_in_progress\x18\x01 \x01(\t\x12\x1a\n\x12\x64\x65\x62ug_help_message\x18\n \x01(\t\x12\x12\n\nerror_type\x18\x14 \x01(\t\x12\x12\n\nerror_name\x18\x1e \x01(\t\x12\x0e\n\x06origin\x18\x1f \x01(\t\x12\x12\n\nerror_code\x18  \x01(\x03*R\n\nTestStatus\x12\x0b\n\x07UNKNOWN\x10\x00\x12\x08\n\x04PASS\x10\x01\x12\x08\n\x04\x46\x41IL\x10\x02\x12\x0b\n\x07IGNORED\x10\x03\x12\x16\n\x12\x41SSUMPTION_FAILURE\x10\x04*\xd4\x01\n\rFailureStatus\x12\t\n\x05UNSET\x10\x00\x12\x10\n\x0cTEST_FAILURE\x10\x01\x12\r\n\tTIMED_OUT\x10\x02\x12\r\n\tCANCELLED\x10\x03\x12\x11\n\rINFRA_FAILURE\x10\n\x12\x1d\n\x19SYSTEM_UNDER_TEST_CRASHED\x10\x14\x12\x10\n\x0cNOT_EXECUTED\x10\x1e\x12\x1a\n\x16LOST_SYSTEM_UNDER_TEST\x10#\x12\x14\n\x10\x44\x45PENDENCY_ISSUE\x10(\x12\x12\n\x0e\x43USTOMER_ISSUE\x10)B4\n!com.android.tradefed.result.protoB\x0fTestRecordProtob\x06proto3')
   ,
-  dependencies=[google_dot_protobuf_dot_any__pb2.DESCRIPTOR,google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,tools_dot_tradefederation_dot_core_dot_proto_dot_metric__measurement__pb2.DESCRIPTOR,])
+  dependencies=[google_dot_protobuf_dot_any__pb2.DESCRIPTOR,google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,tools_dot_asuite_dot_atest_dot_tf__proto_dot_metric__measurement__pb2.DESCRIPTOR,])
 
 _TESTSTATUS = _descriptor.EnumDescriptor(
   name='TestStatus',
@@ -57,8 +57,8 @@
   ],
   containing_type=None,
   serialized_options=None,
-  serialized_start=1278,
-  serialized_end=1360,
+  serialized_start=1325,
+  serialized_end=1407,
 )
 _sym_db.RegisterEnumDescriptor(_TESTSTATUS)
 
@@ -101,11 +101,19 @@
       name='LOST_SYSTEM_UNDER_TEST', index=7, number=35,
       serialized_options=None,
       type=None),
+    _descriptor.EnumValueDescriptor(
+      name='DEPENDENCY_ISSUE', index=8, number=40,
+      serialized_options=None,
+      type=None),
+    _descriptor.EnumValueDescriptor(
+      name='CUSTOMER_ISSUE', index=9, number=41,
+      serialized_options=None,
+      type=None),
   ],
   containing_type=None,
   serialized_options=None,
-  serialized_start=1363,
-  serialized_end=1533,
+  serialized_start=1410,
+  serialized_end=1622,
 )
 _sym_db.RegisterEnumDescriptor(_FAILURESTATUS)
 
@@ -123,6 +131,8 @@
 SYSTEM_UNDER_TEST_CRASHED = 20
 NOT_EXECUTED = 30
 LOST_SYSTEM_UNDER_TEST = 35
+DEPENDENCY_ISSUE = 40
+CUSTOMER_ISSUE = 41
 
 
 
@@ -159,8 +169,8 @@
   extension_ranges=[],
   oneofs=[
   ],
-  serialized_start=738,
-  serialized_end=808,
+  serialized_start=728,
+  serialized_end=798,
 )
 
 _TESTRECORD_METRICSENTRY = _descriptor.Descriptor(
@@ -196,8 +206,8 @@
   extension_ranges=[],
   oneofs=[
   ],
-  serialized_start=810,
-  serialized_end=881,
+  serialized_start=800,
+  serialized_end=871,
 )
 
 _TESTRECORD = _descriptor.Descriptor(
@@ -303,8 +313,8 @@
   extension_ranges=[],
   oneofs=[
   ],
-  serialized_start=195,
-  serialized_end=881,
+  serialized_start=185,
+  serialized_end=871,
 )
 
 
@@ -344,8 +354,8 @@
       name='reference', full_name='android_test_record.ChildReference.reference',
       index=0, containing_type=None, fields=[]),
   ],
-  serialized_start=883,
-  serialized_end=1001,
+  serialized_start=873,
+  serialized_end=991,
 )
 
 
@@ -396,8 +406,8 @@
   extension_ranges=[],
   oneofs=[
   ],
-  serialized_start=1004,
-  serialized_end=1180,
+  serialized_start=994,
+  serialized_end=1170,
 )
 
 
@@ -429,6 +439,27 @@
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
       serialized_options=None, file=DESCRIPTOR),
+    _descriptor.FieldDescriptor(
+      name='error_name', full_name='android_test_record.DebugInfoContext.error_name', index=3,
+      number=30, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      serialized_options=None, file=DESCRIPTOR),
+    _descriptor.FieldDescriptor(
+      name='origin', full_name='android_test_record.DebugInfoContext.origin', index=4,
+      number=31, type=9, cpp_type=9, label=1,
+      has_default_value=False, default_value=_b("").decode('utf-8'),
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      serialized_options=None, file=DESCRIPTOR),
+    _descriptor.FieldDescriptor(
+      name='error_code', full_name='android_test_record.DebugInfoContext.error_code', index=5,
+      number=32, type=3, cpp_type=2, label=1,
+      has_default_value=False, default_value=0,
+      message_type=None, enum_type=None, containing_type=None,
+      is_extension=False, extension_scope=None,
+      serialized_options=None, file=DESCRIPTOR),
   ],
   extensions=[
   ],
@@ -441,13 +472,13 @@
   extension_ranges=[],
   oneofs=[
   ],
-  serialized_start=1182,
-  serialized_end=1276,
+  serialized_start=1173,
+  serialized_end=1323,
 )
 
 _TESTRECORD_ARTIFACTSENTRY.fields_by_name['value'].message_type = google_dot_protobuf_dot_any__pb2._ANY
 _TESTRECORD_ARTIFACTSENTRY.containing_type = _TESTRECORD
-_TESTRECORD_METRICSENTRY.fields_by_name['value'].message_type = tools_dot_tradefederation_dot_core_dot_proto_dot_metric__measurement__pb2._METRIC
+_TESTRECORD_METRICSENTRY.fields_by_name['value'].message_type = tools_dot_asuite_dot_atest_dot_tf__proto_dot_metric__measurement__pb2._METRIC
 _TESTRECORD_METRICSENTRY.containing_type = _TESTRECORD
 _TESTRECORD.fields_by_name['children'].message_type = _CHILDREFERENCE
 _TESTRECORD.fields_by_name['status'].enum_type = _TESTSTATUS
@@ -478,19 +509,19 @@
 
   'ArtifactsEntry' : _reflection.GeneratedProtocolMessageType('ArtifactsEntry', (_message.Message,), {
     'DESCRIPTOR' : _TESTRECORD_ARTIFACTSENTRY,
-    '__module__' : 'tools.tradefederation.core.proto.test_record_pb2'
+    '__module__' : 'tools.asuite.atest.tf_proto.test_record_pb2'
     # @@protoc_insertion_point(class_scope:android_test_record.TestRecord.ArtifactsEntry)
     })
   ,
 
   'MetricsEntry' : _reflection.GeneratedProtocolMessageType('MetricsEntry', (_message.Message,), {
     'DESCRIPTOR' : _TESTRECORD_METRICSENTRY,
-    '__module__' : 'tools.tradefederation.core.proto.test_record_pb2'
+    '__module__' : 'tools.asuite.atest.tf_proto.test_record_pb2'
     # @@protoc_insertion_point(class_scope:android_test_record.TestRecord.MetricsEntry)
     })
   ,
   'DESCRIPTOR' : _TESTRECORD,
-  '__module__' : 'tools.tradefederation.core.proto.test_record_pb2'
+  '__module__' : 'tools.asuite.atest.tf_proto.test_record_pb2'
   # @@protoc_insertion_point(class_scope:android_test_record.TestRecord)
   })
 _sym_db.RegisterMessage(TestRecord)
@@ -499,21 +530,21 @@
 
 ChildReference = _reflection.GeneratedProtocolMessageType('ChildReference', (_message.Message,), {
   'DESCRIPTOR' : _CHILDREFERENCE,
-  '__module__' : 'tools.tradefederation.core.proto.test_record_pb2'
+  '__module__' : 'tools.asuite.atest.tf_proto.test_record_pb2'
   # @@protoc_insertion_point(class_scope:android_test_record.ChildReference)
   })
 _sym_db.RegisterMessage(ChildReference)
 
 DebugInfo = _reflection.GeneratedProtocolMessageType('DebugInfo', (_message.Message,), {
   'DESCRIPTOR' : _DEBUGINFO,
-  '__module__' : 'tools.tradefederation.core.proto.test_record_pb2'
+  '__module__' : 'tools.asuite.atest.tf_proto.test_record_pb2'
   # @@protoc_insertion_point(class_scope:android_test_record.DebugInfo)
   })
 _sym_db.RegisterMessage(DebugInfo)
 
 DebugInfoContext = _reflection.GeneratedProtocolMessageType('DebugInfoContext', (_message.Message,), {
   'DESCRIPTOR' : _DEBUGINFOCONTEXT,
-  '__module__' : 'tools.tradefederation.core.proto.test_record_pb2'
+  '__module__' : 'tools.asuite.atest.tf_proto.test_record_pb2'
   # @@protoc_insertion_point(class_scope:android_test_record.DebugInfoContext)
   })
 _sym_db.RegisterMessage(DebugInfoContext)
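
Beyond the module relocation, the test_record hunks extend DebugInfoContext with error_name, origin, and error_code, and add DEPENDENCY_ISSUE and CUSTOMER_ISSUE to FailureStatus. A minimal sketch of reading those additions through the relocated module, assuming a binary-serialized TestRecord as input (the file name below is hypothetical):

```
from tools.asuite.atest.tf_proto import test_record_pb2

def load_test_record(path):
    """Parse a binary-serialized android_test_record.TestRecord."""
    record = test_record_pb2.TestRecord()
    with open(path, 'rb') as proto_file:
        record.ParseFromString(proto_file.read())
    return record

record = load_test_record('test_record.pb')  # hypothetical file name
if record.debug_info.failure_status == test_record_pb2.DEPENDENCY_ISSUE:
    print(record.debug_info.debug_info_context.error_name)
```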
diff --git a/atest/unittest_data/module-info.json b/atest/unittest_data/module-info.json
index 0959fad..3ff300c 100644
--- a/atest/unittest_data/module-info.json
+++ b/atest/unittest_data/module-info.json
@@ -15,5 +15,13 @@
   "multiarch2": { "class": ["JAVA_LIBRARIES"],  "path": ["shared/path/to/be/used2"], "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "multiarch2" },
   "multiarch2_32": { "class": ["JAVA_LIBRARIES"],  "path": ["shared/path/to/be/used2"], "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "multiarch2" },
   "multiarch3": { "class": ["JAVA_LIBRARIES"],  "path": ["shared/path/to/be/used2"], "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "multiarch3" },
-  "multiarch3_32": { "class": ["JAVA_LIBRARIES"],  "path": ["shared/path/to/be/used2"], "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "multiarch3_32" }
+  "multiarch3_32": { "class": ["JAVA_LIBRARIES"],  "path": ["shared/path/to/be/used2"], "tags": ["optional"],  "installed": ["out/host/linux-x86/framework/tradefed-contrib.jar"], "module_name": "multiarch3_32" },
+  "dep_test_module": { "module_name": "dep_test_module", "dependencies": ["module_1", "module_2"] },
+  "module_1": { "module_name": "module_1", "dependencies": [] },
+  "module_2": { "module_name": "module_2", "dependencies": [] },
+  "module_3": { "module_name": "module_3", "dependencies": [] },
+  "test_dep_level_1_1": { "module_name": "test_dep_level_1_1", "dependencies": [] },
+  "test_dep_level_1_2": { "module_name": "test_dep_level_1_2", "dependencies": [] },
+  "test_dep_level_2_1": { "module_name": "test_dep_level_2_1", "dependencies": [] },
+  "test_dep_level_2_2": { "module_name": "test_dep_level_2_2", "dependencies": [] }
 }
diff --git a/atest/unittest_data/module_bp_cc_deps.json b/atest/unittest_data/module_bp_cc_deps.json
new file mode 100644
index 0000000..a1b6549
--- /dev/null
+++ b/atest/unittest_data/module_bp_cc_deps.json
@@ -0,0 +1,33 @@
+{
+        "clang": "${ANDROID_ROOT}/prebuilts/clang/host/linux-x86/clang-r399163b/bin/clang",
+        "clang++": "${ANDROID_ROOT}/prebuilts/clang/host/linux-x86/clang-r399163b/bin/clang++",
+        "modules": {
+                "module_cc_1": {
+                        "dependencies": [
+                                "test_cc_dep_level_1_1",
+                                "test_cc_dep_level_1_2"
+                        ]
+                },
+                "module_cc_2": {
+                        "dependencies": [
+                                "test_cc_dep_level_1_2"
+                        ]
+                },
+                "module_cc_3": {
+                },
+                "test_cc_dep_level_1_1": {
+                        "dependencies": [
+                                "test_cc_dep_level_2_1"
+                        ]
+                },
+                "test_cc_dep_level_1_2": {
+                        "dependencies": [
+                                "test_cc_dep_level_2_2"
+                        ]
+                },
+                "test_cc_dep_level_2_1": {
+                },
+                "test_cc_dep_level_2_2": {
+                }
+        }
+}
diff --git a/atest/unittest_data/module_bp_java_deps.json b/atest/unittest_data/module_bp_java_deps.json
new file mode 100644
index 0000000..72b1839
--- /dev/null
+++ b/atest/unittest_data/module_bp_java_deps.json
@@ -0,0 +1,29 @@
+{
+        "module_1": {
+                "dependencies": [
+                        "test_dep_level_1_1",
+                        "test_dep_level_1_2"
+                ]
+        },
+        "module_2": {
+                "dependencies": [
+                        "test_dep_level_1_2"
+                ]
+        },
+        "module_3": {
+        },
+        "test_dep_level_1_1": {
+                "dependencies": [
+                        "test_dep_level_2_1"
+                ]
+        },
+        "test_dep_level_1_2": {
+                "dependencies": [
+                        "test_dep_level_2_2"
+                ]
+        },
+        "test_dep_level_2_1": {
+        },
+        "test_dep_level_2_2": {
+        }
+}
diff --git a/atest/unittest_data/module_bp_java_loop_deps.json b/atest/unittest_data/module_bp_java_loop_deps.json
new file mode 100644
index 0000000..f74aced
--- /dev/null
+++ b/atest/unittest_data/module_bp_java_loop_deps.json
@@ -0,0 +1,31 @@
+{
+        "module_1": {
+                "dependencies": [
+                        "test_dep_level_1_1",
+                        "test_dep_level_1_2"
+                ]
+        },
+        "module_2": {
+                "dependencies": [
+                        "test_dep_level_1_2"
+                ]
+        },
+        "test_dep_level_1_1": {
+                "dependencies": [
+                        "module_1",
+                        "test_dep_level_2_1"
+                ]
+        },
+        "test_dep_level_1_2": {
+                "dependencies": [
+                        "test_dep_level_2_2"
+                ]
+        },
+        "test_dep_level_2_1": {
+                "dependencies": [
+                        "module_1"
+                ]
+        },
+        "test_dep_level_2_2": {
+        }
+}
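
module_bp_java_loop_deps.json mirrors module_bp_java_deps.json but introduces cycles (module_1 -> test_dep_level_1_1 -> module_1, and module_1 -> test_dep_level_1_1 -> test_dep_level_2_1 -> module_1), so any consumer walking these "dependencies" lists must track visited modules. A cycle-safe walk over the fixture shape, sketched here for illustration and not atest's actual implementation:

```
import json

def collect_deps(dep_map, module, visited=None):
    """Return the transitive dependencies of module, tolerating cycles."""
    if visited is None:
        visited = set()
    deps = set()
    for dep in dep_map.get(module, {}).get('dependencies', []):
        if dep not in visited:
            visited.add(dep)
            deps.add(dep)
            deps |= collect_deps(dep_map, dep, visited)
    return deps

with open('module_bp_java_loop_deps.json') as f:
    dep_map = json.load(f)
# Terminates despite the cycles:
# ['test_dep_level_1_2', 'test_dep_level_2_2']
print(sorted(collect_deps(dep_map, 'module_2')))
```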
diff --git a/atest/unittest_data/my_cc_test.cc b/atest/unittest_data/my_cc_test.cc
new file mode 100644
index 0000000..db0047c
--- /dev/null
+++ b/atest/unittest_data/my_cc_test.cc
@@ -0,0 +1,72 @@
+INSTANTIATE_TEST_SUITE_P( Instantiation1, MyInstantClass1,
+    testing::Combine(testing::Values(Options::Language::CPP, Options::Language::JAVA,
+                                     Options::Language::NDK, Options::Language::RUST),
+                     testing::ValuesIn(kTypeParams)),
+    [](const testing::TestParamInfo<std::tuple<Options::Language, TypeParam>>& info) {
+      return Options::LanguageToString(std::get<0>(info.param)) + "_" +
+             std::get<1>(info.param).kind;
+    });
+
+INSTANTIATE_TEST_CASE_P(Instantiation2,
+    MyInstantClass2,
+    testing::Combine(testing::Values(Options::Language::CPP, Options::Language::JAVA,
+                                     Options::Language::NDK, Options::Language::RUST),
+                     testing::ValuesIn(kTypeParams)),
+    [](const testing::TestParamInfo<std::tuple<Options::Language, TypeParam>>& info) {
+      return Options::LanguageToString(std::get<0>(info.param)) + "_" +
+             std::get<1>(info.param).kind;
+    });
+
+INSTANTIATE_TEST_SUITE_P(
+    Instantiation3, MyInstantClass1 ,
+    testing::Combine(testing::Values(Options::Language::CPP, Options::Language::JAVA,
+                                     Options::Language::NDK, Options::Language::RUST),
+                     testing::ValuesIn(kTypeParams)),
+    [](const testing::TestParamInfo<std::tuple<Options::Language, TypeParam>>& info) {
+      return Options::LanguageToString(std::get<0>(info.param)) + "_" +
+             std::get<1>(info.param).kind;
+    });
+
+
+INSTANTIATE_TEST_CASE_P(
+    Instantiation4,
+    MyInstantClass3,
+    testing::Combine(testing::Values(Options::Language::CPP, Options::Language::JAVA,
+                                     Options::Language::NDK, Options::Language::RUST),
+                     testing::ValuesIn(kTypeParams)),
+    [](const testing::TestParamInfo<std::tuple<Options::Language, TypeParam>>& info) {
+      return Options::LanguageToString(std::get<0>(info.param)) + "_" +
+             std::get<1>(info.param).kind;
+    });
+    
+TEST_P( MyClass1, Method1) {
+  Run("List<{}>", kListSupportExpectations);
+}
+
+TEST_F(
+MyClass1, 
+Method2) {
+  Run("List<{}>", kListSupportExpectations);
+}
+
+TEST_P(MyClass2, 
+       Method3) {
+  Run("List<{}>", kListSupportExpectations);
+}
+
+TEST_F(MyClass3, Method2) {
+  Run("List<{}>", kListSupportExpectations);
+}
+
+TEST(MyClass4, Method5) {
+  Run("List<{}>", kListSupportExpectations);
+}
+
+TEST(MyClass5, Method5) {
+  Run("List<{}>", kListSupportExpectations);
+}
+
+INSTANTIATE_TYPED_TEST_CASE_P(Instantiation5, MyInstantTypeClass1, IntTypes);
+
+INSTANTIATE_TYPED_TEST_SUITE_P(Instantiation6, MyInstantTypeClass2, IntTypes);
+
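
my_cc_test.cc deliberately formats its GTest macros irregularly (names split across lines, stray spaces inside the parentheses, trailing whitespace) to stress test-case discovery. A whitespace-tolerant matcher in the same spirit, sketched for illustration and not atest's actual parser:

```
import re

# \s* spans newlines, so TEST_F(\nMyClass1, \nMethod2) also matches;
# \b keeps INSTANTIATE_TEST_SUITE_P and friends from matching.
GTEST_CASE_RE = re.compile(r'\b(TEST(?:_[FP])?)\s*\(\s*(\w+)\s*,\s*(\w+)\s*\)')

def find_gtest_cases(source):
    """Return (macro, suite, test) tuples found in C++ source text."""
    return GTEST_CASE_RE.findall(source)

with open('my_cc_test.cc') as f:
    cases = find_gtest_cases(f.read())
# e.g. ('TEST_P', 'MyClass1', 'Method1'), ('TEST_F', 'MyClass1', 'Method2'), ...
```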