Fix issues when combining -y with other args

The -y flag produces a false verification failure when combined with the --dry-run or -g flag. This happens because the test_commands variable is assigned a list of command strings in some code paths and a test name in others.

The -y flag relies on test_commands holding a test name, while the -g flag relies on it holding the command strings. Each flag works fine on its own, but combining them causes the failure.
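
A simplified sketch of the pre-fix flow in atest_main.py (the
generated_runner_cmds name is a placeholder; the branch structure and the
other names come from the diff below):

    if args.generate_runner_cmd:  # -g
        # test_commands holds the generated runner command strings here.
        test_commands = generated_runner_cmds
    else:
        # test_commands holds a test name (verify key) here.
        test_commands = atest_utils.get_verify_key(args.tests, extra_args)
    if args.verify_cmd_mapping:  # -y
        # With -g, the else branch is skipped, so command strings are passed
        # where handle_test_runner_cmd expects a test name, and verification
        # falsely fails.
        atest_utils.handle_test_runner_cmd(test_commands, dry_run_cmds,
                                           do_verification=True)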

The current behavior of -gu is also wrong: `atest HelloWorldTests -gu` adds to runner_commands.json and appends to test_commands.json without prompting.
Because the "-g" / generate_runner_cmd flow runs first, the "-u" / update_cmd_mapping flow then reuses the command strings left in the variable and updates test_commands.json with an entry whose test name is the whole command. The correct behavior is to leave the file untouched, because the commands are expected to be the same.
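
Illustrative shape of the resulting bad entry (the command text is a
placeholder, not the real runner command):

    {"<whole runner command>": ["<whole runner command>"]}

instead of an entry keyed by the test name, roughly:

    {"HelloWorldTests": ["<whole runner command>"]}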

This CL introduces a separate test_name variable in atest_main.py and renames test_cmds to dry_run_cmds in atest_utils.py to fix the bug and make the code more readable.

Bug: 313717371
Bug: 309036702
Test: m atest && atest-dev atest_unittests --host && atest-dev HelloWorldTests -yg && atest-dev HelloWorldTests -y --dry-run
Change-Id: I8f0e444e09edabed80de737223044de47b3cd344
diff --git a/atest/atest_main.py b/atest/atest_main.py
index 3dfccb9..686d889 100755
--- a/atest/atest_main.py
+++ b/atest/atest_main.py
@@ -721,18 +721,17 @@
         print("add command %s to file %s" % (
             atest_utils.mark_green(test_commands),
             atest_utils.mark_green(constants.RUNNER_COMMAND_PATH)))
-    else:
-        test_commands = atest_utils.get_verify_key(args.tests, extra_args)
+    test_name = atest_utils.get_verify_key(args.tests, extra_args)
     if args.verify_cmd_mapping:
         try:
-            atest_utils.handle_test_runner_cmd(test_commands,
+            atest_utils.handle_test_runner_cmd(test_name,
                                                dry_run_cmds,
                                                do_verification=True)
         except atest_error.DryRunVerificationError as e:
             atest_utils.colorful_print(str(e), constants.RED)
             return ExitCode.VERIFY_FAILURE
     if args.update_cmd_mapping:
-        atest_utils.handle_test_runner_cmd(test_commands,
+        atest_utils.handle_test_runner_cmd(test_name,
                                            dry_run_cmds)
     return ExitCode.SUCCESS
 
diff --git a/atest/atest_utils.py b/atest/atest_utils.py
index 44f823c..3eb46c3 100644
--- a/atest/atest_utils.py
+++ b/atest/atest_utils.py
@@ -590,13 +590,14 @@
     return columns, rows
 
 
-def handle_test_runner_cmd(input_test, test_cmds, do_verification=False,
+def handle_test_runner_cmd(input_test, dry_run_cmds, do_verification=False,
                            result_path=constants.VERIFY_DATA_PATH):
     """Handle the runner command of input tests.
 
     Args:
-        input_test: A string of input tests pass to atest.
-        test_cmds: A list of strings for running input tests.
+        input_test: The name of a test.
+        dry_run_cmds: A list of strings which make up the command for running
+                      the input test.
         do_verification: A boolean to indicate the action of this method.
                          True: Do verification without updating result map and
                                raise DryRunVerificationError if verifying fails.
@@ -607,9 +608,9 @@
     """
     full_result_content = load_json_safely(result_path)
     former_test_cmds = full_result_content.get(input_test, [])
-    test_cmds = _normalize(test_cmds)
+    dry_run_cmds = _normalize(dry_run_cmds)
     former_test_cmds = _normalize(former_test_cmds)
-    if not _are_identical_cmds(test_cmds, former_test_cmds):
+    if not _are_identical_cmds(dry_run_cmds, former_test_cmds):
         if do_verification:
             raise atest_error.DryRunVerificationError(
                 'Dry run verification failed, former commands: {}'.format(
@@ -618,7 +619,7 @@
-            # If former_test_cmds is different from test_cmds, ask users if they
+            # If former_test_cmds differs from dry_run_cmds, ask users if they
             # are willing to update the result.
             print('Former cmds = %s' % former_test_cmds)
-            print('Current cmds = %s' % test_cmds)
+            print('Current cmds = %s' % dry_run_cmds)
             if not prompt_with_yn_result('Do you want to update former result '
                                          'to the latest one?', True):
                 print('SKIP updating result!!!')
@@ -627,7 +628,7 @@
         # If current commands are the same as the formers, no need to update
         # result.
         return
-    full_result_content[input_test] = test_cmds
+    full_result_content[input_test] = dry_run_cmds
     with open(result_path, 'w', encoding='utf-8') as outfile:
         json.dump(full_result_content, outfile, indent=0)
         print('Save result mapping to %s' % result_path)