Merge "Docs: Update Safe Mode default behavior Bug: 31063270"
diff --git a/src/compatibility/cts/run.jd b/src/compatibility/cts/run.jd
index 98062d2..9f8ffa5 100644
--- a/src/compatibility/cts/run.jd
+++ b/src/compatibility/cts/run.jd
@@ -118,7 +118,7 @@
<td>Run the specified test plan</td>
</tr>
<tr>
- <td><code>-- package/-p <test_package_name> [--package/-p <test_package2>...]</code></td>
+ <td><code>--package/-p <test_package_name> [--package/-p <test_package2>...]</code></td>
<td>Run the specified test packages</td>
</tr>
<tr>
@@ -294,8 +294,20 @@
<td>Run CTS on the specific device.</td>
</tr>
<tr>
- <td><code>--abi 32|64</code></td>
- <td>Forces the test to run on the given ABI. By default CTS runs a test once for each ABI the device supports.</td>
+ <td><code>--include-filter <module_name> [--include-filter <module2>...]</code></td>
+ <td>Run only the specified modules.</td>
+ </tr>
+ <tr>
+ <td><code>--exclude-filter <module_name> [--exclude-filter <module2>...]</code></td>
+ <td>Exclude the specified modules from the run.</td>
+ </tr>
+ <tr>
+ <td><code>--log-level-display/-l <log_level></code></td>
+ <td>Run with the minimum specified log level displayed to STDOUT. Valid values: [VERBOSE, DEBUG, INFO, WARN, ERROR, ASSERT].</td>
+ </tr>
+ <tr>
+ <td><code>--abi <abi_name></code></td>
+ <td>Force the test to run on the given ABI, 32 or 64. By default CTS runs a test once for each ABI the device supports.</td>
</tr>
<tr>
 <td><code>--logcat</code>, <code>--bugreport</code>, and <code>--screenshot-on-failure</code></td>
diff --git a/src/compatibility/source/android-cdd-cover.html b/src/compatibility/source/android-cdd-cover.html
index ee76ef8..12c0db0 100644
--- a/src/compatibility/source/android-cdd-cover.html
+++ b/src/compatibility/source/android-cdd-cover.html
@@ -1,6 +1,6 @@
<!DOCTYPE html>
<head>
-<title>Android 6.0 Compatibility Definition</title>
+<title>Android 7.0 Compatibility Definition</title>
<link rel="stylesheet" type="text/css" href="android-cdd-cover.css"/>
</head>
@@ -17,15 +17,16 @@
<tr>
<td>
-<img src="images/android-marshmallow-1.png" alt="Marshmallow logo" style="border-top: 5px solid orange; border-bottom: 5px solid orange"/>
+<img src="images/android-nougat-dark.png" alt="Nougat cover image"
+style="border-top: 5px solid orange; border-bottom: 5px solid orange"/>
</td>
</tr>
<tr>
<td>
-<p class="subtitle">Android 6.0</p>
-<p class="cover-text">Last updated: October 7th, 2015</p>
-<p class="cover-text">Copyright © 2015, Google Inc. All rights reserved.</p>
+<p class="subtitle">Android 7.0</p>
+<p class="cover-text">Last updated: July 8th, 2016</p>
+<p class="cover-text">Copyright © 2016, Google Inc. All rights reserved.</p>
<p class="cover-text"><a href="mailto:compatibility@android.com">compatibility@android.com</a></p>
</td>
</tr>
diff --git a/src/compatibility/source/android-cdd-footer.html b/src/compatibility/source/android-cdd-footer.html
index fce6481..dfb0f51 100644
--- a/src/compatibility/source/android-cdd-footer.html
+++ b/src/compatibility/source/android-cdd-footer.html
@@ -24,7 +24,7 @@
<table class="noborder" style="border-top: 1px solid silver; width: 100%">
<tr>
- <td class="noborder"><img src="../images/android-logo.png" alt="Android logo"/></td>
+ <td class="noborder"><img src="images/android-logo.png" alt="Android logo"/></td>
<td class="noborder" style="text-align:right">
Page <span class="page"></span> of <span class="topage"></span>
</td>
@@ -34,4 +34,4 @@
</div>
</body>
-</html>
\ No newline at end of file
+</html>
diff --git a/src/compatibility/source/images/android-nougat-dark.png b/src/compatibility/source/images/android-nougat-dark.png
new file mode 100644
index 0000000..31a76ed
--- /dev/null
+++ b/src/compatibility/source/images/android-nougat-dark.png
Binary files differ
diff --git a/src/compatibility/source/images/android-nougat-light.png b/src/compatibility/source/images/android-nougat-light.png
new file mode 100644
index 0000000..8cb7e43
--- /dev/null
+++ b/src/compatibility/source/images/android-nougat-light.png
Binary files differ
diff --git a/src/devices/audio/implement-policy.jd b/src/devices/audio/implement-policy.jd
new file mode 100644
index 0000000..ae6ede2
--- /dev/null
+++ b/src/devices/audio/implement-policy.jd
@@ -0,0 +1,446 @@
+page.title=Configuring Audio Policies
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>Android 7.0 introduces a new audio policy configuration file format (XML) for
+describing your audio topology.</p>
+
+<p>Previous Android releases required using the
+<code>device/<company>/<device>/audio/audio_policy.conf</code>
+file to declare the audio devices present on your product (you can see an
+example of
+this file for the Galaxy Nexus audio hardware in
+<code>device/samsung/tuna/audio/audio_policy.conf</code>). However, .conf is a
+simple proprietary format that is too limited to describe complex topologies for
+applications such as televisions and automobiles.</p>
+
+<p>Android 7.0 deprecates <code>audio_policy.conf</code> and adds support
+for defining audio topology using an XML file format that is more
+human-readable, has a wide range of editing and parsing tools, and is flexible
+enough to describe complex audio topologies.</p>
+
+<p class="note"><strong>Note:</strong> Android 7.0 preserves support for using
+<code>audio_policy.conf</code>; this legacy format is used by default. To use
+the XML file format, include the build option
+<code>USE_XML_AUDIO_POLICY_CONF := 1</code> in the device makefile.</p>
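+
+<p>For example, the build option can be set in the device makefile (the path
+and makefile name below are illustrative; use the makefile appropriate to your
+device):</p>
+
+<pre>
+# device/<company>/<device>/device.mk (illustrative path)
+USE_XML_AUDIO_POLICY_CONF := 1
+</pre>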
+
+<h2 id=xml_advantages>Advantages of the XML format</h2>
+<p>As in the .conf file, the new XML file enables you to define the number and
+types of output and input stream profiles, the devices usable for playback and
+capture, and audio attributes. In addition, the XML format offers the following
+enhancements:
+</p>
+
+<ul>
+<li>Audio profiles are now structured similar to HDMI Simple Audio Descriptors
+and enable a different set of sampling rates/channel masks for each audio
+format.</li>
+<li>Explicit definitions of all possible connections between devices and
+streams. Previously, an implicit rule made it possible to interconnect all
+devices attached to the same HAL module, preventing the audio policy from
+controlling connections requested with audio patch APIs. In the XML format, the
+topology description now defines connection limitations.</li>
+<li>Support for <em>includes</em> avoids repeating standard A2DP, USB, or
+reroute submix definitions.</li>
+<li>Customizable volume curves. Previously, volume tables were hardcoded. In the
+XML format, volume tables are described and can be customized.</li>
+</ul>
+
+<p>The template at
+<code>frameworks/av/services/audiopolicy/config/audio_policy_configuration.xml</code>
+shows many of these features in use.</p>
+
+<h2 id=xml_file_format>File format and location</h2>
+<p>The new audio policy configuration file is
+<code>audio_policy_configuration.xml</code> and is located in
+<code>/system/etc</code>. To view a simple audio policy configuration in the new
+XML file format, view the example below.</p>
+
+<p>
+<div class="toggle-content closed">
+ <p><a href="#" onclick="return toggleContent(this)">
+ <img src="{@docRoot}assets/images/triangle-closed.png" class="toggle-content-img" />
+ <strong><span class="toggle-content-text">Show audio policy example</span>
+ <span class="toggle-content-text" style="display:none;">Hide audio policy
+ example</span></strong>
+ </a></p>
+
+ <div class="toggle-content-toggleme">
+<pre class="prettyprint">
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<audioPolicyConfiguration version="1.0" xmlns:xi="http://www.w3.org/2001/XInclude">
+ <globalConfiguration speaker_drc_enabled="true"/>
+ <modules>
+ <module name="primary" halVersion="3.0">
+ <attachedDevices>
+ <item>Speaker</item>
+ <item>Earpiece</item>
+ <item>Built-In Mic</item>
+ </attachedDevices>
+ <defaultOutputDevice>Speaker</defaultOutputDevice>
+ <mixPorts>
+ <mixPort name="primary output" role="source" flags="AUDIO_OUTPUT_FLAG_PRIMARY">
+ <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
+ samplingRates="48000" channelMasks="AUDIO_CHANNEL_OUT_STEREO"/>
+ </mixPort>
+ <mixPort name="primary input" role="sink">
+ <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
+ samplingRates="8000,16000,48000"
+ channelMasks="AUDIO_CHANNEL_IN_MONO"/>
+ </mixPort>
+ </mixPorts>
+ <devicePorts>
+ <devicePort tagName="Earpiece" type="AUDIO_DEVICE_OUT_EARPIECE" role="sink">
+ <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
+ samplingRates="48000" channelMasks="AUDIO_CHANNEL_IN_MONO"/>
+ </devicePort>
+ <devicePort tagName="Speaker" role="sink" type="AUDIO_DEVICE_OUT_SPEAKER" address="">
+ <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
+ samplingRates="48000" channelMasks="AUDIO_CHANNEL_OUT_STEREO"/>
+ </devicePort>
+ <devicePort tagName="Wired Headset" type="AUDIO_DEVICE_OUT_WIRED_HEADSET" role="sink">
+ <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
+ samplingRates="48000" channelMasks="AUDIO_CHANNEL_OUT_STEREO"/>
+ </devicePort>
+ <devicePort tagName="Built-In Mic" type="AUDIO_DEVICE_IN_BUILTIN_MIC" role="source">
+ <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
+ samplingRates="8000,16000,48000"
+ channelMasks="AUDIO_CHANNEL_IN_MONO"/>
+ </devicePort>
+ <devicePort tagName="Wired Headset Mic" type="AUDIO_DEVICE_IN_WIRED_HEADSET" role="source">
+ <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
+ samplingRates="8000,16000,48000"
+ channelMasks="AUDIO_CHANNEL_IN_MONO"/>
+ </devicePort>
+ </devicePorts>
+ <routes>
+ <route type="mix" sink="Earpiece" sources="primary output"/>
+ <route type="mix" sink="Speaker" sources="primary output"/>
+ <route type="mix" sink="Wired Headset" sources="primary output"/>
+ <route type="mix" sink="primary input" sources="Built-In Mic,Wired Headset Mic"/>
+ </routes>
+ </module>
+ <xi:include href="a2dp_audio_policy_configuration.xml"/>
+ </modules>
+
+ <xi:include href="audio_policy_volumes.xml"/>
+ <xi:include href="default_volume_tables.xml"/>
+</audioPolicyConfiguration>
+</pre></div></div>
+</p>
+
+<p>The top-level structure contains modules that correspond to each audio HAL
+hardware module, where each module has a list of mix ports, device ports, and
+routes:</p>
+<ul>
+<li><strong>Mix ports</strong> describe the possible config profiles for streams
+that can be opened at the audio HAL for playback and capture.</li>
+<li><strong>Device ports</strong> describe the devices that can be attached with
+their type (and optionally address and audio properties, if relevant).</li>
+<li><strong>Routes</strong> (new) are separated from the mix port descriptor,
+enabling the description of routes from device to device or stream to device.</li>
+</ul>
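+
+<p>For example, a route connecting the "primary output" mix port to the
+"Speaker" device port (taken from the sample configuration above) looks like
+this:</p>
+
+<pre>
+<route type="mix" sink="Speaker" sources="primary output"/>
+</pre>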
+
+<p>Volume tables are simple lists of points defining the curve used to translate
+from a UI index to a volume in dB. A separate include file provides default
+curves, but each curve for a given use case and device category can be
+overridden.</p>
+
+<div class="toggle-content closed">
+ <p><a href="#" onclick="return toggleContent(this)">
+ <img src="{@docRoot}assets/images/triangle-closed.png" class="toggle-content-img" />
+ <strong><span class="toggle-content-text">Show volume table example</span>
+ <span class="toggle-content-text" style="display:none;">Hide volume table
+ example</span></strong>
+ </a></p>
+
+ <div class="toggle-content-toggleme">
+<p><pre>
+<?xml version="1.0" encoding="UTF-8"?>
+<volumes>
+ <reference name="FULL_SCALE_VOLUME_CURVE">
+ <point>0,0</point>
+ <point>100,0</point>
+ </reference>
+ <reference name="SILENT_VOLUME_CURVE">
+ <point>0,-9600</point>
+ <point>100,-9600</point>
+ </reference>
+ <reference name="DEFAULT_VOLUME_CURVE">
+ <point>1,-4950</point>
+ <point>33,-3350</point>
+ <point>66,-1700</point>
+ <point>100,0</point>
+ </reference>
+</volumes>
+</pre></p></div></div>
+
+<div class="toggle-content closed">
+ <p><a href="#" onclick="return toggleContent(this)">
+ <img src="{@docRoot}assets/images/triangle-closed.png" class="toggle-content-img" />
+ <strong><span class="toggle-content-text">Show volumes example</span>
+ <span class="toggle-content-text" style="display:none;">Hide volumes
+ example</span></strong>
+ </a></p>
+
+ <div class="toggle-content-toggleme">
+<p><pre>
+<?xml version="1.0" encoding="UTF-8"?>
+<volumes>
+ <volume stream="AUDIO_STREAM_VOICE_CALL" deviceCategory="DEVICE_CATEGORY_HEADSET" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_VOICE_CALL" deviceCategory="DEVICE_CATEGORY_SPEAKER" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_VOICE_CALL" deviceCategory="DEVICE_CATEGORY_EARPIECE" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_VOICE_CALL" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA" ref="DEFAULT_VOLUME_CURVE"/>
+
+ <volume stream="AUDIO_STREAM_SYSTEM" deviceCategory="DEVICE_CATEGORY_HEADSET" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_SYSTEM" deviceCategory="DEVICE_CATEGORY_SPEAKER" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_SYSTEM" deviceCategory="DEVICE_CATEGORY_EARPIECE" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_SYSTEM" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA" ref="DEFAULT_VOLUME_CURVE"/>
+
+ <volume stream="AUDIO_STREAM_RING" deviceCategory="DEVICE_CATEGORY_HEADSET" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_RING" deviceCategory="DEVICE_CATEGORY_SPEAKER" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_RING" deviceCategory="DEVICE_CATEGORY_EARPIECE" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_RING" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA" ref="DEFAULT_VOLUME_CURVE"/>
+
+ <volume stream="AUDIO_STREAM_MUSIC" deviceCategory="DEVICE_CATEGORY_HEADSET" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_MUSIC" deviceCategory="DEVICE_CATEGORY_SPEAKER">
+ <point>1,-5500</point>
+ <point>20,-4300</point>
+ <point>86,-1200</point>
+ <point>100,0</point>
+ </volume>
+ <volume stream="AUDIO_STREAM_MUSIC" deviceCategory="DEVICE_CATEGORY_EARPIECE" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_MUSIC" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA" ref="DEFAULT_VOLUME_CURVE"/>
+
+ <volume stream="AUDIO_STREAM_ALARM" deviceCategory="DEVICE_CATEGORY_HEADSET" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_ALARM" deviceCategory="DEVICE_CATEGORY_SPEAKER" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_ALARM" deviceCategory="DEVICE_CATEGORY_EARPIECE" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_ALARM" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA" ref="DEFAULT_VOLUME_CURVE"/>
+
+ <volume stream="AUDIO_STREAM_NOTIFICATION" deviceCategory="DEVICE_CATEGORY_HEADSET" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_NOTIFICATION" deviceCategory="DEVICE_CATEGORY_SPEAKER" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_NOTIFICATION" deviceCategory="DEVICE_CATEGORY_EARPIECE" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_NOTIFICATION" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA" ref="DEFAULT_VOLUME_CURVE"/>
+
+ <volume stream="AUDIO_STREAM_BLUETOOTH_SCO" deviceCategory="DEVICE_CATEGORY_HEADSET" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_BLUETOOTH_SCO" deviceCategory="DEVICE_CATEGORY_SPEAKER" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_BLUETOOTH_SCO" deviceCategory="DEVICE_CATEGORY_EARPIECE" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_BLUETOOTH_SCO" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA" ref="DEFAULT_VOLUME_CURVE"/>
+
+ <volume stream="AUDIO_STREAM_ENFORCED_AUDIBLE" deviceCategory="DEVICE_CATEGORY_HEADSET" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_ENFORCED_AUDIBLE" deviceCategory="DEVICE_CATEGORY_SPEAKER" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_ENFORCED_AUDIBLE" deviceCategory="DEVICE_CATEGORY_EARPIECE" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_ENFORCED_AUDIBLE" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA" ref="DEFAULT_VOLUME_CURVE"/>
+
+ <volume stream="AUDIO_STREAM_DTMF" deviceCategory="DEVICE_CATEGORY_HEADSET" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_DTMF" deviceCategory="DEVICE_CATEGORY_SPEAKER" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_DTMF" deviceCategory="DEVICE_CATEGORY_EARPIECE" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_DTMF" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA" ref="DEFAULT_VOLUME_CURVE"/>
+
+ <volume stream="AUDIO_STREAM_TTS" deviceCategory="DEVICE_CATEGORY_HEADSET" ref="SILENT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_TTS" deviceCategory="DEVICE_CATEGORY_SPEAKER" ref="FULL_SCALE_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_TTS" deviceCategory="DEVICE_CATEGORY_EARPIECE" ref="SILENT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_TTS" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA" ref="SILENT_VOLUME_CURVE"/>
+
+ <volume stream="AUDIO_STREAM_ACCESSIBILITY" deviceCategory="DEVICE_CATEGORY_HEADSET" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_ACCESSIBILITY" deviceCategory="DEVICE_CATEGORY_SPEAKER" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_ACCESSIBILITY" deviceCategory="DEVICE_CATEGORY_EARPIECE" ref="DEFAULT_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_ACCESSIBILITY" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA" ref="DEFAULT_VOLUME_CURVE"/>
+
+ <volume stream="AUDIO_STREAM_REROUTING" deviceCategory="DEVICE_CATEGORY_HEADSET" ref="FULL_SCALE_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_REROUTING" deviceCategory="DEVICE_CATEGORY_SPEAKER" ref="FULL_SCALE_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_REROUTING" deviceCategory="DEVICE_CATEGORY_EARPIECE" ref="FULL_SCALE_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_REROUTING" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA" ref="FULL_SCALE_VOLUME_CURVE"/>
+
+ <volume stream="AUDIO_STREAM_PATCH" deviceCategory="DEVICE_CATEGORY_HEADSET" ref="FULL_SCALE_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_PATCH" deviceCategory="DEVICE_CATEGORY_SPEAKER" ref="FULL_SCALE_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_PATCH" deviceCategory="DEVICE_CATEGORY_EARPIECE" ref="FULL_SCALE_VOLUME_CURVE"/>
+ <volume stream="AUDIO_STREAM_PATCH" deviceCategory="DEVICE_CATEGORY_EXT_MEDIA" ref="FULL_SCALE_VOLUME_CURVE"/>
+</volumes>
+</pre></p></div></div>
+
+<h2 id=file_inclusions>File inclusions</h2>
+<p>The XML Inclusions (XInclude) method can be used to include audio policy
+configuration information located in other XML files. All included files must
+follow the structure described above with the following restrictions:</p>
+<ul>
+<li>Files can contain only top-level elements.</li>
+<li>Files cannot contain XInclude elements.</li>
+</ul>
+<p>Use includes to avoid copying standard Android Open Source Project (AOSP)
+audio HAL module configuration information to all audio policy configuration
+files (which is prone to errors). A standard audio policy configuration XML file
+is provided for the following audio HALs:</p>
+<ul>
+<li><strong>A2DP:</strong> <code>a2dp_audio_policy_configuration.xml</code></li>
+<li><strong>Reroute submix:</strong> <code>rsubmix_audio_policy_configuration.xml</code></li>
+<li><strong>USB:</strong> <code>usb_audio_policy_configuration.xml</code></li>
+</ul>
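+
+<p>For example, a standard audio HAL configuration file can be pulled into your
+<code>audio_policy_configuration.xml</code> with a single include element, as
+shown in the sample configuration above for A2DP and here for the USB module:</p>
+
+<pre>
+<modules>
+  ...
+  <xi:include href="usb_audio_policy_configuration.xml"/>
+</modules>
+</pre>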
+
+<h2 id=code_reorg>Audio policy code reorganization</h2>
+<p>Android 7.0 splits <code>AudioPolicyManager.cpp</code> into several modules
+to make it more maintainable and to highlight what is configurable. The new
+organization of <code>frameworks/av/services/audiopolicy</code> includes the
+following modules:</p>
+
+<table>
+<tr>
+<th>Module</th>
+<th>Description</th>
+</tr>
+
+<tr>
+<td><code>/managerdefault</code></td>
+<td>Includes the generic interfaces and behavior implementation common to all
+applications. Similar to <code>AudioPolicyManager.cpp</code> with engine
+functionality and common concepts abstracted away.</td>
+</tr>
+
+<tr>
+<td><code>/common</code></td>
+<td>Defines base classes (e.g., data structures for input/output audio stream
+profiles, audio device descriptors, audio patches, audio ports, etc.). Previously
+defined inside <code>AudioPolicyManager.cpp</code>.</td>
+</tr>
+
+<tr>
+<td><code>/engine</code></td>
+<td><p>Implements the rules that define which device and volumes should be used
+for a given use case. It implements a standard interface with the generic part,
+such as getting the appropriate device for a given playback or capture use case,
+or setting connected devices or external state (i.e., call state or forced
+usage) that can alter the routing decision.</p>
+<p>Available in two versions, customized and default; use build option
+<code>USE_CONFIGURABLE_AUDIO_POLICY</code> to select.</p></td>
+</tr>
+
+<tr>
+<td><code>/engineconfigurable</code></td>
+<td>Policy engine implementation that relies on the parameter framework (see
+below). Configuration is based on the parameter framework, with the policy
+defined by XML files.</td>
+</tr>
+
+<tr>
+<td><code>/enginedefault</code></td>
+<td>Policy engine implementation based on previous Android Audio Policy Manager
+implementations. This is the default and includes hardcoded rules that
+correspond to current Nexus and AOSP implementations.</td>
+</tr>
+
+<tr>
+<td><code>/service</code></td>
+<td>Includes binder interfaces, threading, and locking implementation with an
+interface to the rest of the framework.</td>
+</tr>
+
+</table>
+
+<h2 id=policy_config>Configuration using parameter-framework</h2>
+<p>Android 7.0 reorganizes audio policy code to make it easier to understand and
+maintain while also supporting an audio policy defined entirely by configuration
+files. The reorganization and audio policy design is based on Intel's parameter
+framework, a plugin-based and rule-based framework for handling parameters.</p>
+
+<p>Using the new configurable audio policy enables vendors and OEMs to:</p>
+<ul>
+<li>Describe a system's structure and its parameters in XML.</li>
+<li>Write (in C++) or reuse a backend (plugin) for accessing described
+parameters.</li>
+<li>Define (in XML or in a domain-specific language) conditions/rules upon which
+a given parameter must take a given value.</li>
+</ul>
+
+<p>AOSP includes an example of an audio policy configuration file that uses the parameter-framework at: <code>frameworks/av/services/audiopolicy/engineconfigurable/parameter-framework/example/Settings/PolicyConfigurableDomains.xml</code>. For
+details, refer to Intel documentation on the
+<a href="https://github.com/01org/parameter-framework">parameter-framework</a>
+and
+<a href="http://01org.github.io/parameter-framework/hosting/Android_M_Configurable_Audio_Policy.pdf">Android
+Configurable Audio Policy</a>.</p>
+
+<h2 id=policy_routing_apis>Audio policy routing APIs</h2>
+<p>Android 6.0 introduced a public Enumeration and Selection API that sits on
+top of the audio patch/audio port infrastructure and allows application
+developers to indicate a preference for a specific device output or input for
+connected audio records or tracks.</p>
+<p>In Android 7.0, the Enumeration and Selection API is verified by CTS tests
+and is extended to include routing for native C/C++ (OpenSL ES) audio streams.
+The routing of native streams continues to be done in Java, with the addition of
+an <code>AudioRouting</code> interface that supersedes, combines, and deprecates
+the explicit routing methods that were specific to <code>AudioTrack</code> and
+<code>AudioRecord</code> classes.</p>
+
+<p>For details on the Enumeration and Selection API, refer to
+<a href="https://developer.android.com/ndk/guides/audio/opensl-for-android.html?hl=fi#configuration-interface">Android
+configuration interfaces</a> and <code>OpenSLES_AndroidConfiguration.h</code>.
+For details on audio routing, refer to
+<a href="https://developer.android.com/reference/android/media/AudioRouting.html">AudioRouting</a>.
+</p>
+
+<h2 id=multichannel>Multi-channel support</h2>
+
+<p>If your hardware and driver support multichannel audio via HDMI, you can
+output the audio stream directly to the audio hardware (bypassing the
+AudioFlinger mixer so the stream is not downmixed to two channels). The audio
+HAL must expose whether an output stream profile supports multichannel audio
+capabilities. If the HAL exposes its capabilities, the default policy manager
+allows multichannel playback over HDMI. For implementation details, see
+<code>device/samsung/tuna/audio/audio_hw.c</code>.</p>
+
+<p>To specify that your product contains a multichannel audio output, edit the
+audio policy configuration file to describe the multichannel output for your
+product. The following example from a Galaxy Nexus shows a <em>dynamic</em>
+channel mask, which means the audio policy manager queries the actual channel
+masks supported by the HDMI sink after connection.</p>
+
+<pre>
+audio_hw_modules {
+  primary {
+    outputs {
+      ...
+      hdmi {
+        sampling_rates 44100|48000
+        channel_masks dynamic
+        formats AUDIO_FORMAT_PCM_16_BIT
+        devices AUDIO_DEVICE_OUT_AUX_DIGITAL
+        flags AUDIO_OUTPUT_FLAG_DIRECT
+      }
+      ...
+    }
+    ...
+  }
+  ...
+}
+</pre>
+
+<p>You can also specify a static channel mask such as
+<code>AUDIO_CHANNEL_OUT_5POINT1</code>. AudioFlinger's mixer downmixes the
+content to stereo automatically when sent to an audio device that does not
+support multichannel audio.</p>
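+
+<p>For example, the HDMI output above could instead declare a static 5.1
+channel mask (a sketch; all values other than the channel mask are carried over
+from the dynamic example, and the surrounding module structure is omitted):</p>
+
+<pre>
+hdmi {
+  sampling_rates 44100|48000
+  channel_masks AUDIO_CHANNEL_OUT_5POINT1
+  formats AUDIO_FORMAT_PCM_16_BIT
+  devices AUDIO_DEVICE_OUT_AUX_DIGITAL
+  flags AUDIO_OUTPUT_FLAG_DIRECT
+}
+</pre>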
+
+<h2 id=codecs>Media codecs</h2>
+
+<p>Ensure the audio codecs your hardware and drivers support are properly
+declared for your product. For details, see
+<a href="{@docRoot}devices/media/index.html#expose">Exposing Codecs to the
+Framework</a>.</p>
diff --git a/src/devices/audio/implement-pre-processing.jd b/src/devices/audio/implement-pre-processing.jd
new file mode 100644
index 0000000..ab6cfa9
--- /dev/null
+++ b/src/devices/audio/implement-pre-processing.jd
@@ -0,0 +1,154 @@
+page.title=Configuring Pre-Processing Effects
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>The Android platform provides audio effects on supported devices in the
+<a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">audiofx</a>
+package, which is available for developers to access. For example, the Nexus 10
+supports the following pre-processing effects:</p>
+
+<ul>
+<li>
+<a href="http://developer.android.com/reference/android/media/audiofx/AcousticEchoCanceler.html">Acoustic
+Echo Cancellation</a></li>
+<li>
+<a href="http://developer.android.com/reference/android/media/audiofx/AutomaticGainControl.html">Automatic Gain Control</a></li>
+<li>
+<a href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">Noise
+Suppression</a></li>
+</ul>
+
+<h2 id=audiosources>Pairing with AudioSources</h2>
+<p>Pre-processing effects are paired with the use case mode in which the
+pre-processing is requested. In Android app development, a use case is referred
+to as an <code>AudioSource</code>, and app developers use the
+<code>AudioSource</code> abstraction instead of specifying the actual audio
+hardware device. The Android Audio Policy Manager maps an
+<code>AudioSource</code> to a given capture path configuration (device, gain,
+pre-processing, etc.) according to product-specific rules. The following
+sources are exposed to developers:</p>
+
+<ul>
+<li><code>android.media.MediaRecorder.AudioSource.CAMCORDER</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_COMMUNICATION</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_CALL</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_DOWNLINK</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_UPLINK</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_RECOGNITION</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.MIC</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.DEFAULT</code></li>
+</ul>
+
+<p>The default pre-processing effects applied for each <code>AudioSource</code>
+are specified in the <code>/system/etc/audio_effects.conf</code> file. To
+specify your own default effects for every <code>AudioSource</code>, create a
+<code>/system/vendor/etc/audio_effects.conf</code> file and specify the
+pre-processing effects to turn on. For an example, see the implementation for
+the Nexus 10 in <code>device/samsung/manta/audio_effects.conf</code>.
+AudioEffect instances acquire and release a session when created and destroyed,
+enabling the effects (such as the Loudness Enhancer) to persist throughout the
+duration of the session.</p>
+
+<p class="warning"><strong>Warning:</strong> For the
+<code>VOICE_RECOGNITION</code> use case, do not enable the noise suppression
+pre-processing effect. It should not be turned on by default when recording from
+this audio source, and you should not enable it in your own
+<code>audio_effects.conf</code> file. Turning on the effect by default causes
+the device to fail the
+<a href="{@docRoot}compatibility/index.html">compatibility requirement</a>,
+regardless of whether the effect was enabled by the configuration file or by
+the audio HAL implementation's default behavior.</p>
+
+<p>The following example enables pre-processing for the VoIP
+<code>AudioSource</code> and Camcorder <code>AudioSource</code>. By declaring
+the <code>AudioSource</code> configuration in this manner, the framework
+automatically requests that the audio HAL use those effects.</p>
+
+<p><pre>
+pre_processing {
+  voice_communication {
+    aec {}
+    ns {}
+  }
+  camcorder {
+    agc {}
+  }
+}
+</pre></p>
+
+<h2 id=tuning>Source tuning</h2>
+
+<p><code>AudioSource</code> tuning does not have explicit requirements on audio
+gain or audio processing with the exception of voice recognition
+(<code>VOICE_RECOGNITION</code>). Requirements for voice recognition include:</p>
+
+<ul>
+<li>Flat frequency response (+/- 3dB) from 100Hz to 4kHz</li>
+<li>Close-talk config: 90dB SPL reads RMS of 2500 (16bit samples)</li>
+<li>Level tracks linearly from -18dB to +12dB relative to 90dB SPL</li>
+<li>THD < 1% (90dB SPL in 100 to 4000Hz range)</li>
+<li>Near-ultrasound requirements (for testing, see
+<a href="{@docRoot}compatibility/cts/near-ultrasound.html">Near Ultrasound
+Tests</a>):
+<ul>
+<li>Support for SUPPORT_PROPERTY_MIC_NEAR_ULTRASOUND as defined in section 7.8.3
+of the CDD.</li>
+<li>Support one or both of 44100 or 48000 sampling rates with no band-pass or
+anti-aliasing filters.</li>
+</ul></li>
+<li>Effects/pre-processing must be disabled by default</li>
+</ul>
+
+<p>Examples of tuning different effects for different sources are:</p>
+
+<ul>
+<li>Noise Suppressor
+<ul>
+<li>Tuned for wind noise suppression for <code>CAMCORDER</code></li>
+<li>Tuned for stationary noise suppression for <code>VOICE_COMMUNICATION</code></li>
+</ul>
+</li>
+<li>Automatic Gain Control
+<ul>
+<li>Tuned for close-talk for <code>VOICE_COMMUNICATION</code> and main phone
+mic</li>
+<li>Tuned for far-talk for <code>CAMCORDER</code></li>
+</ul>
+</li>
+</ul>
+
+<h2 id="resources">Resources</h2>
+
+<p>For more information, refer to the following resources:</p>
+
+<ul>
+<li>Android documentation for
+<a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">audiofx
+package</a></li>
+
+<li>Android documentation for
+<a href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">Noise
+Suppression audio effect</a></li>
+
+<li><code>device/samsung/manta/audio_effects.conf</code> file for the Nexus 10</li>
+</ul>
diff --git a/src/devices/audio/implement-shared-library.jd b/src/devices/audio/implement-shared-library.jd
new file mode 100644
index 0000000..f9539a9
--- /dev/null
+++ b/src/devices/audio/implement-shared-library.jd
@@ -0,0 +1,95 @@
+page.title=Configuring a Shared Library
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<p>After creating an
+<a href="{@docRoot}devices/audio/implement-policy.html">audio policy
+configuration</a>, you must package the HAL implementation into a shared library
+and copy it to the appropriate location:</p>
+
+<ol>
+<li>Create a <code>device/<company>/<device>/audio</code>
+directory to contain your library's source files.</li>
+<li>Create an <code>Android.mk</code> file to build the shared library. Ensure
+the Makefile contains the following line:
+<br>
+<pre>
+LOCAL_MODULE := audio.primary.<device>
+</pre>
+<br>
+<p>Your library must be named <code>audio.primary.<device>.so</code>
+so Android can correctly load the library. The <code>primary</code> portion of
+this filename indicates that this shared library is for the primary audio
+hardware located on the device. The module names
+<code>audio.a2dp.<device></code> and
+<code>audio.usb.<device></code> are also available for Bluetooth and
+USB audio interfaces. Here is an example of an <code>Android.mk</code> from the
+Galaxy Nexus audio hardware:</p>
+<p><pre>
+LOCAL_PATH := $(call my-dir)
+
+include $(CLEAR_VARS)
+
+LOCAL_MODULE := audio.primary.tuna
+LOCAL_MODULE_RELATIVE_PATH := hw
+LOCAL_SRC_FILES := audio_hw.c ril_interface.c
+LOCAL_C_INCLUDES += \
+ external/tinyalsa/include \
+ $(call include-path-for, audio-utils) \
+ $(call include-path-for, audio-effects)
+LOCAL_SHARED_LIBRARIES := liblog libcutils libtinyalsa libaudioutils libdl
+LOCAL_MODULE_TAGS := optional
+
+include $(BUILD_SHARED_LIBRARY)
+</pre></p>
+</li>
+<br>
+<li>If your product supports low latency audio as specified by the Android CDD,
+copy the corresponding XML feature file into your product. For example, in your
+product's <code>device/<company>/<device>/device.mk</code>
+Makefile:
+<p><pre>
+PRODUCT_COPY_FILES := ...
+
+PRODUCT_COPY_FILES += \
+frameworks/native/data/etc/android.hardware.audio.low_latency.xml:system/etc/permissions/android.hardware.audio.low_latency.xml \
+</pre></p>
+</li>
+<br>
+<li>Copy the audio policy configuration file you created earlier to the
+<code>system/etc/</code> directory in your product's
+<code>device/<company>/<device>/device.mk</code> Makefile.
+For example:
+<p><pre>
+PRODUCT_COPY_FILES += \
+ device/samsung/tuna/audio/audio_policy.conf:system/etc/audio_policy.conf
+</pre></p>
+</li>
+<br>
+<li>Declare the shared modules of your audio HAL that are required by your
+product in the product's
+<code>device/<company>/<device>/device.mk</code> Makefile.
+For example, the Galaxy Nexus requires the primary and Bluetooth audio HAL
+modules:
+<pre>
+PRODUCT_PACKAGES += \
+ audio.primary.tuna \
+ audio.a2dp.default
+</pre>
+</li>
+</ol>
diff --git a/src/devices/audio/implement.jd b/src/devices/audio/implement.jd
index 1e81136..31e795b 100644
--- a/src/devices/audio/implement.jd
+++ b/src/devices/audio/implement.jd
@@ -24,279 +24,46 @@
</div>
</div>
-<p>This page explains how to implement the audio Hardware Abstraction Layer (HAL) and configure the
-shared library.</p>
+<p>This section explains how to implement the audio Hardware Abstraction Layer
+(HAL), provides details about configuring an audio policy (file formats, code
+organization, pre-processing effects), and describes how to configure the shared
+library (creating the <code>Android.mk</code> file).</p>
-<h2 id="implementing">Implementing the HAL</h2>
+<h2 id=implementing>Implementing the audio HAL</h2>
-<p>The audio HAL is composed of three different interfaces that you must implement:</p>
+<p>The audio HAL is composed of the following interfaces:</p>
<ul>
-<li><code>hardware/libhardware/include/hardware/audio.h</code> - represents the main functions
-of an audio device.</li>
-<li><code>hardware/libhardware/include/hardware/audio_policy.h</code> - represents the audio policy
-manager, which handles things like audio routing and volume control policies.</li>
-<li><code>hardware/libhardware/include/hardware/audio_effect.h</code> - represents effects that can
-be applied to audio such as downmixing, echo, or noise suppression.</li>
+<li><code>hardware/libhardware/include/hardware/audio.h</code>. Represents the
+main functions of an audio device.</li>
+<li><code>hardware/libhardware/include/hardware/audio_effect.h</code>.
+Represents effects that can be applied to audio such as downmixing, echo, or
+noise suppression.</li>
+</ul>
+
+<p>You must implement all interfaces.</p>
+
+<h2 id=headers>Audio header files</h2>
+<p>For a reference of the properties you can define, refer to the audio header
+files:</p>
+
+<ul>
+<li>In Android 6.0 and higher, see
+<code>system/media/audio/include/system/audio.h</code>.</li>
+<li>In Android 5.1 and lower, see
+<code>system/core/include/system/audio.h</code>.</li>
</ul>
<p>For an example, refer to the implementation for the Galaxy Nexus at
<code>device/samsung/tuna/audio</code>.</p>
-<p>In addition to implementing the HAL, you need to create a
-<code>device/<company_name>/<device_name>/audio/audio_policy.conf</code> file that
-declares the audio devices present on your product. For an example, see the file for the Galaxy
-Nexus audio hardware in <code>device/samsung/tuna/audio/audio_policy.conf</code>. Also, see the
-audio header files for a reference of the properties that you can define.</p>
+<h2 id=next-steps>Next steps</h2>
-<p>In the Android M release and later, the paths are:<br />
-<code>system/media/audio/include/system/audio.h</code><br />
-<code>system/media/audio/include/system/audio_policy.h</code></p>
-
-<p>In Android 5.1 and earlier, the paths are:<br />
-<code>system/core/include/system/audio.h</code><br />
-<code>system/core/include/system/audio_policy.h</code></p>
-
-<h3 id="multichannel">Multi-channel support</h3>
-
-<p>If your hardware and driver supports multichannel audio via HDMI, you can output the audio
-stream directly to the audio hardware. This bypasses the AudioFlinger mixer so it doesn't get
-downmixed to two channels.</p>
-
-<p>The audio HAL must expose whether an output stream profile supports multichannel audio
-capabilities. If the HAL exposes its capabilities, the default policy manager allows multichannel
-playback over HDMI.</p>
-
-<p>For more implementation details, see the <code>device/samsung/tuna/audio/audio_hw.c</code> in
-the Android 4.1 release.</p>
-
-<p>To specify that your product contains a multichannel audio output, edit the
-<code>audio_policy.conf</code> file to describe the multichannel output for your product. The
-following is an example from the Galaxy Nexus that shows a "dynamic" channel mask, which means the
-audio policy manager queries the actual channel masks supported by the HDMI sink after connection.
-You can also specify a static channel mask like <code>AUDIO_CHANNEL_OUT_5POINT1</code>.</p>
-
-<pre>
-audio_hw_modules {
- primary {
- outputs {
- ...
- hdmi {
- sampling_rates 44100|48000
- channel_masks dynamic
- formats AUDIO_FORMAT_PCM_16_BIT
- devices AUDIO_DEVICE_OUT_AUX_DIGITAL
- flags AUDIO_OUTPUT_FLAG_DIRECT
- }
- ...
- }
- ...
- }
- ...
-}
-</pre>
-
-<p>AudioFlinger's mixer downmixes the content to stereo automatically when sent to an audio device
-that does not support multichannel audio.</p>
-
-<h3 id="codecs">Media codecs</h3>
-
-<p>Ensure the audio codecs your hardware and drivers support are properly declared for your
-product. For details on declaring supported codecs, see <a href="{@docRoot}devices/media.html#expose">Exposing Codecs
-to the Framework</a>.</p>
-
-<h2 id="configuring">Configuring the shared library</h2>
-
-<p>You need to package the HAL implementation into a shared library and copy it to the appropriate
-location by creating an <code>Android.mk</code> file:</p>
-
-<ol>
-<li>Create a <code>device/<company_name>/<device_name>/audio</code> directory to
-contain your library's source files.</li>
-<li>Create an <code>Android.mk</code> file to build the shared library. Ensure that the Makefile
-contains the following line:
-<pre>
-LOCAL_MODULE := audio.primary.<device_name>
-</pre>
-
-<p>Notice your library must be named <code>audio.primary.<device_name>.so</code> so
-that Android can correctly load the library. The "<code>primary</code>" portion of this filename
-indicates that this shared library is for the primary audio hardware located on the device. The
-module names <code>audio.a2dp.<device_name></code> and
-<code>audio.usb.<device_name></code> are also available for bluetooth and USB audio
-interfaces. Here is an example of an <code>Android.mk</code> from the Galaxy Nexus audio hardware:
-</p>
-
-<pre>
-LOCAL_PATH := $(call my-dir)
-
-include $(CLEAR_VARS)
-
-LOCAL_MODULE := audio.primary.tuna
-LOCAL_MODULE_RELATIVE_PATH := hw
-LOCAL_SRC_FILES := audio_hw.c ril_interface.c
-LOCAL_C_INCLUDES += \
- external/tinyalsa/include \
- $(call include-path-for, audio-utils) \
- $(call include-path-for, audio-effects)
-LOCAL_SHARED_LIBRARIES := liblog libcutils libtinyalsa libaudioutils libdl
-LOCAL_MODULE_TAGS := optional
-
-include $(BUILD_SHARED_LIBRARY)
-</pre>
-
-</li>
-
-<li>If your product supports low latency audio as specified by the Android CDD, copy the
-corresponding XML feature file into your product. For example, in your product's
-<code>device/<company_name>/<device_name>/device.mk</code> Makefile:
-
-<pre>
-PRODUCT_COPY_FILES := ...
-
-PRODUCT_COPY_FILES += \
-frameworks/native/data/etc/android.hardware.audio.low_latency.xml:system/etc/permissions/android.hardware.audio.low_latency.xml \
-</pre>
-
-</li>
-
-<li>Copy the <code>audio_policy.conf</code> file that you created earlier to the
-<code>system/etc/</code> directory in your product's
-<code>device/<company_name>/<device_name>/device.mk</code> Makefile. For example:
-
-<pre>
-PRODUCT_COPY_FILES += \
- device/samsung/tuna/audio/audio_policy.conf:system/etc/audio_policy.conf
-</pre>
-
-</li>
-
-<li>Declare the shared modules of your audio HAL that are required by your product in the
-product's <code>device/<company_name>/<device_name>/device.mk</code> Makefile. For
-example, the Galaxy Nexus requires the primary and bluetooth audio HAL modules:
-
-<pre>
-PRODUCT_PACKAGES += \
- audio.primary.tuna \
- audio.a2dp.default
-</pre>
-
-</li>
-</ol>
-
-<h2 id="preprocessing">Audio pre-processing effects</h2>
-
-<p>The Android platform provides audio effects on supported devices in the
-<a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">audiofx
-</a> package, which is available for developers to access. For example, on the Nexus 10, the
-following pre-processing effects are supported:</p>
-
-<ul>
-<li>
-<a href="http://developer.android.com/reference/android/media/audiofx/AcousticEchoCanceler.html">
-Acoustic Echo Cancellation</a></li>
-<li>
-<a href="http://developer.android.com/reference/android/media/audiofx/AutomaticGainControl.html">
-Automatic Gain Control</a></li>
-<li>
-<a href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">
-Noise Suppression</a></li>
-</ul>
-
-
-<p>Pre-processing effects are paired with the use case mode in which the pre-processing is requested
-. In Android app development, a use case is referred to as an <code>AudioSource</code>; and app
-developers request to use the <code>AudioSource</code> abstraction instead of the actual audio
-hardware device. The Android Audio Policy Manager maps an <code>AudioSource</code> to the actual
-hardware with <code>AudioPolicyManagerBase::getDeviceForInputSource(int inputSource)</code>. The
-following sources are exposed to developers:</p>
-
-<ul>
-<li><code>android.media.MediaRecorder.AudioSource.CAMCORDER</code></li>
-<li><code>android.media.MediaRecorder.AudioSource.VOICE_COMMUNICATION</code></li>
-<li><code>android.media.MediaRecorder.AudioSource.VOICE_CALL</code></li>
-<li><code>android.media.MediaRecorder.AudioSource.VOICE_DOWNLINK</code></li>
-<li><code>android.media.MediaRecorder.AudioSource.VOICE_UPLINK</code></li>
-<li><code>android.media.MediaRecorder.AudioSource.VOICE_RECOGNITION</code></li>
-<li><code>android.media.MediaRecorder.AudioSource.MIC</code></li>
-<li><code>android.media.MediaRecorder.AudioSource.DEFAULT</code></li> </ul>
-
-<p>The default pre-processing effects applied for each <code>AudioSource</code> are specified in
-the <code>/system/etc/audio_effects.conf</code> file. To specify your own default effects for every
-<code>AudioSource</code>, create a <code>/system/vendor/etc/audio_effects.conf</code> file and
-specify the pre-processing effects to turn on. For an example, see the implementation for the Nexus
-10 in <code>device/samsung/manta/audio_effects.conf</code>. AudioEffect instances acquire and
-release a session when created and destroyed, enabling the effects (such as the Loudness Enhancer)
-to persist throughout the duration of the session. </p>
-
-<p class="warning"><strong>Warning:</strong> For the <code>VOICE_RECOGNITION</code> use case, do
-not enable the noise suppression pre-processing effect. It should not be turned on by default when
-recording from this audio source, and you should not enable it in your own audio_effects.conf file.
-Turning on the effect by default will cause the device to fail the <a
-href="{@docRoot}compatibility/index.html"> compatibility requirement</a> regardless of whether this was on by
-default due to configuration file , or the audio HAL implementation's default behavior.</p>
-
-<p>The following example enables pre-processing for the VoIP <code>AudioSource</code> and Camcorder
-<code>AudioSource</code>. By declaring the <code>AudioSource</code> configuration in this manner,
-the framework will automatically request from the audio HAL the use of those effects.</p>
-
-<pre>
-pre_processing {
- voice_communication {
- aec {}
- ns {}
- }
- camcorder {
- agc {}
- }
-}
-</pre>
-
-<h3 id="tuning">Source tuning</h3>
-
-<p>For <code>AudioSource</code> tuning, there are no explicit requirements on audio gain or audio
-processing with the exception of voice recognition (<code>VOICE_RECOGNITION</code>).</p>
-
-<p>The requirements for voice recognition are:</p>
-
-<ul>
-<li>"flat" frequency response (+/- 3dB) from 100Hz to 4kHz</li>
-<li>close-talk config: 90dB SPL reads RMS of 2500 (16bit samples)</li>
-<li>level tracks linearly from -18dB to +12dB relative to 90dB SPL</li>
-<li>THD < 1% (90dB SPL in 100 to 4000Hz range)</li>
-<li>8kHz sampling rate (anti-aliasing)</li>
-<li>Effects/pre-processing must be disabled by default</li>
-</ul>
-
-<p>Examples of tuning different effects for different sources are:</p>
-
-<ul>
-<li>Noise Suppressor
-<ul>
-<li>Tuned for wind noise suppressor for <code>CAMCORDER</code></li>
-<li>Tuned for stationary noise suppressor for <code>VOICE_COMMUNICATION</code></li>
-</ul>
-</li>
-<li>Automatic Gain Control
-<ul>
-<li>Tuned for close-talk for <code>VOICE_COMMUNICATION</code> and main phone mic</li>
-<li>Tuned for far-talk for <code>CAMCORDER</code></li>
-</ul>
-</li>
-</ul>
-
-<h3 id="more">More information</h3>
-
-<p>For more information, see:</p>
-
-<ul>
-<li>Android documentation for
-<a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">
-audiofx package</a></li>
-
-<li>Android documentation for
-<a href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">
-Noise Suppression audio effect</a></li>
-
-<li><code>device/samsung/manta/audio_effects.conf</code> file for the Nexus 10</li>
-</ul>
+<p>In addition to implementing the audio HAL, you must also create an
+<a href="{@docRoot}devices/audio/implement-policy.html">audio policy
+configuration file</a> that describes your audio topology and package the HAL
+implementation into a
+<a href="{@docRoot}devices/audio/implement-shared-library.html">shared
+library</a>. You can also configure
+<a href="{@docRoot}devices/audio/implement-pre-processing.html">pre-processing
+effects</a> such as automatic gain control and noise suppression.</p>
diff --git a/src/devices/audio/testing_circuit.jd b/src/devices/audio/testing_circuit.jd
index 12a5bcb..1881e0c 100644
--- a/src/devices/audio/testing_circuit.jd
+++ b/src/devices/audio/testing_circuit.jd
@@ -89,6 +89,6 @@
<p>
This <a href="http://www.youtube.com/watch?v=f95S2IILBJY">Youtube video</a>
-shows the the breadboard version testing circuit in operation.
+shows the breadboard version testing circuit in operation.
Skip ahead to 1:00 to see the circuit.
</p>
diff --git a/src/devices/automotive.jd b/src/devices/automotive.jd
new file mode 100644
index 0000000..cfcccc9
--- /dev/null
+++ b/src/devices/automotive.jd
@@ -0,0 +1,293 @@
+page.title=Automotive
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<img style="float: right; margin: 0px 15px 15px 15px;"
+src="images/ape_fwk_hal_vehicle.png" alt="Android vehicle HAL icon"/>
+
+<p>Many car subsystems interconnect with each other and the in-vehicle
+infotainment (IVI) system via various bus topologies. The exact bus type and
+protocols vary widely between manufacturers (and even between different vehicle
+models of the same brand); examples include Controller Area Network (CAN) bus,
+Local Interconnect Network (LIN) bus, Media Oriented Systems Transport (MOST),
+as well as automotive-grade Ethernet and TCP/IP networks such as BroadR-Reach.
+</p>
+<p>Android Automotive has a hardware abstraction layer (HAL) that provides a
+consistent interface to the Android framework regardless of physical transport
+layer. This vehicle HAL is the interface for developing Android Automotive
+implementations.</p>
+<p>System integrators can implement a vehicle HAL module by connecting
+function-specific platform HAL interfaces (e.g. HVAC) with technology-specific
+network interfaces (e.g. CAN bus). Typical implementations may include a
+dedicated Microcontroller Unit (MCU) running a proprietary real-time operating
+system (RTOS) for CAN bus access or similar, which may be connected via a serial
+link to the CPU running Android Automotive. Instead of a dedicated MCU, it may
+also be possible to implement the bus access as a virtualized CPU. It is up to
+each partner to choose the architecture suitable for the hardware as long as the
+implementation fulfills the interface requirements for the vehicle HAL.</p>
+
+<h2 id=arch>Architecture</h2>
+<p>The vehicle HAL is the interface definition between the car and the vehicle
+network service:</p>
+
+<img src="images/vehicle_hal_arch.png" alt="Android vehicle HAL architecture">
+<p class="img-caption"><strong>Figure 1</strong>. Vehicle HAL and Android
+automotive architecture</p>
+
+<ul>
+<li><strong>Car API</strong>. Contains the APIs such as CarHvacManager,
+CarSensorManager, and CarCameraManager. For details on all supported APIs,
+refer to <code>/platform/packages/services/Car/car-lib</code>.</li>
+<li><strong>CarService</strong>. Located at
+<code>/platform/packages/services/Car/</code>.</li>
+<li><strong>VehicleNetworkService</strong>. Controls the vehicle HAL with
+built-in security. Access is restricted to system components only (non-system
+components such as third-party apps should use the car API instead). OEMs can
+control access using <code>vns_policy.xml</code> and
+<code>vendor_vns_policy.xml</code>.
+Located at <code>/platform/packages/services/Car/vehicle_network_service/</code>;
+for libraries to access the vehicle network, refer to
+<code>/platform/packages/services/Car/libvehiclenetwork/</code>.</li>
+<li><strong>Vehicle HAL</strong>. Interface that defines the properties OEMs can
+implement and contains property metadata (for example, whether the property is
+an int and which change modes are allowed). Located at
+<code>hardware/libhardware/include/hardware/vehicle.h</code>. For a basic
+reference implementation, refer to
+<code>hardware/libhardware/modules/vehicle/</code>.</li>
+</ul>
+
+<h2 id=prop>Vehicle properties</h2>
+<p>The vehicle HAL interface is based on accessing (reading, writing,
+subscribing to) a property, which is an abstraction for a specific function.
+Properties can be read-only, write-only (used to pass information to the
+vehicle HAL level), or read-write. Support for most properties is optional.</p>
+<p>Each property is uniquely identified by an int32 key and has a predefined
+type (<code>value_type</code>):</p>
+
+<ul>
+<li><code>INT32</code> (and array), <code>INT64</code>, <code>BOOLEAN</code>,
+<code>FLOAT</code> (and array), string, and bytes.</li>
+<li>A zoned type has a zone in addition to the value.</li>
+</ul>
+
+<h3 id=zone_type>Zone types</h3>
+<p>The vehicle HAL defines three zone types:</p>
+<ul>
+<li><code>vehicle_zone</code>: Zone based on rows.</li>
+<li><code>vehicle_seat</code>: Zone based on seats.</li>
+<li><code>vehicle_window</code>: Zone based on windows.</li>
+</ul>
+<p>Each zoned property should use a pre-defined zone type. If necessary, you
+can use a custom zone type for each property (for details, see
+<a href=#prop_custom>Handling custom properties</a>).</p>
+
+<h3 id=prop_config>Configuring a property</h3>
+<p>Use <code>vehicle_prop_config_t</code> to provide configuration information
+for each property. Information includes:</p>
+<ul>
+<li><code>access</code> (r, w, rw)</li>
+<li><code>change_mode</code> (represents how property is monitored: on change vs
+continuous)</li>
+<li><code>min_value</code> (int32, float, int64), <code>max_value</code> (int32,
+float, int64)</li>
+<li><code>min_sample_rate</code>, <code>max_sample_rate</code></li>
+<li><code>permission_model</code></li>
+<li><code>prop</code> (Property ID, int)</li>
+<li><code>value_type</code></li>
+<li><code>zone_flags</code> (represents supported zones as bit flags)</li>
+</ul>
+<p>In addition, some properties have specific configuration flags to represent
+capability.</p>
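The configuration fields listed above can be pictured as a struct. The following is a simplified stand-in whose field names mirror the list; it is not the exact layout of <code>vehicle_prop_config_t</code>, whose authoritative definition is in <code>vehicle.h</code>:

```c
#include <stdint.h>

/* Simplified stand-in for vehicle_prop_config_t; the authoritative layout is
 * in hardware/libhardware/include/hardware/vehicle.h. */
typedef struct {
    int32_t prop;             /* property ID */
    int32_t access;           /* r, w, or rw */
    int32_t change_mode;      /* on-change vs. continuous monitoring */
    int32_t value_type;
    float   min_value;
    float   max_value;
    float   min_sample_rate;  /* Hz, for continuous properties */
    float   max_sample_rate;
    int32_t permission_model;
    int32_t zone_flags;       /* supported zones as bit flags */
} prop_config_sketch_t;
```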
+
+<h2 id=interfaces>HAL interfaces</h2>
+<p>The vehicle HAL uses the following interfaces:</p>
+<ul>
+<li><code>vehicle_prop_config_t const *(*list_properties)(..., int*
+num_properties)</code>. Lists the configuration of all properties supported by
+the vehicle HAL. Only supported properties are used by the vehicle network
+service.</li>
+<li><code>(*get)(..., vehicle_prop_value_t *data)</code>. Reads the current
+value of the property. For a zoned property, each zone may have a different
+value.</li>
+<li><code>(*set)(..., const vehicle_prop_value_t *data)</code>. Writes a value
+to the property. The result of the write is defined per property.</li>
+<li><code>(*subscribe)(..., int32_t prop, float sample_rate, int32_t
+zones)</code>.<ul>
+<li>Starts monitoring the property value's changes. For a zoned property, the
+subscription applies to the requested zones. Zones = 0 is used to request all
+supported zones.</li>
+<li>The vehicle HAL should invoke a separate callback when the property's value
+changes (on-change mode) or at a constant interval (continuous mode).</li>
+</ul></li>
+<li><code>(*release_memory_from_get)(struct vehicle_hw_device* device,
+vehicle_prop_value_t *data)</code>. Releases memory allocated by a get call.
+</li>
+</ul>
+
+<p>The vehicle HAL uses the following callback interfaces:</p>
+<ul>
+<li><code>(*vehicle_event_callback_fn)(const vehicle_prop_value_t
+*event_data)</code>. Notifies of a vehicle property's value change. Should be
+invoked only for subscribed properties.</li>
+<li><code>(*vehicle_error_callback_fn)(int32_t error_code, int32_t property,
+int32_t operation)</code>. Returns a global vehicle HAL-level error or an error
+for a specific property. A global error causes a HAL restart, which can lead to
+restarting other components, including applications.</li>
+</ul>
+
+<h2 id=zone_prop>Handling zone properties</h2>
+<p>A zoned property is equivalent to a collection of multiple properties where
+each sub-property is accessible by a specified zone value.</p>
+<ul>
+<li>A <code>get</code> call for a zoned property always includes the zone in
+the request, so only the current value for the requested zone should be
+returned.</li>
+<li>A <code>set</code> call for a zoned property always includes the zone in
+the request, so only the requested zone should be changed.</li>
+<li>A <code>subscribe</code> call includes flags of all subscribed zones.
+Events from unsubscribed zones should not be reported.</li>
+</ul>
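The subscription rule above (zones = 0 requests all supported zones, and events from unsubscribed zones are filtered out) can be sketched as a bitmask check. The function name is illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative zone filter: zones are bit flags, and a subscription with
 * zones == 0 means all supported zones were requested. */
static bool zone_event_should_report(int32_t subscribed_zones,
                                     int32_t event_zone) {
    if (subscribed_zones == 0)
        return true;                          /* subscribed to all zones */
    return (subscribed_zones & event_zone) != 0;
}
```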
+
+<h3 id=get>Get calls</h3>
+<p>During initialization, the value for a property may not be available yet
+because the matching vehicle network message has not been received. In such
+cases, the <code>get</code> call should return <code>-EAGAIN</code>. Some
+properties (such as HVAC) have a separate on/off power property. Calling
+<code>get</code> for such a property (when powered off) should return a special
+value (<code>VEHICLE_INT_OUT_OF_RANGE_OFF/VEHICLE_FLOAT_OUT_OF_RANGE_OFF</code>)
+rather than returning an error.</p>
+<p>In addition, some properties (such as HVAC temperature) can have a value
+indicating that they are in max power mode rather than at a specific
+temperature. In such cases, use special values to represent that state:</p>
+<ul>
+<li><code>VEHICLE_INT_OUT_OF_RANGE_MAX/MIN</code></li>
+<li><code>VEHICLE_FLOAT_OUT_OF_RANGE_MAX/MIN</code></li>
+</ul>
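The get-call behavior above (return <code>-EAGAIN</code> before initialization, and a special out-of-range value instead of an error when the property is powered off) can be sketched as follows. The state variables and the sentinel are placeholders; the real sentinel values are defined in <code>vehicle.h</code>:

```c
#include <errno.h>
#include <float.h>

/* Placeholder sentinel; a real implementation uses the
 * VEHICLE_FLOAT_OUT_OF_RANGE_OFF value defined in vehicle.h. */
#define SKETCH_FLOAT_OUT_OF_RANGE_OFF (-FLT_MAX)

static int g_hvac_initialized = 0; /* set once the vehicle network message arrives */
static int g_hvac_powered_on = 1;
static float g_hvac_temp_c = 21.0f;

static int get_hvac_temperature(float *out) {
    if (!g_hvac_initialized)
        return -EAGAIN;                        /* value not yet available */
    if (!g_hvac_powered_on) {
        *out = SKETCH_FLOAT_OUT_OF_RANGE_OFF;  /* special value, not an error */
        return 0;
    }
    *out = g_hvac_temp_c;
    return 0;
}
```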
+
+<p>Example: get HVAC Temperature</p>
+<img src="images/vehicle_hvac_get.png" alt="Vehicle HAL get HVAC example">
+<p class="img-caption"><strong>Figure 2</strong>. Get HVAC temperature (CD =
+CarService, VNS = VehicleNetworkService, VHAL = Vehicle HAL)</p>
+
+<h3 id=set>Set calls</h3>
+<p>A <code>set</code> call is an asynchronous operation involving event
+notification after a requested change is made. In a typical operation, a
+<code>set</code> call leads to making a change request across vehicle network.
+When the change is performed by the electronic control unit (ECU) owning the
+property, the updated value is returned through vehicle network and the vehicle
+HAL sends an updated value as an event to vehicle network service (VNS).</p>
+<p>Some <code>set</code> calls require initial data to be ready, but during
+initialization such data may not be available yet. In such cases, the
+<code>set</code> call should return <code>-EAGAIN</code>. Some properties with
+a separate power on/off property should return <code>-ESHUTDOWN</code> when the
+property is powered off and the set cannot be performed.</p>
+<p>Until a <code>set</code> is made effective, <code>get</code> does not
+necessarily return the same value as what was set. The exception is a property
+with a change mode of <code>VEHICLE_PROP_CHANGE_MODE_ON_SET</code>. This
+property notifies a change only when it is set by an external component outside
+Android (for example, clock properties such as
+<code>VEHICLE_PROPERTY_UNIX_TIME</code>).</p>
+
+<p>Example: set HVAC Temperature</p>
+<img src="images/vehicle_hvac_set.png" alt="Vehicle HAL set HVAC example">
+<p class="img-caption"><strong>Figure 3</strong>. Set HVAC temperature (CD =
+CarService, VNS = VehicleNetworkService, VHAL = Vehicle HAL)</p>
+
+<h2 id=prop_custom>Handling custom properties</h2>
+<p>To support partner-specific needs, the vehicle HAL allows custom properties
+that are restricted to system apps. Use the following guidelines when working
+with custom properties:</p>
+<ul>
+<li>Keys should be in the [<code>VEHICLE_PROPERTY_CUSTOM_START,
+VEHICLE_PROPERTY_CUSTOM_END</code>] range. Other ranges are reserved for future
+extension; using them can cause conflicts in future Android releases.</li>
+<li>Use only a defined <code>value_type</code>. The BYTES type allows passing
+raw data, which is sufficient in most cases. Frequently sending large data
+through custom properties can slow down overall vehicle network access, so be
+careful when adding a large payload.</li>
+<li>Add access policy into <code>vendor_vns_policy.xml</code> (otherwise, all
+access will be rejected).</li>
+<li>Access via <code>VendorExtensionManager</code> (for Java components) or
+via Vehicle Network Service API (for native). Do not modify other car APIs as it
+can lead to compatibility issues in the future.</li>
+</ul>
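A vehicle HAL implementation can guard its custom keys with a range check like the following. The constant values here are placeholders; a real implementation uses the <code>VEHICLE_PROPERTY_CUSTOM_START</code>/<code>VEHICLE_PROPERTY_CUSTOM_END</code> constants from <code>vehicle.h</code>:

```c
#include <stdbool.h>
#include <stdint.h>

/* Placeholder bounds; substitute the VEHICLE_PROPERTY_CUSTOM_START/END
 * constants from vehicle.h in a real implementation. */
#define SKETCH_CUSTOM_START 0x70000000
#define SKETCH_CUSTOM_END   0x73ffffff

static bool is_custom_property_key(int32_t key) {
    return key >= SKETCH_CUSTOM_START && key <= SKETCH_CUSTOM_END;
}
```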
+
+<h2 id=prop_hvac>Handling HVAC properties</h2>
+<p>You can use the vehicle HAL to control HVAC by setting HVAC-related
+properties. Most HVAC properties are zoned properties, but a few are non-zoned
+(global) properties. Example properties defined include:</p>
+<ul>
+<li><code>VEHICLE_PROPERTY_HVAC_TEMPERATURE_SET</code> (set temperature per
+zone).</li>
+<li><code>VEHICLE_PROPERTY_HVAC_RECIRC_ON</code> (control recirculation per
+zone).</li>
+</ul>
+<p>For a full list of HVAC properties, search for
+<code>VEHICLE_PROPERTY_HVAC_*</code> in <code>vehicle.h</code>.</p>
+
+<h2 id=prop_sensor>Handling sensor properties</h2>
+<p>Vehicle HAL sensor properties represent real sensor data or policy
+information such as driving status. Some sensor information (such as driving
+status and day/night mode) is accessible by any app without restriction as the
+data is mandatory to build a safe vehicle application. Other sensor information
+(such as vehicle speed) is more sensitive and requires specific permissions that
+users can manage.</p>
+<p>Supported sensor properties include:</p>
+<ul>
+<li><code>DRIVING_STATUS</code> (should support). Represents allowed operations
+in the current driving state. This information is used to block unsafe
+applications while driving.</li>
+<li><code>NIGHT_MODE</code> (should support). Determines day/night mode of
+display.</li>
+<li><code>GEAR_SELECTION/CURRENT_GEAR</code>. Gear selected by driver vs.
+actual gear.</li>
+<li><code>VEHICLE_SPEED</code>. Vehicle speed. Protected with permission.</li>
+<li><code>ODOMETER</code>. Current odometer reading. Protected with permission.
+</li>
+<li><code>FUEL_LEVEL</code>. Current fuel level in %.</li>
+<li><code>FUEL_LEVEL_LOW</code>. Fuel level is low or not (boolean).</li>
+</ul>
+
+<h2 id=security>Security</h2>
+<p>The vehicle HAL supports three levels of security for accessing data:</p>
+<ul>
+<li>System only (controlled by <code>vns_policy.xml</code>)</li>
+<li>Accessible to app with permission (through car service)</li>
+<li>Accessible without permission (through car service)</li>
+</ul>
+<p>Direct access to vehicle properties is allowed only to selected system
+components, with the vehicle network service acting as the gatekeeper. Most
+applications go through additional gatekeeping by the car service (for example,
+only system applications can control HVAC as it requires a system permission
+granted only to system apps).</p>
+
+<h2 id=validation>Validation</h2>
+<p>AOSP includes the following testing resources for use in development:</p>
+<ul>
+<li><code>hardware/libhardware/tests/vehicle/vehicle-hal-tool.c</code>.
+Command-line native tool that loads the vehicle HAL and performs simple
+operations. Useful for getting the system up and running in the early stages of
+development.</li>
+<li><code>packages/services/Car/tests/carservice_test/</code>. Contains car
+service testing with mocked vehicle HAL properties. For each property, expected
+behavior is implemented in the test. This can be a good starting point to
+understand expected behavior.</li>
+<li><code>hardware/libhardware/modules/vehicle/</code>. A basic reference
+implementation.</li>
+</ul>
diff --git a/src/devices/camera/camera3.jd b/src/devices/camera/camera3.jd
index 9811a98..a3fa938 100644
--- a/src/devices/camera/camera3.jd
+++ b/src/devices/camera/camera3.jd
@@ -1,4 +1,4 @@
-page.title=Camera HAL v3 overview
+page.title=Camera HAL3
@jd:body
<!--
@@ -25,39 +25,23 @@
</div>
<p>
-Android's camera Hardware Abstraction Layer (HAL) connects the higher level
-camera framework APIs in
-<a
-href="http://developer.android.com/reference/android/hardware/Camera.html">android.hardware.Camera</a>
-to your underlying camera driver and hardware. The latest version of Android
-introduces a new, underlying implementation of the camera stack. If you have
-previously developed a camera HAL module and driver for other versions of
-Android, be aware that there are significant changes in the camera pipeline.</p>
-<p>Version 1 of the camera HAL is still supported for future releases of Android
- because many devices still rely on it. Implementing both HALs is also supported
- by the Android camera service, which is useful when you want to support a less
- capable front-facing camera with version 1 of the HAL and a more advanced
- back-facing camera with version 3 of the HAL. Version 2 was a stepping stone to
- version 3 and is not supported.</p>
-
-<p>
-There is only one camera HAL module (with its own version number, currently 1, 2,
-or 2.1), which lists multiple independent camera devices that each have
-their own version. Camera module v2 or newer is required to support devices v2 or newer, and such
-camera modules can have a mix of camera device versions. This is what we mean
-when we say Android supports implementing both HALs.
-</p>
+Android's camera Hardware Abstraction Layer (HAL) connects the higher level
+camera framework APIs in
+<a href="http://developer.android.com/reference/android/hardware/Camera.html">android.hardware.Camera</a>
+to your underlying camera driver and hardware. Android 5.0 introduced a new,
+underlying implementation of the camera stack. If you have previously developed
+a camera HAL module and driver for older versions of Android, be aware of
+significant changes in the camera pipeline.</p>
<p class="note"><strong>Note:</strong> The new camera HAL is in active
-development and can change at any time. This document describes at a high level
-the design of the camera subsystem and omits many details. See <a
-href="versioning.html">Camera version support</a> for our plans.</p>
+development and can change at any time. This document describes the high-level
+design of the camera subsystem; for details, see
+<a href="{@docRoot}devices/camera/versioning.html">Camera Version Support</a>.</p>
-<h2 id="overview">Overview</h2>
+<h2 id="overview">Camera HAL1 overview</h2>
-<p>
-Version 1 of the camera subsystem was designed as a black box with high-level
-controls. Roughly speaking, the old subsystem has three operating modes:</p>
+<p>Version 1 of the camera subsystem was designed as a black box with high-level
+controls and the following three operating modes:</p>
<ul>
<li>Preview</li>
@@ -65,46 +49,59 @@
<li>Still Capture</li>
</ul>
-<p>Each mode has slightly different and overlapping capabilities. This made it hard
-to implement new types of features, such as burst mode, since it would fall
+<p>Each mode has slightly different and overlapping capabilities. This made it
+hard to implement new types of features, such as burst mode, since it would fall
between two of these modes.</p>
+
<img src="images/camera_block.png" alt="Camera block diagram" id="figure1" />
-<p class="img-caption">
- <strong>Figure 1.</strong> Camera components
-</p>
+<p class="img-caption"><strong>Figure 1.</strong> Camera components</p>
-<h2 id="v3-enhance">Version 3 enhancements</h2>
+<p>Android 7.0 continues to support camera HAL1 as many devices still rely on
+it. In addition, the Android camera service supports implementing both HALs (1
+and 3), which is useful when you want to support a less-capable front-facing
+camera with camera HAL1 and a more advanced back-facing camera with camera
+HAL3.</p>
-<p>The aim of the Android Camera API redesign is to substantially increase the
-ability of applications to control the camera subsystem on Android devices while
-reorganizing the API to make it more efficient and maintainable.</p>
+<p class="note"><strong>Note:</strong> Camera HAL2 is not supported as it was a
+temporary step on the way to camera HAL3.</p>
-<p>The additional control makes it easier to build high-quality camera applications
-on Android devices that can operate reliably across multiple products while
-still using device-specific algorithms whenever possible to maximize quality and
+<p>There is a single camera HAL <em>module</em> (with its own
+<a href="{@docRoot}devices/camera/versioning.html#module_version">version
+number</a>), which lists multiple independent camera devices that each have
+their own version number. Camera module version 2 or newer is required to
+support camera device version 2 or newer, and such camera modules can have a
+mix of camera device versions (this is what we mean when we say Android
+supports implementing both HALs).</p>
+
+<h2 id="v3-enhance">Camera HAL3 enhancements</h2>
+
+<p>The aim of the Android Camera API redesign is to substantially increase the
+ability of applications to control the camera subsystem on Android devices while
+reorganizing the API to make it more efficient and maintainable. The additional
+control makes it easier to build high-quality camera applications on Android
+devices that can operate reliably across multiple products while still using
+device-specific algorithms whenever possible to maximize quality and
performance.</p>
-<p>Version 3 of the camera subsystem structures the operation modes into a single
-unified view, which can be used to implement any of the previous modes and
-several others, such as burst mode. This results in better user control for
-focus and exposure and more post-processing, such as noise reduction, contrast
-and sharpening. Further, this simplified view makes it easier for application
-developers to use the camera's various functions.<br/>
-The API models the camera subsystem as a pipeline that converts incoming
-requests for frame captures into frames, on a 1:1 basis. The requests
-encapsulate all configuration information about the capture and processing of a
-frame. This includes: resolution and pixel format; manual sensor, lens and flash
-control; 3A operating modes; RAW->YUV processing control; statistics generation;
+<p>Version 3 of the camera subsystem structures the operation modes into a
+single unified view, which can be used to implement any of the previous modes
+and several others, such as burst mode. This results in better user control for
+focus and exposure and more post-processing, such as noise reduction, contrast
+and sharpening. Further, this simplified view makes it easier for application
+developers to use the camera's various functions.</p>
+<p>The API models the camera subsystem as a pipeline that converts incoming
+requests for frame captures into frames, on a 1:1 basis. The requests
+encapsulate all configuration information about the capture and processing of a
+frame. This includes resolution and pixel format; manual sensor, lens and flash
+control; 3A operating modes; RAW->YUV processing control; statistics generation;
and so on.</p>
-<p>In simple terms, the application framework requests a frame from the camera
-subsystem, and the camera subsystem returns results to an output stream. In
-addition, metadata that contains information such as color spaces and lens
-shading is generated for each set of results. The following sections and
-diagrams give you more detail about each component.<br/>
-You can think of camera version 3 as a pipeline to camera version 1's one-way
-stream. It converts each capture request into one image captured by the sensor,
-which is processed into: </p>
+<p>In simple terms, the application framework requests a frame from the camera
+subsystem, and the camera subsystem returns results to an output stream. In
+addition, metadata that contains information such as color spaces and lens
+shading is generated for each set of results. You can think of camera version 3
+as a pipeline to camera version 1's one-way stream. It converts each capture
+request into one image captured by the sensor, which is processed into:</p>
<ul>
<li>A Result object with metadata about the capture.</li>
@@ -114,27 +111,17 @@
<p>The set of possible output Surfaces is preconfigured:</p>
<ul>
-<li>Each Surface is a destination for a stream of image buffers of a fixed
+<li>Each Surface is a destination for a stream of image buffers of a fixed
resolution.</li>
-<li>Only a small number of Surfaces can be configured as outputs at once (~3).</li>
+<li>Only a small number of Surfaces can be configured as outputs at once (~3).
+</li>
</ul>
-<p>A request contains all desired capture settings and the list of output Surfaces
-to push image buffers into for this request (out of the total configured set). A
-request can be one-shot ( with capture() ), or it may be repeated indefinitely
-(with setRepeatingRequest() ). Captures have priority over repeating
-requests.</p>
+<p>A request contains all desired capture settings and the list of output
+Surfaces to push image buffers into for this request (out of the total
+configured set). A request can be one-shot (with <code>capture()</code>), or it
+may be repeated indefinitely (with <code>setRepeatingRequest()</code>). Captures
+have priority over repeating requests.</p>
+
<img src="images/camera_simple_model.png" alt="Camera data model" id="figure2" />
-<p class="img-caption">
- <strong>Figure 2.</strong> Camera core operation model
-</p>
-
-<h2 id="supported-version">Supported version</h2>
-
-<p>Camera devices that support this version of the HAL must return
-CAMERA_DEVICE_API_VERSION_3_1 in camera_device_t.common.version and in
-camera_info_t.device_version (from camera_module_t.get_camera_info).<br/>
-Camera modules that may contain version 3.1 devices must implement at least
-version 2.0 of the camera module interface (as defined by
-camera_module_t.common.module_api_version).<br/>
-See camera_common.h for more versioning details.</p>
+<p class="img-caption"><strong>Figure 2.</strong> Camera core operation model</p>
diff --git a/src/devices/camera/images/ape_camera_n_api1_hal1.png b/src/devices/camera/images/ape_camera_n_api1_hal1.png
new file mode 100644
index 0000000..8898379
--- /dev/null
+++ b/src/devices/camera/images/ape_camera_n_api1_hal1.png
Binary files differ
diff --git a/src/devices/camera/images/ape_camera_n_api1_hal3.png b/src/devices/camera/images/ape_camera_n_api1_hal3.png
new file mode 100644
index 0000000..c366512
--- /dev/null
+++ b/src/devices/camera/images/ape_camera_n_api1_hal3.png
Binary files differ
diff --git a/src/devices/camera/images/ape_camera_n_api2_hal3.png b/src/devices/camera/images/ape_camera_n_api2_hal3.png
new file mode 100644
index 0000000..9451cb5
--- /dev/null
+++ b/src/devices/camera/images/ape_camera_n_api2_hal3.png
Binary files differ
diff --git a/src/devices/camera/index.jd b/src/devices/camera/index.jd
index 9bf74df..f56227d 100644
--- a/src/devices/camera/index.jd
+++ b/src/devices/camera/index.jd
@@ -27,155 +27,165 @@
<img style="float: right; margin: 0px 15px 15px 15px;" src="images/ape_fwk_hal_camera.png" alt="Android Camera HAL icon"/>
<p>Android's camera Hardware Abstraction Layer (HAL) connects the higher level
-camera framework APIs in <a href="http://developer.android.com/reference/android/hardware/package-summary.html">android.hardware</a> to your underlying camera driver and hardware. The camera subsystem includes implementations for camera pipeline components while the camera HAL provides interfaces for use in implementing your version of these components.</p>
+camera framework APIs in
+<a href="http://developer.android.com/reference/android/hardware/package-summary.html">android.hardware</a>
+to your underlying camera driver and hardware. The camera subsystem includes
+implementations for camera pipeline components while the camera HAL provides
+interfaces for use in implementing your version of these components.</p>
+
+<p>For the most up-to-date information, refer to the following resources:</p>
+<ul>
+<li><a href="{@docRoot}devices/halref/camera_8h_source.html">camera.h</a> source
+file</li>
+<li><a href="{@docRoot}devices/halref/camera3_8h_source.html">camera3.h</a>
+source file</li>
+<li><a href="{@docRoot}devices/halref/camera__common_8h_source.html">camera_common.h</a>
+source file</li>
+<li><a href="https://developer.android.com/reference/android/hardware/camera2/CameraMetadata.html">CameraMetadata</a>
+developer reference</li>
+</ul>
+
<h2 id="architecture">Architecture</h2>
-<p>The following figure and list describe the HAL components:
-</p>
+<p>The following figure and list describe the HAL components:</p>
<img src="images/ape_fwk_camera.png" alt="Android camera architecture" id="figure1" />
-<p class="img-caption">
- <strong>Figure 1.</strong> Camera architecture
-</p>
+<p class="img-caption"><strong>Figure 1.</strong> Camera architecture</p>
<dl>
-
<dt>Application framework</dt>
- <dd>At the application framework level is the app's code, which utilizes the <a
- href="http://developer.android.com/reference/android/hardware/Camera.html">android.hardware.Camera</a>
- API to interact with the camera hardware. Internally, this code calls a corresponding JNI glue class
- to access the native code that interacts with the camera.</dd>
-
+ <dd>At the application framework level is the app's code, which utilizes the
+ <a href="http://developer.android.com/reference/android/hardware/Camera.html">android.hardware.Camera</a>
+ API to interact with the camera hardware. Internally, this code calls a
+ corresponding JNI glue class to access the native code that interacts with the
+ camera.</dd>
<dt>JNI</dt>
- <dd>The JNI code associated with <a
- href="http://developer.android.com/reference/android/hardware/Camera.html">android.hardware.Camera</a> is located in
- <code>frameworks/base/core/jni/android_hardware_Camera.cpp</code>. This code calls the lower level
- native code to obtain access to the physical camera and returns data that is used to create the
- <a href="http://developer.android.com/reference/android/hardware/Camera.html">android.hardware.Camera</a> object at the framework level.</dd>
-
+ <dd>The JNI code associated with <a href="http://developer.android.com/reference/android/hardware/Camera.html">android.hardware.Camera</a>
+ is located in
+ <code>frameworks/base/core/jni/android_hardware_Camera.cpp</code>. This code
+ calls the lower level native code to obtain access to the physical camera
+ and returns data that is used to create the
+ <a href="http://developer.android.com/reference/android/hardware/Camera.html">android.hardware.Camera</a>
+ object at the framework level.</dd>
 <dt>Native framework</dt>
- <dd>The native framework defined in <code>frameworks/av/camera/Camera.cpp</code> provides a native equivalent
- to the <a href="http://developer.android.com/reference/android/hardware/Camera.html">android.hardware.Camera</a> class.
- This class calls the IPC binder proxies to obtain access to the camera service.</dd>
-
+ <dd>The native framework defined in <code>frameworks/av/camera/Camera.cpp</code>
+ provides a native equivalent to the
+ <a href="http://developer.android.com/reference/android/hardware/Camera.html">android.hardware.Camera</a>
+ class. This class calls the IPC binder proxies to obtain access to the camera
+ service.</dd>
<dt>Binder IPC proxies</dt>
- <dd>The IPC binder proxies facilitate communication over process boundaries. There are three camera binder
- classes that are located in the <code>frameworks/av/camera</code> directory that calls into
- camera service. ICameraService is the interface to the camera service, ICamera is the interface
- to a specific opened camera device, and ICameraClient is the device's interface back to the application framework.</dd>
-
+ <dd>The IPC binder proxies facilitate communication over process boundaries.
+ There are three camera binder classes located in the
+ <code>frameworks/av/camera</code> directory that call into the camera
+ service: ICameraService is the interface to the camera service, ICamera is
+ the interface to a specific opened camera device, and ICameraClient is the
+ device's interface back to the application framework.</dd>
<dt>Camera service</dt>
- <dd>The camera service, located in <code>frameworks/av/services/camera/libcameraservice/CameraService.cpp</code>, is the actual code that interacts with the HAL.</p>
-
+ <dd>The camera service, located in
+ <code>frameworks/av/services/camera/libcameraservice/CameraService.cpp</code>,
+ is the actual code that interacts with the HAL.</dd>
<dt>HAL</dt>
- <dd>The hardware abstraction layer defines the standard interface that the camera service calls into and that
- you must implement to have your camera hardware function correctly.
- </dd>
-
+ <dd>The hardware abstraction layer defines the standard interface that the
+ camera service calls into and that you must implement to have your camera
+ hardware function correctly.</dd>
<dt>Kernel driver</dt>
- <dd>The camera's driver interacts with the actual camera hardware and your implementation of the HAL. The
- camera and driver must support YV12 and NV21 image formats to provide support for
- previewing the camera image on the display and video recording.</dd>
- </dl>
-
+ <dd>The camera's driver interacts with the actual camera hardware and your
+ implementation of the HAL. The camera and driver must support YV12 and NV21
+ image formats to provide support for previewing the camera image on the
+ display and video recording.</dd>
+</dl>
<h2 id="implementing">Implementing the HAL</h2>
<p>The HAL sits between the camera driver and the higher level Android framework
-and defines an interface that you must implement so that apps can
-correctly operate the camera hardware. The HAL interface is defined in the
+and defines an interface you must implement so apps can correctly operate the
+camera hardware. The HAL interface is defined in the
<code>hardware/libhardware/include/hardware/camera.h</code> and
<code>hardware/libhardware/include/hardware/camera_common.h</code> header files.
</p>
-<p>
-<code>camera_common.h</code> defines an important struct, <code>camera_module</code>, which defines a standard
-structure to obtain general information about the camera, such as its ID and properties
-that are common to all cameras such as whether or not it is a front or back-facing camera.
-</p>
+<p><code>camera_common.h</code> defines <code>camera_module</code>, a standard
+structure to obtain general information about the camera, such as the camera ID
+and properties common to all cameras (for example, whether it is a front- or
+back-facing camera).</p>
<p>
-<code>camera.h</code> contains the code that corresponds mainly with
-<a href="http://developer.android.com/reference/android/hardware/Camera.html">android.hardware.Camera</a>. This header file declares a <code>camera_device</code>
-struct that contains a <code>camera_device_ops</code> struct with function pointers
-that point to functions that implement the HAL interface. For documentation on the
-different types of camera parameters that a developer can set,
-see the <code>frameworks/av/include/camera/CameraParameters.h</code> file.
-These parameters are set with the function pointed to by
-<code>int (*set_parameters)(struct camera_device *, const char *parms)</code> in the HAL.
+<code>camera.h</code> contains code that corresponds to
+<a href="http://developer.android.com/reference/android/hardware/Camera.html">android.hardware.Camera</a>. This header file declares a
+<code>camera_device</code> struct that in turn contains a
+<code>camera_device_ops</code> struct with pointers to functions that implement
+the HAL interface. For documentation on the camera parameters developers can
+set, refer to <code>frameworks/av/include/camera/CameraParameters.h</code>.
+These parameters are set with the function pointed to by <code>int
+(*set_parameters)(struct camera_device *, const char *parms)</code> in the HAL.
</p>
-<p>For an example of a HAL implementation, see the implementation for the Galaxy Nexus HAL in
-<code>hardware/ti/omap4xxx/camera</code>.</p>
+<p>For an example of a HAL implementation, refer to the implementation for the
+Galaxy Nexus HAL in <code>hardware/ti/omap4xxx/camera</code>.</p>
-<h2 id="configuring">Configuring the Shared Library</h2>
-<p>You need to set up the Android build system to
- correctly package the HAL implementation into a shared library and copy it to the
- appropriate location by creating an <code>Android.mk</code> file:
+<h2 id="configuring">Configuring the shared library</h2>
+<p>Set up the Android build system to correctly package the HAL implementation
+into a shared library and copy it to the appropriate location by creating an
+<code>Android.mk</code> file:</p>
<ol>
- <li>Create a <code>device/<company_name>/<device_name>/camera</code> directory to contain your
- library's source files.</li>
- <li>Create an <code>Android.mk</code> file to build the shared library. Ensure that the Makefile contains the following lines:
+<li>Create a <code>device/<company_name>/<device_name>/camera</code>
+directory to contain your library's source files.</li>
+
+<li>Create an <code>Android.mk</code> file to build the shared library. Ensure
+the Makefile contains the following lines:
<pre>
LOCAL_MODULE := camera.<device_name>
LOCAL_MODULE_RELATIVE_PATH := hw
</pre>
-<p>Notice that your library must be named <code>camera.<device_name></code> (<code>.so</code> is appended automatically),
-so that Android can correctly load the library. For an example, see the Makefile
-for the Galaxy Nexus camera located in <code>hardware/ti/omap4xxx/Android.mk</code>.</p>
+<p>Your library must be named <code>camera.<device_name></code>
+(<code>.so</code> is appended automatically), so Android can correctly load the
+library. For an example, see the Makefile for the Galaxy Nexus camera located in
+<code>hardware/ti/omap4xxx/Android.mk</code>.</p></li>
-</li>
-<li>Specify that your device has camera features by copying the necessary feature XML files in the
-<code>frameworks/native/data/etc</code> directory with your
-device's Makefile. For example, to specify that your device has a camera flash and can autofocus,
-add the following lines in your device's
-<code><device>/<company_name>/<device_name>/device.mk</code> Makefile:
-
+<li>Specify that your device has camera features by copying the necessary
+feature XML files from the <code>frameworks/native/data/etc</code> directory
+with your device's Makefile. For example, to specify that your device has a
+camera flash and can autofocus, add the following lines in your device's
+<code><device>/<company_name>/<device_name>/device.mk</code>
+Makefile:
<pre class="no-pretty-print">
PRODUCT_COPY_FILES := \ ...
PRODUCT_COPY_FILES += \
-frameworks/native/data/etc/android.hardware.camera.flash-autofocus.xml:system/etc/permissions/android.hardware.camera.flash-autofocus.xml \
+frameworks/native/data/etc/android.hardware.camera.flash-autofocus.xml:system/etc/permissions/android.hardware.camera.flash-autofocus.xml \
</pre>
-
-<p>For an example of a device Makefile, see <code>device/samsung/tuna/device.mk</code>.</p>
-</li>
+<p>For an example of a device Makefile, see
+<code>device/samsung/tuna/device.mk</code>.</p></li>
<li>Declare your camera’s media codec, format, and resolution capabilities in
-<code>device/<company_name>/<device_name>/media_profiles.xml</code> and
-<code>device/<company_name>/<device_name>/media_codecs.xml</code> XML files.
- For more information, see <a href="{@docRoot}devices/media.html#expose"> Exposing
- Codecs and Profiles to the Framework</a> for information on how to do this.
-</p></code>
-
-</li>
+<code>device/<company_name>/<device_name>/media_profiles.xml</code>
+and <code>device/<company_name>/<device_name>/media_codecs.xml</code>
+XML files. For details, see
+<a href="{@docRoot}devices/media/index.html#expose">Exposing codecs to the
+framework</a>.</li>
<li>Add the following lines in your device's
- <code>device/<company_name>/<device_name>/device.mk</code>
- Makefile to copy the <code>media_profiles.xml</code>
-and <code>media_codecs.xml</code> files to the appropriate location:
+<code>device/<company_name>/<device_name>/device.mk</code> Makefile
+to copy the <code>media_profiles.xml</code> and <code>media_codecs.xml</code>
+files to the appropriate location:
<pre>
# media config xml file
PRODUCT_COPY_FILES += \
- <device>/<company_name>/<device_name>/media_profiles.xml:system/etc/media_profiles.xml
+ <device>/<company>/<device>/media_profiles.xml:system/etc/media_profiles.xml
# media codec config xml file
PRODUCT_COPY_FILES += \
- <device>/<company_name>/<device_name>/media_codecs.xml:system/etc/media_codecs.xml
-</pre>
-</li>
+ <device>/<company>/<device>/media_codecs.xml:system/etc/media_codecs.xml
+</pre></li>
-<li>
-<p>Declare that you want to include the Camera app in your device's system image by
-specifying it in the <code>PRODUCT_PACKAGES</code> variable in your device's
- <code>device/<company_name>/<device_name>/device.mk</code>
- Makefile:</p>
+<li>To include the Camera app in your device's system image, specify it in the
+<code>PRODUCT_PACKAGES</code> variable in your device's
+<code>device/<company>/<device>/device.mk</code>
+Makefile:
<pre>
PRODUCT_PACKAGES := \
Gallery2 \
...
-</pre>
-</li>
-
+</pre></li>
</ol>
diff --git a/src/devices/camera/versioning.jd b/src/devices/camera/versioning.jd
index 7c4d1b3..44d477b 100644
--- a/src/devices/camera/versioning.jd
+++ b/src/devices/camera/versioning.jd
@@ -1,8 +1,8 @@
-page.title=Camera version support
+page.title=Camera Version Support
@jd:body
<!--
- Copyright 2014 The Android Open Source Project
+ Copyright 2016 The Android Open Source Project
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -24,82 +24,102 @@
</div>
</div>
-<p>The Android 5.0 (Lollipop) platform release adds a new app-level camera framework. This
-document outlines some logistical details that OEMs and SoC vendors need to
-know.</p>
+<p>This page details version differences in Camera HALs, APIs, and associated
+Android Compatibility Test Suite (CTS) tests. It also covers several
+architectural changes made to harden and secure the camera framework in Android
+7.0 and the updates vendors must make to support these changes in their camera
+implementations.</p>
-<h2 id=glossary>Terms</h2>
+<h2 id=glossary>Terminology</h2>
-<p>The following terms are used in this document:</p>
+<p>The following terms are used on this page:</p>
+
+<dl>
+
+<dt>Camera API1</dt>
+<dd>The app-level camera framework on Android 4.4 and earlier devices, exposed
+through the <code>android.hardware.Camera</code> class.</dd>
+
+<dt>Camera API2</dt>
+<dd>The app-level camera framework on Android 5.0 and later devices, exposed
+through the <code>android.hardware.camera2</code> package.</dd>
+
+<dt>Camera HAL</dt>
+<dd>The camera module layer implemented by SoC vendors. The app-level public
+frameworks are built on top of the camera HAL.</dd>
+
+<dt>Camera HAL3.1</dt>
+<dd>Version of the camera device HAL released with Android 4.4.</dd>
+
+<dt>Camera HAL3.2</dt>
+<dd>Version of the camera device HAL released with Android 5.0.</dd>
+
+<dt>Camera API1 CTS</dt>
+<dd>Set of camera Compatibility Test Suite (CTS) tests that run on top of Camera
+API1.</dd>
+
+<dt>Camera API2 CTS</dt>
+<dd>Additional set of camera CTS tests that run on top of Camera API2.</dd>
+
+</dl>
+
+
+<h2 id=camera_apis>Camera APIs</h2>
+<p>Android includes the following camera APIs.</p>
+
+<h3 id=camera_api1>Camera API1</h3>
+
+<p>Android 5.0 deprecated Camera API1, which continues to be phased out as new
+platform development focuses on Camera API2. However, the phase-out period will
+be lengthy, and Android releases will continue to support Camera API1 apps for
+some time. Specifically, support continues for:</p>
<ul>
- <li><em>Camera API1</em>: The app-level camera framework on KitKat and earlier devices, exposed
-through the <code>android.hardware.Camera</code> class.
- <li><em>Camera API2</em>: The app-level camera framework on 5.0 and later
-devices, exposed through the<code> android.hardware.camera2</code> package.
- <li><em>Camera HAL</em>: The camera module layer that SoC vendors implement. The app-level public
-frameworks are built on top of the camera HAL.
- <li><em>Camera HAL3.2</em>: The version of the camera device HAL that is
-being released with Lollipop. KitKat launched with an earlier version (Camera HAL3.1).
- <li><em>Camera API1 CTS</em>: The set of camera Compatibility Test Suite (CTS) tests that run on top of
-Camera API1.
- <li><em>Camera API2 CTS</em>: An additional set of camera CTS tests that run on top of Camera API2.
+<li><em>Camera API1 interfaces for apps</em>. Camera apps built on top of Camera
+API1 should work as they do on devices running earlier Android release versions.
+</li>
+<li><em>Camera HAL versions</em>. Includes support for Camera HAL1.0.</li>
</ul>
-<h2 id=camera_api2_overview>Camera API2 overview</h2>
+<h3 id=camera_api2>Camera API2</h3>
-<p>The new camera frameworks expose lower-level camera control to the app,
+<p>The Camera API2 framework exposes lower-level camera control to the app,
including efficient zero-copy burst/streaming flows and per-frame controls of
exposure, gain, white balance gains, color conversion, denoising, sharpening,
-and more. See this <a
-href="https://www.youtube.com/watch?v=92fgcUNCHic&feature=youtu.be&t=29m50s">brief
-video overview from the Google I/O 2014 conference</a> for additional details.
-</p>
+and more. For details, watch the
+<a href="https://www.youtube.com/watch?v=92fgcUNCHic&feature=youtu.be&t=29m50s">Google
+I/O video overview</a>.</p>
-<h2 id=camera_api1_availability_and_deprecation_in_l>Camera API1 availability and deprecation in Android 5.0</h2>
-
-<p>The Camera API1 interfaces are still available for apps to use on Android
-5.0 and later devices, and camera apps built on top of Camera API1 should work
-as before. Camera API1 is being marked as deprecated in Lollipop, indicating that it
-will be phased out over time and new platform development will focus on Camera
-API2. However, we expect this phase-out period to be lengthy, and Camera API1
-apps will continue to be supported in Android for some time to come.</p>
-
-<p>All earlier camera HAL versions, including Camera HAL1.0, will also continue to
-be supported.</p>
-
-<h2 id=camera_api2_capabilities_and_support_levels>Camera API2 capabilities and support levels</h2>
-
-<p>Android 5.0 and later devices feature Camera API2, however they may not fully support all of
-the new features of Camera API2. The
+<p>Android 5.0 and later includes Camera API2; however, devices running Android
+5.0 and later may not support all Camera API2 features. The
<code>android.info.supportedHardwareLevel</code> property that apps can query
-through the Camera API2 interfaces report one of three support levels:
-<code>LEGACY</code>, <code>FULL</code>, and <code>LIMITED</code>.</p>
+through the Camera API2 interfaces reports one of the following support
+levels:</p>
-<p><em>Legacy</em> devices expose a level of capabilities through the Camera API2 interfaces that
-are approximately the same as is exposed to apps through the Camera API1
-interfaces; the legacy frameworks code conceptually translates Camera API2
-calls into Camera API1 calls under the hood. Legacy devices do not support
-the new Camera API2 features including per-frame controls.</p>
+<ul>
+<li><code>LEGACY</code>. These devices expose capabilities to apps through the
+Camera API2 interfaces that are approximately the same capabilities as those
+exposed to apps through the Camera API1 interfaces. The legacy frameworks code
+conceptually translates Camera API2 calls into Camera API1 calls; legacy devices
+do not support Camera API2 features such as per-frame controls.</li>
+<li><code>FULL</code>. These devices support all of the major capabilities of Camera
+API2 and must use Camera HAL 3.2 or later and Android 5.0 or later.</li>
+<li><code>LIMITED</code>. These devices support some Camera API2 capabilities
+(but not all) and must use Camera HAL 3.2 or later.</li>
+</ul>
-<p><em>Full</em> devices support all of the major capabilities of Camera API2. Full devices by
-necessity must have a Camera HAL version of 3.2 (shipping with Android 5.0) or later.</p>
+<p>Individual capabilities are exposed via the
+<code>android.request.availableCapabilities</code> property in the Camera API2
+interfaces. <code>FULL</code> devices require the <code>MANUAL_SENSOR</code> and
+<code>MANUAL_POST_PROCESSING</code> capabilities, among others. The
+<code>RAW</code> capability is optional even for <code>FULL</code> devices.
+<code>LIMITED</code> devices can advertise any subset of these capabilities,
+including none of them. However, the <code>BACKWARD_COMPATIBLE</code> capability
+must always be defined.</p>
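+<p>The rules above can be sketched as a small validity check. This is an
+illustrative sketch only (plain Java, hypothetical method names, capability
+names shortened to strings), not framework code:</p>

```java
import java.util.Set;

// Illustrative sketch of the capability rules described above:
// every level must advertise BACKWARD_COMPATIBLE; FULL devices must also
// advertise the manual-control capabilities; RAW is optional even for FULL.
public class CapabilityCheck {
    static boolean isValid(String hardwareLevel, Set<String> capabilities) {
        if (!capabilities.contains("BACKWARD_COMPATIBLE")) {
            return false; // required at every support level
        }
        if ("FULL".equals(hardwareLevel)) {
            // RAW is intentionally NOT checked: optional even for FULL devices.
            return capabilities.contains("MANUAL_SENSOR")
                    && capabilities.contains("MANUAL_POST_PROCESSING");
        }
        return true; // LEGACY/LIMITED may advertise any subset
    }

    public static void main(String[] args) {
        System.out.println(isValid("FULL",
                Set.of("BACKWARD_COMPATIBLE", "MANUAL_SENSOR",
                        "MANUAL_POST_PROCESSING")));
        System.out.println(isValid("LIMITED", Set.of("BACKWARD_COMPATIBLE")));
        System.out.println(isValid("FULL", Set.of("BACKWARD_COMPATIBLE")));
    }
}
```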
-<p><em>Limited</em> devices are in between: They support some of the new Camera API2 capabilities,
-but not all of them, and must also comprise a Camera HAL version of 3.2 or later.</p>
-
-<p>Individual capabilities are exposed via the<code>
-android.request.availableCapabilities</code> property in the Camera API2
-interfaces. Full devices require both the <code>MANUAL_SENSOR</code> and
-<code>MANUAL_POST_PROCESSING</code> capabilities, among others. There is also a
-<code>RAW</code> capability that is optional even for full devices. Limited
-devices can advertise any subset of these capabilities, including none of them. However,
-the <code>BACKWARD_COMPATIBLE</code> capability must always be defined.</p>
-
-<p>The supported hardware level of the device, as well as the specific Camera API2
-capabilities it supports, are available as the following feature flags to allow
-Play Store filtering of Camera API2 camera apps; a device must define the
-feature flag if any of its attached camera devices supports the feature.</p>
+<p>The supported hardware level of the device, as well as the specific Camera
+API2 capabilities it supports, are available as the following feature flags to
+allow Google Play filtering of Camera API2 camera apps.</p>
<ul>
<li><code>android.hardware.camera.hardware_level.full</code>
@@ -110,45 +130,180 @@
<h2 id=cts_requirements>CTS requirements</h2>
-<p>Android 5.0 and later devices must pass both Camera API1 CTS and Camera API2
-CTS. And as always, devices are required to pass the CTS Verifier camera
-tests.</p>
+<p>Android 5.0 and later devices must pass the Camera API1 CTS, Camera API2 CTS,
+and CTS Verifier camera tests.</p>
-<p>To add some context: For devices that don’t feature a Camera HAL3.2
-implementation and are not capable of supporting the full Camera API2
-interfaces, the Camera API2 CTS tests must still be passed. However, in this
-case the device will be running in Camera API2 <em>legacy</em> mode (in which
-the Camera API2 calls are conceptually just mapped to Camera
-API1 calls); and any Camera API2 CTS tests that relate to features or
-capabilities beyond Camera API1 have logic that will skip them in the case of
-old (legacy) devices.</p>
+<p>Devices that do not feature a Camera HAL3.2 implementation and are not
+capable of supporting the full Camera API2 interfaces must still pass the Camera
+API2 CTS tests. However, the device will be running in Camera API2
+<code>LEGACY</code> mode (in which the Camera API2 calls are conceptually mapped
+to Camera API1 calls) so any Camera API2 CTS tests related to features or
+capabilities beyond Camera API1 will be automatically skipped.</p>
-<p>On a legacy device, the Camera API2 CTS tests that are not skipped are purely
-using the existing public Camera API1 interfaces and capabilities (with no new
-requirements), and any bugs that are exposed (which will in turn cause a Camera
-API2 CTS failure) are bugs that were already present in the device’s existing
-Camera HAL and would also be a bug that could be easily hit by existing Camera
-API1 apps. The expectation is that there should be very few bugs of this
-nature. Nevertheless, any such bugs will need to be fixed.</p>
+<p>On legacy devices, Camera API2 CTS tests that are not skipped use the
+existing public Camera API1 interfaces and capabilities with no new
+requirements. Bugs that are exposed (and which cause a Camera API2 CTS failure)
+are bugs already present in the device’s existing Camera HAL, and thus would
+be found by existing Camera API1 apps. We do not expect many bugs of this nature
+(however, any such bugs must be fixed to pass the Camera API2 CTS tests).</p>
-<h2 id="version-history">Version history</h2>
+<h2 id=hardening>Camera framework hardening</h2>
+
+<p>To harden media and camera framework security, Android 7.0 moves camera
+service out of mediaserver. Vendors may need to make changes in the camera HAL
+depending on the API and HAL versions in use. The following sections detail
+architectural changes in API1 and API2 for HAL1 and HAL3, as well as general
+requirements.</p>
+
+<h3 id=hardening_api1>Architectural changes for API1</h3>
+<p>API1 video recording may assume camera and video encoder live in the same
+process. When using API1 on:</p>
+
+<ul>
+<li>HAL3, where camera service uses BufferQueue to pass buffers between
+processes, <strong>no vendor update</strong> is necessary.
+<p><img src="images/ape_camera_n_api1_hal3.png" alt="Android 7.0 camera and media
+stack in API1 on HAL3" id="figure1" /></p>
+<p class="img-caption"><strong>Figure 1.</strong>Android 7.0 camera and media
+stack in API1 on HAL3.</p>
+</li>
+<li>HAL1, which supports passing metadata in video buffers, <strong>vendors must
+update the HAL to allow camera and video encoder in different processes</strong>
+(e.g., the HAL cannot store virtual addresses in the metadata).
+<p><img src="images/ape_camera_n_api1_hal1.png" alt="Android 7.0 camera and media
+stack in API1 on HAL1" id="figure1" /></p>
+<p class="img-caption"><strong>Figure 2.</strong>Android 7.0 camera and media
+stack in API1 on HAL1.</p>
+</li>
+</ul>
+
+<h3 id=hardening_api2>Architectural changes for API2</h3>
+<p>For API2 on HAL1 or HAL3, BufferQueue passes buffers so those paths continue
+to work. The Android 7.0 architecture for API2 on:</p>
+
+<ul>
+<li>HAL1 is not affected by the cameraservice move, and <strong>no vendor
+update</strong> is necessary.</li>
+<li>HAL3 <em>is</em> affected, but <strong>no vendor update</strong> is
+necessary:
+<p><img src="images/ape_camera_n_api2_hal3.png" alt="Android 7.0 camera and
+media stack in API2 on HAL2" id="figure1" /></p>
+<p class="img-caption"><strong>Figure 3.</strong>Android 7.0 camera and media
+stack in API2 on HAL3.</p>
+</li>
+</ul>
+
+<h3 id=hardening_general>Additional requirements</h3>
+<p>The architectural changes made for hardening media and camera framework
+security include the following additional device requirements.</p>
+
+<ul>
+<li><strong>General</strong>. Devices require additional bandwidth due to IPC,
+which may affect time-sensitive camera use cases such as high-speed video
+recording. Vendors can measure actual impact by running
+<code>android.hardware.camera2.cts.PerformanceTest</code> and the Google Camera
+App for 120/240 FPS high speed video recording. Devices also require a small
+amount of additional RAM to create the new process.</li>
+<li><strong>Pass metadata in video buffers</strong> (<em>HAL1 only</em>). If HAL1
+stores metadata instead of real YUV frame data in video buffers, the HAL must
+not store anything that is invalid across process boundaries, including native
+handles. If the HAL passes native handles in the metadata in video buffers, you
+must update it to use <code>kMetadataBufferTypeNativeHandleSource</code> as the
+metadata buffer type and pass <code>VideoNativeHandleMetadata</code> in video
+buffers.
+<p>With <code>VideoNativeHandleMetadata</code>, camera and media frameworks are
+able to pass the video buffers between processes by serializing and
+deserializing the native handles properly. If the HAL chooses to continue using
+<code>kMetadataBufferTypeCameraSource</code> as the metadata buffer type, the
+metadata must be able to be passed between processes as plain values.</p>
+</li>
+<li><strong>Buffer handle address does not always store same buffer</strong>
+(<em>HAL3 only</em>). For each capture request, HAL3 gets addresses of buffer
+handles. The HAL cannot use these addresses to identify buffers because an
+address may store another buffer handle after the HAL returns the buffer. You
+must update the HAL to use buffer handles to identify the buffers. For example,
+the HAL receives buffer handle address A, which stores buffer handle A. After
+the HAL returns buffer handle A, address A may store buffer handle B the next
+time the HAL receives it.</li>
+<li><strong>Update SELinux policies for cameraserver</strong>. If
+device-specific SELinux policies give mediaserver permissions to run the camera,
+you must update the SELinux policies to give cameraserver proper permissions. We
+do not encourage replicating the mediaserver's SELinux policies for cameraserver
+(as mediaserver and cameraserver generally require different resources in the
+system). Cameraserver should have only the permissions needed to perform camera
+functionalities, and any unnecessary camera-related permissions in mediaserver
+should be removed.</li>
+</ul>
+
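+<p>The buffer handle requirement above can be modeled in a few lines. This is a
+toy illustration (hypothetical names, plain Java standing in for the native
+HAL), showing why per-buffer state must be keyed on the handle itself rather
+than on the reusable address/slot that delivered it:</p>

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Illustrative model: the same "slot" (buffer handle address) can hold
// different handles over time, so per-buffer state is keyed by the handle
// object itself, not by the slot that delivered it.
public class BufferTracking {
    static class Handle {} // stands in for a buffer handle

    // Keyed on handle identity -- state survives slot reuse.
    final Map<Handle, String> state = new IdentityHashMap<>();

    void onCaptureRequest(Handle h) { state.put(h, "in-flight"); }
    String lookup(Handle h)         { return state.get(h); }

    public static void main(String[] args) {
        BufferTracking t = new BufferTracking();
        Handle[] slot = new Handle[1];   // the reusable "address"

        Handle a = new Handle();
        slot[0] = a;                     // slot holds handle A
        t.onCaptureRequest(slot[0]);

        Handle b = new Handle();
        slot[0] = b;                     // same slot now holds handle B
        t.onCaptureRequest(slot[0]);

        // Both buffers are tracked correctly despite sharing a slot.
        System.out.println(t.lookup(a));
        System.out.println(t.lookup(b));
    }
}
```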
+<h3 id=hardening_validation>Validation</h3>
+<p>For all devices that include a camera and run Android 7.0, verify the
+implementation by running Android 7.0 CTS. Although Android 7.0 does not include
+new CTS tests that verify camera service changes, existing CTS tests will fail
+if you have not made the updates indicated above.</p>
+
+<h2 id="version-history">Camera HAL version history</h2>
+<p>For a list of tests available for evaluating the Android Camera HAL, see the
+<a href="{@docRoot}compatibility/cts/camera-hal.html">Camera HAL Testing
+Checklist</a>.</p>
+
+<h3 id="34">3.4</h3>
+
+<p>Minor additions to supported metadata and changes to data_space support:</p>
+
+<ul>
+<li>Add <code>ANDROID_SENSOR_OPAQUE_RAW_SIZE</code> static metadata as mandatory
+if <code>RAW_OPAQUE</code> format is supported.</li>
+<li>Add <code>ANDROID_CONTROL_POST_RAW_SENSITIVITY_BOOST_RANGE</code> static
+metadata as mandatory if any RAW format is supported.</li>
+<li>Switch <code>camera3_stream_t data_space</code> field to a more flexible
+definition, using the version 0 definition of dataspace encoding.</li>
+<li>General metadata additions that are available for use with HAL3.2 or newer:
+ <ul>
+ <li>
+ <a href="https://developer.android.com/reference/android/hardware/camera2/CameraMetadata.html#INFO_SUPPORTED_HARDWARE_LEVEL_3"><code>ANDROID_INFO_SUPPORTED_HARDWARE_LEVEL_3</code>
+ </a></li>
+ <li><code>ANDROID_CONTROL_POST_RAW_SENSITIVITY_BOOST</code></li>
+ <li><code>ANDROID_CONTROL_POST_RAW_SENSITIVITY_BOOST_RANGE</code></li>
+ <li><code>ANDROID_SENSOR_DYNAMIC_BLACK_LEVEL</code></li>
+ <li><code>ANDROID_SENSOR_DYNAMIC_WHITE_LEVEL</code></li>
+ <li><code>ANDROID_SENSOR_OPAQUE_RAW_SIZE</code></li>
+ <li><code>ANDROID_SENSOR_OPTICAL_BLACK_REGIONS</code></li>
+ </ul>
+ </li>
+</ul>
+
+<h3 id="33">3.3</h3>
+
+<p>Minor revision of expanded-capability HAL:</p>
+
+<ul>
+ <li>OPAQUE and YUV reprocessing API updates.</li>
+ <li>Basic support for depth output buffers.</li>
+ <li>Addition of <code>data_space</code> field to
+ <code>camera3_stream_t</code>.</li>
+ <li>Addition of rotation field to <code>camera3_stream_t</code>.</li>
+ <li>Addition of camera3 stream configuration operation mode to
+ <code>camera3_stream_configuration_t</code>.</li>
+</ul>
<h3 id="32">3.2</h3>
-<p>Second revision of expanded-capability HAL:</p>
+<p>Minor revision of expanded-capability HAL:</p>
<ul>
-<li>Deprecates get_metadata_vendor_tag_ops. Please use get_vendor_tag_ops in
-camera_common.h instead.</li>
-<li>register_stream_buffers deprecated. All gralloc buffers provided by
-framework to HAL in process_capture_request may be new at any time.</li>
-<li>Add partial result support. process_capture_result may be called multiple
-times with a subset of the available result before the full result is available.</li>
-<li>Add manual template to camera3_request_template. The applications may use
-this template to control the capture settings directly.</li>
+<li>Deprecates <code>get_metadata_vendor_tag_ops</code>. Use
+<code>get_vendor_tag_ops</code> in <code>camera_common.h</code> instead.</li>
+<li>Deprecates <code>register_stream_buffers</code>. All gralloc buffers
+provided by framework to HAL in <code>process_capture_request</code> may be new
+at any time.</li>
+<li>Add partial result support. <code>process_capture_result</code> may be
+called multiple times with a subset of the available results before the full
+result is available.</li>
+<li>Add manual template to <code>camera3_request_template</code>. Applications
+may use this template to control the capture settings directly.</li>
<li>Rework the bidirectional and input stream specifications.</li>
<li>Change the input buffer return path. The buffer is returned in
-process_capture_result instead of process_capture_request.</li>
+<code>process_capture_result</code> instead of
+<code>process_capture_request</code>.</li>
</ul>
<h3 id="31">3.1</h3>
@@ -156,7 +311,7 @@
<p>Minor revision of expanded-capability HAL:</p>
<ul>
-<li>configure_streams passes consumer usage flags to the HAL.</li>
+<li><code>configure_streams</code> passes consumer usage flags to the HAL.</li>
<li>flush call to drop all in-flight requests/buffers as fast as possible.</li>
</ul>
@@ -172,7 +327,7 @@
is included, necessary for efficient implementations.</li>
<li>Moved triggers into requests, most notifications into results.</li>
<li>Consolidated all callbacks into framework into one structure, and all setup
-methods into a single initialize() call.</li>
+methods into a single <code>initialize()</code> call.</li>
<li>Made stream configuration into a single call to simplify stream management.
Bidirectional streams replace STREAM_FROM_STREAM construct.</li>
<li>Limited mode semantics for older/limited hardware devices.</li>
@@ -183,10 +338,11 @@
<p>Initial release of expanded-capability HAL (Android 4.2) [camera2.h]:</p>
<ul>
-<li>Sufficient for implementing existing android.hardware.Camera API.</li>
-<li>Allows for ZSL queue in camera service layer</li>
-<li>Not tested for any new features such manual capture control, Bayer RAW
-capture, reprocessing of RAW data.</li>
+<li>Sufficient for implementing existing <code>android.hardware.Camera</code>
+API.</li>
+<li>Allows for ZSL queue in camera service layer.</li>
+<li>Not tested for any new features such as manual capture control, Bayer RAW
+capture, reprocessing of RAW data, etc.</li>
</ul>
<h3 id="10">1.0</strong></h3>
@@ -195,5 +351,88 @@
<ul>
<li>Converted from C++ CameraHardwareInterface abstraction layer.</li>
-<li>Supports android.hardware.Camera API.</li>
+<li>Supports <code>android.hardware.Camera</code> API.</li>
</ul>
+
+<h2 id=module_version>Camera module version history</h2>
+
+<p>This section contains module versioning information for the Camera hardware
+module, based on <code>camera_module_t.common.module_api_version</code>. The two
+most significant hex digits represent the major version, and the two least
+significant represent the minor version.</p>
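+<p>As an illustration of the encoding above (a sketch, not the HAL header
+itself), the major and minor parts can be extracted with simple bit
+operations:</p>

```java
// Illustrative sketch: splitting a module_api_version value into the
// major/minor parts described above (two hex digits each).
public class ModuleVersion {
    static int major(int moduleApiVersion) { return (moduleApiVersion >> 8) & 0xFF; }
    static int minor(int moduleApiVersion) { return moduleApiVersion & 0xFF; }

    public static void main(String[] args) {
        int v = 0x0204; // encodes module version 2.4
        System.out.println(major(v) + "." + minor(v)); // prints "2.4"
    }
}
```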
+
+<h3 id="24">2_4</h3>
+
+<p>This camera module version adds the following API changes:</p>
+
+<ol>
+ <li><em>Torch mode support</em>. The framework can turn on torch mode for any
+ camera device that has a flash unit, without opening a camera device. The
+ camera device has a higher priority accessing the flash unit than the camera
+ module; opening a camera device will turn off the torch if it had been enabled
+ through the module interface. When there are any resource conflicts, such as
+ <code>open()</code> is called to open a camera device, the camera HAL module
+ must notify the framework through the torch mode status callback that the torch
+ mode has been turned off.</li>
+
+ <li><em>External camera (e.g. USB hot-plug camera) support</em>. The API
+ updates specify that for external hot-plug cameras, the camera static info is
+ available only when the camera is connected and ready to use. Calls to get
+ static info are invalid when the camera status is not
+ <code>CAMERA_DEVICE_STATUS_PRESENT</code>. The framework relies solely on
+ device status change callbacks to manage the available external camera list.
+ </li>
+
+ <li><em>Camera arbitration hints</em>. Adds support for explicitly indicating
+ the number of camera devices that can be simultaneously opened and used. To
+ specify valid combinations of devices, the <code>resource_cost</code> and
+ <code>conflicting_devices</code> fields should always be set in the
+ <code>camera_info</code> structure returned by the <code>get_camera_info</code>
+ call.</li>
+
+ <li><em>Module initialization method</em>. Called by the camera service
+ after the HAL module is loaded to allow for one-time initialization of the HAL.
+ It is called before any other module methods are invoked.</li>
+</ol>
+
+<h3 id="23">2_3</h3>
+
+<p>This camera module version adds open legacy camera HAL device support.
+ The framework can use it to open the camera device as a lower-version HAL
+ device when the same device supports multiple device API versions.
+ The standard hardware module open call (common.methods->open) continues
+ to open the camera device with the latest supported version, which is
+ also the version listed in <code>camera_info_t.device_version</code>.</p>
+
+<h3 id="22">2_2</h3>
+
+<p>This camera module version adds vendor tag support from the module, and
+deprecates the old <code>vendor_tag_query_ops</code> that were previously only
+accessible with a device open.</p>
+
+<h3 id="21">2_1</h3>
+
+<p>This camera module version adds support for asynchronous callbacks to the
+framework from the camera HAL module, which is used to notify the framework
+about changes to the camera module state. Modules that provide a valid
+<code>set_callbacks()</code> method must report at least this version number.</p>
+
+<h3 id="20">2_0</h3>
+
+<p>Camera modules that report this version number implement the second version
+of the camera module HAL interface. Camera devices openable through this
+module may support either version 1.0 or version 2.0 of the camera device
+HAL interface. The <code>device_version</code> field of <code>camera_info</code> is always
+valid; the <code>static_camera_characteristics</code> field of
+<code>camera_info</code> is valid if the <code>device_version</code> field is
+2.0 or higher.</p>
+
+<h3 id="10">1_0</h3>
+
+<p>Camera modules that report these version numbers implement the initial
+camera module HAL interface. All camera devices openable through this
+module support only version 1 of the camera device HAL. The
+<code>device_version</code> and <code>static_camera_characteristics</code>
+fields of <code>camera_info</code> are not valid. Only the
+<code>android.hardware.Camera</code> API can be supported by this module and its
+devices.</p>
diff --git a/src/devices/devices_toc.cs b/src/devices/devices_toc.cs
index 49422ef..2acdb82 100644
--- a/src/devices/devices_toc.cs
+++ b/src/devices/devices_toc.cs
@@ -80,7 +80,18 @@
</div>
<ul>
<li><a href="<?cs var:toroot ?>devices/audio/terminology.html">Terminology</a></li>
- <li><a href="<?cs var:toroot ?>devices/audio/implement.html">Implementation</a></li>
+ <li class="nav-section">
+ <div class="nav-section-header">
+ <a href="<?cs var:toroot ?>devices/audio/implement.html">
+ <span class="en">Implementation</span>
+ </a>
+ </div>
+ <ul>
+ <li><a href="<?cs var:toroot ?>devices/audio/implement-policy.html">Policy Configuration</a></li>
+ <li><a href="<?cs var:toroot ?>devices/audio/implement-shared-library.html">Shared Library</a></li>
+ <li><a href="<?cs var:toroot ?>devices/audio/implement-pre-processing.html">Pre-processing Effects</a></li>
+ </ul>
+ </li>
<li><a href="<?cs var:toroot ?>devices/audio/data_formats.html">Data Formats</a></li>
<li><a href="<?cs var:toroot ?>devices/audio/attributes.html">Attributes</a></li>
<li><a href="<?cs var:toroot ?>devices/audio/warmup.html">Warmup</a></li>
@@ -117,6 +128,7 @@
<li><a href="<?cs var:toroot ?>devices/audio/tv.html">TV Audio</a></li>
</ul>
</li>
+ <li><a href="<?cs var:toroot ?>devices/automotive.html">Automotive</a></li>
<li><a href="<?cs var:toroot ?>devices/bluetooth.html">Bluetooth</a></li>
<li class="nav-section">
<div class="nav-section-header">
@@ -144,8 +156,37 @@
</a>
</div>
<ul>
- <li><a href="<?cs var:toroot ?>devices/graphics/architecture.html">Architecture</a></li>
- <li><a href="<?cs var:toroot ?>devices/graphics/implement.html">Implementation</a></li>
+ <li class="nav-section">
+ <div class="nav-section-header">
+ <a href="<?cs var:toroot ?>devices/graphics/architecture.html">
+ <span class="en">Architecture</span>
+ </a>
+ </div>
+ <ul>
+ <li><a href="<?cs var:toroot ?>devices/graphics/arch-bq-gralloc.html">BufferQueue</a></li>
+ <li><a href="<?cs var:toroot ?>devices/graphics/arch-sf-hwc.html">SurfaceFlinger and HWC</a></li>
+ <li><a href="<?cs var:toroot ?>devices/graphics/arch-sh.html">Surface and SurfaceHolder</a></li>
+ <li><a href="<?cs var:toroot ?>devices/graphics/arch-egl-opengl.html">OpenGL ES</a></li>
+ <li><a href="<?cs var:toroot ?>devices/graphics/arch-vulkan.html">Vulkan</a></li>
+ <li><a href="<?cs var:toroot ?>devices/graphics/arch-sv-glsv.html">SurfaceView</a></li>
+ <li><a href="<?cs var:toroot ?>devices/graphics/arch-st.html">SurfaceTexture</a></li>
+ <li><a href="<?cs var:toroot ?>devices/graphics/arch-tv.html">TextureView</a></li>
+ <li><a href="<?cs var:toroot ?>devices/graphics/arch-gameloops.html">Game Loops</a></li>
+ </ul>
+ </li>
+ <li class="nav-section">
+ <div class="nav-section-header">
+ <a href="<?cs var:toroot ?>devices/graphics/implement.html">
+ <span class="en">Implementing</span>
+ </a>
+ </div>
+ <ul>
+ <li><a href="<?cs var:toroot ?>devices/graphics/implement-hwc.html">Hardware Composer HAL</a></li>
+ <li><a href="<?cs var:toroot ?>devices/graphics/implement-vsync.html">VSYNC</a></li>
+ <li><a href="<?cs var:toroot ?>devices/graphics/implement-vulkan.html">Vulkan</a></li>
+ <li><a href="<?cs var:toroot ?>devices/graphics/implement-vdisplays.html">Virtual Displays</a></li>
+ </ul>
+ </li>
<li class="nav-section">
<div class="nav-section-header">
<a href="<?cs var:toroot ?>devices/graphics/testing.html">
@@ -188,6 +229,8 @@
</a>
</div>
<ul>
+ <li><a href="<?cs var:toroot ?>devices/media/framework-hardening.html">Framework
+ Hardening</a></li>
<li><a href="<?cs var:toroot ?>devices/media/soc.html">SoC Dependencies</a></li>
<li><a href="<?cs var:toroot ?>devices/media/oem.html">OEM Dependencies</a></li>
</ul>
@@ -258,6 +301,7 @@
<li><a href="<?cs var:toroot ?>devices/tech/dalvik/constraints.html">Constraints</a></li>
<li><a href="<?cs var:toroot ?>devices/tech/dalvik/configure.html">Configuration</a></li>
<li><a href="<?cs var:toroot ?>devices/tech/dalvik/gc-debug.html">Garbage Collection</a></li>
+ <li><a href="<?cs var:toroot ?>devices/tech/dalvik/jit-compiler.html">JIT Compilation</a></li>
</ul>
</li>
@@ -274,6 +318,7 @@
<li><a href="<?cs var:toroot ?>devices/tech/config/kernel.html">Kernel Configuration</a></li>
<li><a href="<?cs var:toroot ?>devices/tech/config/kernel_network_tests.html">Kernel Network Tests</a></li>
<li><a href="<?cs var:toroot ?>devices/tech/config/low-ram.html">Low RAM</a></li>
+ <li><a href="<?cs var:toroot ?>devices/tech/config/namespaces_libraries.html">Namespaces for Libraries</a></li>
<li><a href="<?cs var:toroot ?>devices/tech/config/renderer.html">OpenGLRenderer</a></li>
<li><a href="<?cs var:toroot ?>devices/tech/config/runtime_perms.html">Runtime Permissions</a></li>
<li><a href="<?cs var:toroot ?>devices/tech/config/uicc.html">UICC</a></li>
@@ -283,6 +328,21 @@
<li class="nav-section">
<div class="nav-section-header">
+ <a href="<?cs var:toroot ?>devices/tech/connect/index.html">
+ <span class="en">Connectivity</span>
+ </a>
+ </div>
+ <ul>
+ <li><a href="<?cs var:toroot ?>devices/tech/connect/block-numbers.html">Block Phone Numbers</a></li>
+ <li><a href="<?cs var:toroot ?>devices/tech/connect/call-notification.html">Call Notifications</a></li>
+ <li><a href="<?cs var:toroot ?>devices/tech/connect/data-saver.html">Data Saver Mode</a></li>
+ <li><a href="<?cs var:toroot ?>devices/tech/connect/felica.html">Host Card Emulation of FeliCa</a></li>
+ <li><a href="<?cs var:toroot ?>devices/tech/connect/ril.html">Radio Interface Layer (RIL)</a></li>
+ </ul>
+ </li>
+
+ <li class="nav-section">
+ <div class="nav-section-header">
<a href="<?cs var:toroot ?>devices/tech/datausage/index.html">
<span class="en">Data Usage</span>
</a>
@@ -324,7 +384,21 @@
<li><a href="<?cs var:toroot ?>devices/tech/admin/managed-profiles.html">Managed Profiles</a></li>
<li><a href="<?cs var:toroot ?>devices/tech/admin/provision.html">Provisioning</a></li>
<li><a href="<?cs var:toroot ?>devices/tech/admin/multiuser-apps.html">Multiuser Apps</a></li>
- <li><a href="<?cs var:toroot ?>devices/tech/admin/testing-setup.html">Testing Setup</a></li>
+ <li><a href="<?cs var:toroot ?>devices/tech/admin/enterprise-telephony.html">Enterprise Telephony</a></li>
+ <li><a href="<?cs var:toroot ?>devices/tech/admin/testing-provision.html">Testing Device Provisioning</a></li>
+ <li><a href="<?cs var:toroot ?>devices/tech/admin/testing-setup.html">Testing Device Administration</a></li>
+ </ul>
+ </li>
+
+ <li class="nav-section">
+ <div class="nav-section-header">
+ <a href="<?cs var:toroot ?>devices/tech/display/index.html">
+ <span class="en">Display Settings</span></a>
+ </div>
+ <ul>
+ <li><a href="<?cs var:toroot ?>devices/tech/display/dnd.html">Do Not Disturb</a></li>
+ <li><a href="<?cs var:toroot ?>devices/tech/display/multi-window.html">Multi-Window</a></li>
+ <li><a href="<?cs var:toroot ?>devices/tech/display/hdr.html">HDR Video</a></li>
</ul>
</li>
@@ -357,11 +431,10 @@
<a href="<?cs var:toroot ?>devices/tech/power/index.html"><span class="en">Power</span></a>
</div>
<ul>
- <li><a href="<?cs var:toroot ?>devices/tech/power/mgmt.html">Power Management</a>
- </li>
+ <li><a href="<?cs var:toroot ?>devices/tech/power/mgmt.html">Power Management</a></li>
+ <li><a href="<?cs var:toroot ?>devices/tech/power/performance.html">Performance Management</a></li>
<li><a href="<?cs var:toroot ?>devices/tech/power/component.html">Component Power</a></li>
- <li><a href="<?cs var:toroot ?>devices/tech/power/device.html">Device Power</a>
- </li>
+ <li><a href="<?cs var:toroot ?>devices/tech/power/device.html">Device Power</a></li>
<li><a href="<?cs var:toroot ?>devices/tech/power/values.html">Power Values</a></li>
<li><a href="<?cs var:toroot ?>devices/tech/power/batterystats.html">Battery Use</a>
</li>
diff --git a/src/devices/graphics/arch-bq-gralloc.jd b/src/devices/graphics/arch-bq-gralloc.jd
new file mode 100644
index 0000000..1bf6019
--- /dev/null
+++ b/src/devices/graphics/arch-bq-gralloc.jd
@@ -0,0 +1,141 @@
+page.title=BufferQueue and gralloc
+@jd:body
+
+<!--
+ Copyright 2014 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>Understanding the Android graphics system starts behind the scenes with
+BufferQueue and the gralloc HAL.</p>
+
+<p>The BufferQueue class is at the heart of everything graphical in Android. Its
+role is simple: Connect something that generates buffers of graphical data (the
+<em>producer</em>) to something that accepts the data for display or further
+processing (the <em>consumer</em>). Nearly everything that moves buffers of
+graphical data through the system relies on BufferQueue.</p>
+
+<p>The gralloc memory allocator performs buffer allocations and is
+implemented through a vendor-specific HAL interface (see
+<code>hardware/libhardware/include/hardware/gralloc.h</code>). The
+<code>alloc()</code> function takes expected arguments (width, height, pixel
+format) as well as a set of usage flags (detailed below).</p>
+
+<h2 id="BufferQueue">BufferQueue producers and consumers</h2>
+
+<p>Basic usage is straightforward: The producer requests a free buffer
+(<code>dequeueBuffer()</code>), specifying a set of characteristics including
+width, height, pixel format, and usage flags. The producer populates the buffer
+and returns it to the queue (<code>queueBuffer()</code>). Later, the consumer
+acquires the buffer (<code>acquireBuffer()</code>) and makes use of the buffer
+contents. When the consumer is done, it returns the buffer to the queue
+(<code>releaseBuffer()</code>).</p>
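+<p>The dequeue/queue/acquire/release cycle can be modeled with two queues. The
+following is a toy single-process model (plain Java, not the real C++
+BufferQueue) that shows buffers circulating between a free list and a filled
+queue:</p>

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy model of the BufferQueue cycle described above: buffers circulate
// between a free list (producer side) and a filled queue (consumer side).
public class BufferQueueModel {
    final BlockingQueue<int[]> free   = new ArrayBlockingQueue<>(3);
    final BlockingQueue<int[]> queued = new ArrayBlockingQueue<>(3);

    BufferQueueModel() {
        for (int i = 0; i < 3; i++) free.add(new int[4]); // pre-allocated buffers
    }

    // Producer side
    int[] dequeueBuffer() throws InterruptedException { return free.take(); }
    void queueBuffer(int[] b) throws InterruptedException { queued.put(b); }

    // Consumer side
    int[] acquireBuffer() throws InterruptedException { return queued.take(); }
    void releaseBuffer(int[] b) throws InterruptedException { free.put(b); }

    public static void main(String[] args) throws InterruptedException {
        BufferQueueModel q = new BufferQueueModel();

        int[] buf = q.dequeueBuffer(); // producer requests a free buffer
        buf[0] = 42;                   // ...fills it with "graphical data"
        q.queueBuffer(buf);            // ...and returns it to the queue

        int[] got = q.acquireBuffer(); // consumer takes the filled buffer
        System.out.println(got[0]);    // prints 42
        q.releaseBuffer(got);          // buffer goes back to the free list
    }
}
```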
+
+<p>Recent Android devices support the <em>sync framework</em>, which enables the
+system to do nifty things when combined with hardware components that can
+manipulate graphics data asynchronously. For example, a producer can submit a
+series of OpenGL ES drawing commands and then enqueue the output buffer before
+rendering completes. The buffer is accompanied by a fence that signals when the
+contents are ready. A second fence accompanies the buffer when it is returned
+to the free list, so the consumer can release the buffer while the contents are
+still in use. This approach improves latency and throughput as the buffers
+move through the system.</p>
+
+<p>Some characteristics of the queue, such as the maximum number of buffers it
+can hold, are determined jointly by the producer and the consumer. However, the
+BufferQueue is responsible for allocating buffers as it needs them. Buffers are
+retained unless the characteristics change; for example, if the producer
+requests buffers with a different size, old buffers are freed and new buffers
+are allocated on demand.</p>
+
+<p>Producers and consumers can live in different processes. Currently, the
+consumer always creates and owns the data structure. In older versions of
+Android, only the producer side was binderized (i.e., the producer could be in a
+remote process but the consumer had to live in the process where the queue was
+created). Android 4.4 and later releases moved toward a more general
+implementation.</p>
+
+<p>Buffer contents are never copied by BufferQueue (moving that much data around
+would be very inefficient). Instead, buffers are always passed by handle.</p>
+
+<h2 id="gralloc_HAL">gralloc HAL usage flags</h2>
+
+<p>The gralloc allocator is not just another way to allocate memory on the
+native heap; in some situations, the allocated memory may not be cache-coherent
+or could be totally inaccessible from user space. The nature of the allocation
+is determined by the usage flags, which include attributes such as:</p>
+
+<ul>
+<li>How often the memory will be accessed from software (CPU)</li>
+<li>How often the memory will be accessed from hardware (GPU)</li>
+<li>Whether the memory will be used as an OpenGL ES (GLES) texture</li>
+<li>Whether the memory will be used by a video encoder</li>
+</ul>
+
+<p>For example, if your format specifies RGBA 8888 pixels, and you indicate the
+buffer will be accessed from software (meaning your application will touch
+pixels directly) then the allocator must create a buffer with 4 bytes per pixel
+in R-G-B-A order. If, instead, you say the buffer will only be accessed from
+hardware and as a GLES texture, the allocator can do anything the GLES driver
+wants—BGRA ordering, non-linear swizzled layouts, alternative color
+formats, etc. Allowing the hardware to use its preferred format can improve
+performance.</p>
+
+<p>Some values cannot be combined on certain platforms. For example, the video
+encoder flag may require YUV pixels, so adding software access and specifying
+RGBA 8888 would fail.</p>
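A check like that can be sketched as a bitwise test. The flag names and the rejected combination below are hypothetical stand-ins for illustration, not actual gralloc constants:

```java
// Illustrative sketch of a usage-flag compatibility check. The flag names and
// the rejected combination are hypothetical, not real gralloc values.
public class UsageFlags {
    public static final int SW_READ_OFTEN = 1 << 0; // CPU will touch pixels
    public static final int HW_TEXTURE    = 1 << 1; // used as a GLES texture
    public static final int VIDEO_ENCODER = 1 << 2; // fed to a video encoder

    // Pretend this platform's encoder requires an opaque YUV layout, so
    // direct software access to an RGBA 8888 buffer is disallowed.
    public static boolean allocationAllowed(int usage, boolean rgba8888) {
        boolean wantsEncoder = (usage & VIDEO_ENCODER) != 0;
        boolean wantsSwAccess = (usage & SW_READ_OFTEN) != 0;
        return !(wantsEncoder && wantsSwAccess && rgba8888);
    }
}
```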
+
+<p>The handle returned by the gralloc allocator can be passed between processes
+through Binder.</p>
+
+<h2 id=tracking>Tracking BufferQueue with systrace</h2>
+
+<p>To really understand how graphics buffers move around, use systrace. The
+system-level graphics code is well instrumented, as is much of the relevant app
+framework code.</p>
+
+<p>A full description of how to use systrace effectively would fill a rather
+long document. Start by enabling the <code>gfx</code>, <code>view</code>, and
+<code>sched</code> tags. You'll also see BufferQueues in the trace. If you've
+used systrace before, you've probably seen them but maybe weren't sure what they
+were. As an example, if you grab a trace while
+<a href="https://github.com/google/grafika">Grafika's</a> "Play video
+(SurfaceView)" is running, the row labeled <em>SurfaceView</em> tells you how
+many buffers were queued up at any given time.</p>
+
+<p>The value increments while the app is active—triggering the rendering
+of frames by the MediaCodec decoder—and decrements while SurfaceFlinger is
+doing work, consuming buffers. When showing video at 30fps, the queue's value
+varies from 0 to 1 because the ~60fps display can easily keep up with the
+source. (Notice also that SurfaceFlinger only wakes when there's work to
+be done, not 60 times per second. The system tries very hard to avoid work and
+will disable VSYNC entirely if nothing is updating the screen.)</p>
+
+<p>If you switch to Grafika's "Play video (TextureView)" and grab a new trace,
+you'll see a row labeled
+<code>com.android.grafika/com.android.grafika.PlayMovieActivity</code>. This is the main UI
+layer, which is just another BufferQueue. Because TextureView renders into the
+UI layer (rather than a separate layer), you'll see all of the video-driven
+updates here.</p>
+
+<p>For more information about the systrace tool, refer to <a
+href="http://developer.android.com/tools/help/systrace.html">Systrace
+documentation</a>.</p>
diff --git a/src/devices/graphics/arch-egl-opengl.jd b/src/devices/graphics/arch-egl-opengl.jd
new file mode 100644
index 0000000..97ca18e
--- /dev/null
+++ b/src/devices/graphics/arch-egl-opengl.jd
@@ -0,0 +1,88 @@
+page.title=EGLSurfaces and OpenGL ES
+@jd:body
+
+<!--
+ Copyright 2014 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>OpenGL ES defines an API for rendering graphics. It does not define a windowing
+system. To allow GLES to work on a variety of platforms, it is designed to be
+combined with a library that knows how to create and access windows through the
+operating system. The library used for Android is called EGL. If you want to
+draw textured polygons, you use GLES calls; if you want to put your rendering on
+the screen, you use EGL calls.</p>
+
+<p>Before you can do anything with GLES, you need to create a GL context. In EGL,
+this means creating an EGLContext and an EGLSurface. GLES operations apply to
+the current context, which is accessed through thread-local storage rather than
+passed around as an argument. This means you have to be careful about which
+thread your rendering code executes on, and which context is current on that
+thread.</p>
+
+<h2 id=egl_surface>EGLSurfaces</h2>
+
+<p>The EGLSurface can be an off-screen buffer allocated by EGL (called a "pbuffer")
+or a window allocated by the operating system. EGL window surfaces are created
+with the <code>eglCreateWindowSurface()</code> call. It takes a "window object" as an
+argument, which on Android can be a SurfaceView, a SurfaceTexture, a
+SurfaceHolder, or a Surface -- all of which have a BufferQueue underneath. When
+you make this call, EGL creates a new EGLSurface object, and connects it to the
+producer interface of the window object's BufferQueue. From that point onward,
+rendering to that EGLSurface results in a buffer being dequeued, rendered into,
+and queued for use by the consumer. (The term "window" is indicative of the
+expected use, but bear in mind the output might not be destined to appear
+on the display.)</p>
+
+<p>EGL does not provide lock/unlock calls. Instead, you issue drawing commands and
+then call <code>eglSwapBuffers()</code> to submit the current frame. The
+method name comes from the traditional swap of front and back buffers, but the actual
+implementation may be very different.</p>
+
+<p>Only one EGLSurface can be associated with a Surface at a time -- you can have
+only one producer connected to a BufferQueue -- but if you destroy the
+EGLSurface it will disconnect from the BufferQueue and allow something else to
+connect.</p>
+
+<p>A given thread can switch between multiple EGLSurfaces by changing what's
+"current." An EGLSurface must be current on only one thread at a time.</p>
+
+<p>The most common mistake when thinking about EGLSurface is assuming that it is
+just another aspect of Surface (like SurfaceHolder). It's a related but
+independent concept. You can draw on an EGLSurface that isn't backed by a
+Surface, and you can use a Surface without EGL. EGLSurface just gives GLES a
+place to draw.</p>
+
+<h2 id="anativewindow">ANativeWindow</h2>
+
+<p>The public Surface class is implemented in the Java programming language. The
+equivalent in C/C++ is the ANativeWindow class, semi-exposed by the <a
+href="https://developer.android.com/tools/sdk/ndk/index.html">Android NDK</a>. You
+can get the ANativeWindow from a Surface with the <code>ANativeWindow_fromSurface()</code>
+call. Just like its Java-language cousin, you can lock it, render in software,
+and unlock-and-post.</p>
+
+<p>To create an EGL window surface from native code, you pass an instance of
+EGLNativeWindowType to <code>eglCreateWindowSurface()</code>. EGLNativeWindowType is just
+a synonym for ANativeWindow, so you can freely cast one to the other.</p>
+
+<p>The fact that the basic "native window" type just wraps the producer side of a
+BufferQueue should not come as a surprise.</p>
diff --git a/src/devices/graphics/arch-gameloops.jd b/src/devices/graphics/arch-gameloops.jd
new file mode 100644
index 0000000..bca4acd
--- /dev/null
+++ b/src/devices/graphics/arch-gameloops.jd
@@ -0,0 +1,155 @@
+page.title=Game Loops
+@jd:body
+
+<!--
+ Copyright 2014 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>A very popular way to implement a game loop looks like this:</p>
+
+<pre>
+while (playing) {
+ advance state by one frame
+ render the new frame
+ sleep until it’s time to do the next frame
+}
+</pre>
+
+<p>There are a few problems with this, the most fundamental being the idea that the
+game can define what a "frame" is. Different displays will refresh at different
+rates, and that rate may vary over time. If you generate frames faster than the
+display can show them, you will have to drop one occasionally. If you generate
+them too slowly, SurfaceFlinger will periodically fail to find a new buffer to
+acquire and will re-show the previous frame. Both of these situations can
+cause visible glitches.</p>
+
+<p>What you need to do is match the display's frame rate, and advance game state
+according to how much time has elapsed since the previous frame. There are two
+ways to go about this: (1) stuff the BufferQueue full and rely on the "swap
+buffers" back-pressure; (2) use Choreographer (API 16+).</p>
+
+<h2 id=stuffing>Queue stuffing</h2>
+
+<p>This is very easy to implement: just swap buffers as fast as you can. In early
+versions of Android this could actually result in a penalty where
+<code>SurfaceView#lockCanvas()</code> would put you to sleep for 100ms. Now
+it's paced by the BufferQueue, and the BufferQueue is emptied as quickly as
+SurfaceFlinger is able.</p>
+
+<p>One example of this approach can be seen in <a
+href="https://code.google.com/p/android-breakout/">Android Breakout</a>. It
+uses GLSurfaceView, which runs in a loop that calls the application's
+<code>onDrawFrame()</code> callback and then swaps the buffer. If the BufferQueue is full,
+the <code>eglSwapBuffers()</code> call will wait until a buffer is available.
+Buffers become available when SurfaceFlinger releases them, which it does after
+acquiring a new one for display. Because this happens on VSYNC, your draw loop
+timing will match the refresh rate. Mostly.</p>
+
+<p>There are a couple of problems with this approach. First, the app is tied to
+SurfaceFlinger activity, which is going to take different amounts of time
+depending on how much work there is to do and whether it's fighting for CPU time
+with other processes. Since your game state advances according to the time
+between buffer swaps, your animation won't update at a consistent rate. When
+running at 60fps with the inconsistencies averaged out over time, though, you
+probably won't notice the bumps.</p>
+
+<p>Second, the first couple of buffer swaps are going to happen very quickly
+because the BufferQueue isn't full yet. The computed time between frames will
+be near zero, so the game will generate a few frames in which nothing happens.
+In a game like Breakout, which updates the screen on every refresh, the queue is
+always full except when a game is first starting (or un-paused), so the effect
+isn't noticeable. A game that pauses animation occasionally and then returns to
+as-fast-as-possible mode might see odd hiccups.</p>
+
+<h2 id=choreographer>Choreographer</h2>
+
+<p>Choreographer allows you to set a callback that fires on the next VSYNC. The
+actual VSYNC time is passed in as an argument. So even if your app doesn't wake
+up right away, you still have an accurate picture of when the display refresh
+period began. Using this value, rather than the current time, yields a
+consistent time source for your game state update logic.</p>
+
+<p>Unfortunately, the fact that you get a callback after every VSYNC does not
+guarantee that your callback will be executed in a timely fashion or that you
+will be able to act upon it sufficiently swiftly. Your app will need to detect
+situations where it's falling behind and drop frames manually.</p>
+
+<p>The "Record GL app" activity in Grafika provides an example of this. On some
+devices (e.g. Nexus 4 and Nexus 5), the activity will start dropping frames if
+you just sit and watch. The GL rendering is trivial, but occasionally the View
+elements get redrawn, and the measure/layout pass can take a very long time if
+the device has dropped into a reduced-power mode. (According to systrace, it
+takes 28ms instead of 6ms after the clocks slow on Android 4.4. If you drag
+your finger around the screen, it thinks you're interacting with the activity,
+so the clock speeds stay high and you'll never drop a frame.)</p>
+
+<p>The simple fix was to drop a frame in the Choreographer callback if the current
+time is more than N milliseconds after the VSYNC time. Ideally the value of N
+is determined based on previously observed VSYNC intervals. For example, if the
+refresh period is 16.7ms (60fps), you might drop a frame if you're running more
+than 15ms late.</p>
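The drop decision itself is tiny; here is a sketch (the class and method names are illustrative, and the thresholds come from the example above):

```java
// Sketch of the frame-drop decision made inside a Choreographer callback.
// Times are in nanoseconds, matching Choreographer's frameTimeNanos argument.
public class FrameDropPolicy {
    // Drop the frame if we woke up more than maxLatenessNanos after VSYNC.
    public static boolean shouldDrop(long vsyncTimeNanos, long nowNanos,
                                     long maxLatenessNanos) {
        return nowNanos - vsyncTimeNanos > maxLatenessNanos;
    }
}
```

With a 16.7ms refresh period and a 15ms threshold, waking 6ms after VSYNC renders normally while waking 28ms late drops the frame.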
+
+<p>If you watch "Record GL app" run, you will see the dropped-frame counter
+increase, and even see a flash of red in the border when frames drop. Unless
+your eyes are very good, though, you won't see the animation stutter. At 60fps,
+the app can drop the occasional frame without anyone noticing so long as the
+animation continues to advance at a constant rate. How much you can get away
+with depends to some extent on what you're drawing, the characteristics of the
+display, and how good the person using the app is at detecting jank.</p>
+
+<h2 id=thread>Thread management</h2>
+
+<p>Generally speaking, if you're rendering onto a SurfaceView, GLSurfaceView, or
+TextureView, you want to do that rendering in a dedicated thread. Never do any
+"heavy lifting" or anything that takes an indeterminate amount of time on the
+UI thread.</p>
+
+<p>Breakout and "Record GL app" use dedicated renderer threads, and they also
+update animation state on that thread. This is a reasonable approach so long as
+game state can be updated quickly.</p>
+
+<p>Other games separate the game logic and rendering completely. If you had a
+simple game that did nothing but move a block every 100ms, you could have a
+dedicated thread that just did this:</p>
+
+<pre>
+ run() {
+ Thread.sleep(100);
+ synchronized (mLock) {
+ moveBlock();
+ }
+ }
+</pre>
+
+<p>(You may want to base the sleep time off of a fixed clock to prevent drift --
+<code>sleep()</code> isn't perfectly consistent, and <code>moveBlock()</code>
+takes a nonzero amount of time -- but you get the idea.)</p>
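A drift-free variant derives every wake-up time from a fixed start time rather than sleeping a fixed interval, so oversleeping one tick does not push back every later tick (a sketch; the names are illustrative):

```java
// Sketch of drift-free scheduling: each tick's deadline is computed from the
// start time, so lateness on one tick does not accumulate into the next.
public class FixedClockTicker {
    // Returns the absolute time (ms) at which tick N should fire.
    public static long tickDeadline(long startMillis, long periodMillis, long tick) {
        return startMillis + periodMillis * tick;
    }

    // How long to sleep right now to hit the next deadline (never negative).
    public static long sleepFor(long nowMillis, long deadlineMillis) {
        return Math.max(0, deadlineMillis - nowMillis);
    }
}
```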
+
+<p>When the draw code wakes up, it just grabs the lock, gets the current position
+of the block, releases the lock, and draws. Instead of doing fractional
+movement based on inter-frame delta times, you just have one thread that moves
+things along and another thread that draws things wherever they happen to be
+when the drawing starts.</p>
+
+<p>For a scene with any complexity you'd want to create a list of upcoming events
+sorted by wake time, and sleep until the next event is due, but it's the same
+idea.</p>
diff --git a/src/devices/graphics/arch-sf-hwc.jd b/src/devices/graphics/arch-sf-hwc.jd
new file mode 100644
index 0000000..d6749c7
--- /dev/null
+++ b/src/devices/graphics/arch-sf-hwc.jd
@@ -0,0 +1,203 @@
+page.title=SurfaceFlinger and Hardware Composer
+@jd:body
+
+<!--
+ Copyright 2014 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>Having buffers of graphical data is wonderful, but life is even better when
+you get to see them on your device's screen. That's where SurfaceFlinger and the
+Hardware Composer HAL come in.</p>
+
+
+<h2 id=surfaceflinger>SurfaceFlinger</h2>
+
+<p>SurfaceFlinger's role is to accept buffers of data from multiple sources,
+composite them, and send them to the display. Once upon a time this was done
+with software blitting to a hardware framebuffer (e.g.
+<code>/dev/graphics/fb0</code>), but those days are long gone.</p>
+
+<p>When an app comes to the foreground, the WindowManager service asks
+SurfaceFlinger for a drawing surface. SurfaceFlinger creates a layer (the
+primary component of which is a BufferQueue) for which SurfaceFlinger acts as
+the consumer. A Binder object for the producer side is passed through the
+WindowManager to the app, which can then start sending frames directly to
+SurfaceFlinger.</p>
+
+<p class="note"><strong>Note:</strong> While this section uses SurfaceFlinger
+terminology, WindowManager uses the term <em>window</em> instead of
+<em>layer</em>…and uses layer to mean something else. (It can be argued
+that SurfaceFlinger should really be called LayerFlinger.)</p>
+
+<p>Most applications have three layers on screen at any time: the status bar at
+the top of the screen, the navigation bar at the bottom or side, and the
+application UI. Some apps have more, some fewer (e.g. the default home app has a
+separate layer for the wallpaper, while a full-screen game might hide the status
+bar). Each layer can be updated independently. The status and navigation bars
+are rendered by a system process, while the app layers are rendered by the app,
+with no coordination between the two.</p>
+
+<p>Device displays refresh at a certain rate, typically 60 frames per second on
+phones and tablets. If the display contents are updated mid-refresh, tearing
+will be visible; so it's important to update the contents only between cycles.
+The system receives a signal from the display when it's safe to update the
+contents. For historical reasons we'll call this the VSYNC signal.</p>
+
+<p>The refresh rate may vary over time, e.g. some mobile devices will range from 58
+to 62fps depending on current conditions. For an HDMI-attached television, this
+could theoretically dip to 24 or 48Hz to match a video. Because we can update
+the screen only once per refresh cycle, submitting buffers for display at 200fps
+would be a waste of effort as most of the frames would never be seen. Instead of
+taking action whenever an app submits a buffer, SurfaceFlinger wakes up when the
+display is ready for something new.</p>
+
+<p>When the VSYNC signal arrives, SurfaceFlinger walks through its list of
+layers looking for new buffers. If it finds a new one, it acquires it; if not,
+it continues to use the previously-acquired buffer. SurfaceFlinger always wants
+to have something to display, so it will hang on to one buffer. If no buffers
+have ever been submitted on a layer, the layer is ignored.</p>
+
+<p>After SurfaceFlinger has collected all buffers for visible layers, it asks
+the Hardware Composer how composition should be performed.</p>
+
+<h2 id=hwc>Hardware Composer</h2>
+
+<p>The Hardware Composer HAL (HWC) was introduced in Android 3.0 and has evolved
+steadily over the years. Its primary purpose is to determine the most efficient
+way to composite buffers with the available hardware. As a HAL, its
+implementation is device-specific and usually done by the display hardware OEM.</p>
+
+<p>The value of this approach is easy to recognize when you consider <em>overlay
+planes</em>, the purpose of which is to composite multiple buffers together in
+the display hardware rather than the GPU. For example, consider a typical
+Android phone in portrait orientation, with the status bar on top, navigation
+bar at the bottom, and app content everywhere else. The contents for each layer
+are in separate buffers. You could handle composition using either of the
+following methods:</p>
+
+<ul>
+<li>Rendering the app content into a scratch buffer, then rendering the status
+bar over it, the navigation bar on top of that, and finally passing the scratch
+buffer to the display hardware.</li>
+<li>Passing all three buffers to the display hardware and telling it to read data
+from different buffers for different parts of the screen.</li>
+</ul>
+
+<p>The latter approach can be significantly more efficient.</p>
+
+<p>Display processor capabilities vary significantly. The number of overlays,
+whether layers can be rotated or blended, and restrictions on positioning and
+overlap can be difficult to express through an API. The HWC attempts to
+accommodate such diversity through a series of decisions:</p>
+
+<ol>
+<li>SurfaceFlinger provides HWC with a full list of layers and asks, "How do
+you want to handle this?"</li>
+<li>HWC responds by marking each layer as overlay or GLES composition.</li>
+<li>SurfaceFlinger takes care of any GLES composition, passing the output buffer
+to HWC, and lets HWC handle the rest.</li>
+</ol>
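The negotiation above can be sketched as a trivial assignment pass. Real HWC implementations also weigh rotation, blending, and bandwidth; this sketch models only a fixed plane count:

```java
// Sketch of the HWC's per-layer decision: claim overlay planes until they run
// out, then fall back to GLES composition for the remaining layers. Real
// implementations consider much more than the plane count.
public class HwcSketch {
    public enum Composition { OVERLAY, GLES }

    public static Composition[] chooseComposition(int layerCount, int overlayPlanes) {
        Composition[] result = new Composition[layerCount];
        for (int i = 0; i < layerCount; i++) {
            result[i] = (i < overlayPlanes) ? Composition.OVERLAY : Composition.GLES;
        }
        return result;
    }
}
```

With the typical four overlay planes, a six-layer scene sends four layers to overlays and two to GLES composition.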
+
+<p>Since hardware vendors can custom tailor decision-making code, it's possible
+to get the best performance out of every device.</p>
+
+<p>Overlay planes may be less efficient than GL composition when nothing on the
+screen is changing. This is particularly true when overlay contents have
+transparent pixels and overlapping layers are blended together. In such cases,
+the HWC can choose to request GLES composition for some or all layers and retain
+the composited buffer. If SurfaceFlinger comes back asking to composite the same
+set of buffers, the HWC can continue to show the previously-composited scratch
+buffer. This can improve the battery life of an idle device.</p>
+
+<p>Devices running Android 4.4 and later typically support four overlay planes.
+Attempting to composite more layers than overlays causes the system to use GLES
+composition for some of them, meaning the number of layers used by an app can
+have a measurable impact on power consumption and performance.</p>
+
+<h2 id=virtual-displays>Virtual displays</h2>
+
+<p>SurfaceFlinger supports a primary display (i.e. what's built into your phone
+or tablet), an external display (such as a television connected through HDMI),
+and one or more virtual displays that make composited output available within
+the system. Virtual displays can be used to record the screen or send it over a
+network.</p>
+
+<p>Virtual displays may share the same set of layers as the main display
+(the layer stack) or have their own set. There is no VSYNC for a virtual display,
+so the VSYNC for the primary display is used to trigger composition for all
+displays.</p>
+
+<p>In older versions of Android, virtual displays were always composited with
+GLES and the Hardware Composer managed composition for the primary display only.
+In Android 4.4, the Hardware Composer gained the ability to participate in
+virtual display composition.</p>
+
+<p>As you might expect, frames generated for a virtual display are written to a
+BufferQueue.</p>
+
+<h2 id=screenrecord>Case Study: screenrecord</h2>
+
+<p>The <a href="https://android.googlesource.com/platform/frameworks/av/+/marshmallow-release/cmds/screenrecord/">screenrecord
+command</a> allows you to record everything that appears on the screen as an
+.mp4 file on disk. To implement this, we have to receive composited frames from
+SurfaceFlinger, write them to the video encoder, and then write the encoded
+video data to a file. The video codecs are managed by a separate process
+(mediaserver) so we have to move large graphics buffers around the system. To
+make it more challenging, we're trying to record 60fps video at full resolution.
+The key to making this work efficiently is BufferQueue.</p>
+
+<p>The MediaCodec class allows an app to provide data as raw bytes in buffers,
+or through a <a href="{@docRoot}devices/graphics/arch-sh.html">Surface</a>. When
+screenrecord requests access to a video encoder, mediaserver creates a
+BufferQueue, connects itself to the consumer side, then passes the producer
+side back to screenrecord as a Surface.</p>
+
+<p>The screenrecord command then asks SurfaceFlinger to create a virtual display
+that mirrors the main display (i.e. it has all of the same layers), and directs
+it to send output to the Surface that came from mediaserver. In this case,
+SurfaceFlinger is the producer of buffers rather than the consumer.</p>
+
+<p>After the configuration is complete, screenrecord waits for encoded data to
+appear. As apps draw, their buffers travel to SurfaceFlinger, which composites
+them into a single buffer that gets sent directly to the video encoder in
+mediaserver. The full frames are never even seen by the screenrecord process.
+Internally, mediaserver has its own way of moving buffers around that also
+passes data by handle, minimizing overhead.</p>
+
+<h2 id=simulate-secondary>Case Study: Simulate secondary displays</h2>
+
+<p>The WindowManager can ask SurfaceFlinger to create a visible layer for which
+SurfaceFlinger acts as the BufferQueue consumer. It's also possible to ask
+SurfaceFlinger to create a virtual display, for which SurfaceFlinger acts as
+the BufferQueue producer. What happens if you connect them, configuring a
+virtual display that renders to a visible layer?</p>
+
+<p>You create a closed loop, where the composited screen appears in a window.
+That window is now part of the composited output, so on the next refresh
+the composited image inside the window will show the window contents as well
+(and then it's
+<a href="https://en.wikipedia.org/wiki/Turtles_all_the_way_down">turtles all the
+way down</a>). To see this in action, enable
+<a href="http://developer.android.com/tools/index.html">Developer options</a> in
+settings, select <strong>Simulate secondary displays</strong>, and enable a
+window. For bonus points, use screenrecord to capture the act of enabling the
+display then play it back frame-by-frame.</p>
diff --git a/src/devices/graphics/arch-sh.jd b/src/devices/graphics/arch-sh.jd
new file mode 100644
index 0000000..2ef6c3c
--- /dev/null
+++ b/src/devices/graphics/arch-sh.jd
@@ -0,0 +1,105 @@
+page.title=Surface and SurfaceHolder
+@jd:body
+
+<!--
+ Copyright 2014 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>The
+<a href="http://developer.android.com/reference/android/view/Surface.html">Surface</a>
+class has been part of the public API since 1.0. Its description simply says,
+"Handle onto a raw buffer that is being managed by the screen compositor." The
+statement was accurate when initially written but falls well short of the mark
+on a modern system.</p>
+
+<p>The Surface represents the producer side of a buffer queue that is often (but
+not always!) consumed by SurfaceFlinger. When you render onto a Surface, the
+result ends up in a buffer that gets shipped to the consumer. A Surface is not
+simply a raw chunk of memory you can scribble on.</p>
+
+<p>The BufferQueue for a display Surface is typically configured for
+triple-buffering, but buffers are allocated on demand. So if the producer
+generates buffers slowly enough -- maybe it's animating at 30fps on a 60fps
+display -- there might only be two allocated buffers in the queue. This helps
+minimize memory consumption. You can see a summary of the buffers associated
+with every layer in the <code>dumpsys SurfaceFlinger</code> output.</p>
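The effect of on-demand allocation can be sketched by tracking the high-water mark of buffers in flight (illustrative only; actual counts depend on consumer behavior):

```java
// Sketch: buffers are allocated lazily, so the number allocated equals the
// high-water mark of buffers simultaneously in flight, capped at the
// configured maximum rather than always reaching it.
public class LazyAllocation {
    // inFlightPerFrame[i] = buffers outstanding during frame i.
    public static int buffersAllocated(int maxBuffers, int[] inFlightPerFrame) {
        int highWater = 0;
        for (int n : inFlightPerFrame) highWater = Math.max(highWater, n);
        return Math.min(maxBuffers, highWater);
    }
}
```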
+
+<h2 id="canvas">Canvas Rendering</h2>
+
+<p>Once upon a time, all rendering was done in software, and you can still do this
+today. The low-level implementation is provided by the Skia graphics library.
+If you want to draw a rectangle, you make a library call, and it sets bytes in a
+buffer appropriately. To ensure that a buffer isn't updated by two clients at
+once, or written to while being displayed, you have to lock the buffer to access
+it. <code>lockCanvas()</code> locks the buffer and returns a Canvas to use for drawing,
+and <code>unlockCanvasAndPost()</code> unlocks the buffer and sends it to the compositor.</p>
+
+<p>As time went on, and devices with general-purpose 3D engines appeared, Android
+reoriented itself around OpenGL ES. However, it was important to keep the old
+API working, for apps as well as app framework code, so an effort was made to
+hardware-accelerate the Canvas API. As you can see from the charts on the
+<a href="http://developer.android.com/guide/topics/graphics/hardware-accel.html">Hardware
+Acceleration</a>
+page, this was a bit of a bumpy ride. Note in particular that while the Canvas
+provided to a View's <code>onDraw()</code> method may be hardware-accelerated, the Canvas
+obtained when an app locks a Surface directly with <code>lockCanvas()</code> never is.</p>
+
+<p>When you lock a Surface for Canvas access, the "CPU renderer" connects to the
+producer side of the BufferQueue and does not disconnect until the Surface is
+destroyed. Most other producers (like GLES) can be disconnected and reconnected
+to a Surface, but the Canvas-based "CPU renderer" cannot. This means you can't
+draw on a surface with GLES or send it frames from a video decoder if you've
+ever locked it for a Canvas.</p>
+
+<p>The first time the producer requests a buffer from a BufferQueue, it is
+allocated and initialized to zeroes. Initialization is necessary to avoid
+inadvertently sharing data between processes. When you re-use a buffer,
+however, the previous contents will still be present. If you repeatedly call
+<code>lockCanvas()</code> and <code>unlockCanvasAndPost()</code> without
+drawing anything, you'll cycle between previously-rendered frames.</p>
+
+<p>The Surface lock/unlock code keeps a reference to the previously-rendered
+buffer. If you specify a dirty region when locking the Surface, it will copy
+the non-dirty pixels from the previous buffer. There's a fair chance the buffer
+will be handled by SurfaceFlinger or HWC, but since we only need to read from
+it, there's no need to wait for exclusive access.</p>
+
+<p>The main non-Canvas way for an application to draw directly on a Surface is
+through OpenGL ES. That's described on the
+<a href="{@docRoot}devices/graphics/arch-egl-opengl.html">EGLSurfaces and
+OpenGL ES</a> page.</p>
+
+<h2 id="surfaceholder">SurfaceHolder</h2>
+
+<p>Some things that work with Surfaces want a SurfaceHolder, notably SurfaceView.
+The original idea was that Surface represented the raw compositor-managed
+buffer, while SurfaceHolder was managed by the app and kept track of
+higher-level information like the dimensions and format. The Java-language
+definition mirrors the underlying native implementation. It's arguably no
+longer useful to split it this way, but it has long been part of the public API.</p>
+
+<p>Generally speaking, anything having to do with a View will involve a
+SurfaceHolder. Some other APIs, such as MediaCodec, will operate on the Surface
+itself. You can easily get the Surface from the SurfaceHolder, so hang on to
+the latter when you have it.</p>
+
+<p>APIs to get and set Surface parameters, such as the size and format, are
+implemented through SurfaceHolder.</p>
diff --git a/src/devices/graphics/arch-st.jd b/src/devices/graphics/arch-st.jd
new file mode 100644
index 0000000..5bdcb92
--- /dev/null
+++ b/src/devices/graphics/arch-st.jd
@@ -0,0 +1,206 @@
+page.title=SurfaceTexture
+@jd:body
+
+<!--
+ Copyright 2014 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+
+<p>The SurfaceTexture class was introduced in Android 3.0. Just as SurfaceView
+is the combination of a Surface and a View, SurfaceTexture is a rough
+combination of a Surface and a GLES texture (with a few caveats).</p>
+
+<p>When you create a SurfaceTexture, you are creating a BufferQueue for which
+your app is the consumer. When a new buffer is queued by the producer, your app
+is notified via callback (<code>onFrameAvailable()</code>). Your app calls
+<code>updateTexImage()</code>, which releases the previously-held buffer,
+acquires the new buffer from the queue, and makes some EGL calls to make the
+buffer available to GLES as an external texture.</p>
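+
+<p>The consumer-side flow above might be set up like this (a sketch;
+<code>texId</code> and <code>requestRender()</code> are hypothetical names, and
+EGL thread management is elided):</p>
+
```java
// Illustrative consumer setup: the app receives camera/decoder frames as a
// GLES external texture. "texId" is a GL_TEXTURE_EXTERNAL_OES texture name
// created on the thread that owns the EGL context.
SurfaceTexture st = new SurfaceTexture(texId);
st.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture surfaceTexture) {
        // The producer queued a buffer; wake the EGL thread to consume it.
        requestRender();   // hypothetical helper
    }
});

// Later, on the EGL thread:
st.updateTexImage();   // release old buffer, acquire new, bind to the texture
```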
+
+
+<h2 id=ext_texture>External textures</h2>
+<p>External textures (<code>GL_TEXTURE_EXTERNAL_OES</code>) are not quite the
+same as textures created by GLES (<code>GL_TEXTURE_2D</code>): You have to
+configure your renderer a bit differently, and there are things you can't do
+with them. The key point is that you can render textured polygons directly
+from the data received by your BufferQueue. gralloc supports a wide variety of
+formats, so we need to guarantee the format of the data in the buffer is
+something GLES can recognize. To do so, when SurfaceTexture creates the
+BufferQueue, it sets the consumer usage flags to
+<code>GRALLOC_USAGE_HW_TEXTURE</code>, ensuring that any buffer created by
+gralloc will be usable by GLES.</p>
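+
+<p>On the GLES side, using such a buffer means binding the external texture
+target rather than <code>GL_TEXTURE_2D</code>; a sketch:</p>
+
```java
// External textures bind to GL_TEXTURE_EXTERNAL_OES (from GLES11Ext), and
// the fragment shader must sample them through samplerExternalOES.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
// The fragment shader needs:
//   #extension GL_OES_EGL_image_external : require
//   uniform samplerExternalOES sTexture;
```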
+
+<p>Because SurfaceTexture interacts with an EGL context, you must be careful to
+call its methods from the correct thread (as detailed in the class
+documentation).</p>
+
+<h2 id=time_transforms>Timestamps and transformations</h2>
+<p>If you look deeper into the class documentation, you will see a couple of odd
+calls. One call retrieves a timestamp, the other a transformation matrix, the
+value of each having been set by the previous call to
+<code>updateTexImage()</code>. It turns out that BufferQueue passes more than
+just a buffer handle to the consumer. Each buffer is accompanied by a timestamp
+and transformation parameters.</p>
+
+<p>The transformation is provided for efficiency. In some cases, the source data
+might be in the incorrect orientation for the consumer; but instead of rotating
+the data before sending it, we can send the data in its current orientation with
+a transform that corrects it. The transformation matrix can be merged with other
+transformations at the point the data is used, minimizing overhead.</p>
+
+<p>The timestamp is useful for certain buffer sources. For example, suppose you
+connect the producer interface to the output of the camera (with
+<code>setPreviewTexture()</code>). To create a video, you need to set the
+presentation timestamp for each frame; but you want to base that on the time
+when the frame was captured, not the time when the buffer was received by your
+app. The timestamp provided with the buffer is set by the camera code, resulting
+in a more consistent series of timestamps.</p>
+
+<h2 id=surfacet>SurfaceTexture and Surface</h2>
+
+<p>If you look closely at the API you'll see the only way for an application
+to create a plain Surface is through a constructor that takes a SurfaceTexture
+as the sole argument. (Prior to API 11, there was no public constructor for
+Surface at all.) This might seem a bit backward if you view SurfaceTexture as a
+combination of a Surface and a texture.</p>
+
+<p>Under the hood, SurfaceTexture is called GLConsumer, which more accurately
+reflects its role as the owner and consumer of a BufferQueue. When you create a
+Surface from a SurfaceTexture, what you're doing is creating an object that
+represents the producer side of the SurfaceTexture's BufferQueue.</p>
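+
+<p>In code, that relationship looks like this (a sketch; <code>texId</code> is
+assumed to be a valid GLES texture name created elsewhere):</p>
+
```java
// The only public way (API 11+) to make a standalone Surface: wrap the
// producer side of a SurfaceTexture's BufferQueue.
SurfaceTexture st = new SurfaceTexture(texId);   // app is the consumer
Surface producerSide = new Surface(st);          // hand this to a producer,
                                                 // e.g. Camera or MediaCodec
```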
+
+<h2 id=continuous_capture>Case Study: Grafika's continuous capture</h2>
+
+<p>The camera can provide a stream of frames suitable for recording as a movie.
+To display it on screen, you create a SurfaceView, pass the Surface to
+<code>setPreviewDisplay()</code>, and let the producer (camera) and consumer
+(SurfaceFlinger) do all the work. To record the video, you create a Surface with
+MediaCodec's <code>createInputSurface()</code>, pass that to the camera, and
+again sit back and relax. To show and record the video at the same time, you have
+to get more involved.</p>
+
+<p>The <em>continuous capture</em> activity displays video from the camera as
+the video is being recorded. In this case, encoded video is written to a
+circular buffer in memory that can be saved to disk at any time. It's
+straightforward to implement so long as you keep track of where everything is.
+</p>
+
+<p>This flow involves three BufferQueues: one created by the app, one created by
+SurfaceFlinger, and one created by mediaserver:</p>
+<ul>
+<li><strong>Application</strong>. The app uses a SurfaceTexture to receive
+frames from Camera, converting them to an external GLES texture.</li>
+<li><strong>SurfaceFlinger</strong>. The app declares a SurfaceView, which we
+use to display the frames.</li>
+<li><strong>MediaServer</strong>. You configure a MediaCodec encoder with an
+input Surface to create the video.</li>
+</ul>
+
+<img src="images/continuous_capture_activity.png" alt="Grafika continuous
+capture activity" />
+
+<p class="img-caption"><strong>Figure 1.</strong> Grafika's continuous capture
+activity. Arrows indicate data propagation from the camera and BufferQueues are
+in color (producers are teal, consumers are green).</p>
+
+<p>Encoded H.264 video goes to a circular buffer in RAM in the app process, and
+is written to an MP4 file on disk using the MediaMuxer class when the capture
+button is hit.</p>
+
+<p>All three of the BufferQueues are handled with a single EGL context in the
+app, and the GLES operations are performed on the UI thread. Doing the
+SurfaceView rendering on the UI thread is generally discouraged, but since we're
+doing simple operations that are handled asynchronously by the GLES driver we
+should be fine. (If the video encoder locks up and we block trying to dequeue a
+buffer, the app will become unresponsive. But at that point, we're probably
+failing anyway.) The handling of the encoded data -- managing the circular
+buffer and writing it to disk -- is performed on a separate thread.</p>
+
+<p>The bulk of the configuration happens in the SurfaceView's <code>surfaceCreated()</code>
+callback. The EGLContext is created, and EGLSurfaces are created for the
+display and for the video encoder. When a new frame arrives, we tell
+SurfaceTexture to acquire it and make it available as a GLES texture, then
+render it with GLES commands on each EGLSurface (forwarding the transform and
+timestamp from SurfaceTexture). The encoder thread pulls the encoded output
+from MediaCodec and stashes it in memory.</p>
+
+<h2 id=st_vid_play>Secure texture video playback</h2>
+<p>Android 7.0 supports GPU post-processing of protected video content. This
+allows using the GPU for complex non-linear video effects (such as warps),
+mapping protected video content onto textures for use in general graphics scenes
+(e.g., using OpenGL ES), and virtual reality (VR).</p>
+
+<img src="images/graphics_secure_texture_playback.png" alt="Secure Texture Video Playback" />
+<p class="img-caption"><strong>Figure 2.</strong> Secure texture video playback</p>
+
+<p>Support is enabled using the following two extensions:</p>
+<ul>
+<li><strong>EGL extension</strong>
+(<a href="https://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_protected_content.txt"><code>EGL_EXT_protected_content</code></a>).
+Allows the creation of protected GL contexts and surfaces, which can both
+operate on protected content.</li>
+<li><strong>GLES extension</strong>
+(<a href="https://www.khronos.org/registry/gles/extensions/EXT/EXT_protected_textures.txt"><code>GL_EXT_protected_textures</code></a>).
+Allows tagging textures as protected so they can be used as framebuffer texture
+attachments.</li>
+</ul>
+
+<p>Android 7.0 also updates SurfaceTexture and ACodec
+(<code>libstagefright.so</code>) to allow protected content to be sent even if
+the window surface does not queue to the window composer (i.e., SurfaceFlinger)
+and provide a protected video surface for use within a protected context. This
+is done by setting the correct protected consumer bits
+(<code>GRALLOC_USAGE_PROTECTED</code>) on surfaces created in a protected
+context (verified by ACodec).</p>
+
+<p>These changes benefit app developers who can create apps that perform
+enhanced video effects or apply video textures using protected content in GL
+(for example, in VR), end users who can view high-value video content (such as
+movies and TV shows) in a GL environment (for example, in VR), and OEMs who can
+achieve higher sales due to added device functionality (for example, watching HD
+movies in VR). The new EGL and GLES extensions can be used by system-on-chip
+(SoC) providers and other vendors, and are currently implemented on the
+Qualcomm MSM8994 SoC used in the Nexus 6P.</p>
+
+<p>Secure texture video playback sets the foundation for strong DRM
+implementation in the OpenGL ES environment. Without a strong DRM implementation
+such as Widevine Level 1, many content providers would not allow rendering of
+their high-value content in the OpenGL ES environment, preventing important VR
+use cases such as watching DRM protected content in VR.</p>
+
+<p>AOSP includes framework code for secure texture video playback; driver
+support is up to the vendor. Partners must implement the
+<code>EGL_EXT_protected_content</code> and
+<code>GL_EXT_protected_textures</code> extensions. When using your own codec
+library (to replace libstagefright), note the changes in
+<code>/frameworks/av/media/libstagefright/SurfaceUtils.cpp</code> that allow
+buffers marked with <code>GRALLOC_USAGE_PROTECTED</code> to be sent to
+ANativeWindows (even if the ANativeWindow does not queue directly to the window
+composer) as long as the consumer usage bits contain
+<code>GRALLOC_USAGE_PROTECTED</code>. For detailed documentation on implementing
+the extensions, refer to the Khronos Registry
+(<a href="https://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_protected_content.txt">EGL_EXT_protected_content</a>,
+<a href="https://www.khronos.org/registry/gles/extensions/EXT/EXT_protected_textures.txt">GL_EXT_protected_textures</a>).</p>
+
+<p>Partners may also need to make hardware changes to ensure that protected
+memory mapped onto the GPU remains protected and unreadable by unprotected
+code.</p>
diff --git a/src/devices/graphics/arch-sv-glsv.jd b/src/devices/graphics/arch-sv-glsv.jd
new file mode 100644
index 0000000..e8df719
--- /dev/null
+++ b/src/devices/graphics/arch-sv-glsv.jd
@@ -0,0 +1,229 @@
+page.title=SurfaceView and GLSurfaceView
+@jd:body
+
+<!--
+ Copyright 2014 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>The Android app framework UI is based on a hierarchy of objects that start
+with View. All UI elements go through a complicated measurement and layout
+process that fits them into a rectangular area, and all visible View objects are
+rendered to a SurfaceFlinger-created Surface that was set up by the
+WindowManager when the app was brought to the foreground. The app's UI thread
+performs layout and rendering to a single buffer (regardless of the number of
+Layouts and Views and whether or not the Views are hardware-accelerated).</p>
+
+<p>A SurfaceView takes the same parameters as other views, so you can give it a
+position and size, and fit other elements around it. When it comes time to
+render, however, the contents are completely transparent; the View part of a
+SurfaceView is just a see-through placeholder.</p>
+
+<p>When the SurfaceView's View component is about to become visible, the
+framework asks the WindowManager to ask SurfaceFlinger to create a new Surface.
+(This doesn't happen synchronously, which is why you should provide a callback
+that notifies you when the Surface creation finishes.) By default, the new
+Surface is placed behind the app UI Surface, but the default Z-ordering can be
+overridden to put the Surface on top.</p>
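+
+<p>Because Surface creation is asynchronous, apps typically register a
+<code>SurfaceHolder.Callback</code> rather than using the Surface immediately;
+a sketch:</p>
+
```java
// Illustrative: wait for the asynchronously-created Surface before using it.
SurfaceView sv = new SurfaceView(context);
sv.setZOrderOnTop(false);   // default: Surface sits behind the app UI layer
sv.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override public void surfaceCreated(SurfaceHolder holder) {
        // Surface now exists; safe to start a renderer or decoder here.
    }
    @Override public void surfaceChanged(SurfaceHolder holder,
            int format, int width, int height) { }
    @Override public void surfaceDestroyed(SurfaceHolder holder) {
        // Stop using the Surface before returning from this callback.
    }
});
```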
+
+<p>Whatever you render onto this Surface will be composited by SurfaceFlinger,
+not by the app. This is the real power of SurfaceView: The Surface you get can
+be rendered by a separate thread or a separate process, isolated from any
+rendering performed by the app UI, and the buffers go directly to
+SurfaceFlinger. You can't totally ignore the UI thread—you still have to
+coordinate with the Activity lifecycle and you may need to adjust something if
+the size or position of the View changes—but you have a whole Surface all
+to yourself. Blending with the app UI and other layers is handled by the
+Hardware Composer.</p>
+
+<p>The new Surface is the producer side of a BufferQueue, whose consumer is a
+SurfaceFlinger layer. You can update the Surface with any mechanism that can
+feed a BufferQueue: use the surface-supplied Canvas functions, attach an
+EGLSurface and draw on it with GLES, or configure a MediaCodec video decoder to
+write to it.</p>
+
+<h2 id=composition>Composition and the Hardware Scaler</h2>
+
+<p>Let's take a closer look at <code>dumpsys SurfaceFlinger</code>. The
+following output was taken while playing a movie in Grafika's "Play video
+(SurfaceView)" activity on a Nexus 5 in portrait orientation; the video is QVGA
+(320x240):</p>
+<pre>
+    type    |          source crop              |           frame           name
+------------+-----------------------------------+--------------------------------
+        HWC | [    0.0,    0.0,  320.0,  240.0] | [   48,  411, 1032, 1149] SurfaceView
+        HWC | [    0.0,   75.0, 1080.0, 1776.0] | [    0,   75, 1080, 1776] com.android.grafika/com.android.grafika.PlayMovieSurfaceActivity
+        HWC | [    0.0,    0.0, 1080.0,   75.0] | [    0,    0, 1080,   75] StatusBar
+        HWC | [    0.0,    0.0, 1080.0,  144.0] | [    0, 1776, 1080, 1920] NavigationBar
+  FB TARGET | [    0.0,    0.0, 1080.0, 1920.0] | [    0,    0, 1080, 1920] HWC_FRAMEBUFFER_TARGET
+</pre>
+
+<ul>
+<li>The <strong>list order</strong> is back to front: the SurfaceView's Surface
+is in the back, the app UI layer sits on top of that, followed by the status and
+navigation bars that are above everything else.</li>
+<li>The <strong>source crop</strong> values indicate the portion of the
+Surface's buffer that SurfaceFlinger will display. The app UI was given a
+Surface equal to the full size of the display (1080x1920), but as there is no
+point rendering and compositing pixels that will be obscured by the status and
+navigation bars, the source is cropped to a rectangle that starts 75 pixels from
+the top and ends 144 pixels from the bottom. The status and navigation bars have
+smaller Surfaces, and the source crop describes a rectangle that begins at the
+top left (0,0) and spans their content.</li>
+<li>The <strong>frame</strong> values specify the rectangle where pixels
+appear on the display. For the app UI layer, the frame matches the source crop
+because we are copying (or overlaying) a portion of a display-sized layer to the
+same location in another display-sized layer. For the status and navigation
+bars, the size of the frame rectangle is the same, but the position is adjusted
+so the navigation bar appears at the bottom of the screen.</li>
+<li>The <strong>SurfaceView layer</strong> holds our video content. The source crop
+matches the video size, which SurfaceFlinger knows because the MediaCodec
+decoder (the buffer producer) is dequeuing buffers that size. The frame
+rectangle has a completely different size—984x738.</li>
+</ul>
+
+<p>SurfaceFlinger handles size differences by scaling the buffer contents to
+fill the frame rectangle, upscaling or downscaling as needed. This particular
+size was chosen because it has the same aspect ratio as the video (4:3), and is
+as wide as possible given the constraints of the View layout (which includes
+some padding at the edges of the screen for aesthetic reasons).</p>
+
+<p>If you started playing a different video on the same Surface, the underlying
+BufferQueue would reallocate buffers to the new size automatically, and
+SurfaceFlinger would adjust the source crop. If the aspect ratio of the new
+video is different, the app would need to force a re-layout of the View to match
+it, which causes the WindowManager to tell SurfaceFlinger to update the frame
+rectangle.</p>
+
+<p>If you're rendering on the Surface through some other means (such as GLES),
+you can set the Surface size using the <code>SurfaceHolder#setFixedSize()</code>
+call. For example, you could configure a game to always render at 1280x720,
+which would significantly reduce the number of pixels that must be touched to
+fill the screen on a 2560x1440 tablet or 4K television. The display processor
+handles the scaling. If you don't want to letter- or pillar-box your game, you
+could adjust the game's aspect ratio by setting the size so that the narrow
+dimension is 720 pixels but the long dimension is set to maintain the aspect
+ratio of the physical display (e.g. 1152x720 to match a 2560x1600 display).
+For an example of this approach, see Grafika's "Hardware scaler exerciser"
+activity.</p>
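+
+<p>The size computation described above can be sketched as follows.
+<code>FixedSizeHelper.computeFixedSize()</code> is a hypothetical helper; only
+<code>SurfaceHolder#setFixedSize()</code> is a real framework call:</p>
+
```java
// Sketch: pick a render size whose short side is fixed (e.g. 720) while the
// long side preserves the physical display's aspect ratio.
public class FixedSizeHelper {
    /** Returns {width, height} with height == shortSide. */
    public static int[] computeFixedSize(int displayW, int displayH,
                                         int shortSide) {
        int longSide = Math.round((float) shortSide
                * Math.max(displayW, displayH) / Math.min(displayW, displayH));
        return new int[] { longSide, shortSide };
    }
}

// Usage (in surfaceCreated(), for instance):
//   int[] size = FixedSizeHelper.computeFixedSize(2560, 1600, 720);
//   holder.setFixedSize(size[0], size[1]);   // 1152x720 on a 2560x1600 panel
```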
+
+<h2 id=glsurfaceview>GLSurfaceView</h2>
+
+<p>The GLSurfaceView class provides helper classes for managing EGL contexts,
+inter-thread communication, and interaction with the Activity lifecycle. That's
+it. You do not need to use a GLSurfaceView to use GLES.</p>
+
+<p>For example, GLSurfaceView creates a thread for rendering and configures an
+EGL context there. The state is cleaned up automatically when the activity
+pauses. Most apps won't need to know anything about EGL to use GLES with
+GLSurfaceView.</p>
+
+<p>In most cases, GLSurfaceView is very helpful and can make working with GLES
+easier. In some situations, it can get in the way. Use it if it helps; skip it
+if it doesn't.</p>
+
+<h2 id=activity>SurfaceView and the Activity Lifecycle</h2>
+
+<p>When using a SurfaceView, it's considered good practice to render the Surface
+from a thread other than the main UI thread. This raises some questions about
+the interaction between that thread and the Activity lifecycle.</p>
+
+<p>For an Activity with a SurfaceView, there are two separate but interdependent
+state machines:</p>
+
+<ol>
+<li>Application onCreate/onResume/onPause</li>
+<li>Surface created/changed/destroyed</li>
+</ol>
+
+<p>When the Activity starts, you get callbacks in this order:</p>
+
+<ul>
+<li>onCreate</li>
+<li>onResume</li>
+<li>surfaceCreated</li>
+<li>surfaceChanged</li>
+</ul>
+
+<p>If you hit back you get:</p>
+
+<ul>
+<li>onPause</li>
+<li>surfaceDestroyed (called just before the Surface goes away)</li>
+</ul>
+
+<p>If you rotate the screen, the Activity is torn down and recreated and you
+get the full cycle. You can tell it's a quick restart by checking
+<code>isFinishing()</code>. It's possible to start/stop an Activity so
+quickly that <code>surfaceCreated()</code> actually happens after
+<code>onPause()</code>.</p>
+
+<p>If you tap the power button to blank the screen, you get only
+<code>onPause()</code>—no <code>surfaceDestroyed()</code>. The Surface
+remains alive, and rendering can continue. You can even keep getting
+Choreographer events if you continue to request them. If you have a lock
+screen that forces a different orientation, your Activity may be restarted when
+the device is unblanked; but if not, you can come out of screen-blank with the
+same Surface you had before.</p>
+
+<p>This raises a fundamental question when using a separate renderer thread with
+SurfaceView: Should the lifespan of the thread be tied to that of the Surface or
+the Activity? The answer depends on what you want to happen when the screen
+goes blank: (1) start/stop the thread on Activity start/stop or (2) start/stop
+the thread on Surface create/destroy.</p>
+
+<p>Option 1 interacts well with the app lifecycle. We start the renderer thread
+in <code>onResume()</code> and stop it in <code>onPause()</code>. It gets a bit
+awkward when creating and configuring the thread because sometimes the Surface
+will already exist and sometimes it won't (e.g. it's still alive after toggling
+the screen with the power button). We have to wait for the surface to be
+created before we do some initialization in the thread, but we can't simply do
+it in the <code>surfaceCreated()</code> callback because that won't fire again
+if the Surface didn't get recreated. So we need to query or cache the Surface
+state, and forward it to the renderer thread.</p>
+
+<p class="note"><strong>Note:</strong> Be careful when passing objects
+between threads. It is best to pass the Surface or SurfaceHolder through a
+Handler message (rather than just stuffing it into the thread) to avoid issues
+on multi-core systems. For details, refer to
+<a href="http://developer.android.com/training/articles/smp.html">Android
+SMP Primer</a>.</p>
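+
+<p>A sketch of that pattern (the message constant and
+<code>mRenderHandler</code> field are hypothetical names):</p>
+
```java
// Illustrative: hand the Surface to the renderer thread with a Handler
// message instead of sharing it through a plain field.
private static final int MSG_SURFACE_AVAILABLE = 1;   // hypothetical constant

@Override
public void surfaceCreated(SurfaceHolder holder) {
    // mRenderHandler is a Handler bound to the renderer thread's Looper, so
    // the Surface is published to that thread with proper synchronization.
    mRenderHandler.sendMessage(mRenderHandler.obtainMessage(
            MSG_SURFACE_AVAILABLE, holder.getSurface()));
}
```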
+
+<p>Option 2 is appealing because the Surface and the renderer are logically
+intertwined. We start the thread after the Surface has been created, which
+avoids some inter-thread communication concerns, and Surface created/changed
+messages are simply forwarded. We need to ensure rendering stops when the
+screen goes blank and resumes when it un-blanks; this could be a simple matter
+of telling Choreographer to stop invoking the frame draw callback. Our
+<code>onResume()</code> will need to resume the callbacks if and only if the
+renderer thread is running. It may not be so trivial though—if we animate
+based on elapsed time between frames, we could have a very large gap when the
+next event arrives; an explicit pause/resume message may be desirable.</p>
+
+<p class="note"><strong>Note:</strong> For an example of Option 2, see Grafika's
+"Hardware scaler exerciser."</p>
+
+<p>Both options are primarily concerned with how the renderer thread is
+configured and whether it's executing. A related concern is extracting state
+from the thread when the Activity is killed (in <code>onPause()</code> or
+<code>onSaveInstanceState()</code>); in such cases, Option 1 works best because
+after the renderer thread has been joined its state can be accessed without
+synchronization primitives.</p>
diff --git a/src/devices/graphics/arch-tv.jd b/src/devices/graphics/arch-tv.jd
new file mode 100644
index 0000000..19eb8cc
--- /dev/null
+++ b/src/devices/graphics/arch-tv.jd
@@ -0,0 +1,146 @@
+page.title=TextureView
+@jd:body
+
+<!--
+ Copyright 2014 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+
+<p>The TextureView class, introduced in Android 4.0, is the most complex of
+the View objects discussed here, combining a View with a SurfaceTexture.</p>
+
+<h2 id=render_gles>Rendering with GLES</h2>
+<p>Recall that the SurfaceTexture is a "GL consumer", consuming buffers of graphics
+data and making them available as textures. TextureView wraps a SurfaceTexture,
+taking over the responsibility of responding to the callbacks and acquiring new
+buffers. The arrival of new buffers causes TextureView to issue a View
+invalidate request. When asked to draw, the TextureView uses the contents of
+the most recently received buffer as its data source, rendering wherever and
+however the View state indicates it should.</p>
+
+<p>You can render on a TextureView with GLES just as you would on a SurfaceView. Just
+pass the SurfaceTexture to the EGL window creation call. However, doing so
+exposes a potential problem.</p>
+
+<p>In most of what we've looked at, the BufferQueues have passed buffers between
+different processes. When rendering to a TextureView with GLES, both producer
+and consumer are in the same process, and they might even be handled on a single
+thread. Suppose we submit several buffers in quick succession from the UI
+thread. The EGL buffer swap call will need to dequeue a buffer from the
+BufferQueue, and it will stall until one is available. There won't be any
+available until the consumer acquires one for rendering, but that also happens
+on the UI thread… so we're stuck.</p>
+
+<p>The solution is to have BufferQueue ensure there is always a buffer
+available to be dequeued, so the buffer swap never stalls. One way to guarantee
+this is to have BufferQueue discard the contents of the previously-queued buffer
+when a new buffer is queued, and to place restrictions on minimum buffer counts
+and maximum acquired buffer counts. (If your queue has three buffers, and all
+three buffers are acquired by the consumer, then there's nothing to dequeue and
+the buffer swap call must hang or fail. So we need to prevent the consumer from
+acquiring more than two buffers at once.) Dropping buffers is usually
+undesirable, so it's only enabled in specific situations, such as when the
+producer and consumer are in the same process.</p>
+
+<h2 id=surface_or_texture>SurfaceView or TextureView?</h2>
+<p>SurfaceView and TextureView fill similar roles, but have very different
+implementations. To decide which is best requires an understanding of the
+trade-offs.</p>
+
+<p>Because TextureView is a proper citizen of the View hierarchy, it behaves like
+any other View, and can overlap or be overlapped by other elements. You can
+perform arbitrary transformations and retrieve the contents as a bitmap with
+simple API calls.</p>
+
+<p>The main strike against TextureView is the performance of the composition step.
+With SurfaceView, the content is written to a separate layer that SurfaceFlinger
+composites, ideally with an overlay. With TextureView, the View composition is
+always performed with GLES, and updates to its contents may cause other View
+elements to redraw as well (e.g. if they're positioned on top of the
+TextureView). After the View rendering completes, the app UI layer must then be
+composited with other layers by SurfaceFlinger, so you're effectively
+compositing every visible pixel twice. For a full-screen video player, or any
+other application that is effectively just UI elements layered on top of video,
+SurfaceView offers much better performance.</p>
+
+<p>As noted earlier, DRM-protected video can be presented only on an overlay plane.
+ Video players that support protected content must be implemented with
+SurfaceView.</p>
+
+<h2 id=grafika>Case Study: Grafika's Play Video (TextureView)</h2>
+
+<p>Grafika includes a pair of video players, one implemented with TextureView, the
+other with SurfaceView. The video decoding portion, which just sends frames
+from MediaCodec to a Surface, is the same for both. The most interesting
+differences between the implementations are the steps required to present the
+correct aspect ratio.</p>
+
+<p>While SurfaceView requires a custom implementation of FrameLayout, resizing
+SurfaceTexture is a simple matter of configuring a transformation matrix with
+<code>TextureView#setTransform()</code>. For the former, you're sending new
+window position and size values to SurfaceFlinger through WindowManager; for
+the latter, you're just rendering it differently.</p>
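+
+<p>The matrix configuration might look like this.
+<code>VideoTransform.letterboxScale()</code> is a hypothetical helper that
+isolates the scale math; <code>setTransform()</code> is the real TextureView
+call:</p>
+
```java
// Sketch: letterbox a videoW x videoH frame inside a viewW x viewH
// TextureView. TextureView stretches the frame to fill the view by default,
// so we scale relative to the view size and center the result.
public class VideoTransform {
    /** Returns {scaleX, scaleY} that letterbox the video in the view. */
    public static float[] letterboxScale(int viewW, int viewH,
                                         int videoW, int videoH) {
        float scale = Math.min((float) viewW / videoW, (float) viewH / videoH);
        return new float[] { videoW * scale / viewW, videoH * scale / viewH };
    }
}

// Usage (hypothetical view/video sizes):
//   float[] s = VideoTransform.letterboxScale(1080, 1920, 320, 240);
//   Matrix m = new Matrix();
//   m.setScale(s[0], s[1], 1080 / 2f, 1920 / 2f);   // pivot at view center
//   textureView.setTransform(m);
```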
+
+<p>Otherwise, both implementations follow the same pattern. Once the Surface has
+been created, playback is enabled. When "play" is hit, a video decoding thread
+is started, with the Surface as the output target. After that, the app code
+doesn't have to do anything -- composition and display will either be handled by
+SurfaceFlinger (for the SurfaceView) or by TextureView.</p>
+
+<h2 id=decode>Case Study: Grafika's Double Decode</h2>
+
+<p>This activity demonstrates manipulation of the SurfaceTexture inside a
+TextureView.</p>
+
+<p>The basic structure of this activity is a pair of TextureViews that show two
+different videos playing side-by-side. To simulate the needs of a
+videoconferencing app, we want to keep the MediaCodec decoders alive when the
+activity is paused and resumed for an orientation change. The trick is that you
+can't change the Surface that a MediaCodec decoder uses without fully
+reconfiguring it, which is a fairly expensive operation; so we want to keep the
+Surface alive. The Surface is just a handle to the producer interface in the
+SurfaceTexture's BufferQueue, and the SurfaceTexture is managed by the
+TextureView, so we also need to keep the SurfaceTexture alive. So how do we deal
+with the TextureView getting torn down?</p>
+
+<p>It just so happens TextureView provides a <code>setSurfaceTexture()</code> call
+that does exactly what we want. We obtain references to the SurfaceTextures
+from the TextureViews and save them in a static field. When the activity is
+shut down, we return "false" from the <code>onSurfaceTextureDestroyed()</code>
+callback to prevent destruction of the SurfaceTexture. When the activity is
+restarted, we stuff the old SurfaceTexture into the new TextureView. The
+TextureView class takes care of creating and destroying the EGL contexts.</p>
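+
+<p>The trick described above might be sketched as follows (the field and view
+names are illustrative):</p>
+
```java
// Illustrative: keep the SurfaceTexture (and its BufferQueue) alive across
// an Activity restart so the MediaCodec decoder's Surface stays valid.
private static SurfaceTexture sSavedTexture;   // survives the restart

@Override
public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
    if (sSavedTexture != null && sSavedTexture != st) {
        mTextureView.setSurfaceTexture(sSavedTexture);   // re-attach old one
    } else {
        sSavedTexture = st;
    }
}

@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
    return false;   // false = we keep it; TextureView won't release it
}
```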
+
+<p>Each video decoder is driven from a separate thread. At first glance it might
+seem like we need EGL contexts local to each thread; but remember the buffers
+with decoded output are actually being sent from mediaserver to our
+BufferQueue consumers (the SurfaceTextures). The TextureViews take care of the
+rendering for us, and they execute on the UI thread.</p>
+
+<p>Implementing this activity with SurfaceView would be a bit harder. We can't
+just create a pair of SurfaceViews and direct the output to them, because the
+Surfaces would be destroyed during an orientation change. Besides, that would
+add two layers, and limitations on the number of available overlays strongly
+motivate us to keep the number of layers to a minimum. Instead, we'd want to
+create a pair of SurfaceTextures to receive the output from the video decoders,
+and then perform the rendering in the app, using GLES to render two textured
+quads onto the SurfaceView's Surface.</p>
diff --git a/src/devices/graphics/arch-vulkan.jd b/src/devices/graphics/arch-vulkan.jd
new file mode 100644
index 0000000..45c3d34
--- /dev/null
+++ b/src/devices/graphics/arch-vulkan.jd
@@ -0,0 +1,131 @@
+page.title=Vulkan
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>Android 7.0 adds support for
+<a href="https://www.khronos.org/vulkan/">Vulkan</a>, a low-overhead,
+cross-platform API for high-performance 3D graphics. Like OpenGL ES, Vulkan
+provides tools for creating high-quality, real-time graphics in applications.
+Vulkan advantages include reductions in CPU overhead and support for the
+<a href="https://www.khronos.org/spir">SPIR-V Binary Intermediate</a> language.
+</p>
+
+<p>System-on-chip (SoC) vendors, such as GPU independent hardware vendors
+(IHVs), can write Vulkan drivers for Android; OEMs simply need to integrate these
+drivers for specific devices. For details on how a Vulkan driver interacts with
+the system, how GPU-specific tools should be installed, and Android-specific
+requirements, see <a href="{@docRoot}devices/graphics/implement-vulkan.html">Implementing
+Vulkan</a>.</p>
+
+<p>Application developers can take advantage of Vulkan to create apps that
+execute commands on the GPU with significantly reduced overhead. Vulkan also
+provides a more direct mapping to the capabilities found in current graphics
+hardware, minimizing opportunities for driver bugs and reducing developer
+testing time (e.g. less time required to troubleshoot Vulkan bugs).</p>
+
+<p>For general information on Vulkan, refer to the
+<a href="http://khr.io/vulkanlaunchoverview">Vulkan Overview</a> or see the list
+of <a href="#resources">Resources</a> below.</p>
+
+<h2 id=vulkan_components>Vulkan components</h2>
+<p>Vulkan support includes the following components:</p>
+<p><img src="{@docRoot}devices/graphics/images/ape_graphics_vulkan.png"></p>
+<p class=img-caption>Figure 1: Vulkan components</p>
+
+<ul>
+<li><strong>Vulkan Validation Layers</strong> (<em>provided in the Android
+NDK</em>). A set of libraries used by developers during the development of
+Vulkan apps. The Vulkan runtime library and the Vulkan driver from graphics
+vendors do not contain runtime error-checking to keep Vulkan runtime efficient.
+Instead, the validation libraries are used (only during development) to find
+errors in an application's use of the Vulkan API. The Vulkan Validation
+libraries are linked into the app during development and perform this error
+checking. After all API usage issues are found, the application no longer needs
+to include these libraries in the app.</li>
+<li><strong>Vulkan Runtime </strong><em>(provided by Android)</em>. A native
+library (<code>libvulkan.so</code>) that provides a new public native API
+called <a href="https://www.khronos.org/vulkan">Vulkan</a>. Most functionality
+is implemented by a driver provided by the GPU vendor; the runtime wraps the
+driver, provides API interception capabilities (for debugging and other
+developer tools), and manages the interaction between the driver and platform
+dependencies such as BufferQueue.</li>
+<li><strong>Vulkan Driver </strong><em>(provided by SoC)</em>. Maps the Vulkan
+API onto hardware-specific GPU commands and interactions with the kernel
+graphics driver.</li>
+</ul>
+
+<h2 id=modified_components>Modified components</h2>
+<p>Android 7.0 modifies the following existing graphics components to support
+Vulkan:</p>
+
+<ul>
+<li><strong>BufferQueue</strong>. The Vulkan Runtime interacts with the existing
+BufferQueue component via the existing <code>ANativeWindow</code> interface.
+Includes minor modifications (new enum values and new methods) to
+<code>ANativeWindow</code> and BufferQueue, but no architectural changes.</li>
+<li><strong>Gralloc HAL</strong>. Includes a new, optional interface for
+discovering whether a given format can be used for a particular
+producer/consumer combination without actually allocating a buffer.</li>
+</ul>
+
+<p>For details on these components, see
+<a href="{@docRoot}devices/graphics/arch-bq-gralloc.html">BufferQueue and
+gralloc</a> (for details on <code>ANativeWindow</code>, see
+<a href="{@docRoot}devices/graphics/arch-egl-opengl.html">EGLSurface and OpenGL
+ES</a>).</p>
+
+<h2 id=apis>Vulkan API</h2>
+<p>The Android platform includes an
+<a href="https://developer.android.com/ndk/guides/graphics/index.html">Android-specific
+implementation</a> of the <a href="https://www.khronos.org/vulkan/">Vulkan API
+specification</a> from the Khronos Group. Android applications must use the
+<a href="{@docRoot}devices/graphics/implement-vulkan.html#wsi">Window System
+Integration (WSI) extensions</a> to output their rendering.</p>
+
+<h2 id=resources>Resources</h2>
+<p>Use the following resources to learn more about Vulkan:</p>
+<ul>
+
+<li>
+<a href="https://android.googlesource.com/platform/frameworks/native/+/master/vulkan/">Vulkan
+Loader</a> (<code>libvulkan.so</code>) at <code>platform/frameworks/native/vulkan</code>.
+Contains Android's Vulkan loader, as well as some Vulkan-related tools useful to
+platform developers.</li>
+
+<li><a href="https://android.googlesource.com/platform/frameworks/native/+/master/vulkan/doc/implementors_guide/implementors_guide.html">Vulkan
+Implementor's Guide</a>. Intended for GPU IHVs writing Vulkan drivers for
+Android and OEMs integrating those drivers for specific devices. It describes
+how a Vulkan driver interacts with the system, how GPU-specific tools should be
+installed, and Android-specific requirements.</li>
+
+<li><a href="https://developer.android.com/ndk/guides/graphics/index.html">Vulkan
+Graphics API Guide</a>. Includes information on getting started with using
+Vulkan in an Android app, details on Vulkan design guidelines on the Android
+platform, how to use Vulkan's shader compilers, and how to use validation
+layers to help ensure stability in apps using Vulkan.</li>
+
+<li><a href="https://www.khronos.org/#slider_vulkan">Vulkan News</a>. Covers
+events, patches, tutorials, and more Vulkan-related news articles.</li>
+</ul>
diff --git a/src/devices/graphics/architecture.jd b/src/devices/graphics/architecture.jd
index 77593f2..40606a8 100644
--- a/src/devices/graphics/architecture.jd
+++ b/src/devices/graphics/architecture.jd
@@ -25,1201 +25,99 @@
</div>
-<p><em>What every developer should know about Surface, SurfaceHolder, EGLSurface,
-SurfaceView, GLSurfaceView, SurfaceTexture, TextureView, and SurfaceFlinger</em>
-</p>
-<p>This document describes the essential elements of Android's "system-level"
- graphics architecture, and how it is used by the application framework and
- multimedia system. The focus is on how buffers of graphical data move through
- the system. If you've ever wondered why SurfaceView and TextureView behave the
- way they do, or how Surface and EGLSurface interact, you've come to the right
-place.</p>
+<p><em>What every developer should know about Surface, SurfaceHolder,
+EGLSurface, SurfaceView, GLSurfaceView, SurfaceTexture, TextureView,
+SurfaceFlinger, and Vulkan.</em></p>
+
+<p>This page describes essential elements of the Android system-level graphics
+architecture and how they are used by the application framework and multimedia
+system. The focus is on how buffers of graphical data move through the system.
+If you've ever wondered why SurfaceView and TextureView behave the way they do,
+or how Surface and EGLSurface interact, you are in the correct place.</p>
<p>Some familiarity with Android devices and application development is assumed.
-You don't need detailed knowledge of the app framework, and very few API calls
-will be mentioned, but the material herein doesn't overlap much with other
-public documentation. The goal here is to provide a sense for the significant
-events involved in rendering a frame for output, so that you can make informed
-choices when designing an application. To achieve this, we work from the bottom
-up, describing how the UI classes work rather than how they can be used.</p>
+You don't need detailed knowledge of the app framework, and very few API calls
+are mentioned, but the material doesn't overlap much with other public
+documentation. The goal is to provide details on the significant events
+involved in rendering a frame for output to help you make informed choices
+when designing an application. To achieve this, we work from the bottom up,
+describing how the UI classes work rather than how they can be used.</p>
-<p>Early sections contain background material used in later sections, so it's a
-good idea to read straight through rather than skipping to a section that sounds
-interesting. We start with an explanation of Android's graphics buffers,
-describe the composition and display mechanism, and then proceed to the
-higher-level mechanisms that supply the compositor with data.</p>
+<p>This section includes several pages covering everything from background
+material to HAL details to use cases. It starts with an explanation of Android
+graphics buffers, describes the composition and display mechanism, then proceeds
+to the higher-level mechanisms that supply the compositor with data. We
+recommend reading pages in the order listed below rather than skipping to a
+topic that sounds interesting.</p>
-<p>This document is chiefly concerned with the system as it exists in Android 4.4
-("KitKat"). Earlier versions of the system worked differently, and future
-versions will likely be different as well. Version-specific features are called
-out in a few places.</p>
-
-<p>At various points I will refer to source code from the AOSP sources or from
-Grafika. Grafika is a Google open source project for testing; it can be found at
-<a
-href="https://github.com/google/grafika">https://github.com/google/grafika</a>.
-It's more "quick hack" than solid example code, but it will suffice.</p>
-<h2 id="BufferQueue">BufferQueue and gralloc</h2>
-
-<p>To understand how Android's graphics system works, we have to start behind the
-scenes. At the heart of everything graphical in Android is a class called
-BufferQueue. Its role is simple enough: connect something that generates
-buffers of graphical data (the "producer") to something that accepts the data
-for display or further processing (the "consumer"). The producer and consumer
-can live in different processes. Nearly everything that moves buffers of
-graphical data through the system relies on BufferQueue.</p>
-
-<p>The basic usage is straightforward. The producer requests a free buffer
-(<code>dequeueBuffer()</code>), specifying a set of characteristics including width,
-height, pixel format, and usage flags. The producer populates the buffer and
-returns it to the queue (<code>queueBuffer()</code>). Some time later, the consumer
-acquires the buffer (<code>acquireBuffer()</code>) and makes use of the buffer contents.
-When the consumer is done, it returns the buffer to the queue
-(<code>releaseBuffer()</code>).</p>
-
-<p>Most recent Android devices support the "sync framework". This allows the
-system to do some nifty thing when combined with hardware components that can
-manipulate graphics data asynchronously. For example, a producer can submit a
-series of OpenGL ES drawing commands and then enqueue the output buffer before
-rendering completes. The buffer is accompanied by a fence that signals when the
-contents are ready. A second fence accompanies the buffer when it is returned
-to the free list, so that the consumer can release the buffer while the contents
-are still in use. This approach improves latency and throughput as the buffers
-move through the system.</p>
-
-<p>Some characteristics of the queue, such as the maximum number of buffers it can
-hold, are determined jointly by the producer and the consumer.</p>
-
-<p>The BufferQueue is responsible for allocating buffers as it needs them. Buffers
-are retained unless the characteristics change; for example, if the producer
-starts requesting buffers with a different size, the old buffers will be freed
-and new buffers will be allocated on demand.</p>
-
-<p>The data structure is currently always created and "owned" by the consumer. In
-Android 4.3 only the producer side was "binderized", i.e. the producer could be
-in a remote process but the consumer had to live in the process where the queue
-was created. This evolved a bit in 4.4, moving toward a more general
-implementation.</p>
-
-<p>Buffer contents are never copied by BufferQueue. Moving that much data around
-would be very inefficient. Instead, buffers are always passed by handle.</p>
-
-<h3 id="gralloc_HAL">gralloc HAL</h3>
-
-<p>The actual buffer allocations are performed through a memory allocator called
-"gralloc", which is implemented through a vendor-specific HAL interface (see
-<a
-href="https://android.googlesource.com/platform/hardware/libhardware/+/kitkat-release/include/hardware/gralloc.h">hardware/libhardware/include/hardware/gralloc.h</a>).
-The <code>alloc()</code> function takes the arguments you'd expect -- width,
-height, pixel format -- as well as a set of usage flags. Those flags merit
-closer attention.</p>
-
-<p>The gralloc allocator is not just another way to allocate memory on the native
-heap. In some situations, the allocated memory may not be cache-coherent, or
-could be totally inaccessible from user space. The nature of the allocation is
-determined by the usage flags, which include attributes like:</p>
+<h2 id=low_level>Low-level components</h2>
<ul>
-<li>how often the memory will be accessed from software (CPU)</li>
-<li>how often the memory will be accessed from hardware (GPU)</li>
-<li>whether the memory will be used as an OpenGL ES ("GLES") texture</li>
-<li>whether the memory will be used by a video encoder</li>
+<li><a href="{@docRoot}devices/graphics/arch-bq-gralloc.html">BufferQueue and
+gralloc</a>. BufferQueue connects something that generates buffers of graphical
+data (the <em>producer</em>) to something that accepts the data for display or
+further processing (the <em>consumer</em>). Buffer allocations are performed
+through the <em>gralloc</em> memory allocator, implemented via a
+vendor-specific HAL interface.</li>
+
+<li><a href="{@docRoot}devices/graphics/arch-sf-hwc.html">SurfaceFlinger,
+Hardware Composer, and virtual displays</a>. SurfaceFlinger accepts buffers of
+data from multiple sources, composites them, and sends them to the display. The
+Hardware Composer HAL (HWC) determines the most efficient way to composite
+buffers with the available hardware, and virtual displays make composited output
+available within the system (recording the screen or sending the screen over a
+network).</li>
+
+<li><a href="{@docRoot}devices/graphics/arch-sh.html">Surface, Canvas, and
+SurfaceHolder</a>. A Surface produces a buffer queue that is often consumed by
+SurfaceFlinger. When rendering onto a Surface, the result ends up in a buffer
+that gets shipped to the consumer. Canvas APIs provide a software implementation
+(with hardware-acceleration support) for drawing directly on a Surface
+(low-level alternative to OpenGL ES). Anything having to do with a View involves
+a SurfaceHolder, whose APIs enable getting and setting Surface parameters such
+as size and format.</li>
+
+<li><a href="{@docRoot}devices/graphics/arch-egl-opengl.html">EGLSurface and
+OpenGL ES</a>. OpenGL ES (GLES) defines a graphics-rendering API designed to be
+combined with EGL, a library that knows how to create and access windows through
+the operating system (to draw textured polygons, use GLES calls; to put
+rendering on the screen, use EGL calls). This page also covers ANativeWindow,
+the C/C++ equivalent of the Java Surface class used to create an EGL window
+surface from native code.</li>
+
+<li><a href="{@docRoot}devices/graphics/arch-vulkan.html">Vulkan</a>. Vulkan is
+a low-overhead, cross-platform API for high-performance 3D graphics. Like OpenGL
+ES, Vulkan provides tools for creating high-quality, real-time graphics in
+applications. Vulkan advantages include reductions in CPU overhead and support
+for the <a href="https://www.khronos.org/spir">SPIR-V Binary Intermediate</a>
+language.</li>
+
</ul>
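The producer/consumer handoff described for BufferQueue above can be modeled in plain Java. This is a conceptual sketch only: the real BufferQueue is a platform C++ class, and the method names below merely mirror its producer-side (`dequeueBuffer`/`queueBuffer`) and consumer-side (`acquireBuffer`/`releaseBuffer`) calls rather than any real Android API.

```java
import java.util.concurrent.ArrayBlockingQueue;

// Conceptual model of the BufferQueue handoff: the producer dequeues a free
// buffer, fills it, and queues it; the consumer acquires it, uses it, and
// releases it back to the free list. Buffers are passed by handle (reference),
// never copied -- mirroring the real BufferQueue's behavior.
public class BufferQueueModel {
    private final ArrayBlockingQueue<byte[]> free;
    private final ArrayBlockingQueue<byte[]> queued;

    public BufferQueueModel(int count, int size) {
        free = new ArrayBlockingQueue<>(count);
        queued = new ArrayBlockingQueue<>(count);
        for (int i = 0; i < count; i++) free.add(new byte[size]);
    }

    // Producer side
    public byte[] dequeueBuffer() {
        try { return free.take(); } catch (InterruptedException e) { throw new RuntimeException(e); }
    }
    public void queueBuffer(byte[] b) { queued.add(b); }

    // Consumer side
    public byte[] acquireBuffer() {
        try { return queued.take(); } catch (InterruptedException e) { throw new RuntimeException(e); }
    }
    public void releaseBuffer(byte[] b) { free.add(b); }

    public static void main(String[] args) {
        BufferQueueModel q = new BufferQueueModel(3, 16);
        byte[] b = q.dequeueBuffer();   // producer gets a free buffer
        b[0] = 42;                      // ...fills it...
        q.queueBuffer(b);               // ...and hands it to the queue
        byte[] c = q.acquireBuffer();   // consumer takes the same buffer
        System.out.println(c == b && c[0] == 42);  // prints "true": same handle, no copy
        q.releaseBuffer(c);             // back to the free list
    }
}
```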
-<p>For example, if your format specifies RGBA 8888 pixels, and you indicate
-the buffer will be accessed from software -- meaning your application will touch
-pixels directly -- then the allocator needs to create a buffer with 4 bytes per
-pixel in R-G-B-A order. If instead you say the buffer will only be
-accessed from hardware and as a GLES texture, the allocator can do anything the
-GLES driver wants -- BGRA ordering, non-linear "swizzled" layouts, alternative
-color formats, etc. Allowing the hardware to use its preferred format can
-improve performance.</p>
-
-<p>Some values cannot be combined on certain platforms. For example, the "video
-encoder" flag may require YUV pixels, so adding "software access" and specifying
-RGBA 8888 would fail.</p>
-
-<p>The handle returned by the gralloc allocator can be passed between processes
-through Binder.</p>
-
-<h2 id="SurfaceFlinger">SurfaceFlinger and Hardware Composer</h2>
-
-<p>Having buffers of graphical data is wonderful, but life is even better when you
-get to see them on your device's screen. That's where SurfaceFlinger and the
-Hardware Composer HAL come in.</p>
-
-<p>SurfaceFlinger's role is to accept buffers of data from multiple sources,
-composite them, and send them to the display. Once upon a time this was done
-with software blitting to a hardware framebuffer (e.g.
-<code>/dev/graphics/fb0</code>), but those days are long gone.</p>
-
-<p>When an app comes to the foreground, the WindowManager service asks
-SurfaceFlinger for a drawing surface. SurfaceFlinger creates a "layer" - the
-primary component of which is a BufferQueue - for which SurfaceFlinger acts as
-the consumer. A Binder object for the producer side is passed through the
-WindowManager to the app, which can then start sending frames directly to
-SurfaceFlinger.</p>
-
-<p class="note"><strong>Note:</strong> The WindowManager uses the term "window" instead of
-"layer" for this and uses "layer" to mean something else. We're going to use the
-SurfaceFlinger terminology. It can be argued that SurfaceFlinger should really
-be called LayerFlinger.</p>
-
-<p>For most apps, there will be three layers on screen at any time: the "status
-bar" at the top of the screen, the "navigation bar" at the bottom or side, and
-the application's UI. Some apps will have more or less, e.g. the default home app has a
-separate layer for the wallpaper, while a full-screen game might hide the status
-bar. Each layer can be updated independently. The status and navigation bars
-are rendered by a system process, while the app layers are rendered by the app,
-with no coordination between the two.</p>
-
-<p>Device displays refresh at a certain rate, typically 60 frames per second on
-phones and tablets. If the display contents are updated mid-refresh, "tearing"
-will be visible; so it's important to update the contents only between cycles.
-The system receives a signal from the display when it's safe to update the
-contents. For historical reasons we'll call this the VSYNC signal.</p>
-
-<p>The refresh rate may vary over time, e.g. some mobile devices will range from 58
-to 62fps depending on current conditions. For an HDMI-attached television, this
-could theoretically dip to 24 or 48Hz to match a video. Because we can update
-the screen only once per refresh cycle, submitting buffers for display at
-200fps would be a waste of effort as most of the frames would never be seen.
-Instead of taking action whenever an app submits a buffer, SurfaceFlinger wakes
-up when the display is ready for something new.</p>
-
-<p>When the VSYNC signal arrives, SurfaceFlinger walks through its list of layers
-looking for new buffers. If it finds a new one, it acquires it; if not, it
-continues to use the previously-acquired buffer. SurfaceFlinger always wants to
-have something to display, so it will hang on to one buffer. If no buffers have
-ever been submitted on a layer, the layer is ignored.</p>
-
-<p>Once SurfaceFlinger has collected all of the buffers for visible layers, it
-asks the Hardware Composer how composition should be performed.</p>
-
-<h3 id="hwcomposer">Hardware Composer</h3>
-
-<p>The Hardware Composer HAL ("HWC") was first introduced in Android 3.0
-("Honeycomb") and has evolved steadily over the years. Its primary purpose is
-to determine the most efficient way to composite buffers with the available
-hardware. As a HAL, its implementation is device-specific and usually
-implemented by the display hardware OEM.</p>
-
-<p>The value of this approach is easy to recognize when you consider "overlay
-planes." The purpose of overlay planes is to composite multiple buffers
-together, but in the display hardware rather than the GPU. For example, suppose
-you have a typical Android phone in portrait orientation, with the status bar on
-top and navigation bar at the bottom, and app content everywhere else. The contents
-for each layer are in separate buffers. You could handle composition by
-rendering the app content into a scratch buffer, then rendering the status bar
-over it, then rendering the navigation bar on top of that, and finally passing the
-scratch buffer to the display hardware. Or, you could pass all three buffers to
-the display hardware, and tell it to read data from different buffers for
-different parts of the screen. The latter approach can be significantly more
-efficient.</p>
-
-<p>As you might expect, the capabilities of different display processors vary
-significantly. The number of overlays, whether layers can be rotated or
-blended, and restrictions on positioning and overlap can be difficult to express
-through an API. So, the HWC works like this:</p>
-
-<ol>
-<li>SurfaceFlinger provides the HWC with a full list of layers, and asks, "how do
-you want to handle this?"</li>
-<li>The HWC responds by marking each layer as "overlay" or "GLES composition."</li>
-<li>SurfaceFlinger takes care of any GLES composition, passing the output buffer
-to HWC, and lets HWC handle the rest.</li>
-</ol>
-
-<p>Since the decision-making code can be custom tailored by the hardware vendor,
-it's possible to get the best performance out of every device.</p>
-
-<p>Overlay planes may be less efficient than GL composition when nothing on the
-screen is changing. This is particularly true when the overlay contents have
-transparent pixels, and overlapping layers are being blended together. In such
-cases, the HWC can choose to request GLES composition for some or all layers
-and retain the composited buffer. If SurfaceFlinger comes back again asking to
-composite the same set of buffers, the HWC can just continue to show the
-previously-composited scratch buffer. This can improve the battery life of an
-idle device.</p>
-
-<p>Devices shipping with Android 4.4 ("KitKat") typically support four overlay
-planes. Attempting to composite more layers than there are overlays will cause
-the system to use GLES composition for some of them; so the number of layers
-used by an application can have a measurable impact on power consumption and
-performance.</p>
-
-<p>You can see exactly what SurfaceFlinger is up to with the command <code>adb shell
-dumpsys SurfaceFlinger</code>. The output is verbose. The part most relevant to our
-current discussion is the HWC summary that appears near the bottom of the
-output:</p>
-
-<pre>
- type | source crop | frame name
-------------+-----------------------------------+--------------------------------
- HWC | [ 0.0, 0.0, 320.0, 240.0] | [ 48, 411, 1032, 1149] SurfaceView
- HWC | [ 0.0, 75.0, 1080.0, 1776.0] | [ 0, 75, 1080, 1776] com.android.grafika/com.android.grafika.PlayMovieSurfaceActivity
- HWC | [ 0.0, 0.0, 1080.0, 75.0] | [ 0, 0, 1080, 75] StatusBar
- HWC | [ 0.0, 0.0, 1080.0, 144.0] | [ 0, 1776, 1080, 1920] NavigationBar
- FB TARGET | [ 0.0, 0.0, 1080.0, 1920.0] | [ 0, 0, 1080, 1920] HWC_FRAMEBUFFER_TARGET
-</pre>
-
-<p>This tells you what layers are on screen, whether they're being handled with
-overlays ("HWC") or OpenGL ES composition ("GLES"), and gives you a bunch of
-other facts you probably won't care about ("handle" and "hints" and "flags" and
-other stuff that we've trimmed out of the snippet above). The "source crop" and
-"frame" values will be examined more closely later on.</p>
-
-<p>The FB_TARGET layer is where GLES composition output goes. Since all layers
-shown above are using overlays, FB_TARGET isn’t being used for this frame. The
-layer's name is indicative of its original role: On a device with
-<code>/dev/graphics/fb0</code> and no overlays, all composition would be done
-with GLES, and the output would be written to the framebuffer. On recent devices there
-generally is no simple framebuffer, so the FB_TARGET layer is a scratch buffer.</p>
-
-<p class="note"><strong>Note:</strong> This is why screen grabbers written for old versions of Android no
-longer work: They're trying to read from the Framebuffer, but there is no such
-thing.</p>
-
-<p>The overlay planes have another important role: they're the only way to display
-DRM content. DRM-protected buffers cannot be accessed by SurfaceFlinger or the
-GLES driver, which means that your video will disappear if HWC switches to GLES
-composition.</p>
-
-<h3 id="triple-buffering">The Need for Triple-Buffering</h3>
-
-<p>To avoid tearing on the display, the system needs to be double-buffered: the
-front buffer is displayed while the back buffer is being prepared. At VSYNC, if
-the back buffer is ready, you quickly switch them. This works reasonably well
-in a system where you're drawing directly into the framebuffer, but there's a
-hitch in the flow when a composition step is added. Because of the way
-SurfaceFlinger is triggered, our double-buffered pipeline will have a bubble.</p>
-
-<p>Suppose frame N is being displayed, and frame N+1 has been acquired by
-SurfaceFlinger for display on the next VSYNC. (Assume frame N is composited
-with an overlay, so we can't alter the buffer contents until the display is done
-with it.) When VSYNC arrives, HWC flips the buffers. While the app is starting
-to render frame N+2 into the buffer that used to hold frame N, SurfaceFlinger is
-scanning the layer list, looking for updates. SurfaceFlinger won't find any new
-buffers, so it prepares to show frame N+1 again after the next VSYNC. A little
-while later, the app finishes rendering frame N+2 and queues it for
-SurfaceFlinger, but it's too late. This has effectively cut our maximum frame
-rate in half.</p>
-
-<p>We can fix this with triple-buffering. Just before VSYNC, frame N is being
-displayed, frame N+1 has been composited (or scheduled for an overlay) and is
-ready to be displayed, and frame N+2 is queued up and ready to be acquired by
-SurfaceFlinger. When the screen flips, the buffers rotate through the stages
-with no bubble. The app has just less than a full VSYNC period (16.7ms at 60fps) to
-do its rendering and queue the buffer. And SurfaceFlinger / HWC has a full VSYNC
-period to figure out the composition before the next flip. The downside is
-that it takes at least two VSYNC periods for anything that the app does to
-appear on the screen. As the latency increases, the device feels less
-responsive to touch input.</p>
-
-<img src="images/surfaceflinger_bufferqueue.png" alt="SurfaceFlinger with BufferQueue" />
-
-<p class="img-caption">
- <strong>Figure 1.</strong> SurfaceFlinger + BufferQueue
-</p>
-
-<p>The diagram above depicts the flow of SurfaceFlinger and BufferQueue. During
-frame:</p>
-
-<ol>
-<li>red buffer fills up, then slides into BufferQueue</li>
-<li>after red buffer leaves app, blue buffer slides in, replacing it</li>
-<li>green buffer and systemUI* shadow-slide into HWC (showing that SurfaceFlinger
-still has the buffers, but now HWC has prepared them for display via overlay on
-the next VSYNC).</li>
-</ol>
-
-<p>The blue buffer is referenced by both the display and the BufferQueue. The
-app is not allowed to render to it until the associated sync fence signals.</p>
-
-<p>On VSYNC, all of these happen at once:</p>
+<h2 id=high_level>High-level components</h2>
<ul>
-<li>red buffer leaps into SurfaceFlinger, replacing green buffer</li>
-<li>green buffer leaps into Display, replacing blue buffer, and a dotted-line
-green twin appears in the BufferQueue</li>
-<li>the blue buffer’s fence is signaled, and the blue buffer in App empties**</li>
-<li>display rect changes from <blue + SystemUI> to <green +
-SystemUI></li>
+<li><a href="{@docRoot}devices/graphics/arch-sv-glsv.html">SurfaceView and
+GLSurfaceView</a>. SurfaceView combines a Surface and a View. SurfaceView's View
+components are composited by SurfaceFlinger (and not the app), enabling
+rendering from a separate thread/process and isolation from app UI rendering.
+GLSurfaceView provides helper classes to manage EGL contexts, inter-thread
+communication, and interaction with the Activity lifecycle (but is not required
+to use GLES).</li>
+
+<li><a href="{@docRoot}devices/graphics/arch-st.html">SurfaceTexture</a>.
+SurfaceTexture combines a Surface and GLES texture to create a BufferQueue for
+which your app is the consumer. When a producer queues a new buffer, it notifies
+your app, which in turn releases the previously-held buffer, acquires the new
+buffer from the queue, and makes EGL calls to make the buffer available to GLES
+as an external texture. Android 7.0 adds support for secure texture video
+playback, enabling GPU post-processing of protected video content.</li>
+
+<li><a href="{@docRoot}devices/graphics/arch-tv.html">TextureView</a>.
+TextureView combines a View with a SurfaceTexture. TextureView wraps a
+SurfaceTexture and takes responsibility for responding to callbacks and
+acquiring new buffers. When drawing, TextureView uses the contents of the most
+recently received buffer as its data source, rendering wherever and however the
+View state indicates it should. View composition is always performed with GLES,
+meaning updates to contents may cause other View elements to redraw as well.</li>
</ul>
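The "most recently received buffer" behavior described for TextureView above can be modeled with an atomic latest-value slot. Again, this is a plain-Java sketch of the semantics, not the real implementation; the method names are hypothetical stand-ins for the frame-available callback and the View's draw pass.

```java
import java.util.concurrent.atomic.AtomicReference;

// Model of latest-buffer semantics: the producer publishes frames as fast as
// it likes; the view's draw pass always reads whatever arrived most recently,
// and intermediate frames are simply dropped.
public class LatestFrameModel {
    private final AtomicReference<String> latest = new AtomicReference<>();

    public void onFrameAvailable(String frame) { latest.set(frame); }  // producer thread
    public String draw() { return latest.get(); }                      // UI thread

    public static void main(String[] args) {
        LatestFrameModel view = new LatestFrameModel();
        view.onFrameAvailable("frame-1");
        view.onFrameAvailable("frame-2");  // arrives before the next draw
        System.out.println(view.draw());   // prints "frame-2": only the newest frame is drawn
    }
}
```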
-
-<p><strong>*</strong> - The System UI process is providing the status and nav
-bars, which for our purposes here aren’t changing, so SurfaceFlinger keeps using
-the previously-acquired buffer. In practice there would be two separate
-buffers, one for the status bar at the top, one for the navigation bar at the
-bottom, and they would be sized to fit their contents. Each would arrive on its
-own BufferQueue.</p>
-
-<p><strong>**</strong> - The buffer doesn’t actually “empty”; if you submit it
-without drawing on it you’ll get that same blue again. The emptying is the
-result of clearing the buffer contents, which the app should do before it starts
-drawing.</p>
-
-<p>We can reduce the latency by noting layer composition should not require a
-full VSYNC period. If composition is performed by overlays, it takes essentially
-zero CPU and GPU time. But we can't count on that, so we need to allow a little
-time. If the app starts rendering halfway between VSYNC signals, and
-SurfaceFlinger defers the HWC setup until a few milliseconds before the signal
-is due to arrive, we can cut the latency from 2 frames to perhaps 1.5. In
-theory you could render and composite in a single period, allowing a return to
-double-buffering; but getting it down that far is difficult on current devices.
-Minor fluctuations in rendering and composition time, and switching from
-overlays to GLES composition, can cause us to miss a swap deadline and repeat
-the previous frame.</p>
-
-<p>SurfaceFlinger's buffer handling demonstrates the fence-based buffer
-management mentioned earlier. If we're animating at full speed, we need to
-have an acquired buffer for the display ("front") and an acquired buffer for
-the next flip ("back"). If we're showing the buffer on an overlay, the
-contents are being accessed directly by the display and must not be touched.
-But if you look at an active layer's BufferQueue state in the <code>dumpsys
-SurfaceFlinger</code> output, you'll see one acquired buffer, one queued buffer, and
-one free buffer. That's because, when SurfaceFlinger acquires the new "back"
-buffer, it releases the current "front" buffer to the queue. The "front"
-buffer is still in use by the display, so anything that dequeues it must wait
-for the fence to signal before drawing on it. So long as everybody follows
-the fencing rules, all of the queue-management IPC requests can happen in
-parallel with the display.</p>
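The steady state described above (one acquired, one queued, one free buffer in the `dumpsys SurfaceFlinger` output) can be illustrated with a toy model. This is an assumption-laden sketch, not the real native BufferQueue implementation: fences, IPC, and slot negotiation are omitted, and all names are hypothetical.

```java
import java.util.Arrays;

// Toy model (NOT the real native BufferQueue) of the slot states described
// above. Each vsync, the consumer latches the frame queued on the previous
// vsync, releasing the old front buffer, while the producer queues a new one.
public class BufferQueueModel {
    enum State { FREE, QUEUED, ACQUIRED }

    private final State[] slots;
    private int acquired = -1; // slot currently held by the consumer ("front")

    BufferQueueModel(int bufferCount) {
        slots = new State[bufferCount];
        Arrays.fill(slots, State.FREE);
    }

    // Producer side: grab a free slot, render into it, queue it.
    void producerRenderFrame() {
        int slot = indexOf(State.FREE);
        if (slot < 0) throw new IllegalStateException("dequeue would block");
        slots[slot] = State.QUEUED; // app rendered the buffer and queued it
    }

    // Consumer side: acquire the queued slot, release the old front buffer.
    void consumerLatchFrame() {
        int next = indexOf(State.QUEUED);
        if (next < 0) return;                            // no new frame; reuse previous
        if (acquired >= 0) slots[acquired] = State.FREE; // release "front" (fence pending)
        slots[next] = State.ACQUIRED;
        acquired = next;
    }

    int count(State s) {
        int n = 0;
        for (State st : slots) if (st == s) n++;
        return n;
    }

    private int indexOf(State s) {
        for (int i = 0; i < slots.length; i++) if (slots[i] == s) return i;
        return -1;
    }

    public static void main(String[] args) {
        BufferQueueModel q = new BufferQueueModel(3);
        for (int frame = 0; frame < 4; frame++) {
            q.consumerLatchFrame();  // latch the frame queued on the previous vsync
            q.producerRenderFrame(); // render and queue the next frame
        }
        System.out.println(q.count(State.ACQUIRED) + " acquired, "
                + q.count(State.QUEUED) + " queued, "
                + q.count(State.FREE) + " free");
        // → 1 acquired, 1 queued, 1 free
    }
}
```

After a few cycles the model settles into exactly the 1/1/1 split that `dumpsys SurfaceFlinger` reports for an active layer.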
-
-<h3 id="virtual-displays">Virtual Displays</h3>
-
-<p>SurfaceFlinger supports a "primary" display, i.e. what's built into your phone
-or tablet, and an "external" display, such as a television connected through
-HDMI. It also supports a number of "virtual" displays, which make composited
-output available within the system. Virtual displays can be used to record the
-screen or send it over a network.</p>
-
-<p>Virtual displays may share the same set of layers as the main display
-(the "layer stack") or have their own set. There is no VSYNC for a virtual
-display, so the VSYNC for the primary display is used to trigger composition for
-all displays.</p>
-
-<p>In the past, virtual displays were always composited with GLES. The Hardware
-Composer managed composition for only the primary display. In Android 4.4, the
-Hardware Composer gained the ability to participate in virtual display
-composition.</p>
-
-<p>As you might expect, the frames generated for a virtual display are written to a
-BufferQueue.</p>
-
-<h3 id="screenrecord">Case study: screenrecord</h3>
-
-<p>Now that we've established some background on BufferQueue and SurfaceFlinger,
-it's useful to examine a practical use case.</p>
-
-<p>The <a href="https://android.googlesource.com/platform/frameworks/av/+/kitkat-release/cmds/screenrecord/">screenrecord
-command</a>,
-introduced in Android 4.4, allows you to record everything that appears on the
-screen as an .mp4 file on disk. To implement this, we have to receive composited
-frames from SurfaceFlinger, write them to the video encoder, and then write the
-encoded video data to a file. The video codecs are managed by a separate
-process - called "mediaserver" - so we have to move large graphics buffers around
-the system. To make it more challenging, we're trying to record 60fps video at
-full resolution. The key to making this work efficiently is BufferQueue.</p>
-
-<p>The MediaCodec class allows an app to provide data as raw bytes in buffers, or
-through a Surface. We'll discuss Surface in more detail later, but for now just
-think of it as a wrapper around the producer end of a BufferQueue. When
-screenrecord requests access to a video encoder, mediaserver creates a
-BufferQueue and connects itself to the consumer side, and then passes the
-producer side back to screenrecord as a Surface.</p>
-
-<p>The screenrecord command then asks SurfaceFlinger to create a virtual display
-that mirrors the main display (i.e. it has all of the same layers), and directs
-it to send output to the Surface that came from mediaserver. Note that, in this
-case, SurfaceFlinger is the producer of buffers rather than the consumer.</p>
-
-<p>Once the configuration is complete, screenrecord can just sit and wait for
-encoded data to appear. As apps draw, their buffers travel to SurfaceFlinger,
-which composites them into a single buffer that gets sent directly to the video
-encoder in mediaserver. The full frames are never even seen by the screenrecord
-process. Internally, mediaserver has its own way of moving buffers around that
-also passes data by handle, minimizing overhead.</p>
-
-<h3 id="simulate-secondary">Case study: Simulate Secondary Displays</h3>
-
-<p>The WindowManager can ask SurfaceFlinger to create a visible layer for which
-SurfaceFlinger will act as the BufferQueue consumer. It's also possible to ask
-SurfaceFlinger to create a virtual display, for which SurfaceFlinger will act as
-the BufferQueue producer. What happens if you connect them, configuring a
-virtual display that renders to a visible layer?</p>
-
-<p>You create a closed loop, where the composited screen appears in a window. Of
-course, that window is now part of the composited output, so on the next refresh
-the composited image inside the window will show the window contents as well.
-It's turtles all the way down. You can see this in action by enabling
-"<a href="http://developer.android.com/tools/index.html">Developer options</a>" in
-settings, selecting "Simulate secondary displays", and enabling a window. For
-bonus points, use screenrecord to capture the act of enabling the display, then
-play it back frame-by-frame.</p>
-
-<h2 id="surface">Surface and SurfaceHolder</h2>
-
-<p>The <a
-href="http://developer.android.com/reference/android/view/Surface.html">Surface</a>
-class has been part of the public API since 1.0. Its description simply says,
-"Handle onto a raw buffer that is being managed by the screen compositor." The
-statement was accurate when initially written but falls well short of the mark
-on a modern system.</p>
-
-<p>The Surface represents the producer side of a buffer queue that is often (but
-not always!) consumed by SurfaceFlinger. When you render onto a Surface, the
-result ends up in a buffer that gets shipped to the consumer. A Surface is not
-simply a raw chunk of memory you can scribble on.</p>
-
-<p>The BufferQueue for a display Surface is typically configured for
-triple-buffering; but buffers are allocated on demand. So if the producer
-generates buffers slowly enough -- maybe it's animating at 30fps on a 60fps
-display -- there might only be two allocated buffers in the queue. This helps
-minimize memory consumption. You can see a summary of the buffers associated
-with every layer in the <code>dumpsys SurfaceFlinger</code> output.</p>
-
-<h3 id="canvas">Canvas Rendering</h3>
-
-<p>Once upon a time, all rendering was done in software, and you can still do this
-today. The low-level implementation is provided by the Skia graphics library.
-If you want to draw a rectangle, you make a library call, and it sets bytes in a
-buffer appropriately. To ensure that a buffer isn't updated by two clients at
-once, or written to while being displayed, you have to lock the buffer to access
-it. <code>lockCanvas()</code> locks the buffer and returns a Canvas to use for drawing,
-and <code>unlockCanvasAndPost()</code> unlocks the buffer and sends it to the compositor.</p>
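The lock/draw/unlock cycle looks like the sketch below. This is illustrative only: it assumes a valid SurfaceHolder obtained from a SurfaceView, runs only on an Android device, and `drawFrame` is a hypothetical helper name.

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.view.SurfaceHolder;

// Hypothetical helper showing the Canvas lock/draw/unlock cycle.
class SoftwareRenderer {
    static void drawFrame(SurfaceHolder holder) {
        Canvas canvas = holder.lockCanvas(); // locks a buffer from the BufferQueue
        if (canvas == null) return;          // Surface not ready yet
        try {
            // Clear first: a re-used buffer still holds a previously-rendered frame.
            canvas.drawColor(Color.BLACK);
            // ... Skia-backed drawing calls here ...
        } finally {
            holder.unlockCanvasAndPost(canvas); // queue the buffer to the compositor
        }
    }
}
```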
-
-<p>As time went on, and devices with general-purpose 3D engines appeared, Android
-reoriented itself around OpenGL ES. However, it was important to keep the old
-API working, for apps as well as app framework code, so an effort was made to
-hardware-accelerate the Canvas API. As you can see from the charts on the
-<a href="http://developer.android.com/guide/topics/graphics/hardware-accel.html">Hardware
-Acceleration</a>
-page, this was a bit of a bumpy ride. Note in particular that while the Canvas
-provided to a View's <code>onDraw()</code> method may be hardware-accelerated, the Canvas
-obtained when an app locks a Surface directly with <code>lockCanvas()</code> never is.</p>
-
-<p>When you lock a Surface for Canvas access, the "CPU renderer" connects to the
-producer side of the BufferQueue and does not disconnect until the Surface is
-destroyed. Most other producers (like GLES) can be disconnected and reconnected
-to a Surface, but the Canvas-based "CPU renderer" cannot. This means you can't
-draw on a surface with GLES or send it frames from a video decoder if you've
-ever locked it for a Canvas.</p>
-
-<p>The first time the producer requests a buffer from a BufferQueue, it is
-allocated and initialized to zeroes. Initialization is necessary to avoid
-inadvertently sharing data between processes. When you re-use a buffer,
-however, the previous contents will still be present. If you repeatedly call
-<code>lockCanvas()</code> and <code>unlockCanvasAndPost()</code> without
-drawing anything, you'll cycle between previously-rendered frames.</p>
-
-<p>The Surface lock/unlock code keeps a reference to the previously-rendered
-buffer. If you specify a dirty region when locking the Surface, it will copy
-the non-dirty pixels from the previous buffer. There's a fair chance the buffer
-will be handled by SurfaceFlinger or HWC; but since we only need to read from
-it, there's no need to wait for exclusive access.</p>
-
-<p>The main non-Canvas way for an application to draw directly on a Surface is
-through OpenGL ES. That's described in the <a href="#eglsurface">EGLSurface and
-OpenGL ES</a> section.</p>
-
-<h3 id="surfaceholder">SurfaceHolder</h3>
-
-<p>Some things that work with Surfaces want a SurfaceHolder, notably SurfaceView.
-The original idea was that Surface represented the raw compositor-managed
-buffer, while SurfaceHolder was managed by the app and kept track of
-higher-level information like the dimensions and format. The Java-language
-definition mirrors the underlying native implementation. It's arguably no
-longer useful to split it this way, but it has long been part of the public API.</p>
-
-<p>Generally speaking, anything having to do with a View will involve a
-SurfaceHolder. Some other APIs, such as MediaCodec, will operate on the Surface
-itself. You can easily get the Surface from the SurfaceHolder, so hang on to
-the latter when you have it.</p>
-
-<p>APIs to get and set Surface parameters, such as the size and format, are
-implemented through SurfaceHolder.</p>
-
-<h2 id="eglsurface">EGLSurface and OpenGL ES</h2>
-
-<p>OpenGL ES defines an API for rendering graphics. It does not define a windowing
-system. To allow GLES to work on a variety of platforms, it is designed to be
-combined with a library that knows how to create and access windows through the
-operating system. The library used for Android is called EGL. If you want to
-draw textured polygons, you use GLES calls; if you want to put your rendering on
-the screen, you use EGL calls.</p>
-
-<p>Before you can do anything with GLES, you need to create a GL context. In EGL,
-this means creating an EGLContext and an EGLSurface. GLES operations apply to
-the current context, which is accessed through thread-local storage rather than
-passed around as an argument. This means you have to be careful about which
-thread your rendering code executes on, and which context is current on that
-thread.</p>
-
-<p>The EGLSurface can be an off-screen buffer allocated by EGL (called a "pbuffer")
-or a window allocated by the operating system. EGL window surfaces are created
-with the <code>eglCreateWindowSurface()</code> call. It takes a "window object" as an
-argument, which on Android can be a SurfaceView, a SurfaceTexture, a
-SurfaceHolder, or a Surface -- all of which have a BufferQueue underneath. When
-you make this call, EGL creates a new EGLSurface object, and connects it to the
-producer interface of the window object's BufferQueue. From that point onward,
-rendering to that EGLSurface results in a buffer being dequeued, rendered into,
-and queued for use by the consumer. (The term "window" is indicative of the
-expected use, but bear in mind the output might not be destined to appear
-on the display.)</p>
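The context-plus-window-surface setup described above can be sketched with the EGL14 bindings (API 17+). This is a sketch under stated assumptions, not a definitive implementation: it runs only on a device, error checking is omitted, and the `EglSetup`/`connect` names are hypothetical.

```java
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.view.Surface;

// Hypothetical helper: create an EGLContext and connect an EGLSurface to the
// producer side of the Surface's BufferQueue.
class EglSetup {
    static EGLSurface connect(Surface surface) {
        EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
        int[] version = new int[2];
        EGL14.eglInitialize(display, version, 0, version, 1);

        int[] attribs = { EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT, EGL14.EGL_NONE };
        EGLConfig[] configs = new EGLConfig[1];
        int[] num = new int[1];
        EGL14.eglChooseConfig(display, attribs, 0, configs, 0, 1, num, 0);

        int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
        EGLContext context = EGL14.eglCreateContext(display, configs[0],
                EGL14.EGL_NO_CONTEXT, ctxAttribs, 0);

        // The Surface is the "window object"; EGL connects here to the
        // producer interface of its BufferQueue.
        EGLSurface window = EGL14.eglCreateWindowSurface(display, configs[0], surface,
                new int[] { EGL14.EGL_NONE }, 0);
        EGL14.eglMakeCurrent(display, window, window, context);
        return window; // render with GLES, then EGL14.eglSwapBuffers(display, window)
    }
}
```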
-
-<p>EGL does not provide lock/unlock calls. Instead, you issue drawing commands and
-then call <code>eglSwapBuffers()</code> to submit the current frame. The
-method name comes from the traditional swap of front and back buffers, but the actual
-implementation may be very different.</p>
-
-<p>Only one EGLSurface can be associated with a Surface at a time -- you can have
-only one producer connected to a BufferQueue -- but if you destroy the
-EGLSurface it will disconnect from the BufferQueue and allow something else to
-connect.</p>
-
-<p>A given thread can switch between multiple EGLSurfaces by changing what's
-"current." An EGLSurface must be current on only one thread at a time.</p>
-
-<p>The most common mistake when thinking about EGLSurface is assuming that it is
-just another aspect of Surface (like SurfaceHolder). It's a related but
-independent concept. You can draw on an EGLSurface that isn't backed by a
-Surface, and you can use a Surface without EGL. EGLSurface just gives GLES a
-place to draw.</p>
-
-<h3 id="anativewindow">ANativeWindow</h3>
-
-<p>The public Surface class is implemented in the Java programming language. The
-equivalent in C/C++ is the ANativeWindow class, semi-exposed by the <a
-href="https://developer.android.com/tools/sdk/ndk/index.html">Android NDK</a>. You
-can get the ANativeWindow from a Surface with the <code>ANativeWindow_fromSurface()</code>
-call. Just like its Java-language cousin, you can lock it, render in software,
-and unlock-and-post.</p>
-
-<p>To create an EGL window surface from native code, you pass an instance of
-EGLNativeWindowType to <code>eglCreateWindowSurface()</code>. EGLNativeWindowType is just
-a synonym for ANativeWindow, so you can freely cast one to the other.</p>
-
-<p>The fact that the basic "native window" type just wraps the producer side of a
-BufferQueue should not come as a surprise.</p>
-
-<h2 id="surfaceview">SurfaceView and GLSurfaceView</h2>
-
-<p>Now that we've explored the lower-level components, it's time to see how they
-fit into the higher-level components that apps are built from.</p>
-
-<p>The Android app framework UI is based on a hierarchy of objects that start with
-View. Most of the details don't matter for this discussion, but it's helpful to
-understand that UI elements go through a complicated measurement and layout
-process that fits them into a rectangular area. All visible View objects are
-rendered to a SurfaceFlinger-created Surface that was set up by the
-WindowManager when the app was brought to the foreground. The layout and
-rendering is performed on the app's UI thread.</p>
-
-<p>Regardless of how many Layouts and Views you have, everything gets rendered into
-a single buffer. This is true whether or not the Views are hardware-accelerated.</p>
-
-<p>A SurfaceView takes the same sorts of parameters as other views, so you can give
-it a position and size, and fit other elements around it. When it comes time to
-render, however, the contents are completely transparent. The View part of a
-SurfaceView is just a see-through placeholder.</p>
-
-<p>When the SurfaceView's View component is about to become visible, the framework
-asks the WindowManager to ask SurfaceFlinger to create a new Surface. (This
-doesn't happen synchronously, which is why you should provide a callback that
-notifies you when the Surface creation finishes.) By default, the new Surface
-is placed behind the app UI Surface, but the default "Z-ordering" can be
-overridden to put the Surface on top.</p>
-
-<p>Whatever you render onto this Surface will be composited by SurfaceFlinger, not
-by the app. This is the real power of SurfaceView: the Surface you get can be
-rendered by a separate thread or a separate process, isolated from any rendering
-performed by the app UI, and the buffers go directly to SurfaceFlinger. You
-can't totally ignore the UI thread -- you still have to coordinate with the
-Activity lifecycle, and you may need to adjust something if the size or position
-of the View changes -- but you have a whole Surface all to yourself, and
-blending with the app UI and other layers is handled by the Hardware Composer.</p>
-
-<p>It's worth taking a moment to note that this new Surface is the producer side of
-a BufferQueue whose consumer is a SurfaceFlinger layer. You can update the
-Surface with any mechanism that can feed a BufferQueue: you can use the
-Surface-supplied Canvas functions, attach an EGLSurface and draw on it
-with GLES, or configure a MediaCodec video decoder to write to it.</p>
-
-<h3 id="composition">Composition and the Hardware Scaler</h3>
-
-<p>Now that we have a bit more context, it's useful to go back and look at a couple
-of fields from <code>dumpsys SurfaceFlinger</code> that we skipped over earlier
-on. Back in the <a href="#hwcomposer">Hardware Composer</a> discussion, we
-looked at some output like this:</p>
-
-<pre>
- type | source crop | frame name
-------------+-----------------------------------+--------------------------------
- HWC | [ 0.0, 0.0, 320.0, 240.0] | [ 48, 411, 1032, 1149] SurfaceView
- HWC | [ 0.0, 75.0, 1080.0, 1776.0] | [ 0, 75, 1080, 1776] com.android.grafika/com.android.grafika.PlayMovieSurfaceActivity
- HWC | [ 0.0, 0.0, 1080.0, 75.0] | [ 0, 0, 1080, 75] StatusBar
- HWC | [ 0.0, 0.0, 1080.0, 144.0] | [ 0, 1776, 1080, 1920] NavigationBar
- FB TARGET | [ 0.0, 0.0, 1080.0, 1920.0] | [ 0, 0, 1080, 1920] HWC_FRAMEBUFFER_TARGET
-</pre>
-
-<p>This was taken while playing a movie in Grafika's "Play video (SurfaceView)"
-activity, on a Nexus 5 in portrait orientation. Note that the list is ordered
-from back to front: the SurfaceView's Surface is in the back, the app UI layer
-sits on top of that, followed by the status and navigation bars that are above
-everything else. The video is QVGA (320x240).</p>
-
-<p>The "source crop" indicates the portion of the Surface's buffer that
-SurfaceFlinger is going to display. The app UI was given a Surface equal to the
-full size of the display (1080x1920), but there's no point rendering and
-compositing pixels that will be obscured by the status and navigation bars, so
-the source is cropped to a rectangle that starts 75 pixels from the top, and
-ends 144 pixels from the bottom. The status and navigation bars have smaller
-Surfaces, and the source crop describes a rectangle that begins at the top
-left (0,0) and spans their content.</p>
-
-<p>The "frame" is the rectangle where the pixels end up on the display. For the
-app UI layer, the frame matches the source crop, because we're copying (or
-overlaying) a portion of a display-sized layer to the same location in another
-display-sized layer. For the status and navigation bars, the size of the frame
-rectangle is the same, but the position is adjusted so that the navigation bar
-appears at the bottom of the screen.</p>
-
-<p>Now consider the layer labeled "SurfaceView", which holds our video content.
-The source crop matches the video size, which SurfaceFlinger knows because the
-MediaCodec decoder (the buffer producer) is dequeuing buffers that size. The
-frame rectangle has a completely different size -- 984x738.</p>
-
-<p>SurfaceFlinger handles size differences by scaling the buffer contents to fill
-the frame rectangle, upscaling or downscaling as needed. This particular size
-was chosen because it has the same aspect ratio as the video (4:3), and is as
-wide as possible given the constraints of the View layout (which includes some
-padding at the edges of the screen for aesthetic reasons).</p>
-
-<p>If you started playing a different video on the same Surface, the underlying
-BufferQueue would reallocate buffers to the new size automatically, and
-SurfaceFlinger would adjust the source crop. If the aspect ratio of the new
-video is different, the app would need to force a re-layout of the View to match
-it, which causes the WindowManager to tell SurfaceFlinger to update the frame
-rectangle.</p>
-
-<p>If you're rendering on the Surface through some other means, perhaps GLES, you
-can set the Surface size using the <code>SurfaceHolder#setFixedSize()</code>
-call. You could, for example, configure a game to always render at 1280x720,
-which would significantly reduce the number of pixels that must be touched to
-fill the screen on a 2560x1440 tablet or 4K television. The display processor
-handles the scaling. If you don't want to letter- or pillar-box your game, you
-could adjust the game's aspect ratio by setting the size so that the narrow
-dimension is 720 pixels, but the long dimension is set to maintain the aspect
-ratio of the physical display (e.g. 1152x720 to match a 2560x1600 display).
-You can see an example of this approach in Grafika's "Hardware scaler
-exerciser" activity.</p>
-
-<h3 id="glsurfaceview">GLSurfaceView</h3>
-
-<p>The GLSurfaceView class provides some helper classes that help manage EGL
-contexts, inter-thread communication, and interaction with the Activity
-lifecycle. That's it. You do not need to use a GLSurfaceView to use GLES.</p>
-
-<p>For example, GLSurfaceView creates a thread for rendering and configures an EGL
-context there. The state is cleaned up automatically when the activity pauses.
-Most apps won't need to know anything about EGL to use GLES with GLSurfaceView.</p>
-
-<p>In most cases, GLSurfaceView is very helpful and can make working with GLES
-easier. In some situations, it can get in the way. Use it if it helps, don't
-if it doesn't.</p>
-
-<h2 id="surfacetexture">SurfaceTexture</h2>
-
-<p>The SurfaceTexture class is a relative newcomer, added in Android 3.0
-("Honeycomb"). Just as SurfaceView is the combination of a Surface and a View,
-SurfaceTexture is the combination of a Surface and a GLES texture. Sort of.</p>
-
-<p>When you create a SurfaceTexture, you are creating a BufferQueue for which your
-app is the consumer. When a new buffer is queued by the producer, your app is
-notified via callback (<code>onFrameAvailable()</code>). Your app calls
-<code>updateTexImage()</code>, which releases the previously-held buffer,
-acquires the new buffer from the queue, and makes some EGL calls to make the
-buffer available to GLES as an "external" texture.</p>
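The consumer-side flow just described looks roughly like the sketch below. Assumptions, loudly labeled: `texId` was created with `glGenTextures`, an EGL context is current on the thread calling `drainFrame()`, the `FrameConsumer`/`drainFrame` names are hypothetical, and this runs only on a device.

```java
import android.graphics.SurfaceTexture;

// Hypothetical consumer: app-side handling of frames queued to a
// SurfaceTexture's BufferQueue.
class FrameConsumer {
    private final float[] transform = new float[16];
    private final SurfaceTexture surfaceTexture;

    FrameConsumer(int texId) {
        surfaceTexture = new SurfaceTexture(texId); // texId: GL_TEXTURE_EXTERNAL_OES name
        // Called when the producer queues a buffer; post drainFrame() to the GL thread.
        surfaceTexture.setOnFrameAvailableListener(st -> { /* post to GL thread */ });
    }

    void drainFrame() {
        surfaceTexture.updateTexImage();              // release old buffer, acquire + bind new one
        surfaceTexture.getTransformMatrix(transform); // per-buffer texture-coordinate transform
        long timestampNs = surfaceTexture.getTimestamp(); // producer-supplied (e.g. capture time)
        // ... draw textured geometry sampling the external texture ...
    }
}
```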
-
-<p>External textures (<code>GL_TEXTURE_EXTERNAL_OES</code>) are not quite the
-same as textures created by GLES (<code>GL_TEXTURE_2D</code>). You have to
-configure your renderer a bit differently, and there are things you can't do
-with them. But the key point is this: You can render textured polygons directly
-from the data received by your BufferQueue.</p>
-
-<p>You may be wondering how we can guarantee the format of the data in the
-buffer is something GLES can recognize -- gralloc supports a wide variety
-of formats. When SurfaceTexture created the BufferQueue, it set the consumer's
-usage flags to <code>GRALLOC_USAGE_HW_TEXTURE</code>, ensuring that any buffer
-created by gralloc would be usable by GLES.</p>
-
-<p>Because SurfaceTexture interacts with an EGL context, you have to be careful to
-call its methods from the correct thread. This is spelled out in the class
-documentation.</p>
-
-<p>If you look deeper into the class documentation, you will see a couple of odd
-calls. One retrieves a timestamp, the other a transformation matrix, the value
-of each having been set by the previous call to <code>updateTexImage()</code>.
-It turns out that BufferQueue passes more than just a buffer handle to the consumer.
-Each buffer is accompanied by a timestamp and transformation parameters.</p>
-
-<p>The transformation is provided for efficiency. In some cases, the source data
-might be in the "wrong" orientation for the consumer; but instead of rotating
-the data before sending it, we can send the data in its current orientation with
-a transform that corrects it. The transformation matrix can be merged with
-other transformations at the point the data is used, minimizing overhead.</p>
-
-<p>The timestamp is useful for certain buffer sources. For example, suppose you
-connect the producer interface to the output of the camera (with
-<code>setPreviewTexture()</code>). If you want to create a video, you need to
-set the presentation time stamp for each frame; but you want to base that on the time
-when the frame was captured, not the time when the buffer was received by your
-app. The timestamp provided with the buffer is set by the camera code,
-resulting in a more consistent series of timestamps.</p>
-
-<h3 id="surfacet">SurfaceTexture and Surface</h3>
-
-<p>If you look closely at the API you'll see the only way for an application
-to create a plain Surface is through a constructor that takes a SurfaceTexture
-as the sole argument. (Prior to API 11, there was no public constructor for
-Surface at all.) This might seem a bit backward if you view SurfaceTexture as a
-combination of a Surface and a texture.</p>
-
-<p>Under the hood, SurfaceTexture is called GLConsumer, which more accurately
-reflects its role as the owner and consumer of a BufferQueue. When you create a
-Surface from a SurfaceTexture, what you're doing is creating an object that
-represents the producer side of the SurfaceTexture's BufferQueue.</p>
-
-<h3 id="continuous-capture">Case Study: Grafika's "Continuous Capture" Activity</h3>
-
-<p>The camera can provide a stream of frames suitable for recording as a movie. If
-you want to display it on screen, you create a SurfaceView, pass its SurfaceHolder to
-<code>setPreviewDisplay()</code>, and let the producer (camera) and consumer
-(SurfaceFlinger) do all the work. If you want to record the video, you create a
-Surface with MediaCodec's <code>createInputSurface()</code>, pass that to the
-camera, and again you sit back and relax. If you want to show the video and
-record it at the same time, you have to get more involved.</p>
-
-<p>The "Continuous capture" activity displays video from the camera as it's being
-recorded. In this case, encoded video is written to a circular buffer in memory
-that can be saved to disk at any time. It's straightforward to implement so
-long as you keep track of where everything is.</p>
-
-<p>There are three BufferQueues involved. The app uses a SurfaceTexture to receive
-frames from Camera, converting them to an external GLES texture. The app
-declares a SurfaceView, which we use to display the frames, and we configure a
-MediaCodec encoder with an input Surface to create the video. So one
-BufferQueue is created by the app, one by SurfaceFlinger, and one by
-mediaserver.</p>
-
-<img src="images/continuous_capture_activity.png" alt="Grafika continuous
-capture activity" />
-
-<p class="img-caption">
-  <strong>Figure 2.</strong> Grafika's continuous capture activity
-</p>
-
-<p>In the diagram above, the arrows show the propagation of the data from the
-camera. BufferQueues are in color (purple producer, cyan consumer). Note
-“Camera” actually lives in the mediaserver process.</p>
-
-<p>Encoded H.264 video goes to a circular buffer in RAM in the app process, and is
-written to an MP4 file on disk using the MediaMuxer class when the “capture”
-button is hit.</p>
-
-<p>All three of the BufferQueues are handled with a single EGL context in the
-app, and the GLES operations are performed on the UI thread. Doing the
-SurfaceView rendering on the UI thread is generally discouraged, but since we're
-doing simple operations that are handled asynchronously by the GLES driver we
-should be fine. (If the video encoder locks up and we block trying to dequeue a
-buffer, the app will become unresponsive. But at that point, we're probably
-failing anyway.) The handling of the encoded data -- managing the circular
-buffer and writing it to disk -- is performed on a separate thread.</p>
-
-<p>The bulk of the configuration happens in the SurfaceView's <code>surfaceCreated()</code>
-callback. The EGLContext is created, and EGLSurfaces are created for the
-display and for the video encoder. When a new frame arrives, we tell
-SurfaceTexture to acquire it and make it available as a GLES texture, then
-render it with GLES commands on each EGLSurface (forwarding the transform and
-timestamp from SurfaceTexture). The encoder thread pulls the encoded output
-from MediaCodec and stashes it in memory.</p>
-
-<h2 id="texture">TextureView</h2>
-
-<p>The TextureView class was
-<a href="http://android-developers.blogspot.com/2011/11/android-40-graphics-and-animations.html">introduced</a>
-in Android 4.0 ("Ice Cream Sandwich"). It's the most complex of the View
-objects discussed here, combining a View with a SurfaceTexture.</p>
-
-<p>Recall that the SurfaceTexture is a "GL consumer", consuming buffers of graphics
-data and making them available as textures. TextureView wraps a SurfaceTexture,
-taking over the responsibility of responding to the callbacks and acquiring new
-buffers. The arrival of new buffers causes TextureView to issue a View
-invalidate request. When asked to draw, the TextureView uses the contents of
-the most recently received buffer as its data source, rendering wherever and
-however the View state indicates it should.</p>
-
-<p>You can render on a TextureView with GLES just as you would on a SurfaceView. Just
-pass the SurfaceTexture to the EGL window creation call. However, doing so
-exposes a potential problem.</p>
-
-<p>In most of what we've looked at, the BufferQueues have passed buffers between
-different processes. When rendering to a TextureView with GLES, both producer
-and consumer are in the same process, and they might even be handled on a single
-thread. Suppose we submit several buffers in quick succession from the UI
-thread. The EGL buffer swap call will need to dequeue a buffer from the
-BufferQueue, and it will stall until one is available. There won't be any
-available until the consumer acquires one for rendering, but that also happens
-on the UI thread… so we're stuck.</p>
-
-<p>The solution is to have BufferQueue ensure there is always a buffer
-available to be dequeued, so the buffer swap never stalls. One way to guarantee
-this is to have BufferQueue discard the contents of the previously-queued buffer
-when a new buffer is queued, and to place restrictions on minimum buffer counts
-and maximum acquired buffer counts. (If your queue has three buffers, and all
-three buffers are acquired by the consumer, then there's nothing to dequeue and
-the buffer swap call must hang or fail. So we need to prevent the consumer from
-acquiring more than two buffers at once.) Dropping buffers is usually
-undesirable, so it's only enabled in specific situations, such as when the
-producer and consumer are in the same process.</p>
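The drop-on-queue policy can be demonstrated with another toy model (again, not the real implementation; names are hypothetical and fences are omitted). With only two buffers and a consumer that never runs, the producer would stall on its second frame; discarding the unconsumed queued frame keeps a free slot available.

```java
import java.util.ArrayDeque;

// Toy sketch of "discard the previously-queued buffer when a new buffer is
// queued": dequeue never blocks even if producer and consumer share a thread.
public class DropOldestModel {
    private final ArrayDeque<Integer> free = new ArrayDeque<>();
    private Integer queued;   // at most one frame waiting for the consumer
    private Integer acquired; // frame the consumer currently holds

    DropOldestModel(int bufferCount) {
        for (int i = 0; i < bufferCount; i++) free.add(i);
    }

    int dequeue() { // producer side; would stall here without dropping
        if (free.isEmpty()) throw new IllegalStateException("would stall");
        return free.remove();
    }

    void queue(int buf) {
        if (queued != null) free.add(queued); // drop the unconsumed frame
        queued = buf;
    }

    Integer acquire() { // consumer side
        if (queued == null) return acquired;  // nothing new; keep old frame
        if (acquired != null) free.add(acquired);
        acquired = queued;
        queued = null;
        return acquired;
    }

    public static void main(String[] args) {
        DropOldestModel q = new DropOldestModel(2);
        for (int frame = 0; frame < 5; frame++) {
            q.queue(q.dequeue()); // producer floods frames; consumer never runs
        }
        System.out.println("queued 5 frames without stalling");
    }
}
```

Comment out the drop line in `queue()` and the third `dequeue()` throws, which is the single-thread deadlock the text describes.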
-
-<h3 id="surface-or-texture">SurfaceView or TextureView?</h3>
-<p>SurfaceView and TextureView fill similar roles, but have very different
-implementations. To decide which is best requires an understanding of the
-trade-offs.</p>
-
-<p>Because TextureView is a proper citizen of the View hierarchy, it behaves like
-any other View, and can overlap or be overlapped by other elements. You can
-perform arbitrary transformations and retrieve the contents as a bitmap with
-simple API calls.</p>
-
-<p>The main strike against TextureView is the performance of the composition step.
-With SurfaceView, the content is written to a separate layer that SurfaceFlinger
-composites, ideally with an overlay. With TextureView, the View composition is
-always performed with GLES, and updates to its contents may cause other View
-elements to redraw as well (e.g. if they're positioned on top of the
-TextureView). After the View rendering completes, the app UI layer must then be
-composited with other layers by SurfaceFlinger, so you're effectively
-compositing every visible pixel twice. For a full-screen video player, or any
-other application that is effectively just UI elements layered on top of video,
-SurfaceView offers much better performance.</p>
-
-<p>As noted earlier, DRM-protected video can be presented only on an overlay plane.
- Video players that support protected content must be implemented with
-SurfaceView.</p>
-
-<h3 id="grafika">Case Study: Grafika's Play Video (TextureView)</h3>
-
-<p>Grafika includes a pair of video players, one implemented with TextureView, the
-other with SurfaceView. The video decoding portion, which just sends frames
-from MediaCodec to a Surface, is the same for both. The most interesting
-differences between the implementations are the steps required to present the
-correct aspect ratio.</p>
-
-<p>While SurfaceView requires a custom implementation of FrameLayout, resizing
-SurfaceTexture is a simple matter of configuring a transformation matrix with
-<code>TextureView#setTransform()</code>. For the former, you're sending new
-window position and size values to SurfaceFlinger through WindowManager; for
-the latter, you're just rendering it differently.</p>
-
-<p>Otherwise, both implementations follow the same pattern. Once the Surface has
-been created, playback is enabled. When "play" is hit, a video decoding thread
-is started, with the Surface as the output target. After that, the app code
-doesn't have to do anything -- composition and display will either be handled by
-SurfaceFlinger (for the SurfaceView) or by TextureView.</p>
-
-<h3 id="decode">Case Study: Grafika's Double Decode</h3>
-
-<p>This activity demonstrates manipulation of the SurfaceTexture inside a
-TextureView.</p>
-
-<p>The basic structure of this activity is a pair of TextureViews that show two
-different videos playing side-by-side. To simulate the needs of a
-videoconferencing app, we want to keep the MediaCodec decoders alive when the
-activity is paused and resumed for an orientation change. The trick is that you
-can't change the Surface that a MediaCodec decoder uses without fully
-reconfiguring it, which is a fairly expensive operation; so we want to keep the
-Surface alive. The Surface is just a handle to the producer interface in the
-SurfaceTexture's BufferQueue, and the SurfaceTexture is managed by the
-TextureView, so we also need to keep the SurfaceTexture alive. So how do we deal
-with the TextureView getting torn down?</p>
-
-<p>It just so happens TextureView provides a <code>setSurfaceTexture()</code> call
-that does exactly what we want. We obtain references to the SurfaceTextures
-from the TextureViews and save them in a static field. When the activity is
-shut down, we return "false" from the <code>onSurfaceTextureDestroyed()</code>
-callback to prevent destruction of the SurfaceTexture. When the activity is
-restarted, we stuff the old SurfaceTexture into the new TextureView. The
-TextureView class takes care of creating and destroying the EGL contexts.</p>
-
-<p>Each video decoder is driven from a separate thread. At first glance it might
-seem like we need EGL contexts local to each thread; but remember the buffers
-with decoded output are actually being sent from mediaserver to our
-BufferQueue consumers (the SurfaceTextures). The TextureViews take care of the
-rendering for us, and they execute on the UI thread.</p>
-
-<p>Implementing this activity with SurfaceView would be a bit harder. We can't
-just create a pair of SurfaceViews and direct the output to them, because the
-Surfaces would be destroyed during an orientation change. Besides, that would
-add two layers, and limitations on the number of available overlays strongly
-motivate us to keep the number of layers to a minimum. Instead, we'd want to
-create a pair of SurfaceTextures to receive the output from the video decoders,
-and then perform the rendering in the app, using GLES to render two textured
-quads onto the SurfaceView's Surface.</p>
-
-<h2 id="notes">Conclusion</h2>
-
-<p>We hope this page has provided useful insights into the way Android handles
-graphics at the system level.</p>
-
-<p>Some information and advice on related topics can be found in the appendices
-that follow.</p>
-
-<h2 id="loops">Appendix A: Game Loops</h2>
-
-<p>A very popular way to implement a game loop looks like this:</p>
-
-<pre>
-while (playing) {
- advance state by one frame
- render the new frame
- sleep until it’s time to do the next frame
-}
-</pre>
-
-<p>There are a few problems with this, the most fundamental being the idea that the
-game can define what a "frame" is. Different displays will refresh at different
-rates, and that rate may vary over time. If you generate frames faster than the
-display can show them, you will have to drop one occasionally. If you generate
-them too slowly, SurfaceFlinger will periodically fail to find a new buffer to
-acquire and will re-show the previous frame. Both of these situations can
-cause visible glitches.</p>
-
-<p>What you need to do is match the display's frame rate, and advance game state
-according to how much time has elapsed since the previous frame. There are two
-ways to go about this: (1) stuff the BufferQueue full and rely on the "swap
-buffers" back-pressure; (2) use Choreographer (API 16+).</p>
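The elapsed-time idea can be sketched in a few lines (an illustrative fragment, not Grafika code; the `GameState` class, its `SPEED` constant, and its fields are hypothetical):

```java
// Hypothetical sketch: advance game state by real elapsed time rather than
// by a fixed per-frame step, so animation speed is independent of frame rate.
public class GameState {
    static final double SPEED = 120.0;   // units per second (assumed)
    double position = 0.0;
    long lastFrameTimeNanos = -1;

    // frameTimeNanos would come from the buffer swap or Choreographer.
    void advance(long frameTimeNanos) {
        if (lastFrameTimeNanos >= 0) {
            double deltaSec = (frameTimeNanos - lastFrameTimeNanos) / 1e9;
            position += SPEED * deltaSec;  // scale movement by elapsed time
        }
        lastFrameTimeNanos = frameTimeNanos;
    }

    public static void main(String[] args) {
        GameState s = new GameState();
        s.advance(0L);
        s.advance(16_666_667L);           // one 60fps frame later
        System.out.println(s.position);   // ~2.0 units
    }
}
```

Either pacing approach below can feed `advance()`; only the source of the timestamp changes.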
-
-<h3 id="stuffing">Queue Stuffing</h3>
-
-<p>This is very easy to implement: just swap buffers as fast as you can. In early
-versions of Android this could actually result in a penalty where
-<code>SurfaceView#lockCanvas()</code> would put you to sleep for 100ms. Now
-it's paced by the BufferQueue, and the BufferQueue is emptied as quickly as
-SurfaceFlinger is able.</p>
-
-<p>One example of this approach can be seen in <a
-href="https://code.google.com/p/android-breakout/">Android Breakout</a>. It
-uses GLSurfaceView, which runs in a loop that calls the application's
-onDrawFrame() callback and then swaps the buffer. If the BufferQueue is full,
-the <code>eglSwapBuffers()</code> call will wait until a buffer is available.
-Buffers become available when SurfaceFlinger releases them, which it does after
-acquiring a new one for display. Because this happens on VSYNC, your draw loop
-timing will match the refresh rate. Mostly.</p>
-
-<p>There are a couple of problems with this approach. First, the app is tied to
-SurfaceFlinger activity, which is going to take different amounts of time
-depending on how much work there is to do and whether it's fighting for CPU time
-with other processes. Since your game state advances according to the time
-between buffer swaps, your animation won't update at a consistent rate. When
-running at 60fps with the inconsistencies averaged out over time, though, you
-probably won't notice the bumps.</p>
-
-<p>Second, the first couple of buffer swaps are going to happen very quickly
-because the BufferQueue isn't full yet. The computed time between frames will
-be near zero, so the game will generate a few frames in which nothing happens.
-In a game like Breakout, which updates the screen on every refresh, the queue is
-always full except when a game is first starting (or un-paused), so the effect
-isn't noticeable. A game that pauses animation occasionally and then returns to
-as-fast-as-possible mode might see odd hiccups.</p>
-
-<h3 id="choreographer">Choreographer</h3>
-
-<p>Choreographer allows you to set a callback that fires on the next VSYNC. The
-actual VSYNC time is passed in as an argument. So even if your app doesn't wake
-up right away, you still have an accurate picture of when the display refresh
-period began. Using this value, rather than the current time, yields a
-consistent time source for your game state update logic.</p>
-
-<p>Unfortunately, the fact that you get a callback after every VSYNC does not
-guarantee that your callback will be executed in a timely fashion or that you
-will be able to act upon it sufficiently swiftly. Your app will need to detect
-situations where it's falling behind and drop frames manually.</p>
-
-<p>The "Record GL app" activity in Grafika provides an example of this. On some
-devices (e.g. Nexus 4 and Nexus 5), the activity will start dropping frames if
-you just sit and watch. The GL rendering is trivial, but occasionally the View
-elements get redrawn, and the measure/layout pass can take a very long time if
-the device has dropped into a reduced-power mode. (According to systrace, it
-takes 28ms instead of 6ms after the clocks slow on Android 4.4. If you drag
-your finger around the screen, it thinks you're interacting with the activity,
-so the clock speeds stay high and you'll never drop a frame.)</p>
-
-<p>The simple fix was to drop a frame in the Choreographer callback if the current
-time is more than N milliseconds after the VSYNC time. Ideally the value of N
-is determined based on previously observed VSYNC intervals. For example, if the
-refresh period is 16.7ms (60fps), you might drop a frame if you're running more
-than 15ms late.</p>
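That heuristic is simple to express directly (a sketch; the class and method names are hypothetical, and the 15ms threshold matches the example above):

```java
// Hypothetical sketch of the frame-drop heuristic described above: skip
// rendering when the callback fires too long after the VSYNC timestamp.
public class FrameDropPolicy {
    // Drop if we woke up more than thresholdNanos after the reported VSYNC.
    static boolean shouldDrop(long vsyncTimeNanos, long nowNanos,
                              long thresholdNanos) {
        return (nowNanos - vsyncTimeNanos) > thresholdNanos;
    }

    public static void main(String[] args) {
        long threshold = 15_000_000L;  // 15ms, per the example in the text
        System.out.println(shouldDrop(0L, 5_000_000L, threshold));   // false
        System.out.println(shouldDrop(0L, 16_000_000L, threshold));  // true
    }
}
```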
-
-<p>If you watch "Record GL app" run, you will see the dropped-frame counter
-increase, and even see a flash of red in the border when frames drop. Unless
-your eyes are very good, though, you won't see the animation stutter. At 60fps,
-the app can drop the occasional frame without anyone noticing so long as the
-animation continues to advance at a constant rate. How much you can get away
-with depends to some extent on what you're drawing, the characteristics of the
-display, and how good the person using the app is at detecting jank.</p>
-
-<h3 id="thread">Thread Management</h3>
-
-<p>Generally speaking, if you're rendering onto a SurfaceView, GLSurfaceView, or
-TextureView, you want to do that rendering in a dedicated thread. Never do any
-"heavy lifting" or anything that takes an indeterminate amount of time on the
-UI thread.</p>
-
-<p>Breakout and "Record GL app" use dedicated renderer threads, and they also
-update animation state on that thread. This is a reasonable approach so long as
-game state can be updated quickly.</p>
-
-<p>Other games separate the game logic and rendering completely. If you had a
-simple game that did nothing but move a block every 100ms, you could have a
-dedicated thread that just did this:</p>
-
-<pre>
- run() {
- Thread.sleep(100);
- synchronized (mLock) {
- moveBlock();
- }
- }
-</pre>
-
-<p>(You may want to base the sleep time off of a fixed clock to prevent drift --
-sleep() isn't perfectly consistent, and moveBlock() takes a nonzero amount of
-time -- but you get the idea.)</p>
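One way to avoid that drift (a sketch, not Grafika code; the class and method names are invented for illustration) is to compute each wake-up time from a fixed base instead of sleeping a constant interval after whenever the thread happens to wake:

```java
// Hypothetical sketch: schedule ticks against a fixed base time so that
// oversleeping one tick does not push every later tick back.
public class FixedClockTicker {
    static final long PERIOD_NANOS = 100_000_000L;  // 100ms, as above

    // Given the base time and the current time, when is the next tick due?
    static long nextTickNanos(long baseNanos, long nowNanos) {
        long elapsed = nowNanos - baseNanos;
        long ticksDone = elapsed / PERIOD_NANOS + 1;
        return baseNanos + ticksDone * PERIOD_NANOS;
    }

    public static void main(String[] args) {
        // Even if we woke 30ms late (at 130ms), the next tick is still due at
        // 200ms, not 230ms, so the schedule does not accumulate drift.
        System.out.println(nextTickNanos(0L, 130_000_000L));  // 200000000
    }
}
```

The thread would sleep until `nextTickNanos(...)` rather than for a flat 100ms.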
-
-<p>When the draw code wakes up, it just grabs the lock, gets the current position
-of the block, releases the lock, and draws. Instead of doing fractional
-movement based on inter-frame delta times, you just have one thread that moves
-things along and another thread that draws things wherever they happen to be
-when the drawing starts.</p>
-
-<p>For a scene with any complexity you'd want to create a list of upcoming events
-sorted by wake time, and sleep until the next event is due, but it's the same
-idea.</p>
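A sorted event list of that kind is naturally a priority queue ordered by due time (a minimal sketch; the event representation is hypothetical):

```java
import java.util.PriorityQueue;

// Hypothetical sketch of an event list sorted by wake time: peek at the
// earliest event, sleep until it is due, run it, and repeat.
public class EventSchedule {
    static class Event implements Comparable<Event> {
        final long dueNanos;
        final String name;
        Event(long dueNanos, String name) {
            this.dueNanos = dueNanos;
            this.name = name;
        }
        public int compareTo(Event other) {
            return Long.compare(dueNanos, other.dueNanos);
        }
    }

    final PriorityQueue<Event> queue = new PriorityQueue<>();

    void post(long dueNanos, String name) {
        queue.add(new Event(dueNanos, name));
    }

    // Returns the name of the next event that is due, or null if none is yet.
    String runNextDue(long nowNanos) {
        Event next = queue.peek();
        if (next == null || next.dueNanos > nowNanos) return null;  // keep sleeping
        queue.poll();
        return next.name;
    }

    public static void main(String[] args) {
        EventSchedule s = new EventSchedule();
        s.post(200_000_000L, "moveBlock");
        s.post(100_000_000L, "spawnEnemy");
        System.out.println(s.runNextDue(150_000_000L));  // spawnEnemy
        System.out.println(s.runNextDue(150_000_000L));  // null (moveBlock not due)
    }
}
```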
-
-<h2 id="activity">Appendix B: SurfaceView and the Activity Lifecycle</h2>
-
-<p>When using a SurfaceView, it's considered good practice to render the Surface
-from a thread other than the main UI thread. This raises some questions about
-the interaction between that thread and the Activity lifecycle.</p>
-
-<p>First, a little background. For an Activity with a SurfaceView, there are two
-separate but interdependent state machines:</p>
-
-<ol>
-<li>Application onCreate / onResume / onPause</li>
-<li>Surface created / changed / destroyed</li>
-</ol>
-
-<p>When the Activity starts, you get callbacks in this order:</p>
-
-<ul>
-<li>onCreate</li>
-<li>onResume</li>
-<li>surfaceCreated</li>
-<li>surfaceChanged</li>
-</ul>
-
-<p>If you hit "back" you get:</p>
-
-<ul>
-<li>onPause</li>
-<li>surfaceDestroyed (called just before the Surface goes away)</li>
-</ul>
-
-<p>If you rotate the screen, the Activity is torn down and recreated, so you
-get the full cycle. If it matters, you can tell that it's a "quick" restart by
-checking <code>isFinishing()</code>. (It might be possible to start/stop an
-Activity so quickly that surfaceCreated() actually happens after onPause().)</p>
-
-<p>If you tap the power button to blank the screen, you only get
-<code>onPause()</code> -- no <code>surfaceDestroyed()</code>. The Surface
-remains alive, and rendering can continue. You can even keep getting
-Choreographer events if you continue to request them. If you have a lock
-screen that forces a different orientation, your Activity may be restarted when
-the device is unblanked; but if not, you can come out of screen-blank with the
-same Surface you had before.</p>
-
-<p>This raises a fundamental question when using a separate renderer thread with
-SurfaceView: Should the lifespan of the thread be tied to that of the Surface or
-the Activity? The answer depends on what you want to have happen when the
-screen goes blank. There are two basic approaches: (1) start/stop the thread on
-Activity start/stop; (2) start/stop the thread on Surface create/destroy.</p>
-
-<p>#1 interacts well with the app lifecycle. We start the renderer thread in
-<code>onResume()</code> and stop it in <code>onPause()</code>. It gets a bit
-awkward when creating and configuring the thread because sometimes the Surface
-will already exist and sometimes it won't (e.g. it's still alive after toggling
-the screen with the power button). We have to wait for the surface to be
-created before we do some initialization in the thread, but we can't simply do
-it in the <code>surfaceCreated()</code> callback because that won't fire again
-if the Surface didn't get recreated. So we need to query or cache the Surface
-state, and forward it to the renderer thread. Note we have to be a little
-careful here passing objects between threads -- it is best to pass the Surface or
-SurfaceHolder through a Handler message, rather than just stuffing it into the
-thread, to avoid issues on multi-core systems (cf. the <a
-href="http://developer.android.com/training/articles/smp.html">Android SMP
-Primer</a>).</p>
-
-<p>#2 has a certain appeal because the Surface and the renderer are logically
-intertwined. We start the thread after the Surface has been created, which
-avoids some inter-thread communication concerns. Surface created / changed
-messages are simply forwarded. We need to make sure rendering stops when the
-screen goes blank, and resumes when it un-blanks; this could be a simple matter
-of telling Choreographer to stop invoking the frame draw callback. Our
-<code>onResume()</code> will need to resume the callbacks if and only if the
-renderer thread is running. It may not be so trivial though -- if we animate
-based on elapsed time between frames, we could have a very large gap when the
-next event arrives; so an explicit pause/resume message may be desirable.</p>
-
-<p>The above is primarily concerned with how the renderer thread is configured and
-whether it's executing. A related concern is extracting state from the thread
-when the Activity is killed (in <code>onPause()</code> or <code>onSaveInstanceState()</code>).
-Approach #1 will work best for that, because once the renderer thread has been
-joined its state can be accessed without synchronization primitives.</p>
-
-<p>You can see an example of approach #2 in Grafika's "Hardware scaler exerciser."</p>
-
-<h2 id="tracking">Appendix C: Tracking BufferQueue with systrace</h2>
-
-<p>If you really want to understand how graphics buffers move around, you need to
-use systrace. The system-level graphics code is well instrumented, as is much
-of the relevant app framework code. Enable the "gfx" and "view" tags, and
-generally "sched" as well.</p>
-
-<p>A full description of how to use systrace effectively would fill a rather long
-document. One noteworthy item is the presence of BufferQueues in the trace. If
-you've used systrace before, you've probably seen them, but maybe weren't sure
-what they were. As an example, if you grab a trace while Grafika's "Play video
-(SurfaceView)" is running, you will see a row labeled "SurfaceView." This row
-tells you how many buffers were queued up at any given time.</p>
-
-<p>You'll notice the value increments while the app is active -- triggering
-the rendering of frames by the MediaCodec decoder -- and decrements while
-SurfaceFlinger is doing work, consuming buffers. If you're showing video at
-30fps, the queue's value will vary from 0 to 1, because the ~60fps display can
-easily keep up with the source. (You'll also notice that SurfaceFlinger is only
-waking up when there's work to be done, not 60 times per second. The system tries
-very hard to avoid work and will disable VSYNC entirely if nothing is updating
-the screen.)</p>
-
-<p>If you switch to "Play video (TextureView)" and grab a new trace, you'll see a
-row with a much longer name
-("com.android.grafika/com.android.grafika.PlayMovieActivity"). This is the
-main UI layer, which is of course just another BufferQueue. Because TextureView
-renders into the UI layer, rather than a separate layer, you'll see all of the
-video-driven updates here.</p>
-
-<p>For more information about systrace, see the <a
-href="http://developer.android.com/tools/help/systrace.html">Android
-documentation</a> for the tool.</p>
diff --git a/src/devices/graphics/cts-integration.jd b/src/devices/graphics/cts-integration.jd
index 7b04c57..a0571a5 100644
--- a/src/devices/graphics/cts-integration.jd
+++ b/src/devices/graphics/cts-integration.jd
@@ -25,19 +25,20 @@
</div>
</div>
-<h2 id=deqp_tests_in_android_cts>Deqp tests in Android CTS</h2>
+<p>Android CTS release packages (available from
+<a href="{@docRoot}compatibility/cts/downloads.html">Android Compatibility
+Downloads</a>) include deqp tests and require a subset of these tests (known as
+the <code>mustpass</code> list), to pass. For devices that do not support a
+target API or extension, tests are skipped and reported as passing.</p>
-<p>Deqp tests have been part of Android CTS since the Android 5.0 release.</p>
-
-<p>Android CTS requires a certain subset of tests, called the <code>mustpass</code> list, to pass. The <code>mustpass</code> list includes OpenGL ES 3.0, OpenGL ES 3.1, and the Android Extension Pack tests. If a device doesn't support a target API or extension, tests are skipped and reported as passing.
-The <code>mustpass</code> files can be found under the <code>android/cts</code> directory in the deqp source tree.</p>
-
-<p>Deqp tests are included in the Android CTS release packages, available on the <a href="{@docRoot}compatibility/cts/downloads.html">Android Compatibility Downloads</a> page. </p>
-
-<p>You can run deqp tests through the <code>cts-tradefed</code> utility with the following command:</p>
+<p>The <code>mustpass</code> list includes OpenGL ES 3.0, OpenGL ES
+3.1, OpenGL ES 3.2, and the Android Extension Pack tests. <code>mustpass</code>
+files can be found under the <code>android/cts</code> directory in the deqp
+source tree. You can run deqp tests through the <code>cts-tradefed</code>
+utility with the following command:</p>
<pre>
-cts-tradefed run cts --plan CTS-DEQP
+$ cts-tradefed run cts --plan CTS-DEQP
</pre>
<h2 id=duplicating_runs_without_cts>Duplicating runs without CTS</h2>
@@ -46,27 +47,23 @@
following command:</p>
<pre>
-adb -d shell am start -n com.drawelements.deqp/android.app.NativeActivity -e
-cmdLine "deqp --deqp-case=dEQP-GLES3.some_group.*
---deqp-gl-config-name=rgba8888d24s8 --deqp-log-filename=/sdcard/dEQP-Log.qpa
+$ adb -d shell am start -n com.drawelements.deqp/android.app.NativeActivity -e \
+cmdLine "deqp --deqp-case=dEQP-GLES3.some_group.* --deqp-gl-config-name=rgba8888d24s8 --deqp-log-filename=/sdcard/dEQP-Log.qpa"
</pre>
-<p>The important part of that command is the following:</p>
-<pre>
---deqp-gl-config-name=rgba8888d24s8
-</pre>
+<p>The important part is the <code>--deqp-gl-config-name=rgba8888d24s8</code>
+argument, which requests the tests be run on an RGBA 8888 on-screen surface
+with a 24-bit depth buffer and an 8-bit stencil buffer. Remember to set
+the desired tests using the <code>--deqp-case</code> argument.</p>
-<p>This argument requests the tests be run on an RGBA 8888 on-screen surface
-with a 24-bit depth buffer and an 8-bit stencil buffer. Also remember to set
-the desired tests, e.g. using the <code>--deqp-case</code> argument.</p>
+<h2 id=mapping_of_the_cts_results>CTS results mapping</h2>
-<h2 id=mapping_of_the_cts_results>Mapping of the CTS results</h2>
-
-<p>In the Android CTS, a test case can end up in three states: passed, failed, or
-not executed.</p>
-
-<p>The deqp has more result codes available. A mapping is automatically performed
-by the CTS. The following deqp result codes are mapped to a CTS pass: <code>Pass</code>, <code>NotSupported</code>, <code>QualityWarning</code>, and <code>CompatibilityWarning</code> </p>
-
-<p>The following results are interpreted as a CTS failure:
-<code>Fail</code>, <code>ResourceError</code>, <code>Crash</code>, <code>Timeout</code>, and <code>InternalError</code></p>
+<p>In the Android CTS, a test case can end up in one of three states: passed,
+failed, or not executed (deqp has more result codes available). CTS
+automatically maps deqp result codes to CTS results:</p>
+<ul>
+<li>A CTS pass can include <code>Pass</code>, <code>NotSupported</code>,
+<code>QualityWarning</code>, and <code>CompatibilityWarning</code>.</li>
+<li>A CTS failure can include <code>Fail</code>, <code>ResourceError</code>,
+<code>Crash</code>, <code>Timeout</code>, and <code>InternalError</code>.</li>
+</ul>
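The mapping is simple enough to express directly (an illustrative sketch in Java; CTS itself performs this mapping internally, and the class and method names here are invented):

```java
// Hypothetical sketch of the deqp-to-CTS result mapping described above.
public class DeqpResultMapper {
    // Returns true if the deqp result code counts as a CTS pass.
    static boolean isCtsPass(String deqpResult) {
        switch (deqpResult) {
            case "Pass":
            case "NotSupported":
            case "QualityWarning":
            case "CompatibilityWarning":
                return true;
            case "Fail":
            case "ResourceError":
            case "Crash":
            case "Timeout":
            case "InternalError":
                return false;
            default:
                throw new IllegalArgumentException("Unknown result: " + deqpResult);
        }
    }

    public static void main(String[] args) {
        System.out.println(isCtsPass("NotSupported"));  // true
        System.out.println(isCtsPass("Timeout"));       // false
    }
}
```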
diff --git a/src/devices/graphics/images/ape_graphics_vulkan.png b/src/devices/graphics/images/ape_graphics_vulkan.png
new file mode 100644
index 0000000..b9910cf
--- /dev/null
+++ b/src/devices/graphics/images/ape_graphics_vulkan.png
Binary files differ
diff --git a/src/devices/graphics/images/graphics_secure_texture_playback.png b/src/devices/graphics/images/graphics_secure_texture_playback.png
new file mode 100644
index 0000000..9d38fe0
--- /dev/null
+++ b/src/devices/graphics/images/graphics_secure_texture_playback.png
Binary files differ
diff --git a/src/devices/graphics/implement-hwc.jd b/src/devices/graphics/implement-hwc.jd
new file mode 100644
index 0000000..77b425a
--- /dev/null
+++ b/src/devices/graphics/implement-hwc.jd
@@ -0,0 +1,320 @@
+page.title=Implementing the Hardware Composer HAL
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+
+<p>The Hardware Composer HAL (HWC) is used by SurfaceFlinger to composite
+surfaces to the screen. The HWC abstracts objects such as overlays and 2D
+blitters and helps offload some work that would normally be done with OpenGL.</p>
+
+<p>Android 7.0 includes a new version of HWC (HWC2) used by SurfaceFlinger to
+talk to specialized window composition hardware. SurfaceFlinger contains a
+fallback path that uses the 3D graphics processor (GPU) to perform the task of
+window composition, but this path is not ideal for a couple of reasons:</p>
+
+<ul>
+ <li>Typically, GPUs are not optimized for this use case and may use more power
+ than necessary to perform composition.</li>
+ <li>Any time SurfaceFlinger is using the GPU for composition is time that
+ applications cannot use the processor for their own rendering, so it is
+ preferable to use specialized hardware for composition instead of the GPU
+ whenever possible.</li>
+</ul>
+
+<h2 id="guidance">General guidance</h2>
+
+<p>As the physical display hardware behind the Hardware Composer abstraction
+layer can vary from device to device, it's difficult to give recommendations on
+specific features. In general, use the following guidance:</p>
+
+<ul>
+ <li>The HWC should support at least four overlays (status bar, system bar,
+ application, and wallpaper/background).</li>
+ <li>Layers can be bigger than the screen, so the HWC should be able to handle
+ layers that are larger than the display (for example, a wallpaper).</li>
+ <li>Pre-multiplied per-pixel alpha blending and per-plane alpha blending
+ should be supported at the same time.</li>
+ <li>The HWC should be able to consume the same buffers the GPU, camera, and
+ video decoder are producing, so supporting some of the following
+ properties is helpful:
+ <ul>
+ <li>RGBA packing order</li>
+ <li>YUV formats</li>
+ <li>Tiling, swizzling, and stride properties</li>
+ </ul>
+ <li>To support protected content, a hardware path for protected video playback
+ must be present.</li>
+</ul>
+
+<p>The general recommendation is to implement a non-operational HWC first; after
+the structure is complete, implement a simple algorithm to delegate composition
+to the HWC (for example, delegate only the first three or four surfaces to the
+overlay hardware of the HWC).</p>
+
+<p>Focus on optimization, such as intelligently selecting the surfaces to send
+to the overlay hardware that maximizes the load taken off of the GPU. Another
+optimization is to detect whether the screen is updating; if it isn't, delegate
+composition to OpenGL instead of the HWC to save power. When the screen updates
+again, continue to offload composition to the HWC.</p>
+
+<p>Prepare for common use cases, such as:</p>
+
+<ul>
+ <li>Full-screen games in portrait and landscape mode</li>
+ <li>Full-screen video with closed captioning and playback control</li>
+ <li>The home screen (compositing the status bar, system bar, application
+ window, and live wallpapers)</li>
+ <li>Protected video playback</li>
+ <li>Multiple display support</li>
+</ul>
+
+<p>These use cases should address regular, predictable uses rather than edge
+cases that are rarely encountered (otherwise, optimizations will have little
+benefit). Implementations must balance two competing goals: animation smoothness
+and interaction latency.</p>
+
+
+<h2 id="interface_activities">HWC2 interface activities</h2>
+
+<p>HWC2 provides a few primitives (layer, display) to represent composition work
+and its interaction with the display hardware.</p>
+<p>A <em>layer</em> is the most important unit of composition; every layer has a
+set of properties that define how it interacts with other layers. Property
+categories include the following:</p>
+
+<ul>
+<li><strong>Positional</strong>. Defines where the layer appears on its display.
+Includes information such as the positions of a layer's edges and its <em>Z
+order</em> relative to other layers (whether it should be in front of or behind
+other layers).</li>
+<li><strong>Content</strong>. Defines how content displayed on the layer should
+be presented within the bounds defined by the positional properties. Includes
+information such as crop (to expand a portion of the content to fill the bounds
+of the layer) and transform (to show rotated or flipped content).</li>
+<li><strong>Composition</strong>. Defines how the layer should be composited
+with other layers. Includes information such as blending mode and a layer-wide
+alpha value for
+<a href="https://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending">alpha
+compositing</a>.</li>
+<li><strong>Optimization</strong>. Provides information not strictly necessary
+to correctly composite the layer, but which can be used by the HWC device to
+optimize how it performs composition. Includes information such as the visible
+region of the layer and which portion of the layer has been updated since the
+previous frame.</li>
+</ul>
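The four categories can be pictured as groups of fields on a layer-state object (purely illustrative; HWC2 actually sets each property through individual `setLayer*` function calls, and the field names here are hypothetical):

```java
// Hypothetical grouping of HWC2 layer state into the four property
// categories described above. A real implementation receives these values
// through individual setLayer* calls rather than a struct like this.
public class LayerState {
    // Positional: where the layer appears on the display and its stacking order.
    int left, top, right, bottom;
    int zOrder;

    // Content: how the source buffer maps into those bounds.
    int cropLeft, cropTop, cropRight, cropBottom;
    int transform;          // e.g., a rotation/flip flag

    // Composition: how the layer blends with the layers behind it.
    int blendMode;          // e.g., premultiplied per-pixel alpha
    float planeAlpha = 1.0f;

    // Optimization: hints the device may use but can safely ignore.
    int[] visibleRegion;
    int[] damagedRegion;    // what changed since the previous frame

    public static void main(String[] args) {
        LayerState l = new LayerState();
        l.zOrder = 1;
        l.planeAlpha = 0.5f;
        System.out.println(l.zOrder + " " + l.planeAlpha);
    }
}
```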
+
+<p>A <em>display</em> is another important unit of composition. Every layer can
+be present on only one display. A system can have multiple displays, and
+displays can be added or removed during normal system operations. This
+addition/removal can come at the request of the HWC device (typically in
+response to an external display being plugged into or removed from the device,
+called <em>hotplugging</em>), or at the request of the client, which permits the
+creation of <em>virtual displays</em> whose contents are rendered into an
+off-screen buffer instead of to a physical display.</p>
+<p>HWC2 provides functions to determine the properties of a given display, to
+switch between different configurations (e.g., 4k or 1080p resolution) and color
+modes (e.g., native color or true sRGB), and to turn the display on, off, or
+into a low-power mode if supported.</p>
+<p>In addition to layers and displays, HWC2 also provides control over the
+hardware vertical sync (VSYNC) signal along with a callback into the client to
+notify it when a VSYNC event has occurred.</p>
+
+<h3 id="func_pointers">Function pointers</h3>
+<p>In this section and in HWC2 header comments, HWC interface functions are
+referred to by lowerCamelCase names that do not actually exist in the interface
+as named fields. Instead, almost every function is loaded by requesting a
+function pointer using <code>getFunction</code> provided by
+<code>hwc2_device_t</code>. For example, the function <code>createLayer</code>
+is a function pointer of type <code>HWC2_PFN_CREATE_LAYER</code>, which is
+returned when the enumerated value <code>HWC2_FUNCTION_CREATE_LAYER</code> is
+passed into <code>getFunction</code>.</p>
+<p>For detailed documentation on functions (including functions required for
+every HWC2 implementation), refer to the
+<a href="{@docRoot}devices/halref/hwcomposer2_8h.html">HWC2 header</a>.</p>
+
+<h3 id="layer_display_handles">Layer and display handles</h3>
+<p>Layers and displays are manipulated by opaque handles.</p>
+<p>When SurfaceFlinger wants to create a new layer, it calls the
+<code>createLayer</code> function, which then returns an opaque handle of type
+<code>hwc2_layer_t</code>. From that point on, any time SurfaceFlinger wants to
+modify a property of that layer, it passes that <code>hwc2_layer_t</code> value
+into the appropriate modification function, along with any other information
+needed to make the modification. The <code>hwc2_layer_t</code> type was made
+large enough to be able to hold either a pointer or an index, and it will be
+treated as opaque by SurfaceFlinger to provide HWC implementers maximum
+flexibility.</p>
+<p>Most of the above also applies to display handles, though handles are created
+differently depending on whether they are hotplugged (where the handle is passed
+through the hotplug callback) or requested by the client as a virtual display
+(where the handle is returned from <code>createVirtualDisplay</code>).</p>
+
+<h2 id="display_comp_ops">Display composition operations</h2>
+<p>Once per hardware vsync, SurfaceFlinger wakes if it has new content to
+composite. This new content could be new image buffers from applications or just
+a change in the properties of one or more layers. When it wakes, it performs the
+following steps:</p>
+
+<ol>
+<li>Apply transactions, if present. Includes changes in the properties of layers
+specified by the window manager but not changes in the contents of layers (i.e.,
+graphic buffers from applications).</li>
+<li>Latch new graphic buffers (acquire their handles from their respective
+applications), if present.</li>
+<li>If step 1 or 2 resulted in a change to the display contents, perform a new
+composition (described below).</li>
+</ol>
+
+<p>Steps 1 and 2 have some nuances (such as deferred transactions and
+presentation timestamps) that are outside the scope of this section. However,
+step 3 involves the HWC interface and is detailed below.</p>
+<p>At the beginning of the composition process, SurfaceFlinger will create and
+destroy layers or modify layer state as applicable. It will also update the
+layers with their current contents, using calls such as
+<code>setLayerBuffer</code> or <code>setLayerColor</code>. After all layers have
+been updated, it will call <code>validateDisplay</code>, which tells the device
+to examine the state of the various layers and determine how composition will
+proceed. By default, SurfaceFlinger attempts to configure every layer for
+device composition, though in some circumstances it mandates client composition
+for a layer.</p>
+<p>After the call to <code>validateDisplay</code>, SurfaceFlinger will follow up
+with a call to <code>getChangedCompositionTypes</code> to see if the device
+wants any of the layers' composition types changed before performing the actual
+composition. SurfaceFlinger may choose to:</p>
+
+<ul>
+<li>Change some of the layer composition types and re-validate the display.</li>
+</ul>
+
+<blockquote><strong><em>OR</em></strong></blockquote>
+
+<ul>
+<li>Call <code>acceptDisplayChanges</code>, which has the same effect as
+changing the composition types as requested by the device and re-validating
+without actually having to call <code>validateDisplay</code> again.</li>
+</ul>
+
+<p>In practice, SurfaceFlinger always takes the latter path (calling
+<code>acceptDisplayChanges</code>), though this behavior may change in the
+future.</p>
+<p>At this point, the behavior differs depending on whether any of the layers
+have been marked for client composition. If any (or all) layers have been marked
+for client composition, SurfaceFlinger will now composite all of those layers
+into the client target buffer. This buffer will be provided to the device using
+the <code>setClientTarget</code> call so that it may be either displayed
+directly on the screen or further composited with layers that have not been
+marked for client composition. If no layers have been marked for client
+composition, then the client composition step is bypassed.</p>
+<p>Finally, after all of the state has been validated and client composition has
+been performed if needed, SurfaceFlinger will call <code>presentDisplay</code>.
+This is the HWC device's cue to complete the composition process and display the
+final result.</p>
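+<p>The validate/accept/present handshake can be modeled as a tiny state
+machine. This is an illustrative sketch only, not the real HWC2 API (the
+actual functions take a device and display handle and return HWC2 error
+codes):</p>

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the HWC2 per-frame handshake. */
typedef struct {
    bool validated;
    int  changed_types;       /* layers the device wants moved to the client */
    bool client_target_set;
    int  frames_presented;
} toy_display_t;

static void validate_display(toy_display_t *d, int device_requests) {
    d->changed_types = device_requests;
    d->validated = true;
}

/* Same effect as changing the composition types as the device requested
 * and re-validating, without another validate call. */
static void accept_display_changes(toy_display_t *d) {
    d->changed_types = 0;
}

static void set_client_target(toy_display_t *d) {
    d->client_target_set = true;  /* client-composited layers land here */
}

static int present_display(toy_display_t *d) {
    if (!d->validated || d->changed_types != 0)
        return -1;                /* must resolve changes before presenting */
    d->frames_presented++;
    d->validated = false;         /* next frame revalidates */
    return 0;
}
```

+<p>The ordering constraint is the point: presenting before the requested
+composition-type changes are resolved is an error.</p>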
+
+<h2 id="multiple_displays">Multiple displays in Android 7.0</h2>
+<p>While the HWC2 interface is quite flexible when it comes to the number of
+displays in the system, the rest of the Android framework is not yet as
+flexible. When designing a HWC2 implementation intended for use on Android 7.0,
+there are some additional restrictions not present in the HWC definition itself:
+</p>
+
+<ul>
+<li>It is assumed that there is exactly one <em>primary</em> display; that is,
+that there is one physical display that will be hotplugged immediately during
+the initialization of the device (specifically after the hotplug callback is
+registered).</li>
+<li>In addition to the primary display, exactly one <em>external</em> display
+may be hotplugged during normal operation of the device.</li>
+</ul>
+
+<p>While the SurfaceFlinger operations described above are performed per-display
+(the eventual goal is to composite displays independently of each other), they
+are currently performed sequentially for all active displays, even if only the
+contents of one display are updated.</p>
+<p>For example, if only the external display is updated, the sequence is:</p>
+
+<pre>
+// Update state for internal display
+// Update state for external display
+validateDisplay(&lt;internal display&gt;)
+validateDisplay(&lt;external display&gt;)
+presentDisplay(&lt;internal display&gt;)
+presentDisplay(&lt;external display&gt;)
+</pre>
+
+
+<h2 id="sync_fences">Synchronization fences</h2>
+<p>Synchronization (sync) fences are a crucial aspect of the Android graphics
+system. Fences allow CPU work to proceed independently from concurrent GPU work,
+blocking only when there is a true dependency.</p>
+<p>For example, when an application submits a buffer that is being produced on
+the GPU, it will also submit a fence object; this fence signals only when the
+GPU has finished writing into the buffer. Since the only part of the system that
+truly needs the GPU write to have finished is the display hardware (the hardware
+abstracted by the HWC HAL), the graphics pipeline is able to pass this fence
+along with the buffer through SurfaceFlinger to the HWC device. Only immediately
+before that buffer would be displayed does the device need to actually check
+that the fence has signaled.</p>
+<p>Sync fences are integrated tightly into HWC2 and organized in the following
+categories:</p>
+
+<ol>
+<li>Acquire fences are passed along with input buffers to the
+<code>setLayerBuffer</code> and <code>setClientTarget</code> calls. These
+represent a pending write into the buffer and must signal before the HWC client
+or device attempts to read from the associated buffer to perform composition.
+</li>
+<li>Release fences are retrieved after the call to <code>presentDisplay</code>
+using the <code>getReleaseFences</code> call and are passed back to the
+application along with buffers that will be replaced during the next
+composition. These represent a pending read from the buffer, and must signal
+before the application attempts to write new contents into the buffer.</li>
+<li>Retire fences are returned, one per frame, as part of the call to
+<code>presentDisplay</code> and represent when the composition of this frame
+has completed, or alternately, when the composition result of the prior frame is
+no longer needed. For physical displays, this is when the current frame appears
+on the screen and can also be interpreted as the time after which it is safe to
+write to the client target buffer again (if applicable). For virtual displays,
+this is the time when it is safe to read from the output buffer.</li>
+</ol>
+
+<h3 id="hwc2_changes">Changes in HWC2</h3>
+<p>The meaning of sync fences in HWC 2.0 has changed significantly relative to
+previous versions of the HAL.</p>
+<p>In HWC v1.x, the release and retire fences were speculative. A release fence
+for a buffer or a retire fence for the display retrieved in frame N would not
+signal any sooner than frame N + 1. In other words, the meaning of the fence
+was "the content of the buffer you provided for frame N is no longer needed."
+This is speculative because in theory SurfaceFlinger may not run again after
+frame N for an indeterminate period of time, which would leave those fences
+unsignaled for the same period.</p>
+<p>In HWC 2.0, release and retire fences are non-speculative. A release or
+retire fence retrieved in frame N will signal as soon as the content of the
+associated buffers replaces the contents of the buffers from frame N - 1, or in
+other words, the meaning of the fence is "the content of the buffer you provided
+for frame N has now replaced the previous content." This is non-speculative,
+since this fence should signal shortly after <code>presentDisplay</code> is
+called as soon as the hardware presents this frame's content.</p>
+<p>For implementation details, refer to the
+<a href="{@docRoot}devices/halref/hwcomposer2_8h.html">HWC2 header</a>.</p>
diff --git a/src/devices/graphics/implement-vdisplays.jd b/src/devices/graphics/implement-vdisplays.jd
new file mode 100644
index 0000000..177a79f
--- /dev/null
+++ b/src/devices/graphics/implement-vdisplays.jd
@@ -0,0 +1,81 @@
+page.title=Implementing Virtual Displays
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>Android added platform support for virtual displays in Hardware Composer
+v1.3 (this support is used by Miracast, for example). Virtual display
+composition is similar to physical display composition: Input layers are
+described in <code>prepare()</code>, SurfaceFlinger conducts GPU composition,
+and the layers and GPU framebuffer are provided to Hardware Composer in
+<code>set()</code>.</p>
+
+<p>Instead of the output going to the screen, it is sent to a gralloc buffer.
+Hardware Composer writes output to a buffer and provides the completion fence.
+The buffer is sent to an arbitrary consumer: video encoder, GPU, CPU, etc.
+Virtual displays can use 2D/blitter or overlays if the display pipeline can
+write to memory.</p>
+
+<h2 id=modes>Modes</h2>
+
+<p>Each frame is in one of three modes after <code>prepare()</code>:</p>
+
+<ul>
+<li><em>GLES</em>. All layers composited by GPU, which writes directly to the
+output buffer while Hardware Composer does nothing. This is equivalent to
+virtual display composition with Hardware Composer versions older than v1.3.</li>
+<li><em>MIXED</em>. GPU composites some layers to the framebuffer, and Hardware
+Composer composites the framebuffer and remaining layers. GPU writes to a
+scratch buffer (the framebuffer); Hardware Composer reads the scratch buffer
+and writes to the output buffer. Buffers may have different formats, e.g. RGBA
+and YCbCr.</li>
+<li><em>HWC</em>. All layers composited by Hardware Composer, which writes
+directly to the output buffer.</li>
+</ul>
+
+<h2 id=output_format>Output format</h2>
+<p>Output format depends on the mode:</p>
+
+<ul>
+<li><em>MIXED and HWC modes</em>. If the consumer needs CPU access, the consumer
+chooses the format. Otherwise, the format is IMPLEMENTATION_DEFINED, and gralloc
+chooses the best format based on usage flags (for example, a YCbCr format if the
+consumer is a video encoder and Hardware Composer can write that format
+efficiently).</li>
+<li><em>GLES mode</em>. The EGL driver chooses the output buffer format in
+<code>dequeueBuffer()</code>, typically RGBA8888. The consumer must be able to
+accept this format.</li>
+</ul>
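+<p>The selection logic above can be sketched as a small helper. This is a
+hypothetical model; the strings stand in for real gralloc pixel format
+constants:</p>

```c
#include <stdbool.h>

/* Hypothetical sketch of virtual-display output format selection.
 * Format names are illustrative, not the actual gralloc constants. */
typedef enum { MODE_GLES, MODE_MIXED, MODE_HWC } compose_mode_t;

static const char *pick_output_format(compose_mode_t mode,
                                      bool consumer_needs_cpu) {
    if (mode == MODE_GLES)
        return "RGBA_8888";              /* EGL driver's typical choice */
    if (consumer_needs_cpu)
        return "consumer-chosen";        /* consumer dictates the format */
    return "IMPLEMENTATION_DEFINED";     /* gralloc picks from usage flags */
}
```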
+
+<h2 id=egl_requirement>EGL requirement</h2>
+
+<p>Hardware Composer v1.3 virtual displays require that
+<code>eglSwapBuffers()</code> does not dequeue the next buffer immediately;
+instead, it should defer dequeueing until rendering begins. Otherwise, EGL
+always owns the next output buffer and SurfaceFlinger can’t get that buffer for
+Hardware Composer in MIXED/HWC mode.</p>
+
+<p>If Hardware Composer always sends all virtual display layers to GPU, all
+frames will be in GLES mode. Although not recommended, you may use this
+method if you need to support Hardware Composer v1.3 for some other reason but
+can’t conduct virtual display composition.</p>
diff --git a/src/devices/graphics/implement-vsync.jd b/src/devices/graphics/implement-vsync.jd
new file mode 100644
index 0000000..3db2a51
--- /dev/null
+++ b/src/devices/graphics/implement-vsync.jd
@@ -0,0 +1,394 @@
+page.title=Implementing VSYNC
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+
+<p>VSYNC synchronizes certain events to the refresh cycle of the display.
+Applications always start drawing on a VSYNC boundary, and SurfaceFlinger
+always composites on a VSYNC boundary. This eliminates stutters and improves
+visual performance of graphics.</p>
+
+<p>The Hardware Composer (HWC) has a function pointer indicating the function
+to implement for VSYNC:</p>
+
+<pre class=prettyprint> int (*waitForVsync)(int64_t *timestamp) </pre>
+
+<p>This function blocks until a VSYNC occurs and returns the timestamp of the
+actual VSYNC. A message must be sent every time a VSYNC occurs. A client can
+receive a VSYNC timestamp once, at specified intervals, or continuously (at
+intervals of 1). You must implement VSYNC with a maximum 1 ms lag (0.5 ms or
+less is recommended); timestamps returned must be extremely accurate.</p>
+
+<h2 id=explicit_synchronization>Explicit synchronization</h2>
+
+<p>Explicit synchronization is required and provides a mechanism for Gralloc
+buffers to be acquired and released in a synchronized way. Explicit
+synchronization allows producers and consumers of graphics buffers to signal
+when they are done with a buffer. This allows Android to asynchronously queue
+buffers to be read or written with the certainty that another consumer or
+producer does not currently need them. For details, see
+<a href="{@docRoot}devices/graphics/index.html#synchronization_framework">Synchronization
+framework</a>.</p>
+
+<p>The benefits of explicit synchronization include less behavior variation
+between devices, better debugging support, and improved testing metrics. For
+instance, the sync framework output readily identifies problem areas and root
+causes, and centralized SurfaceFlinger presentation timestamps show when events
+occur in the normal flow of the system.</p>
+
+<p>This communication is facilitated by the use of synchronization fences,
+which are required when requesting a buffer for consuming or producing. The
+synchronization framework consists of three main building blocks:
+<code>sync_timeline</code>, <code>sync_pt</code>, and <code>sync_fence</code>.</p>
+
+<h3 id=sync_timeline>sync_timeline</h3>
+
+<p>A <code>sync_timeline</code> is a monotonically increasing timeline that
+should be implemented for each driver instance, such as a GL context, display
+controller, or 2D blitter. This is essentially a counter of jobs submitted to
+the kernel for a particular piece of hardware. It provides guarantees about the
+order of operations and allows hardware-specific implementations.</p>
+
+<p>A CPU-only reference implementation of <code>sync_timeline</code>, called
+<code>sw_sync</code> (software sync), is provided. If possible, use
+<code>sw_sync</code> instead of a custom <code>sync_timeline</code> to save
+resources and avoid complexity. If you’re not employing a hardware resource,
+<code>sw_sync</code> should be sufficient.</p>
+
+<p>If you must implement a <code>sync_timeline</code>, use the
+<code>sw_sync</code> driver as a starting point. Follow these guidelines:</p>
+
+<ul>
+<li>Provide useful names for all drivers, timelines, and fences. This simplifies
+debugging.</li>
+<li>Implement <code>timeline_value_str</code> and <code>pt_value_str</code>
+operators in your timelines to make debugging output more readable.</li>
+<li>If you want your userspace libraries (such as the GL library) to have access
+to the private data of your timelines, implement the
+<code>fill_driver_data</code> operator. This lets you get information about the
+immutable <code>sync_fence</code> and <code>sync_pts</code> so you can build
+command lines based upon them.</li>
+</ul>
+
+<p>When implementing a <code>sync_timeline</code>, <strong>do not</strong>:</p>
+
+<ul>
+<li>Base it on any real view of time, such as when a wall clock or other piece
+of work might finish. It is better to create an abstract timeline that you can
+control.</li>
+<li>Allow userspace to explicitly create or signal a fence. This can result in
+one piece of the user pipeline creating a denial-of-service attack that halts
+all functionality. This is because the userspace cannot make promises on behalf
+of the kernel.</li>
+<li>Access <code>sync_timeline</code>, <code>sync_pt</code>, or
+<code>sync_fence</code> elements explicitly, as the API should provide all
+required functions.</li>
+</ul>
+
+<h3 id=sync_pt>sync_pt</h3>
+
+<p>A <code>sync_pt</code> is a single value or point on a
+<code>sync_timeline</code>. A point has three states: active, signaled, and
+error. Points start in the active state and transition to the signaled or error
+states. For instance, when a buffer is no longer needed by an image consumer,
+the <code>sync_pt</code> is signaled so image producers know it is okay to
+write into the buffer again.</p>
+
+<h3 id=sync_fence>sync_fence</h3>
+
+<p>A <code>sync_fence</code> is a collection of <code>sync_pts</code> that often
+have different <code>sync_timeline</code> parents (such as for the display
+controller and GPU). These are the main primitives over which drivers and
+userspace communicate their dependencies. A fence is a promise, made by the
+kernel when it accepts queued work, that the work will complete in a finite
+amount of time.</p>
+
+<p>This allows multiple consumers or producers to signal they are using a
+buffer and allows this information to be communicated with one function
+parameter. Fences are backed by a file descriptor and can be passed from
+kernel-space to user-space. For instance, a fence can contain two
+<code>sync_points</code> that signify when two separate image consumers are done
+reading a buffer. When the fence is signaled, the image producers know both
+consumers are done consuming.</p>
+
+<p>Fences, like <code>sync_pts</code>, start active and then change state based
+upon the state of their points. If all <code>sync_pts</code> become signaled,
+the <code>sync_fence</code> becomes signaled. If one <code>sync_pt</code> falls
+into an error state, the entire sync_fence has an error state.</p>
+
+<p>Membership in the <code>sync_fence</code> is immutable after the fence is
+created. As a <code>sync_pt</code> can be in only one fence, it is included as a
+copy. Even if two points have the same value, there will be two copies of the
+<code>sync_pt</code> in the fence. To get more than one point in a fence, a
+merge operation is conducted where points from two distinct fences are added to
+a third fence. If one of those points was signaled in the originating fence and
+the other was not, the third fence will also not be in a signaled state.</p>
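+<p>These merge semantics can be modeled in a few lines of userspace C. This is
+a toy model only; the real objects are kernel-side and are manipulated through
+the sync API rather than directly:</p>

```c
#include <stdbool.h>

/* Toy userspace model of the three sync primitives. A point is signaled
 * once its timeline's counter reaches the point's value; a fence is
 * signaled only when every point it contains is signaled. */
typedef struct { unsigned value; } toy_timeline_t;                 /* sync_timeline */
typedef struct { toy_timeline_t *tl; unsigned value; } toy_pt_t;   /* sync_pt */
typedef struct { toy_pt_t pts[8]; int count; } toy_fence_t;        /* sync_fence */

static bool pt_signaled(const toy_pt_t *pt) {
    return pt->tl->value >= pt->value;
}

static bool fence_signaled(const toy_fence_t *f) {
    for (int i = 0; i < f->count; i++)
        if (!pt_signaled(&f->pts[i]))
            return false;
    return true;
}

/* Merge copies the points of two fences into a third fence. */
static toy_fence_t fence_merge(const toy_fence_t *a, const toy_fence_t *b) {
    toy_fence_t out = { .count = 0 };
    for (int i = 0; i < a->count; i++) out.pts[out.count++] = a->pts[i];
    for (int i = 0; i < b->count; i++) out.pts[out.count++] = b->pts[i];
    return out;
}
```

+<p>As in the text, a merged fence containing one signaled and one unsignaled
+point is itself unsignaled until the remaining point signals.</p>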
+
+<p>To implement explicit synchronization, provide the following:</p>
+
+<ul>
+<li>A kernel-space driver that implements a synchronization timeline for a
+particular piece of hardware. Drivers that need to be fence-aware are generally
+anything that accesses or communicates with the Hardware Composer. Key files
+include:
+<ul>
+<li>Core implementation:
+<ul>
+ <li><code>kernel/common/include/linux/sync.h</code></li>
+ <li><code>kernel/common/drivers/base/sync.c</code></li>
+</ul></li>
+<li><code>sw_sync</code>:
+<ul>
+ <li><code>kernel/common/include/linux/sw_sync.h</code></li>
+ <li><code>kernel/common/drivers/base/sw_sync.c</code></li>
+</ul></li>
+<li>Documentation at <code>kernel/common/Documentation/sync.txt</code>.</li>
+<li>Library to communicate with the kernel-space in
+ <code>platform/system/core/libsync</code>.</li>
+</ul></li>
+<li>A Hardware Composer HAL module (v1.3 or higher) that supports the new
+synchronization functionality. You must provide the appropriate synchronization
+fences as parameters to the <code>set()</code> and <code>prepare()</code>
+functions in the HAL.</li>
+<li>Two fence-related GL extensions (<code>EGL_ANDROID_native_fence_sync</code>
+and <code>EGL_ANDROID_wait_sync</code>) and fence support in your graphics
+drivers.</li>
+</ul>
+
+<p>For example, to use the API supporting the synchronization function, you
+might develop a display driver that has a display buffer function. Before the
+synchronization framework existed, this function would receive dma-bufs, put
+those buffers on the display, and block while the buffer is visible. For
+example:</p>
+
+<pre class=prettyprint>/*
+ * assumes buf is ready to be displayed. returns when buffer is no longer on
+ * screen.
+ */
+void display_buffer(struct dma_buf *buf);
+</pre>
+
+<p>With the synchronization framework, the API call is slightly more complex.
+While putting a buffer on display, you associate it with a fence that says when
+the buffer will be ready. You can queue up the work and initiate after the fence
+clears.</p>
+
+<p>In this manner, you are not blocking anything. You immediately return your
+own fence, which is a guarantee of when the buffer will be off of the display.
+As you queue up buffers, the kernel will list dependencies with the
+synchronization framework:</p>
+
+<pre class=prettyprint>/*
+ * will display buf when fence is signaled. returns immediately with a fence
+ * that will signal when buf is no longer displayed.
+ */
+struct sync_fence* display_buffer(struct dma_buf *buf, struct sync_fence
+*fence);
+</pre>
+
+
+<h2 id=sync_integration>Sync integration</h2>
+<p>This section explains how to integrate the low-level sync framework with
+different parts of the Android framework and the drivers that must communicate
+with one another.</p>
+
+<h3 id=integration_conventions>Integration conventions</h3>
+
+<p>The Android HAL interfaces for graphics follow consistent conventions so
+when file descriptors are passed across a HAL interface, ownership of the file
+descriptor is always transferred. This means:</p>
+
+<ul>
+<li>If you receive a fence file descriptor from the sync framework, you must
+close it.</li>
+<li>If you return a fence file descriptor to the sync framework, the framework
+will close it.</li>
+<li>To continue using the fence file descriptor, you must duplicate the
+descriptor.</li>
+</ul>
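+<p>The ownership rules can be demonstrated with plain POSIX file descriptors
+(using <code>pipe()</code> to stand in for a fence fd;
+<code>consume_fence</code> is a hypothetical recipient that takes
+ownership):</p>

```c
#include <unistd.h>

/* Stands in for a HAL call that takes ownership of a fence fd: per the
 * convention, the recipient closes the descriptor it receives. */
static void consume_fence(int fd) {
    close(fd);
}

/* To keep using a fence after passing it across the interface, hand off a
 * duplicate and retain the original. */
static int pass_fence_and_keep(int fd) {
    int dup_fd = dup(fd);    /* duplicate before transferring ownership */
    consume_fence(dup_fd);   /* recipient closes its copy */
    return fd;               /* our descriptor remains valid */
}
```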
+
+<p>Every time a fence passes through BufferQueue (such as for a window that
+passes a fence to BufferQueue saying when its new contents will be ready) the
+fence object is renamed. Since kernel fence support allows fences to have
+strings for names, the sync framework uses the window name and buffer index
+that is being queued to name the fence (e.g., <code>SurfaceView:0</code>). This
+is helpful in debugging to identify the source of a deadlock as the names appear
+in the output of <code>/d/sync</code> and bug reports.</p>
+
+<h3 id=anativewindow_integration>ANativeWindow integration</h3>
+
+<p>ANativeWindow is fence aware and <code>dequeueBuffer</code>,
+<code>queueBuffer</code>, and <code>cancelBuffer</code> have fence parameters.
+</p>
+
+<h3 id=opengl_es_integration>OpenGL ES integration</h3>
+
+<p>OpenGL ES sync integration relies upon two EGL extensions:</p>
+
+<ul>
+<li><code>EGL_ANDROID_native_fence_sync</code>. Provides a way to either
+wrap or create native Android fence file descriptors in EGLSyncKHR objects.</li>
+<li><code>EGL_ANDROID_wait_sync</code>. Allows GPU-side stalls rather than
+CPU-side stalls, making the GPU wait for an EGLSyncKHR. This is essentially the
+same as the <code>EGL_KHR_wait_sync</code> extension (refer to that
+specification for details).</li>
+</ul>
+
+<p>These extensions can be used independently and are controlled by a compile
+flag in libgui. To use them, first implement the
+<code>EGL_ANDROID_native_fence_sync</code> extension along with the associated
+kernel support. Next, add ANativeWindow support for fences to your driver, then
+turn on support in libgui to make use of the
+<code>EGL_ANDROID_native_fence_sync</code> extension.</p>
+
+<p>In a second pass, enable the <code>EGL_ANDROID_wait_sync</code>
+extension in your driver and turn it on separately. The
+<code>EGL_ANDROID_native_fence_sync</code> extension defines a distinct native
+fence EGLSync object type, so extensions that apply to existing EGLSync object
+types don’t necessarily apply to <code>EGL_ANDROID_native_fence</code> objects,
+avoiding unwanted interactions.</p>
+
+<p>The <code>EGL_ANDROID_native_fence_sync</code> extension employs a
+corresponding native fence file descriptor attribute that can be set only at
+creation time and cannot be queried later from an existing sync object. This
+attribute can be set to one of two modes:</p>
+
+<ul>
+<li><em>A valid fence file descriptor</em>. Wraps an existing native Android
+fence file descriptor in an EGLSyncKHR object.</li>
+<li><em>-1</em>. Creates a native Android fence file descriptor from an
+EGLSyncKHR object.</li>
+</ul>
+
+<p>The DupNativeFenceFD function call is used to extract the native Android
+fence file descriptor from the EGLSyncKHR object. This has the same result as
+querying the attribute that was set but adheres to the convention that the
+recipient closes the fence (hence the duplicate operation). Finally, destroying
+the EGLSync object should close the internal fence attribute.</p>
+
+<h3 id=hardware_composer_integration>Hardware Composer integration</h3>
+
+<p>The Hardware Composer handles three types of sync fences:</p>
+
+<ul>
+<li><em>Acquire fence</em>. One per layer, set before calling
+<code>HWC::set</code>. It signals when Hardware Composer may read the buffer.</li>
+<li><em>Release fence</em>. One per layer, filled in by the driver in
+<code>HWC::set</code>. It signals when Hardware Composer is done reading the
+buffer so the framework can start using that buffer again for that particular
+layer.</li>
+<li><em>Retire fence</em>. One per frame, filled in by the driver each time
+<code>HWC::set</code> is called. This covers all layers for the set operation
+and signals to the framework when all effects of this set operation have
+completed. The retire fence signals when the next set operation takes place
+on the screen.</li>
+</ul>
+
+<p>The retire fence can be used to determine how long each frame appears on the
+screen. This is useful in identifying the location and source of delays, such
+as a stuttering animation.</p>
+
+<h2 id=vsync_offset>VSYNC offset</h2>
+
+<p>Application and SurfaceFlinger render loops should be synchronized to the
+hardware VSYNC. On a VSYNC event, the display begins showing frame N while
+SurfaceFlinger begins compositing windows for frame N+1. The app handles
+pending input and generates frame N+2.</p>
+
+<p>Synchronizing with VSYNC delivers consistent latency. It reduces errors in
+apps and SurfaceFlinger and the drifting of displays in and out of phase with
+each other. This, however, does assume application and SurfaceFlinger per-frame
+times don’t vary widely. Nevertheless, the latency is at least two frames.</p>
+
+<p>To remedy this, you can employ VSYNC offsets to reduce the input-to-display
+latency by making application and composition signal relative to hardware
+VSYNC. This is possible because application plus composition usually takes less
+than 33 ms.</p>
+
+<p>The result of VSYNC offset is three signals with the same period but offset
+phases:</p>
+
+<ul>
+<li><code>HW_VSYNC_0</code>. Display begins showing next frame.</li>
+<li><code>VSYNC</code>. App reads input and generates next frame.</li>
+<li><code>SF VSYNC</code>. SurfaceFlinger begins compositing for next frame.</li>
+</ul>
+
+<p>With VSYNC offset, SurfaceFlinger receives the buffer and composites the
+frame, while the application processes the input and renders the frame, all
+within a single frame of time.</p>
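+<p>The phase-offset arithmetic can be sketched as follows. The period and
+offsets below are hypothetical example values, not recommendations:</p>

```c
#include <stdint.h>

/* Hypothetical offsets in nanoseconds; real values come from BoardConfig.mk. */
#define PERIOD_NS          16666667LL  /* ~60 Hz refresh */
#define VSYNC_PHASE_NS      7500000LL  /* app wakeup after HW_VSYNC_0 */
#define SF_VSYNC_PHASE_NS  11500000LL  /* SurfaceFlinger wakeup */

/* Next wakeup for a signal with the given phase offset at or after `now`,
 * assuming HW_VSYNC_0 fires at t = 0, PERIOD_NS, 2*PERIOD_NS, ... */
static int64_t next_event_ns(int64_t now, int64_t phase) {
    int64_t t = ((now - phase) / PERIOD_NS) * PERIOD_NS + phase;
    while (t < now)
        t += PERIOD_NS;
    return t;
}
```

+<p>Both offsets must leave the app and composition enough time to finish
+inside one refresh period, which is why they are tuned against high-load use
+cases.</p>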
+
+<p class="note"><strong>Note:</strong> VSYNC offsets reduce the time available
+for app and composition and therefore provide a greater chance for error.</p>
+
+<h3 id=dispsync>DispSync</h3>
+
+<p>DispSync maintains a model of the periodic hardware-based VSYNC events of a
+display and uses that model to execute periodic callbacks at specific phase
+offsets from the hardware VSYNC events.</p>
+
+<p>DispSync is essentially a software phase-locked loop (PLL) that generates the
+VSYNC and SF VSYNC signals used by Choreographer and SurfaceFlinger, even when
+not offset from hardware VSYNC.</p>
+
+<img src="images/dispsync.png" alt="DispSync flow">
+
+<p class="img-caption"><strong>Figure 1.</strong> DispSync flow</p>
+
+<p>DispSync has the following qualities:</p>
+
+<ul>
+<li><em>Reference</em>. HW_VSYNC_0.</li>
+<li><em>Output</em>. VSYNC and SF VSYNC.</li>
+<li><em>Feedback</em>. Retire fence signal timestamps from Hardware Composer.
+</li>
+</ul>
+
+<h3 id=vsync_retire_offset>VSYNC/Retire offset</h3>
+
+<p>The signal timestamp of retire fences must match HW VSYNC even on devices
+that don’t use the offset phase. Otherwise, errors appear to have greater
+severity than reality. Smart panels often have a delta: Retire fence is the end
+of direct memory access (DMA) to display memory, but the actual display switch
+and HW VSYNC is some time later.</p>
+
+<p><code>PRESENT_TIME_OFFSET_FROM_VSYNC_NS</code> is set in the device’s
+BoardConfig.mk make file. It is based upon the display controller and panel
+characteristics. Time from retire fence timestamp to HW VSYNC signal is
+measured in nanoseconds.</p>
+
+<h3 id=vsync_and_sf_vsync_offsets>VSYNC and SF_VSYNC offsets</h3>
+
+<p>The <code>VSYNC_EVENT_PHASE_OFFSET_NS</code> and
+<code>SF_VSYNC_EVENT_PHASE_OFFSET_NS</code> are set conservatively based on
+high-load use cases, such as partial GPU composition during window transition
+or Chrome scrolling through a webpage containing animations. These offsets
+allow for long application render time and long GPU composition time.</p>
+
+<p>More than a millisecond or two of latency is noticeable. We recommend
+integrating thorough automated error testing to minimize latency without
+significantly increasing error counts.</p>
+
+<p class="note"><strong>Note:</strong> These offsets are also configured in the
+device’s BoardConfig.mk file. Both settings are offset in nanoseconds after
+HW_VSYNC_0, default to zero (if not set), and can be negative.</p>
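+<p>As a hypothetical sketch, the corresponding <code>BoardConfig.mk</code>
+entries might look like this (values are illustrative only and must be tuned
+per display controller and panel):</p>

```makefile
# Illustrative values only; tune per device.
VSYNC_EVENT_PHASE_OFFSET_NS := 7500000
SF_VSYNC_EVENT_PHASE_OFFSET_NS := 5000000
PRESENT_TIME_OFFSET_FROM_VSYNC_NS := 0
```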
diff --git a/src/devices/graphics/implement-vulkan.jd b/src/devices/graphics/implement-vulkan.jd
new file mode 100644
index 0000000..dcc2efc
--- /dev/null
+++ b/src/devices/graphics/implement-vulkan.jd
@@ -0,0 +1,309 @@
+page.title=Implementing Vulkan
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+
+<p>Vulkan is a low-overhead, cross-platform API for high-performance 3D
+graphics. Like OpenGL ES, Vulkan provides tools for creating high-quality,
+real-time graphics in applications. Vulkan advantages include reductions in CPU
+overhead and support for the <a href="https://www.khronos.org/spir">SPIR-V
+Binary Intermediate</a> language.</p>
+
+<p class="note"><strong>Note:</strong> This section describes Vulkan
+implementation; for details on Vulkan architecture, advantages, API, and other
+resources, see <a href="{@docRoot}devices/graphics/arch-vulkan.html">Vulkan
+Architecture</a>.</p>
+
+<p>To implement Vulkan, a device:</p>
+<ul>
+<li>Must include the Vulkan Loader (provided by Android) in the build.</li>
+<li>Must include a Vulkan driver (provided by SoCs such as GPU IHVs) that
+implements the
+<a href="https://www.khronos.org/registry/vulkan/specs/1.0-wsi_extensions/xhtml/vkspec.html">Vulkan
+API</a>. To support Vulkan functionality, the Android device needs capable GPU
+hardware and the associated driver. Consult your SoC vendor to request driver
+support.</li>
+</ul>
+<p>If a Vulkan driver is available on the device, the device needs to declare
+<code>FEATURE_VULKAN_HARDWARE_LEVEL</code> and
+<code>FEATURE_VULKAN_HARDWARE_VERSION</code> system features, with versions that
+accurately reflect the capabilities of the device.</p>
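These system features are typically declared with a permissions XML file copied into the system image. The sketch below is illustrative only: the file placement, feature names, and the version value (here `VK_MAKE_VERSION(1, 0, 3)` encoded as an integer) are assumptions, not taken from this document.

```xml
<!-- Hypothetical sketch of a feature-declaration file; verify the exact
     feature names and versions against your platform release. -->
<permissions>
    <feature name="android.hardware.vulkan.level" version="0" />
    <feature name="android.hardware.vulkan.version" version="4194307" />
</permissions>
```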
+
+<h2 id=vulkan_loader>Vulkan Loader</h2>
+<p>The primary interface between Vulkan applications and a device's Vulkan
+driver is the Vulkan loader, which is part of Android Open Source Project (AOSP)
+(<code>platform/frameworks/native/vulkan</code>) and installed at
+<code>/system/lib[64]/libvulkan.so</code>. The loader provides the core Vulkan
+API entry points, as well as entry points of a few extensions that are required
+on Android and always present. In particular, Window System Integration (WSI)
+extensions are exported by the loader and primarily implemented in it rather
+than the driver. The loader also supports enumerating and loading layers that
+can expose additional extensions and/or intercept core API calls on their way to
+the driver.</p>
+
+<p>The NDK includes a stub <code>libvulkan.so</code> library that exports the
+same symbols as the loader and which is used for linking. When running on a
+device, applications call the Vulkan functions exported from
+<code>libvulkan.so</code> (the real library, not the stub) to enter trampoline
+functions in the loader (which then dispatch to the appropriate layer or driver
+based on their first argument). The <code>vkGetDeviceProcAddr</code> calls
+return the function pointers to which the trampolines would dispatch (i.e. it
+calls directly into the core API code), so calling through these function
+pointers (rather than the exported symbols) is slightly more efficient as it
+skips the trampoline and dispatch. However, <code>vkGetInstanceProcAddr</code>
+must still call into trampoline code.</p>
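The trampoline-versus-cached-pointer distinction can be sketched with mock types. Everything below (the `Device`, `DispatchTable`, and `draw` names) is hypothetical scaffolding, not the real loader; it only illustrates why a pointer obtained from a `vkGetDeviceProcAddr`-style call skips the per-call dispatch step.

```c
#include <assert.h>

/* Hypothetical mock of the trampoline/dispatch pattern; none of these
 * types are the real loader's. */
typedef void (*PFN_draw)(int *);
typedef struct { PFN_draw draw; } DispatchTable;   /* per-device dispatch */
typedef struct { DispatchTable *table; } Device;   /* stand-in for VkDevice */

static void driver_draw(int *calls) { (*calls)++; } /* "driver" entry point */

/* Exported symbol: dispatches through the table on every call. */
static void trampoline_draw(Device *dev, int *calls) {
    dev->table->draw(calls);
}

/* Mock vkGetDeviceProcAddr: hands back the driver function directly, so a
 * cached pointer skips the trampoline and dispatch step. */
static PFN_draw get_device_proc_addr(Device *dev, const char *name) {
    (void)name;
    return dev->table->draw;
}
```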
+
+<h2 id=driver_emun>Driver enumeration and loading</h2>
+<p>Android expects the GPUs available to the system to be known when the system
+image is built. The loader uses the existing HAL mechanism (see
+<a href="https://android.googlesource.com/platform/hardware/libhardware/+/marshmallow-release/include/hardware/hardware.h"><code>hardware.h</code></a>) for
+discovering and loading the driver. Preferred paths for 32-bit and 64-bit Vulkan
+drivers are:</p>
+
+<p>
+<pre>
+/vendor/lib/hw/vulkan.&lt;ro.product.platform&gt;.so
+/vendor/lib64/hw/vulkan.&lt;ro.product.platform&gt;.so
+</pre>
+</p>
+
+<p>Where &lt;<code>ro.product.platform</code>&gt; is replaced by the value of
+the system property of that name. For details and supported alternative
+locations, refer to
+<a href="https://android.googlesource.com/platform/hardware/libhardware/+/marshmallow-release/hardware.c"><code>libhardware/hardware.c</code></a>.</p>
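The path construction itself is simple string substitution; a minimal sketch (the helper name and the platform value `examplechip` are hypothetical, and the real lookup in `libhardware` also tries fallback properties and locations):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: forms the loader's preferred driver path from the
 * lib directory (lib or lib64) and the ro.product.platform value. */
static void vulkan_driver_path(char *out, size_t len,
                               const char *lib_dir, const char *platform) {
    snprintf(out, len, "/vendor/%s/hw/vulkan.%s.so", lib_dir, platform);
}
```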
+
+<p>In Android 7.0, the Vulkan <code>hw_module_t</code> derivative is trivial;
+only one driver is supported and the constant string
+<code>HWVULKAN_DEVICE_0</code> is passed to open. If support for multiple
+drivers is added in future versions of Android, the HAL module will export a
+list of strings that can be passed to the <code>module open</code> call.</p>
+
+<p>The Vulkan <code>hw_device_t</code> derivative corresponds to a single
+driver, though that driver can support multiple physical devices. The
+<code>hw_device_t</code> structure can be extended to export
+<code>vkGetGlobalExtensionProperties</code>, <code>vkCreateInstance</code>, and
+<code>vkGetInstanceProcAddr</code> functions. The loader can find all other
+<code>VkInstance</code>, <code>VkPhysicalDevice</code>, and
+<code>vkGetDeviceProcAddr</code> functions by calling
+<code>vkGetInstanceProcAddr</code>.</p>
+
+<h2 id=layer_discover>Layer discovery and loading</h2>
+<p>The Vulkan loader supports enumerating and loading layers that can expose
+additional extensions and/or intercept core API calls on their way to the
+driver. Android 7.0 does not include layers on the system image; however,
+applications may include layers in their APK.</p>
+<p>When using layers, keep in mind that Android's security model and policies
+differ significantly from other platforms. In particular, Android does not allow
+loading external code into a non-debuggable process on production (non-rooted)
+devices, nor does it allow external code to inspect or control the process's
+memory, state, etc. This includes a prohibition on saving core dumps, API
+traces, etc. to disk for later inspection. Only layers delivered as part of the
+application are enabled on production devices, and drivers must not provide
+functionality that violates these policies.</p>
+
+<p>Use cases for layers include:</p>
+<ul>
+<li><strong>Development-time layers</strong>. These layers (validation layers,
+shims for tracing/profiling/debugging tools, etc.) should not be installed on
+the system image of production devices as they waste space for users and should
+be updateable without requiring a system update. Developers who want to use one
+of these layers during development can modify the application package (e.g.
+adding a file to their native libraries directory). IHV and OEM engineers who
+want to diagnose failures in shipping, unmodifiable apps are assumed to have
+access to non-production (rooted) builds of the system image.</li>
+<li><strong>Utility layers</strong>. These layers almost always expose
+extensions, such as a layer that implements a memory manager for device memory.
+Developers choose layers (and versions of those layers) to use in their
+application; different applications using the same layer may still use
+different versions. Developers choose which of these layers to ship in their
+application package.</li>
+<li><strong>Injected (implicit) layers</strong>. Includes layers such as
+framerate, social network, or game launcher overlays provided by the user or
+some other application without the application's knowledge or consent. These
+violate Android's security policies and are not supported.</li>
+</ul>
+
+<p>In the normal state, the loader searches for layers only in the application's
+native library directory and attempts to load any library with a name matching a
+particular pattern (e.g. <code>libVKLayer_foo.so</code>). It does not need a
+separate manifest file: the developer deliberately included these layers, so the
+reasons to avoid loading libraries before enabling them do not apply.</p>
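The name matching amounts to a prefix/suffix check; a sketch under stated assumptions (the exact prefix string the loader uses is illustrative here, not normative):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of layer-library name matching; the real loader's
 * pattern may differ in detail. */
static int is_layer_library(const char *name) {
    const char *prefix = "libVkLayer";
    const char *suffix = ".so";
    size_t n = strlen(name), p = strlen(prefix), s = strlen(suffix);
    if (n <= p + s) return 0;                 /* too short to match both */
    return strncmp(name, prefix, p) == 0 &&   /* starts with the prefix  */
           strcmp(name + n - s, suffix) == 0; /* ends with ".so"         */
}
```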
+
+<p>Android allows layers to be ported with build-environment changes between
+Android and other platforms. For details on the interface between layers and the
+loader, refer to
+<a href="https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers/blob/master/loader/LoaderAndLayerInterface.md">Vulkan
+Loader Specification and Architecture Overview</a>. Versions of the LunarG
+validation layers that have been verified to build and work on Android are
+hosted in the android_layers branch of the
+<a href="https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers/tree/android_layers">KhronosGroup/Vulkan-LoaderAndValidationLayers</a>
+project on GitHub.</p>
+
+<h2 id=wsi>Window System Integration (WSI)</h2>
+<p>The Window System Integration (WSI) extensions <code>VK_KHR_surface</code>,
+<code>VK_KHR_android_surface</code>, and <code>VK_KHR_swapchain</code> are
+implemented by the platform and live in <code>libvulkan.so</code>. The
+<code>VkSurfaceKHR</code> and <code>VkSwapchainKHR</code> objects and all
+interaction with <code>ANativeWindow</code> is handled by the platform and is
+not exposed to drivers. The WSI implementation relies on the
+<code>VK_ANDROID_native_buffer</code> extension (described below) which must be
+supported by the driver; this extension is only used by the WSI implementation
+and will not be exposed to applications.</p>
+
+<h3 id=gralloc_usage_flags>Gralloc usage flags</h3>
+<p>Implementations may need swapchain buffers to be allocated with
+implementation-defined private gralloc usage flags. When creating a swapchain,
+the platform asks the driver to translate the requested format and image usage
+flags into gralloc usage flags by calling:</p>
+
+<p>
+<pre>
+VkResult VKAPI vkGetSwapchainGrallocUsageANDROID(
+ VkDevice device,
+ VkFormat format,
+ VkImageUsageFlags imageUsage,
+ int* grallocUsage
+);
+</pre>
+</p>
+
+<p>The <code>format</code> and <code>imageUsage</code> parameters are taken from
+the <code>VkSwapchainCreateInfoKHR</code> structure. The driver should fill
+<code>*grallocUsage</code> with the gralloc usage flags required for the format
+and usage (which are combined with the usage flags requested by the swapchain
+consumer when allocating buffers).</p>
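The driver-side translation is essentially a flag mapping. The sketch below uses mock flag values: the enum constants and their numeric values are invented for illustration and do not match the real Vulkan or gralloc headers.

```c
#include <assert.h>

/* Mock flag values -- the real VkImageUsageFlags and gralloc constants
 * differ. This only sketches the kind of translation a driver performs
 * in vkGetSwapchainGrallocUsageANDROID. */
enum { IMAGE_USAGE_SAMPLED = 0x1, IMAGE_USAGE_COLOR_ATTACHMENT = 0x2 };
enum { GRALLOC_USAGE_HW_TEXTURE = 0x100, GRALLOC_USAGE_HW_RENDER = 0x200 };

static int get_swapchain_gralloc_usage(int image_usage, int *gralloc_usage) {
    int usage = 0;
    if (image_usage & IMAGE_USAGE_SAMPLED)
        usage |= GRALLOC_USAGE_HW_TEXTURE;   /* sampled -> HW texture    */
    if (image_usage & IMAGE_USAGE_COLOR_ATTACHMENT)
        usage |= GRALLOC_USAGE_HW_RENDER;    /* attachment -> HW render  */
    *gralloc_usage = usage;
    return 0;  /* stand-in for VK_SUCCESS */
}
```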
+
+<h3 id=gralloc_backed_images>Gralloc-backed images</h3>
+
+<p><code>VkNativeBufferANDROID</code> is a <code>vkCreateImage</code> extension
+structure for creating an image backed by a gralloc buffer. This structure is
+provided to <code>vkCreateImage</code> in the <code>VkImageCreateInfo</code>
+structure chain. Calls to <code>vkCreateImage</code> with this structure happen
+during the first call to <code>vkGetSwapChainInfoWSI(..
+VK_SWAP_CHAIN_INFO_TYPE_IMAGES_WSI ..)</code>. The WSI implementation allocates
+the number of native buffers requested for the swapchain, then creates a
+<code>VkImage</code> for each one:</p>
+
+<p><pre>
+typedef struct {
+ VkStructureType sType; // must be VK_STRUCTURE_TYPE_NATIVE_BUFFER_ANDROID
+ const void* pNext;
+
+ // Buffer handle and stride returned from gralloc alloc()
+ buffer_handle_t handle;
+ int stride;
+
+ // Gralloc format and usage requested when the buffer was allocated.
+ int format;
+ int usage;
+} VkNativeBufferANDROID;
+</pre></p>
+
+<p>When creating a gralloc-backed image, the <code>VkImageCreateInfo</code> has
+the following data:</p>
+
+<p><pre>
+ .imageType = VK_IMAGE_TYPE_2D
+ .format = a VkFormat matching the format requested for the gralloc buffer
+ .extent = the 2D dimensions requested for the gralloc buffer
+ .mipLevels = 1
+ .arraySize = 1
+ .samples = 1
+ .tiling = VK_IMAGE_TILING_OPTIMAL
+ .usage = VkSwapChainCreateInfoWSI::imageUsageFlags
+ .flags = 0
+ .sharingMode = VkSwapChainCreateInfoWSI::sharingMode
+ .queueFamilyCount = VkSwapChainCreateInfoWSI::queueFamilyCount
+ .pQueueFamilyIndices = VkSwapChainCreateInfoWSI::pQueueFamilyIndices
+</pre></p>
+
+<h3 id=acquire_image>Acquiring images</h3>
+<p><code>vkAcquireImageANDROID</code> acquires ownership of a swapchain image
+and imports an externally-signalled native fence into both an existing
+<code>VkSemaphore</code> object and an existing <code>VkFence</code> object:</p>
+
+<p><pre>
+VkResult VKAPI vkAcquireImageANDROID(
+ VkDevice device,
+ VkImage image,
+ int nativeFenceFd,
+ VkSemaphore semaphore,
+ VkFence fence
+);
+</pre></p>
+
+<p>This function is called during <code>vkAcquireNextImageWSI</code> to import a
+native fence into the <code>VkSemaphore</code> and <code>VkFence</code> objects
+provided by the application (however, both semaphore and fence objects are
+optional in this call). The driver may also use this opportunity to recognize
+and handle any external changes to the gralloc buffer state; many drivers won't
+need to do anything here. This call puts the <code>VkSemaphore</code> and
+<code>VkFence</code> into the same pending state as
+<code>vkQueueSignalSemaphore</code> and <code>vkQueueSubmit</code> respectively,
+so queues can wait on the semaphore and the application can wait on the fence.</p>
+
+<p>Both objects become signalled when the underlying native fence signals; if
+the native fence has already signalled, then the semaphore is in the signalled
+state when this function returns. The driver takes ownership of the fence fd and
+is responsible for closing it when no longer needed. It must do so even if
+neither a semaphore nor a fence object is provided, or even if
+<code>vkAcquireImageANDROID</code> fails and returns an error. If
+<code>nativeFenceFd</code> is -1, it is as if the native fence was already
+signalled.</p>
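The fd-ownership rule above can be sketched with an ordinary pipe fd standing in for a native fence. The function below is a mock, not a real driver entry point; it only demonstrates that the driver closes a valid fd it receives and treats -1 as an already-signalled fence.

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Mock of the fence-fd ownership contract described above. */
static int acquire_image_mock(int native_fence_fd, int *already_signalled) {
    if (native_fence_fd == -1) {
        *already_signalled = 1;   /* treat -1 as an already-signalled fence */
        return 0;
    }
    *already_signalled = 0;
    close(native_fence_fd);       /* driver owns the fd and must close it */
    return 0;
}
```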
+
+<h3 id=release_image>Releasing images</h3>
+<p><code>vkQueueSignalReleaseImageANDROID</code> prepares a swapchain image for
+external use, and creates a native fence and schedules it to be signalled when
+prior work on the queue has completed:</p>
+
+<p><pre>
+VkResult VKAPI vkQueueSignalReleaseImageANDROID(
+ VkQueue queue,
+ VkImage image,
+ int* pNativeFenceFd
+);
+</pre></p>
+
+<p>This API is called during <code>vkQueuePresentWSI</code> on the provided
+queue. Effects are similar to <code>vkQueueSignalSemaphore</code>, except with a
+native fence instead of a semaphore. Unlike <code>vkQueueSignalSemaphore</code>,
+however, this call creates and returns the synchronization object that will be
+signalled rather than having it provided as input. If the queue is already idle
+when this function is called, it is allowed (but not required) to set
+<code>*pNativeFenceFd</code> to -1. The file descriptor returned in
+<code>*pNativeFenceFd</code> is owned and will be closed by the caller.</p>
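The return-fence contract can be sketched the same way, again with a pipe read end standing in for a native fence. The function is a mock: it returns -1 when the queue is idle, and otherwise hands the caller an fd the caller owns and must close.

```c
#include <assert.h>
#include <unistd.h>

/* Mock of the vkQueueSignalReleaseImageANDROID return-fence contract. */
static int queue_signal_release_mock(int queue_idle, int *out_fence_fd) {
    if (queue_idle) {
        *out_fence_fd = -1;          /* idle queue: no fence needed */
        return 0;
    }
    int fds[2];
    if (pipe(fds) != 0) return -1;   /* error stand-in */
    close(fds[1]);                   /* "fence" signals immediately here */
    *out_fence_fd = fds[0];          /* caller owns and must close this */
    return 0;
}
```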
+
+<h3 id=update_drivers>Updating drivers</h3>
+
+<p>Many drivers can ignore the <code>image</code> parameter to
+<code>vkQueueSignalReleaseImageANDROID</code>, but some may need to prepare
+CPU-side data structures associated with a gralloc buffer for use by external
+image consumers. Preparing buffer contents for use by external consumers should
+have been done asynchronously as part of transitioning the image to
+<code>VK_IMAGE_LAYOUT_PRESENT_SRC_KHR</code>.</p>
+
+<h2 id=validation>Validation</h2>
+<p>OEMs can test their Vulkan implementation using CTS, which includes
+<a href="{@docRoot}devices/graphics/cts-integration.html">drawElements
+Quality Program (dEQP)</a> tests that exercise the Vulkan Runtime.</p>
diff --git a/src/devices/graphics/implement.jd b/src/devices/graphics/implement.jd
index 3f3654a..54b4620 100644
--- a/src/devices/graphics/implement.jd
+++ b/src/devices/graphics/implement.jd
@@ -26,580 +26,151 @@
</div>
-<p>Follow the instructions here to implement the Android graphics HAL.</p>
+<p>To implement the Android graphics HAL, review the following requirements,
+implementation details, and testing advice.</p>
<h2 id=requirements>Requirements</h2>
-<p>The following list and sections describe what you need to provide to support
-graphics in your product:</p>
+<p>Android graphics support requires the following components:</p>
-<ul> <li> OpenGL ES 1.x Driver <li> OpenGL ES 2.0 Driver <li> OpenGL ES 3.0
-Driver (optional) <li> EGL Driver <li> Gralloc HAL implementation <li> Hardware
-Composer HAL implementation <li> Framebuffer HAL implementation </ul>
+<ul>
+ <li>EGL driver</li>
+ <li>OpenGL ES 1.x driver</li>
+ <li>OpenGL ES 2.0 driver</li>
+ <li>OpenGL ES 3.x driver (optional)</li>
+ <li>Vulkan (optional)</li>
+ <li>Gralloc HAL implementation</li>
+ <li>Hardware Composer HAL implementation</li>
+</ul>
<h2 id=implementation>Implementation</h2>
<h3 id=opengl_and_egl_drivers>OpenGL and EGL drivers</h3>
-<p>You must provide drivers for OpenGL ES 1.x, OpenGL ES 2.0, and EGL. Here are
-some key considerations:</p>
+<p>You must provide drivers for EGL, OpenGL ES 1.x, and OpenGL ES 2.0 (support
+for OpenGL 3.x is optional). Key considerations include:</p>
-<ul> <li> The GL driver needs to be robust and conformant to OpenGL ES
-standards. <li> Do not limit the number of GL contexts. Because Android allows
-apps in the background and tries to keep GL contexts alive, you should not
-limit the number of contexts in your driver. <li> It is not uncommon to have
-20-30 active GL contexts at once, so you should also be careful with the amount
-of memory allocated for each context. <li> Support the YV12 image format and
-any other YUV image formats that come from other components in the system such
-as media codecs or the camera. <li> Support the mandatory extensions:
-<code>GL_OES_texture_external</code>,
-<code>EGL_ANDROID_image_native_buffer</code>, and
-<code>EGL_ANDROID_recordable</code>. The
-<code>EGL_ANDROID_framebuffer_target</code> extension is required for Hardware
-Composer 1.1 and higher, as well. <li> We highly recommend also supporting
-<code>EGL_ANDROID_blob_cache</code>, <code>EGL_KHR_fence_sync</code>,
-<code>EGL_KHR_wait_sync</code>, and <code>EGL_ANDROID_native_fence_sync</code>.
-</ul>
+<ul>
+ <li>GL driver must be robust and conformant to OpenGL ES standards.</li>
+ <li>Do not limit the number of GL contexts. Because Android allows apps in
+ the background and tries to keep GL contexts alive, you should not limit the
+ number of contexts in your driver.</li>
+ <li> It is common to have 20-30 active GL contexts at once, so be
+ mindful of the amount of memory allocated for each context.</li>
+ <li>Support the YV12 image format and other YUV image formats that come from
+ other components in the system, such as media codecs or the camera.</li>
+ <li>Support the mandatory extensions: <code>GL_OES_texture_external</code>,
+ <code>EGL_ANDROID_image_native_buffer</code>, and
+ <code>EGL_ANDROID_recordable</code>. In addition, the
+ <code>EGL_ANDROID_framebuffer_target</code> extension is required for
+ Hardware Composer v1.1 and higher.</li>
+ </ul>
+<p>We highly recommend also supporting <code>EGL_ANDROID_blob_cache</code>,
+<code>EGL_KHR_fence_sync</code>, <code>EGL_KHR_wait_sync</code>, and <code>EGL_ANDROID_native_fence_sync</code>.</p>
-<p>Note the OpenGL API exposed to app developers is different from the OpenGL
-interface that you are implementing. Apps do not have access to the GL driver
-layer and must go through the interface provided by the APIs.</p>
+<p class="note"><strong>Note</strong>: The OpenGL API exposed to app developers
+differs from the OpenGL implemented on the device. Apps cannot directly access
+the GL driver layer and must go through the interface provided by the APIs.</p>
<h3 id=pre-rotation>Pre-rotation</h3>
-<p>Many hardware overlays do not support rotation, and even if they do it costs
-processing power. So the solution is to pre-transform the buffer before it
-reaches SurfaceFlinger. A query hint in <code>ANativeWindow</code> was added
-(<code>NATIVE_WINDOW_TRANSFORM_HINT</code>) that represents the most likely
-transform to be applied to the buffer by SurfaceFlinger. Your GL driver can use
-this hint to pre-transform the buffer before it reaches SurfaceFlinger so when
-the buffer arrives, it is correctly transformed.</p>
+<p>Many hardware overlays do not support rotation (and even if they do it costs
+processing power); the solution is to pre-transform the buffer before it reaches
+SurfaceFlinger. Android supports a query hint
+(<code>NATIVE_WINDOW_TRANSFORM_HINT</code>) in <code>ANativeWindow</code> to
+represent the most likely transform to be applied to the buffer by
+SurfaceFlinger. GL drivers can use this hint to pre-transform the buffer
+before it reaches SurfaceFlinger so when the buffer arrives, it is correctly
+transformed.</p>
-<p>For example, you may receive a hint to rotate 90 degrees. You must generate
-a matrix and apply it to the buffer to prevent it from running off the end of
-the page. To save power, this should be done in pre-rotation. See the
-<code>ANativeWindow</code> interface defined in
-<code>system/core/include/system/window.h</code> for more details.</p>
+<p>For example, when receiving a hint to rotate 90 degrees, generate and apply a
+matrix to the buffer to prevent it from running off the end of the page. To save
+power, do this pre-rotation. For details, see the <code>ANativeWindow</code>
+interface defined in <code>system/core/include/system/window.h</code>.</p>
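The hint query above can be sketched with a mocked window object. Everything here is hypothetical scaffolding: the numeric constant values and the `MockWindow` type are invented, and the real interface is the `ANativeWindow` query in `system/core/include/system/window.h`.

```c
#include <assert.h>

/* Illustrative constants -- the real values live in system/window.h. */
enum { NATIVE_WINDOW_TRANSFORM_HINT = 8 };
enum { TRANSFORM_ROT_90 = 4 };

typedef struct MockWindow {
    int transform_hint;
    int (*query)(struct MockWindow *, int, int *);
} MockWindow;

static int mock_query(MockWindow *w, int what, int *value) {
    if (what != NATIVE_WINDOW_TRANSFORM_HINT) return -1;
    *value = w->transform_hint;
    return 0;
}

/* A driver would check the hint and pre-rotate the buffer accordingly. */
static int needs_pre_rotation(MockWindow *w) {
    int hint = 0;
    if (w->query(w, NATIVE_WINDOW_TRANSFORM_HINT, &hint) != 0) return 0;
    return hint != 0;
}
```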
<h3 id=gralloc_hal>Gralloc HAL</h3>
-<p>The graphics memory allocator is needed to allocate memory that is requested
-by image producers. You can find the interface definition of the HAL at:
-<code>hardware/libhardware/modules/gralloc.h</code></p>
+<p>The graphics memory allocator allocates memory requested by image producers.
+You can find the interface definition of the HAL at
+<code>hardware/libhardware/modules/gralloc.h</code>.</p>
<h3 id=protected_buffers>Protected buffers</h3>
<p>The gralloc usage flag <code>GRALLOC_USAGE_PROTECTED</code> allows the
graphics buffer to be displayed only through a hardware-protected path. These
-overlay planes are the only way to display DRM content. DRM-protected buffers
-cannot be accessed by SurfaceFlinger or the OpenGL ES driver.</p>
+overlay planes are the only way to display DRM content (DRM-protected buffers
+cannot be accessed by SurfaceFlinger or the OpenGL ES driver).</p>
<p>DRM-protected video can be presented only on an overlay plane. Video players
that support protected content must be implemented with SurfaceView. Software
-running on unprotected hardware cannot read or write the buffer.
-Hardware-protected paths must appear on the Hardware Composer overlay. For
-instance, protected videos will disappear from the display if Hardware Composer
-switches to OpenGL ES composition.</p>
+running on unprotected hardware cannot read or write the buffer;
+hardware-protected paths must appear on the Hardware Composer overlay (i.e.,
+protected videos will disappear from the display if Hardware Composer switches
+to OpenGL ES composition).</p>
-<p>See the <a href="{@docRoot}devices/drm.html">DRM</a> page for a description
-of protected content.</p>
+<p>For details on protected content, see
+<a href="{@docRoot}devices/drm.html">DRM</a>.</p>
<h3 id=hardware_composer_hal>Hardware Composer HAL</h3>
-<p>The Hardware Composer HAL is used by SurfaceFlinger to composite surfaces to
-the screen. The Hardware Composer abstracts objects like overlays and 2D
-blitters and helps offload some work that would normally be done with
-OpenGL.</p>
-
-<p>We recommend you start using version 1.3 of the Hardware Composer HAL as it
-will provide support for the newest features (explicit synchronization,
-external displays, and more). Because the physical display hardware behind the
-Hardware Composer abstraction layer can vary from device to device, it is
-difficult to define recommended features. But here is some guidance:</p>
-
-<ul> <li> The Hardware Composer should support at least four overlays (status
-bar, system bar, application, and wallpaper/background). <li> Layers can be
-bigger than the screen, so the Hardware Composer should be able to handle
-layers that are larger than the display (for example, a wallpaper). <li>
-Pre-multiplied per-pixel alpha blending and per-plane alpha blending should be
-supported at the same time. <li> The Hardware Composer should be able to
-consume the same buffers that the GPU, camera, video decoder, and Skia buffers
-are producing, so supporting some of the following properties is helpful: <ul>
-<li> RGBA packing order <li> YUV formats <li> Tiling, swizzling, and stride
-properties </ul> <li> A hardware path for protected video playback must be
-present if you want to support protected content. </ul>
-
-<p>The general recommendation when implementing your Hardware Composer is to
-implement a non-operational Hardware Composer first. Once you have the
-structure done, implement a simple algorithm to delegate composition to the
-Hardware Composer. For example, just delegate the first three or four surfaces
-to the overlay hardware of the Hardware Composer.</p>
-
-<p>Focus on optimization, such as intelligently selecting the surfaces to send
-to the overlay hardware that maximizes the load taken off of the GPU. Another
-optimization is to detect whether the screen is updating. If not, delegate
-composition to OpenGL instead of the Hardware Composer to save power. When the
-screen updates again, continue to offload composition to the Hardware
-Composer.</p>
-
-<p>Devices must report the display mode (or resolution). Android uses the first
-mode reported by the device. To support televisions, have the TV device report
-the mode selected for it by the manufacturer to Hardware Composer. See
-hwcomposer.h for more details.</p>
-
-<p>Prepare for common use cases, such as:</p>
-
-<ul> <li> Full-screen games in portrait and landscape mode <li> Full-screen
-video with closed captioning and playback control <li> The home screen
-(compositing the status bar, system bar, application window, and live
-wallpapers) <li> Protected video playback <li> Multiple display support </ul>
-
-<p>These use cases should address regular, predictable uses rather than edge
-cases that are rarely encountered. Otherwise, any optimization will have little
-benefit. Implementations must balance two competing goals: animation smoothness
-and interaction latency.</p>
-
-<p>Further, to make best use of Android graphics, you must develop a robust
-clocking strategy. Performance matters little if clocks have been turned down
-to make every operation slow. You need a clocking strategy that puts the clocks
-at high speed when needed, such as to make animations seamless, and then slows
-the clocks whenever the increased speed is no longer needed.</p>
-
-<p>Use the <code>adb shell dumpsys SurfaceFlinger</code> command to see
-precisely what SurfaceFlinger is doing. See the <a
-href="{@docRoot}devices/graphics/architecture.html#hwcomposer">Hardware
-Composer</a> section of the Architecture page for example output and a
-description of relevant fields.</p>
-
-<p>You can find the HAL for the Hardware Composer and additional documentation
-in: <code>hardware/libhardware/include/hardware/hwcomposer.h
-hardware/libhardware/include/hardware/hwcomposer_defs.h</code></p>
-
-<p>A stub implementation is available in the
-<code>hardware/libhardware/modules/hwcomposer</code> directory.</p>
+<p>The Hardware Composer HAL (HWC) is used by SurfaceFlinger to composite
+surfaces to the screen. It abstracts objects such as overlays and 2D blitters
+and helps offload some work that would normally be done with OpenGL. For details
+on the HWC, see <a href="{@docRoot}devices/graphics/implement-hwc.html">Hardware
+Composer HAL</a>.</p>
<h3 id=vsync>VSYNC</h3>
<p>VSYNC synchronizes certain events to the refresh cycle of the display.
-Applications always start drawing on a VSYNC boundary, and SurfaceFlinger
-always composites on a VSYNC boundary. This eliminates stutters and improves
-visual performance of graphics. The Hardware Composer has a function
-pointer:</p>
+Applications always start drawing on a VSYNC boundary, and SurfaceFlinger always
+composites on a VSYNC boundary. This eliminates stutters and improves visual
+performance of graphics. For details on VSYNC, see
+<a href="{@docRoot}devices/graphics/implement-vsync.html">Implementing
+VSYNC</a>.</p>
-<pre class=prettyprint> int (waitForVsync*) (int64_t *timestamp) </pre>
+<h3 id=vulkan>Vulkan</h3>
-
-<p>This points to a function you must implement for VSYNC. This function blocks
-until a VSYNC occurs and returns the timestamp of the actual VSYNC. A message
-must be sent every time VSYNC occurs. A client can receive a VSYNC timestamp
-once, at specified intervals, or continuously (interval of 1). You must
-implement VSYNC to have no more than a 1ms lag at the maximum (0.5ms or less is
-recommended), and the timestamps returned must be extremely accurate.</p>
-
-<h4 id=explicit_synchronization>Explicit synchronization</h4>
-
-<p>Explicit synchronization is required and provides a mechanism for Gralloc
-buffers to be acquired and released in a synchronized way. Explicit
-synchronization allows producers and consumers of graphics buffers to signal
-when they are done with a buffer. This allows the Android system to
-asynchronously queue buffers to be read or written with the certainty that
-another consumer or producer does not currently need them. See the
-<a href="{@docRoot}devices/graphics/index.html#synchronization_framework">Synchronization
-framework</a> section for an overview of this mechanism.</p>
-
-<p>The benefits of explicit synchronization include less behavior variation
-between devices, better debugging support, and improved testing metrics. For
-instance, the sync framework output readily identifies problem areas and root
-causes. And centralized SurfaceFlinger presentation timestamps show when events
-occur in the normal flow of the system.</p>
-
-<p>This communication is facilitated by the use of synchronization fences,
-which are now required when requesting a buffer for consuming or producing. The
-synchronization framework consists of three main building blocks:
-sync_timeline, sync_pt, and sync_fence.</p>
-
-<h5 id=sync_timeline>sync_timeline</h5>
-
-<p>A sync_timeline is a monotonically increasing timeline that should be
-implemented for each driver instance, such as a GL context, display controller,
-or 2D blitter. This is essentially a counter of jobs submitted to the kernel
-for a particular piece of hardware. It provides guarantees about the order of
-operations and allows hardware-specific implementations.</p>
-
-<p>Please note, the sync_timeline is offered as a CPU-only reference
-implementation called sw_sync (which stands for software sync). If possible,
-use sw_sync instead of a sync_timeline to save resources and avoid complexity.
-If you’re not employing a hardware resource, sw_sync should be sufficient.</p>
-
-<p>If you must implement a sync_timeline, use the sw_sync driver as a starting
-point. Follow these guidelines:</p>
-
-<ul> <li> Provide useful names for all drivers, timelines, and fences. This
-simplifies debugging. <li> Implement timeline_value str and pt_value_str
-operators in your timelines as they make debugging output much more readable.
-<li> If you want your userspace libraries (such as the GL library) to have
-access to the private data of your timelines, implement the fill driver_data
-operator. This lets you get information about the immutable sync_fence and
-sync_pts so you might build command lines based upon them. </ul>
-
-<p>When implementing a sync_timeline, <strong>don’t</strong>:</p>
-
-<ul> <li> Base it on any real view of time, such as when a wall clock or other
-piece of work might finish. It is better to create an abstract timeline that
-you can control. <li> Allow userspace to explicitly create or signal a fence.
-This can result in one piece of the user pipeline creating a denial-of-service
-attack that halts all functionality. This is because the userspace cannot make
-promises on behalf of the kernel. <li> Access sync_timeline, sync_pt, or
-sync_fence elements explicitly, as the API should provide all required
-functions. </ul>
-
-<h5 id=sync_pt>sync_pt</h5>
-
-<p>A sync_pt is a single value or point on a sync_timeline. A point has three
-states: active, signaled, and error. Points start in the active state and
-transition to the signaled or error states. For instance, when a buffer is no
-longer needed by an image consumer, this sync_point is signaled so that image
-producers know it is okay to write into the buffer again.</p>
-
-<h5 id=sync_fence>sync_fence</h5>
-
-<p>A sync_fence is a collection of sync_pts that often have different
-sync_timeline parents (such as for the display controller and GPU). These are
-the main primitives over which drivers and userspace communicate their
-dependencies. A fence is a promise from the kernel that it gives upon accepting
-work that has been queued and assures completion in a finite amount of
-time.</p>
-
-<p>This allows multiple consumers or producers to signal they are using a
-buffer and to allow this information to be communicated with one function
-parameter. Fences are backed by a file descriptor and can be passed from
-kernel-space to user-space. For instance, a fence can contain two sync_points
-that signify when two separate image consumers are done reading a buffer. When
-the fence is signaled, the image producers know both consumers are done
-consuming.
-
-Fences, like sync_pts, start active and then change state based upon the state
-of their points. If all sync_pts become signaled, the sync_fence becomes
-signaled. If one sync_pt falls into an error state, the entire sync_fence has
-an error state.
-
-Membership in the sync_fence is immutable once the fence is created. And since
-a sync_pt can be in only one fence, it is included as a copy. Even if two
-points have the same value, there will be two copies of the sync_pt in the
-fence.
-
-To get more than one point in a fence, a merge operation is conducted. In the
-merge, the points from two distinct fences are added to a third fence. If one
-of those points was signaled in the originating fence, and the other was not,
-the third fence will also not be in a signaled state.</p>
-
-<p>To implement explicit synchronization, you need to provide the
-following:</p>
-
-<ul> <li> A kernel-space driver that implements a synchronization timeline for
-a particular piece of hardware. Drivers that need to be fence-aware are
-generally anything that accesses or communicates with the Hardware Composer.
-Here are the key files (found in the android-3.4 kernel branch): <ul> <li> Core
-implementation: <ul> <li> <code>kernel/common/include/linux/sync.h</code> <li>
-<code>kernel/common/drivers/base/sync.c</code> </ul> <li> sw_sync: <ul> <li>
-<code>kernel/common/include/linux/sw_sync.h</code> <li>
-<code>kernel/common/drivers/base/sw_sync.c</code> </ul> <li> Documentation:
-<li> <code>kernel/common//Documentation/sync.txt</code> Finally, the
-<code>platform/system/core/libsync</code> directory includes a library to
-communicate with the kernel-space. </ul> <li> A Hardware Composer HAL module
-(version 1.3 or later) that supports the new synchronization functionality. You
-will need to provide the appropriate synchronization fences as parameters to
-the set() and prepare() functions in the HAL. <li> Two GL-specific extensions
-related to fences, <code>EGL_ANDROID_native_fence_sync</code> and
-<code>EGL_ANDROID_wait_sync</code>, along with incorporating fence support into
-your graphics drivers. </ul>
-
-<p>For example, to use the API supporting the synchronization function, you
-might develop a display driver that has a display buffer function. Before the
-synchronization framework existed, this function would receive dma-bufs, put
-those buffers on the display, and block while the buffer is visible, like
-so:</p>
-
-<pre class=prettyprint>
-/*
- * assumes buf is ready to be displayed. returns when buffer is no longer on
- * screen.
- */
-void display_buffer(struct dma_buf *buf); </pre>
-
-
-<p>With the synchronization framework, the API call is slightly more complex.
-While putting a buffer on display, you associate it with a fence that says when
-the buffer will be ready. So you queue up the work, which you will initiate
-once the fence clears.</p>
-
-<p>In this manner, you are not blocking anything. You immediately return your
-own fence, which is a guarantee of when the buffer will be off of the display.
-As you queue up buffers, the kernel will list dependencies. With the
-synchronization framework:</p>
-
-<pre class=prettyprint>
-/*
- * will display buf when fence is signaled. returns immediately with a fence
- * that will signal when buf is no longer displayed.
- */
-struct sync_fence* display_buffer(struct dma_buf *buf, struct sync_fence
-*fence); </pre>
-
-
-<h4 id=sync_integration>Sync integration</h4>
-
-<h5 id=integration_conventions>Integration conventions</h5>
-
-<p>This section explains how to integrate the low-level sync framework with
-different parts of the Android framework and the drivers that need to
-communicate with one another.</p>
-
-<p>The Android HAL interfaces for graphics follow consistent conventions so
-when file descriptors are passed across a HAL interface, ownership of the file
-descriptor is always transferred. This means:</p>
-
-<ul> <li> if you receive a fence file descriptor from the sync framework, you
-must close it. <li> if you return a fence file descriptor to the sync
-framework, the framework will close it. <li> if you want to continue using the
-fence file descriptor, you must duplicate the descriptor. </ul>
-
-<p>Every time a fence is passed through BufferQueue - such as for a window that
-passes a fence to BufferQueue saying when its new contents will be ready - the
-fence object is renamed. Since kernel fence support allows fences to have
-strings for names, the sync framework uses the window name and buffer index
-that is being queued to name the fence, for example:
-<code>SurfaceView:0</code></p>
-
-<p>This is helpful in debugging to identify the source of a deadlock. Those
-names appear in the output of <code>/d/sync</code> and bug reports when
-taken.</p>
-
-<h5 id=anativewindow_integration>ANativeWindow integration</h5>
-
-<p>ANativeWindow is fence aware. <code>dequeueBuffer</code>,
-<code>queueBuffer</code>, and <code>cancelBuffer</code> have fence
-parameters.</p>
-
-<h5 id=opengl_es_integration>OpenGL ES integration</h5>
-
-<p>OpenGL ES sync integration relies upon these two EGL extensions:</p>
-
-<ul> <li> <code>EGL_ANDROID_native_fence_sync</code> - provides a way to either
-wrap or create native Android fence file descriptors in EGLSyncKHR objects.
-<li> <code>EGL_ANDROID_wait_sync</code> - allows GPU-side stalls rather than in
-CPU, making the GPU wait for an EGLSyncKHR. This is essentially the same as the
-<code>EGL_KHR_wait_sync</code> extension. See the
-<code>EGL_KHR_wait_sync</code> specification for details. </ul>
-
-<p>These extensions can be used independently and are controlled by a compile
-flag in libgui. To use them, first implement the
-<code>EGL_ANDROID_native_fence_sync</code> extension along with the associated
-kernel support. Next add a ANativeWindow support for fences to your driver and
-then turn on support in libgui to make use of the
-<code>EGL_ANDROID_native_fence_sync</code> extension.</p>
-
-<p>Then, as a second pass, enable the <code>EGL_ANDROID_wait_sync</code>
-extension in your driver and turn it on separately. The
-<code>EGL_ANDROID_native_fence_sync</code> extension consists of a distinct
-native fence EGLSync object type so extensions that apply to existing EGLSync
-object types don’t necessarily apply to <code>EGL_ANDROID_native_fence</code>
-objects to avoid unwanted interactions.</p>
-
-<p>The EGL_ANDROID_native_fence_sync extension employs a corresponding native
-fence file descriptor attribute that can be set only at creation time and
-cannot be directly queried onward from an existing sync object. This attribute
-can be set to one of two modes:</p>
-
-<ul> <li> A valid fence file descriptor - wraps an existing native Android
-fence file descriptor in an EGLSyncKHR object. <li> -1 - creates a native
-Android fence file descriptor from an EGLSyncKHR object. </ul>
-
-<p>The DupNativeFenceFD function call is used to extract the EGLSyncKHR object
-from the native Android fence file descriptor. This has the same result as
-querying the attribute that was set but adheres to the convention that the
-recipient closes the fence (hence the duplicate operation). Finally, destroying
-the EGLSync object should close the internal fence attribute.</p>
-
-<h5 id=hardware_composer_integration>Hardware Composer integration</h5>
-
-<p>Hardware Composer handles three types of sync fences:</p>
-
-<ul> <li> <em>Acquire fence</em> - one per layer, this is set before calling
-HWC::set. It signals when Hardware Composer may read the buffer. <li>
-<em>Release fence</em> - one per layer, this is filled in by the driver in
-HWC::set. It signals when Hardware Composer is done reading the buffer so the
-framework can start using that buffer again for that particular layer. <li>
-<em>Retire fence</em> - one per the entire frame, this is filled in by the
-driver each time HWC::set is called. This covers all of the layers for the set
-operation. It signals to the framework when all of the effects of this set
-operation has completed. The retire fence signals when the next set operation
-takes place on the screen. </ul>
-
-<p>The retire fence can be used to determine how long each frame appears on the
-screen. This is useful in identifying the location and source of delays, such
-as a stuttering animation. </p>
-
-<h4 id=vsync_offset>VSYNC Offset</h4>
-
-<p>Application and SurfaceFlinger render loops should be synchronized to the
-hardware VSYNC. On a VSYNC event, the display begins showing frame N while
-SurfaceFlinger begins compositing windows for frame N+1. The app handles
-pending input and generates frame N+2.</p>
-
-<p>Synchronizing with VSYNC delivers consistent latency. It reduces errors in
-apps and SurfaceFlinger and the drifting of displays in and out of phase with
-each other. This, however, does assume application and SurfaceFlinger per-frame
-times don’t vary widely. Nevertheless, the latency is at least two frames.</p>
-
-<p>To remedy this, you may employ VSYNC offsets to reduce the input-to-display
-latency by making application and composition signal relative to hardware
-VSYNC. This is possible because application plus composition usually takes less
-than 33 ms.</p>
-
-<p>The result of VSYNC offset is three signals with same period, offset
-phase:</p>
-
-<ul> <li> <em>HW_VSYNC_0</em> - Display begins showing next frame <li>
-<em>VSYNC</em> - App reads input and generates next frame <li> <em>SF
-VSYNC</em> - SurfaceFlinger begins compositing for next frame </ul>
-
-<p>With VSYNC offset, SurfaceFlinger receives the buffer and composites the
-frame, while the application processes the input and renders the frame, all
-within a single frame of time.</p>
-
-<p>Please note, VSYNC offsets reduce the time available for app and composition
-and therefore provide a greater chance for error.</p>
-
-<h5 id=dispsync>DispSync</h5>
-
-<p>DispSync maintains a model of the periodic hardware-based VSYNC events of a
-display and uses that model to execute periodic callbacks at specific phase
-offsets from the hardware VSYNC events.</p>
-
-<p>DispSync is essentially a software phase lock loop (PLL) that generates the
-VSYNC and SF VSYNC signals used by Choreographer and SurfaceFlinger, even if
-not offset from hardware VSYNC.</p>
-
-<img src="images/dispsync.png" alt="DispSync flow">
-
-<p class="img-caption"><strong>Figure 4.</strong> DispSync flow</p>
-
-<p>DispSync has these qualities:</p>
-
-<ul> <li> <em>Reference</em> - HW_VSYNC_0 <li> <em>Output</em> - VSYNC and SF
-VSYNC <li> <em>Feedback</em> - Retire fence signal timestamps from Hardware
-Composer </ul>
-
-<h5 id=vsync_retire_offset>VSYNC/Retire Offset</h5>
-
-<p>The signal timestamp of retire fences must match HW VSYNC even on devices
-that don’t use the offset phase. Otherwise, errors appear to have greater
-severity than reality.</p>
-
-<p>“Smart” panels often have a delta. Retire fence is the end of direct memory
-access (DMA) to display memory. The actual display switch and HW VSYNC is some
-time later.</p>
-
-<p><code>PRESENT_TIME_OFFSET_FROM_VSYNC_NS</code> is set in the device’s
-BoardConfig.mk make file. It is based upon the display controller and panel
-characteristics. Time from retire fence timestamp to HW Vsync signal is
-measured in nanoseconds.</p>
-
-<h5 id=vsync_and_sf_vsync_offsets>VSYNC and SF_VSYNC Offsets</h5>
-
-<p>The <code>VSYNC_EVENT_PHASE_OFFSET_NS</code> and
-<code>SF_VSYNC_EVENT_PHASE_OFFSET_NS</code> are set conservatively based on
-high-load use cases, such as partial GPU composition during window transition
-or Chrome scrolling through a webpage containing animations. These offsets
-allow for long application render time and long GPU composition time.</p>
-
-<p>More than a millisecond or two of latency is noticeable. We recommend
-integrating thorough automated error testing to minimize latency without
-significantly increasing error counts.</p>
-
-<p>Note these offsets are also set in the device’s BoardConfig.mk make file.
-The default if not set is zero offset. Both settings are offset in nanoseconds
-after HW_VSYNC_0. Either can be negative.</p>
+<p>Vulkan is a low-overhead, cross-platform API for high-performance 3D graphics.
+Like OpenGL ES, Vulkan provides tools for creating high-quality, real-time
+graphics in applications. Vulkan advantages include reductions in CPU overhead
+and support for the <a href="https://www.khronos.org/spir">SPIR-V Binary
+Intermediate</a> language. For details on Vulkan, see
+<a href="{@docRoot}devices/graphics/implement-vulkan.html">Implementing
+Vulkan</a>.</p>
<h3 id=virtual_displays>Virtual displays</h3>
-<p>Android added support for virtual displays to Hardware Composer in version
-1.3. This support was implemented in the Android platform and can be used by
-Miracast.</p>
-
-<p>The virtual display composition is similar to the physical display: Input
+<p>Android added platform support for virtual displays in Hardware Composer v1.3.
+The virtual display composition is similar to the physical display: Input
layers are described in prepare(), SurfaceFlinger conducts GPU composition, and
-layers and GPU framebuffer are provided to Hardware Composer in set().</p>
-
-<p>Instead of the output going to the screen, it is sent to a gralloc buffer.
-Hardware Composer writes output to a buffer and provides the completion fence.
-The buffer is sent to an arbitrary consumer: video encoder, GPU, CPU, etc.
-Virtual displays can use 2D/blitter or overlays if the display pipeline can
-write to memory.</p>
-
-<h4 id=modes>Modes</h4>
-
-<p>Each frame is in one of three modes after prepare():</p>
-
-<ul> <li> <em>GLES</em> - All layers composited by GPU. GPU writes directly to
-the output buffer while Hardware Composer does nothing. This is equivalent to
-virtual display composition with Hardware Composer <1.3. <li> <em>MIXED</em> -
-GPU composites some layers to framebuffer, and Hardware Composer composites
-framebuffer and remaining layers. GPU writes to scratch buffer (framebuffer).
-Hardware Composer reads scratch buffer and writes to the output buffer. Buffers
-may have different formats, e.g. RGBA and YCbCr. <li> <em>HWC</em> - All
-layers composited by Hardware Composer. Hardware Composer writes directly to
-the output buffer. </ul>
-
-<h4 id=output_format>Output format</h4>
-
-<p><em>MIXED and HWC modes</em>: If the consumer needs CPU access, the consumer
-chooses the format. Otherwise, the format is IMPLEMENTATION_DEFINED. Gralloc
-can choose best format based on usage flags. For example, choose a YCbCr format
-if the consumer is video encoder, and Hardware Composer can write the format
-efficiently.</p>
-
-<p><em>GLES mode</em>: EGL driver chooses output buffer format in
-dequeueBuffer(), typically RGBA8888. The consumer must be able to accept this
-format.</p>
-
-<h4 id=egl_requirement>EGL requirement</h4>
-
-<p>Hardware Composer 1.3 virtual displays require that eglSwapBuffers() does
-not dequeue the next buffer immediately. Instead, it should defer dequeueing
-the buffer until rendering begins. Otherwise, EGL always owns the “next” output
-buffer. SurfaceFlinger can’t get the output buffer for Hardware Composer in
-MIXED/HWC mode. </p>
-
-<p>If Hardware Composer always sends all virtual display layers to GPU, all
-frames will be in GLES mode. Although it is not recommended, you may use this
-method if you need to support Hardware Composer 1.3 for some other reason but
-can’t conduct virtual display composition.</p>
+layers and GPU framebuffer are provided to Hardware Composer in set(). For
+details on virtual displays, see
+<a href="{@docRoot}devices/graphics/implement-vdisplays.html">Implementing
+Virtual Displays</a>.</p>
<h2 id=testing>Testing</h2>
-<p>For benchmarking, we suggest following this flow by phase:</p>
+<p>For benchmarking, use the following flow by phase:</p>
-<ul> <li> <em>Specification</em> - When initially specifying the device, such
-as when using immature drivers, you should use predefined (fixed) clocks and
-workloads to measure the frames per second rendered. This gives a clear view of
-what the hardware is capable of doing. <li> <em>Development</em> - In the
-development phase as drivers mature, you should use a fixed set of user actions
-to measure the number of visible stutters (janks) in animations. <li>
-<em>Production</em> - Once the device is ready for production and you want to
-compare against competitors, you should increase the workload until stutters
-increase. Determine if the current clock settings can keep up with the load.
-This can help you identify where you might be able to slow the clocks and
-reduce power use. </ul>
+<ul>
+ <li><em>Specification</em>. When initially specifying the device (such as when
+ using immature drivers), use predefined (fixed) clocks and workloads to
+ measure frames per second (fps) rendered. This gives a clear view of hardware
+ capabilities.</li>
+ <li><em>Development</em>. As drivers mature, use a fixed set of user actions
+ to measure the number of visible stutters (janks) in animations.</li>
+ <li><em>Production</em>. When a device is ready for comparison against
+ competitors, increase the workload until stutters increase. Determine if the
+ current clock settings can keep up with the load. This can help you identify
+ where to slow the clocks and reduce power use.</li>
+</ul>
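The development-phase metric above (counting visible stutters) can be sketched in a few lines. This is an illustrative example only, not a platform tool: the 60 Hz budget and the jitter margin are assumptions, and real jank detection on Android uses frame timeline data from the platform.

```python
# Hypothetical sketch: counting janks from a list of frame-presentation
# timestamps. A frame counts as a jank when the frame-to-frame delta
# overruns the vsync budget; the 60 Hz budget and 1.5x jitter margin
# below are illustrative assumptions.

VSYNC_BUDGET_MS = 1000.0 / 60.0  # ~16.67 ms per frame at 60 Hz

def count_janks(frame_times_ms):
    """Count frames whose delta from the previous frame overran the budget."""
    janks = 0
    for prev, cur in zip(frame_times_ms, frame_times_ms[1:]):
        if (cur - prev) > VSYNC_BUDGET_MS * 1.5:  # allow some jitter
            janks += 1
    return janks

# One frame arrives ~33 ms after its predecessor (a skipped vsync).
timestamps = [0.0, 16.7, 33.4, 66.8, 83.5]
print(count_janks(timestamps))  # prints 1
```

Running the same fixed set of user actions before and after a driver change and comparing the jank counts gives a simple regression signal.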
-<p>For the specification phase, Android offers the Flatland tool to help derive
-device capabilities. It can be found at:
-<code>platform/frameworks/native/cmds/flatland/</code></p>
+<p>For help deriving device capabilities during the specification phase, use the
+Flatland tool at <code>platform/frameworks/native/cmds/flatland/</code>.
+Flatland relies upon fixed clocks and shows the throughput achievable with
+composition-based workloads. It uses gralloc buffers to simulate multiple window
+scenarios, filling in the window with GL and then measuring the compositing.</p>
-<p>Flatland relies upon fixed clocks and shows the throughput that can be
-achieved with composition-based workloads. It uses gralloc buffers to simulate
-multiple window scenarios, filling in the window with GL and then measuring the
-compositing. Please note, Flatland uses the synchronization framework to
-measure time. So you must support the synchronization framework to readily use
-Flatland.</p>
+<p class="note"><strong>Note:</strong> Flatland uses the synchronization
+framework to measure time, so your implementation must support the
+synchronization framework.</p>
diff --git a/src/devices/graphics/index.jd b/src/devices/graphics/index.jd
index 4ce174f..b618909 100644
--- a/src/devices/graphics/index.jd
+++ b/src/devices/graphics/index.jd
@@ -206,7 +206,7 @@
implemented their own implicit synchronization within their own drivers. This
is no longer required with the Android graphics synchronization framework. See
the
-<a href="{@docRoot}devices/graphics/implement.html#explicit_synchronization">Explicit
+<a href="{@docRoot}devices/graphics/implement-vsync.html#explicit_synchronization">Explicit
synchronization</a> section for implementation instructions.</p>
<p>The synchronization framework explicitly describes dependencies between
diff --git a/src/devices/graphics/run-tests.jd b/src/devices/graphics/run-tests.jd
index 02f7713..d08dd35 100644
--- a/src/devices/graphics/run-tests.jd
+++ b/src/devices/graphics/run-tests.jd
@@ -24,11 +24,13 @@
</ol>
</div>
</div>
-
+<p>This page provides instructions for running deqp tests in Linux and Windows
+environments, using command line arguments, and working with the Android
+application package.</p>
<h2 id=linux_and_windows_environments>Linux and Windows environments</h2>
-<p>The following files and directories must be copied to the target.</p>
+<p>Start by copying the following files and directories to the target.</p>
<table>
<tr>
@@ -38,59 +40,78 @@
</tr>
<tr>
- <td><p>Execution Server</p></td>
+ <td>Execution Server</td>
<td><code>build/execserver/execserver</code></td>
- <td><code><dst>/execserver</code></td>
+ <td><code><dst>/execserver</code></td>
</tr>
-
+
<tr>
- <td><p>EGL Module</p></td>
+ <td>EGL Module</td>
<td><code>build/modules/egl/deqp-egl</code></td>
- <td><code><dst>/deqp-egl</code></td>
+ <td><code><dst>/deqp-egl</code></td>
</tr>
-
+
<tr>
- <td><p>GLES2 Module</p></td>
- <td><code>build/modules/gles2/deqp-gles2data/gles2</code></td>
- <td>
- <code>
-<dst>/deqp-gles2<br/>
-<dst>/gles2
- </code>
- </td>
+ <td rowspan=2 style="vertical-align:middle">GLES2 Module</td>
+ <td><code>build/modules/gles2/deqp-gles2</code></td>
+ <td><code><dst>/deqp-gles2</code></td>
</tr>
-
+
+
<tr>
- <td><p>GLES3 Module</p></td>
- <td><code>build/modules/gles3/deqp-gles3data/gles3</code></td>
- <td>
- <code>
-<dst>/deqp-gles3<br/>
-<dst>/gles3
-</code>
-</td>
+ <td><code>data/gles2</code></td>
+ <td><code><dst>/gles2</code></td>
</tr>
-
+
+
+
<tr>
- <td><p>GLES3.1 Module</p></td>
- <td><code>build/modules/gles31/deqp-gles31data/gles31</code></td>
- <td>
- <code>
-<dst>/deqp-gles31<br/>
-<dst>/gles31
- </code>
- </td>
+ <td rowspan=2 style="vertical-align:middle">GLES3 Module</td>
+ <td><code>build/modules/gles3/deqp-gles3</code></td>
+ <td><code><dst>/deqp-gles3</code></td>
</tr>
+
+ <tr>
+ <td><code>data/gles3</code></td>
+ <td><code><dst>/gles3</code></td>
+ </tr>
+
+ <tr>
+ <td rowspan=2 style="vertical-align:middle">GLES3.1 Module</td>
+ <td><code>build/modules/gles31/deqp-gles31</code></td>
+ <td><code><dst>/deqp-gles31</code></td>
+ </tr>
+
+ <tr>
+ <td><code>data/gles31</code></td>
+ <td><code><dst>/gles31</code></td>
+ </tr>
+
+
+ <tr>
+ <td rowspan=2 style="vertical-align:middle">GLES3.2 Module</td>
+ <td><code>build/modules/gles32/deqp-gles32</code></td>
+ <td><code><dst>/deqp-gles32</code></td>
+ </tr>
+
+ <tr>
+ <td><code>data/gles32</code></td>
+ <td><code><dst>/gles32</code></td>
+ </tr>
+
</table>
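Every destination in the table above is simply the target directory plus the basename of the source path. A deployment script could exploit that; the helper below is a hypothetical sketch (the `dst` directory is a placeholder, not a required location).

```python
# Illustrative sketch: map a deqp build artifact or data directory from the
# table above to its destination on the target. Every destination is
# <dst>/<basename of source>; dst itself is arbitrary.

def target_path(source, dst):
    """Return the on-target path for a deqp source path."""
    name = source.rsplit("/", 1)[-1]  # e.g. "deqp-gles31" or "gles31"
    return f"{dst}/{name}"

for src in ("build/execserver/execserver",
            "build/modules/gles31/deqp-gles31",
            "data/gles31"):
    print(target_path(src, "/data/local/deqp"))
```

Remember that the test binaries look for their data directories relative to the current working directory, so the `deqp-*` binaries and the `gles*` data directories must end up side by side.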
-<p>Execution service and test binaries can be deployed anywhere in the target file system. Test binaries expect to find data directories in the current working directory.</p>
-
-<p>Start the Test Execution Service on the target device. For more details on
-starting the service, see <a href="port-tests.html#test_execution_service">Test execution service</a>.</p>
+<p>You can deploy the execution service and test binaries anywhere in the target
+file system; however, test binaries expect to find data directories in the
+current working directory. When ready, start the Test Execution Service on the
+target device. For details on starting the service, see
+<a href="{@docRoot}devices/graphics/port-tests.html#test_execution_service">Test
+execution service</a>.</p>
<h2 id=command_line_arguments>Command line arguments</h2>
-<p>The following table lists command line arguments that affect execution of all test programs. </p>
+<p>The following table lists command line arguments that affect execution of all
+test programs.</p>
<table width="100%">
<col style="width:50%">
@@ -101,42 +122,32 @@
</tr>
<tr>
- <td><code>
---deqp-case=<casename></code></td>
-<td><p>Run cases that match a given pattern. Wildcard (*) is supported.</p>
-</td>
- </tr>
-
- <tr>
- <td><code>
---deqp-log-filename=<filename></code></td>
-<td><p>Write test results to the file whose name you provide. </p>
-<p>The test execution service will set the filename when starting a test.</p>
-</td>
+<td><code>--deqp-case=<casename></code></td>
+<td>Run cases that match a given pattern. Wildcard (*) is supported.</td>
</tr>
<tr>
- <td><code>
---deqp-stdin-caselist<br/>
---deqp-caselist=<caselist><br/>
---deqp-caselist-file=<filename></code></td>
-<td><p>Read case list from stdin or from a given argument. The test execution service
-will set the argument according to the execution request received. See the next
-section for a description of the case list format.</p>
-</td>
+<td><code>--deqp-log-filename=<filename></code></td>
+<td>Write test results to the file whose name you provide. The test execution
+service will set the filename when starting a test.</td>
+ </tr>
+
+ <tr>
+ <td><code>--deqp-stdin-caselist<br/>
+--deqp-caselist=<caselist><br/>
+--deqp-caselist-file=<filename></code></td>
+<td>Read case list from stdin or from a given argument. The test execution
+service will set the argument according to the execution request received. See
+the next section for a description of the case list format.</td>
</tr>
<tr>
- <td><code>
---deqp-test-iteration-count=<count></code></td>
-<td><p>Override iteration count for tests that support a variable number of
-iterations. </p>
-</td>
+<td><code>--deqp-test-iteration-count=<count></code></td>
+<td>Override iteration count for tests that support a variable number of
+iterations.</td>
</tr>
<tr>
- <td><code>
---deqp-base-seed=<seed></code></td>
-<td><p>Base seed for the test cases that use randomization.</p>
-</td>
+ <td><code>--deqp-base-seed=<seed></code></td>
+ <td>Base seed for the test cases that use randomization.</td>
</tr>
</table>
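The common options above compose into a single invocation. The sketch below assembles one as an argument list; the binary path is a placeholder and the helper is hypothetical, but the `--deqp-*` flags are the ones listed in the table.

```python
# Hedged sketch: build a deqp command line from the common options in the
# table above. Only flags from the table are emitted; the binary path is
# whatever you deployed to the target.

def build_deqp_cmd(binary, case=None, log_filename=None, caselist_file=None,
                   iteration_count=None, base_seed=None):
    cmd = [binary]
    if case:
        cmd.append(f"--deqp-case={case}")
    if log_filename:
        cmd.append(f"--deqp-log-filename={log_filename}")
    if caselist_file:
        cmd.append(f"--deqp-caselist-file={caselist_file}")
    if iteration_count is not None:
        cmd.append(f"--deqp-test-iteration-count={iteration_count}")
    if base_seed is not None:
        cmd.append(f"--deqp-base-seed={base_seed}")
    return cmd

print(" ".join(build_deqp_cmd("./deqp-gles2",
                              case="dEQP-GLES2.info.*",
                              log_filename="TestLog.qpa")))
# ./deqp-gles2 --deqp-case=dEQP-GLES2.info.* --deqp-log-filename=TestLog.qpa
```

In normal use the test execution service supplies the case list and log filename itself; building the command by hand is mainly useful for ad-hoc runs.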
@@ -153,72 +164,64 @@
<th>Description</th>
</tr>
<tr>
- <td><code>
---deqp-gl-context-type=<type></code></td>
-<td><p>OpenGL context type. Available context types depend on the platform. On
-platforms supporting EGL, the value <code>egl</code> can be used to select the EGL context.</p>
-</td>
+ <td><code>--deqp-gl-context-type=<type></code></td>
+ <td>OpenGL context type. Available context types depend on the platform. On
+ platforms supporting EGL, the value <code>egl</code> can be used to select
+ the EGL context.</td>
</tr>
<tr>
- <td><code>
---deqp-gl-config-id=<id></code></td>
-<td><p>Run tests for the provided GL configuration ID. Interpretation is
-platform-dependent. On the EGL platform, this is the EGL configuration ID.</p>
-</td>
+ <td><code>--deqp-gl-config-id=<id></code></td>
+ <td>Run tests for the provided GL configuration ID. Interpretation is
+ platform-dependent. On the EGL platform, this is the EGL configuration ID.</td>
</tr>
<tr>
- <td><code>
---deqp-gl-config-name=<name></code></td>
-<td><p>Run tests for a named GL configuration. Interpretation is platform-dependent.
-For EGL, the format is <code>rgb(a)<bits>d<bits>s<bits></code>. For example, a value of <code>rgb888s8</code> will select the first configuration where the color buffer is RGB888 and the
-stencil buffer has 8 bits.</p>
-</td>
+ <td><code>--deqp-gl-config-name=<name></code></td>
+ <td>Run tests for a named GL configuration. Interpretation is
+ platform-dependent. For EGL, the format is
+ <code>rgb(a)<bits>d<bits>s<bits></code>. For example, a
+ value of <code>rgb888s8</code> will select the first configuration where the
+ color buffer is RGB888 and the stencil buffer has 8 bits.</td>
</tr>
<tr>
- <td><code>
---deqp-gl-context-flags=<flags></code></td>
-<td><p>Creates a context. Specify <code>robust</code> or <code>debug</code>.</p>
-</td>
+ <td><code>--deqp-gl-context-flags=<flags></code></td>
+ <td>Creates a context with the given flags. Specify <code>robust</code> or <code>debug</code>.</td>
</tr>
<tr>
- <td><code>
---deqp-surface-width=<width><br/>
---deqp-surface-height=<height></code></td>
-<td><p>Try to create a surface with a given size. Support for this is optional.</p>
-</td>
+ <td><code>--deqp-surface-width=<width><br/>
+ --deqp-surface-height=<height></code></td>
+ <td>Try to create a surface with a given size. Support for this is optional.</td>
</tr>
<tr>
- <td><code>
---deqp-surface-type=<type></code></td>
-<td><p>Use a given surface type as the main test rendering target. Possible types are <code>window</code>, <code>pixmap</code>, <code>pbuffer</code>, and <code>fbo</code>.</p>
-</td>
+ <td><code>--deqp-surface-type=<type></code></td>
+ <td>Use a given surface type as the main test rendering target. Possible
+ types are <code>window</code>, <code>pixmap</code>, <code>pbuffer</code>,
+ and <code>fbo</code>.</td>
</tr>
<tr>
- <td><code>
---deqp-screen-rotation=<rotation></code></td>
-<td><p>Screen orientation in increments of 90 degrees for platforms that support it.</p>
-</td>
+ <td><code>--deqp-screen-rotation=<rotation></code></td>
+ <td>Screen orientation in increments of 90 degrees for platforms that
+ support it.</td>
</tr>
</table>
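The `rgb(a)<bits>d<bits>s<bits>` config-name format above can be decoded mechanically. The parser below is a sketch written from that format string alone, so it is an assumption about the grammar rather than a copy of the real deqp parser.

```python
import re

# Hedged sketch: decode the EGL config-name format rgb(a)<bits>d<bits>s<bits>
# described above, e.g. "rgb888s8" or "rgba8888d24s8". Inferred from the
# documented format; the actual deqp implementation may accept more.

CONFIG_RE = re.compile(r"^rgb(a?)(\d+)(?:d(\d+))?(?:s(\d+))?$")

def parse_gl_config_name(name):
    """Split a config name into alpha/color/depth/stencil components."""
    m = CONFIG_RE.match(name)
    if not m:
        raise ValueError(f"not a config name: {name}")
    alpha, color, depth, stencil = m.groups()
    return {
        "has_alpha": bool(alpha),
        "color_bits": color,  # per-channel digits, e.g. "888" or "8888"
        "depth_bits": int(depth) if depth else 0,
        "stencil_bits": int(stencil) if stencil else 0,
    }

print(parse_gl_config_name("rgb888s8"))
```

So `rgb888s8` selects an RGB888 color buffer with an 8-bit stencil buffer and no depth buffer, matching the worked example in the table.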
<h3 id=test_case_list_format>Test case list format</h3>
-<p>The test case list can be given in two formats. The first option is to list the
-full name of each test on a separate line in a standard ASCII file. As the test
-sets grow, the repetitive prefixes can be cumbersome. To avoid repeating the
-prefixes, use a trie (also known as a prefix tree) syntax shown below.</p>
+<p>The test case list can be given in two formats. The first option is to list
+the full name of each test on a separate line in a standard ASCII file. As the
+test sets grow, the repetitive prefixes can be cumbersome. To avoid repeating
+the prefixes, use a trie (also known as a prefix tree) syntax shown below.</p>
<pre>
{nodeName{firstChild{…},…lastChild{…}}}
</pre>
-<p>For example, please review the following:</p>
+<p>For example:</p>
<pre>
{dEQP-EGL{config-list,create_context{rgb565_depth_stencil}}}
</pre>
-<p>That list would translate into two test cases:</p>
+<p>This translates into the following two test cases:</p>
<pre>
dEQP-EGL.config_list
@@ -227,39 +230,47 @@
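The trie expansion described above can be prototyped in a few lines. The sketch below is inferred from the single worked example, not taken from the deqp sources, and it expands node names verbatim from the input string.

```python
# Assumption-based sketch: expand the case-list trie syntax
# {nodeName{firstChild{...},...}} into full dot-separated test names.
# The grammar is inferred from the documented example; malformed input
# is not handled robustly here.

def expand_trie(s):
    pos = 0

    def parse_group():
        # Parse "{node,node,...}" and return the expanded names inside it.
        nonlocal pos
        assert s[pos] == "{"
        pos += 1
        results = []
        while True:
            results.extend(parse_node())
            if s[pos] == ",":
                pos += 1
                continue
            if s[pos] == "}":
                pos += 1
                return results

    def parse_node():
        # Parse a name, optionally followed by a child group.
        nonlocal pos
        start = pos
        while pos < len(s) and s[pos] not in "{},":
            pos += 1
        name = s[start:pos]
        if pos < len(s) and s[pos] == "{":
            return [name + "." + child for child in parse_group()]
        return [name]

    return parse_group()

print(expand_trie("{dEQP-EGL{config-list,create_context{rgb565_depth_stencil}}}"))
```

Note that this expansion reproduces node names exactly as written in the trie, so the output for the example above uses the spelling that appears in the input string.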
<h2 id=android>Android</h2>
-<p>The Android application package contains everything required, including the
-test execution service, test binaries, and data files. The test activity is a <code>NativeActivity </code>and it uses EGL, which requires Android 3.2 or later.</p>
+<p>The Android application package contains all required components, including
+the test execution service, test binaries, and data files. The test activity is
+a <code>NativeActivity</code> that uses EGL (requires Android 3.2 or higher).</p>
-<p>The application package can be installed with the following command. The name shown is the name of the APK in the Android CTS package. The name depends on the build:</p>
-<pre>
-adb -d install -r com.drawelements.deqp.apk
-</pre>
+<p>The application package can be installed with the following command (the
+name shown is the name of the APK in the Android CTS package and depends on
+the build):</p>
+<pre>$ adb -d install -r com.drawelements.deqp.apk</pre>
<p>To launch the test execution service and to setup port forwarding, use the
following:</p>
<pre>
-adb -d forward tcp:50016 tcp:50016
-adb -d shell am start -n com.drawelements.deqp/.execserver.ServiceStarter
+$ adb -d forward tcp:50016 tcp:50016
+$ adb -d shell am start -n com.drawelements.deqp/.execserver.ServiceStarter
</pre>
<p>Debug prints can be enabled by executing the following before starting the
tests:</p>
+
<pre>
-adb -d shell setprop log.tag.dEQP DEBUG
+$ adb -d shell setprop log.tag.dEQP DEBUG
</pre>
-<h3 id=executing_tests_on_android_without_android_cts>Executing tests on Android without Android CTS</h3>
+<h3 id=executing_tests_on_android_without_android_cts>Executing tests on
+Android without Android CTS</h3>
-<p>If you want to manually start the test execution activity, construct an Android
-intent that targets <code>android.app.NativeActivity</code>. The activities can be found in the <code>com.drawelements.deqp</code> package. The command line must be supplied as an extra string with key <code>"cmdLine"</code> in the Intent.</p>
+<p>To manually start the test execution activity, construct an Android intent
+that targets <code>android.app.NativeActivity</code>. The activities can be
+found in the <code>com.drawelements.deqp</code> package. The command line must
+be supplied as an extra string with key <code>"cmdLine"</code> in the Intent.</p>
-<p>A test log will be written to <code>/sdcard/dEQP-log.qpa</code>. If the test run does not start normally, additional debug information is
-available in the device log.</p>
+<p>A test log is written to <code>/sdcard/dEQP-log.qpa</code>. If the test run
+does not start normally, additional debug information is available in the device
+log.</p>
-<p>The activity can be launched from the command line using the <code>"am"</code> utility. For example, to run <code>dEQP-GLES2.info</code> tests on a platform supporting <code>NativeActivity,</code> the following command can be used:</p>
+<p>You can launch an activity from the command line using the <code>am</code>
+utility. For example, to run <code>dEQP-GLES2.info</code> tests on a platform
+supporting <code>NativeActivity</code>, use the following command:</p>
-<pre>
-adb -d shell am start -n com.drawelements.deqp/android.app.NativeActivity -e cmdLine "'deqp --deqp-case=dEQP-GLES2.info.* --deqp-log-filename=/sdcard/dEQP-Log.qpa'"
+<pre>$ adb -d shell am start -n com.drawelements.deqp/android.app.NativeActivity -e \
+cmdLine "deqp --deqp-case=dEQP-GLES2.info.* --deqp-log-filename=/sdcard/dEQP-Log.qpa"
</pre>
<h3 id=debugging_on_android>Debugging on Android</h3>
@@ -268,46 +279,39 @@
the debug build by running the following two scripts:</p>
<pre>
-python android/scripts/build.py --native-build-type=Debug
-python android/scripts/install.py
+$ python android/scripts/build.py --native-build-type=Debug
+$ python android/scripts/install.py
</pre>
-<p>After the debug build is installed on the device, to launch the tests under GDB
-running on the host, run the following command:</p>
+<p>After the debug build is installed on the device, to launch the tests under
+GDB running on the host, run the following command:</p>
-<pre>
-python android/scripts/debug.py --deqp-commandline="--deqp-log-filename=/sdcard/TestLog.qpa --deqp-case=dEQP-GLES2.functional.*"
+<pre>$ python android/scripts/debug.py \
+--deqp-commandline="--deqp-log-filename=/sdcard/TestLog.qpa --deqp-case=dEQP-GLES2.functional.*"
</pre>
-<p>The deqp command line will depend on test cases to be executed and other
-required parameters. The script will add a default breakpoint into the
-beginning of the deqp execution (<code>tcu::App::App</code>).</p>
+<p>The deqp command line depends on the test cases to be executed and other
+required parameters. The script adds a default breakpoint at the beginning of
+the deqp execution (<code>tcu::App::App</code>).</p>
-<p>The <code>debug.py </code>script accepts multiple command line arguments, e.g. for the following:</p>
+<p>The <code>debug.py</code> script accepts multiple command line arguments for
+actions such as setting breakpoints for debugging, gdbserver connection
+parameters, and paths to additional binaries to debug (use <code>debug.py
+--help</code> for all arguments and explanations). The script also copies some
+default libraries from the target device to get symbol listings.</p>
-<ul>
- <li> Setting the breakpoints for debugging
- <li> gdbserver connection parameters
- <li> Paths to additional binaries to debug
-</ul>
+<p>To step through driver code (such as when GDB needs to know the locations
+of the binaries with full debug information), add more libraries via
+<code>debug.py</code> command line parameters. The script writes out a
+configuration file for GDB starting at line 132 of the script file. You can
+add paths to additional binaries there, but supplying correct command line
+parameters is usually enough.</p>
-<p>Running <code>debug.py --help</code> will list all command line parameters, with explanations.</p>
+<p class="note"><strong>Note:</strong> On Windows, the GDB binary requires
+<code>libpython2.7.dll</code>. Before launching <code>debug.py</code>, add
+<code>&lt;path-to-ndk&gt;/prebuilt/windows/bin</code> to the PATH variable.</p>
-<p>The script copies some default libraries from the target device to get symbol
-listings. </p>
-
-<p>If there is a need to step through driver code, more libraries can be added via <code>debug.py</code> command line parameters. This would be applicable, for
-example, if the GDB needs to know the locations of the binaries with full debug
-information. The <code>debug.py</code> script writes out a configuration file for the GDB starting from line 132 of
-the script file. Additional paths to binaries, etc., can be added there, but
-supplying correct command line parameters should be enough.</p>
-
-<p><strong>Notes:</strong></p>
-
-<ul>
- <li> On Windows, the gdb binary requires <code>libpython2.7.dll</code>. Add
-<code><path to ndk>/prebuilt/windows/bin</code> to the PATH variable before launching <code>debug.py</code>.
- <li> Native code debugging does not work on stock Android 4.3. See the Android bug
-report below for suggested workarounds. The bug has been fixed in Android 4.4;
-see the following: <a href="https://code.google.com/p/android/issues/detail?id=58373">https://code.google.com/p/android/issues/detail?id=58373</a>
-</ul>
+<p class="note"><strong>Note:</strong> Native code debugging does not work on
+stock Android 4.3; for workarounds, refer to
+<a href="https://code.google.com/p/android/issues/detail?id=58373">https://code.google.com/p/android/issues/detail?id=58373</a>.
+Android 4.4 and higher do not contain this bug.</p>
diff --git a/src/devices/graphics/testing.jd b/src/devices/graphics/testing.jd
index 56f4495..32db08b 100644
--- a/src/devices/graphics/testing.jd
+++ b/src/devices/graphics/testing.jd
@@ -26,29 +26,46 @@
</div>
-<p>This page provides an overview of the GPU testing suite
-called deqp (drawElements Quality Program).</p>
+<p>AOSP includes the drawElements Quality Program (deqp) GPU testing suite at
+<a href="https://android.googlesource.com/platform/external/deqp">https://android.googlesource.com/platform/external/deqp</a>.
+</p>
-<p>You can access the code for deqp in AOSP at the following location: <a href="https://android.googlesource.com/platform/external/deqp">https://android.googlesource.com/platform/external/deqp</a></p>
-
-<p>To work with the latest submitted code, use the <code>deqp-dev</code> branch. If you want the code that matches the Android 5.0 CTS release, use the <code>lollipop-release</code> branch. </p>
+<p>To work with the latest submitted code, use the
+<code>deqp-dev</code> branch. For code that matches a specific Android CTS
+release, use the <code><em>release-code-name</em>-release</code> branch (e.g.
+for Android 6.0, use the <code>marshmallow-release</code> branch).</p>
<h2 id=deploying_deqp>Deploying deqp</h2>
-<p>To deploy the deqp test suite to a new environment, please review the deqp information regarding the following: </p>
-
+<p>To deploy the deqp test suite to a new environment, review all pages in this
+section:</p>
<ul>
- <li>Building test programs
- <li>Porting the test framework (optional, depending on the target platform)
- <li>Running the tests
- <li>Automating the tests
- <li>Using special test groups
- <li>Integrating with Android CTS
+<li><a href="{@docRoot}devices/graphics/build-tests.html">Building test
+programs</a>. Discusses build systems such as CMake, targets, and various builds
+(Win32, Android, Linux).</li>
+<li><a href="{@docRoot}devices/graphics/port-tests.html">Porting the test
+framework</a>. Describes adapting base portability libraries, implementing
+test-framework platform-integration interfaces, and porting the
+execution service. Porting is optional (depending on the target platform).</li>
+<li><a href="{@docRoot}devices/graphics/run-tests.html">Running the tests</a>.
+Provides instructions for running deqp tests in Linux and Windows environments,
+command line arguments, and the Android package.</li>
+<li><a href="{@docRoot}devices/graphics/automate-tests.html">Automating the
+tests</a>. Covers test automation options, command line tools, CSV and XML
+exporting, and conversion to JUnit.</li>
+<li><a href="{@docRoot}devices/graphics/test-groups.html">Using special test
+groups</a>. Provides advice for running memory allocation and long-running
+stress tests.</li>
+<li><a href="{@docRoot}devices/graphics/cts-integration.html">Integrating with
+Android CTS</a>. Describes the <code>mustpass</code> list of tests, duplicating
+runs, and mapping CTS results.</li>
</ul>
<h2 id=source_layout>Source layout</h2>
-<p>The source code layout for the deqp test modules and supporting libraries is shown in the table below. The listing is not complete but highlights the most important directories.</p>
+<p>The source code layout for the deqp test modules and supporting libraries is
+shown in the table below (the listing is not comprehensive but highlights the
+most important directories).</p>
<table>
<tr>
@@ -93,6 +110,12 @@
<td><p>GLES3.1 module</p>
</td>
</tr>
+  <tr>
+    <td><code>modules/gles32</code></td>
+<td><p>GLES3.2 module</p>
+</td>
+  </tr>
<tr>
<td><code>targets</code></td>
<td><p>Target-specific build configuration files</p>
@@ -153,6 +176,8 @@
</tr>
</table>
-<h3 id=open-source_components>Open Source components</h3>
+<h3 id=open-source_components>Open source components</h3>
-<p>The deqp uses <code>libpng</code> and <code>zlib</code>. They can be fetched from the web with the script <code>external/fetch_sources.py </code>or with git pulls from git repositories <code>platform/external/[libpng,zlib]</code>.</p>
+<p>deqp uses <code>libpng</code> and <code>zlib</code>, which can be fetched
+using the script <code>external/fetch_sources.py</code> or via git from
+<code>platform/external/[libpng,zlib]</code>.</p>
diff --git a/src/devices/images/ape_fwk_hal_vehicle.png b/src/devices/images/ape_fwk_hal_vehicle.png
new file mode 100644
index 0000000..500934d
--- /dev/null
+++ b/src/devices/images/ape_fwk_hal_vehicle.png
Binary files differ
diff --git a/src/devices/images/vehicle_hal_arch.png b/src/devices/images/vehicle_hal_arch.png
new file mode 100644
index 0000000..8c8a6ab
--- /dev/null
+++ b/src/devices/images/vehicle_hal_arch.png
Binary files differ
diff --git a/src/devices/images/vehicle_hvac_get.png b/src/devices/images/vehicle_hvac_get.png
new file mode 100644
index 0000000..1006db6
--- /dev/null
+++ b/src/devices/images/vehicle_hvac_get.png
Binary files differ
diff --git a/src/devices/images/vehicle_hvac_set.png b/src/devices/images/vehicle_hvac_set.png
new file mode 100644
index 0000000..fcf4683
--- /dev/null
+++ b/src/devices/images/vehicle_hvac_set.png
Binary files differ
diff --git a/src/devices/media/framework-hardening.jd b/src/devices/media/framework-hardening.jd
new file mode 100644
index 0000000..bcf4296
--- /dev/null
+++ b/src/devices/media/framework-hardening.jd
@@ -0,0 +1,213 @@
+page.title=Media Framework Hardening
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>To improve device security, Android 7.0 breaks up the monolithic
+<code>mediaserver</code> process into multiple processes with permissions and
+capabilities restricted to only those required by each process. These changes
+mitigate media framework security vulnerabilities by:</p>
+<ul>
+<li>Splitting AV pipeline components into app-specific sandboxed processes.</li>
+<li>Enabling updatable media components (extractors, codecs, etc.).</li>
+</ul>
+
+<p>These changes also improve security for end users by significantly reducing
+the severity of most media-related security vulnerabilities, keeping end user
+devices and data safe.</p>
+
+<p>OEMs and SoC vendors need to update their HAL and framework changes to make
+them compatible with the new architecture. Specifically, because vendor-provided
+Android code often assumes everything runs in the same process, vendors must
+update their code to pass around native handles (<code>native_handle</code>)
+that have meaning across processes. For a reference implementation of changes
+related to media hardening, refer to <code>frameworks/av</code> and
+<code>frameworks/native</code>.</p>
+
+<h2 id=arch_changes>Architectural changes</h2>
+<p>Previous versions of Android used a single, monolithic
+<code>mediaserver</code> process with many permissions (camera access,
+audio access, video driver access, file access, network access, etc.). Android
+7.0 splits the <code>mediaserver</code> process into several new processes that
+each require a much smaller set of permissions:</p>
+
+<p><img src="images/ape_media_split.png" alt="mediaserver hardening"></p>
+<p class="img-caption"><strong>Figure 1.</strong> Architecture changes for
+mediaserver hardening</p>
+
+<p>This new architecture ensures that even if a process is compromised,
+malicious code does not have access to the full set of permissions previously
+held by <code>mediaserver</code>. Processes are restricted by SELinux and
+seccomp policies.</p>
+
+<p class=note><strong>Note:</strong> Because of vendor dependencies, some codecs
+still run in the <code>mediaserver</code> and consequently grant
+<code>mediaserver</code> more permissions than necessary. Specifically, Widevine
+Classic continues to run in the <code>mediaserver</code> for Android 7.0.</p>
+
+<h3 id=mediaserver-changes>MediaServer changes</h3>
+<p>In Android 7.0, the <code>mediaserver</code> process remains responsible
+for driving playback and recording, such as passing and synchronizing buffers
+between components and processes. Processes communicate through the standard
+Binder mechanism.</p>
+<p>In a standard local file playback session, the application passes a file
+descriptor (FD) to <code>mediaserver</code> (usually via the MediaPlayer Java
+API), and the <code>mediaserver</code>:</p>
+<ol>
+<li>Wraps the FD into a Binder DataSource object that is passed to the extractor
+process, which uses it to read from the file using Binder IPC. (The
+mediaextractor doesn't get the FD but instead makes Binder calls back to the
+<code>mediaserver</code> to get the data.)</li>
+<li>Examines the file, creates the appropriate extractor for the file type
+(e.g. MP3Extractor or MPEG4Extractor), and returns a Binder interface for the
+extractor to the <code>mediaserver</code> process.</li>
+<li>Makes Binder IPC calls to the extractor to determine the type of data in the
+file (e.g. MP3 or H.264 data).</li>
+<li>Calls into the <code>mediacodec</code> process to create codecs of the
+required type; receives Binder interfaces for these codecs.</li>
+<li>Makes repeated Binder IPC calls to the extractor to read encoded samples,
+uses the Binder IPC to send encoded data to the <code>mediacodec</code> process
+for decoding, and receives decoded data.</li>
+</ol>
+<p>In some use cases, no codec is involved (such as offloaded playback, where
+encoded data is sent directly to the output device), or the codec may render
+the decoded data directly instead of returning a buffer of decoded data (video
+playback).</p>
+
+<h3 id=mediacodecservice_changes>MediaCodecService changes</h3>
+<p>The codec service is where encoders and decoders live. Due to vendor
+dependencies, not all codecs live in the codec process yet. In Android 7.0:</p>
+<ul>
+<li>Non-secure decoders and software encoders live in the codec process.</li>
+<li>Secure decoders and hardware encoders live in the <code>mediaserver</code>
+(unchanged).</li>
+</ul>
+
+<p>An application (or mediaserver) calls the codec process to create a codec of
+the required type, then calls that codec to pass in encoded data and retrieve
+decoded data (for decoding) or to pass in decoded data and retrieve encoded data
+(for encoding). Data transfer to and from codecs uses shared memory already, so
+that process is unchanged.</p>
+
+<h3 id=mediadrmserver_changes>MediaDrmServer changes</h3>
+<p>The DRM server is used when playing DRM-protected content, such as movies in
+Google Play Movies. It handles decrypting the encrypted data in a secure way,
+and as such has access to certificate and key storage and other sensitive
+components. Due to vendor dependencies, the DRM process is not used in all cases
+yet.</p>
+
+<h3 id=audioserver_changes>AudioServer changes</h3>
+<p>The AudioServer process hosts audio-related components such as audio input
+and output, the policymanager service that determines audio routing, and the
+FM radio service. For details on audio changes and implementation guidance, see
+<a href="{@docRoot}devices/audio/implement.html">Implementing Audio</a>.</p>
+
+<h3 id=cameraserver_changes>CameraServer changes</h3>
+<p>The CameraServer controls the camera and is used when recording video to
+obtain video frames from the camera and then pass them to
+<code>mediaserver</code> for further handling. For details on changes and
+implementation guidance for CameraServer changes, refer to
+<a href="{@docRoot}devices/camera/versioning.html#hardening">Camera Framework
+Hardening</a>.</p>
+
+<h3 id=extractor_service_changes>ExtractorService changes</h3>
+<p>The extractor service hosts the <em>extractors</em>, components that parse
+the various file formats supported by the media framework. The extractor service
+is the least privileged of all the services—it can't read FDs so instead
+it makes calls onto a Binder interface (provided to it by the
+<code>mediaserver</code> for each playback session) to access files.</p>
+<p>An application (or <code>mediaserver</code>) makes a call to the extractor
+process to obtain an <code>IMediaExtractor</code>, calls that
+<code>IMediaExtractor</code> to obtain <code>IMediaSources</code> for the
+tracks contained in the file, and then calls <code>IMediaSources</code> to
+read data from them.</p>
+<p>To transfer the data between processes, the application (or
+<code>mediaserver</code>) includes the data in the reply-Parcel as part of the
+Binder transaction or uses shared memory:</p>
+
+<ul>
+<li>Using <strong>shared memory</strong> requires an extra Binder call to
+release the shared memory but is faster and uses less power for large buffers.
+</li>
+<li>Using <strong>in-Parcel</strong> requires extra copying but is faster and
+uses less power for buffers smaller than 64KB.</li>
+</ul>
+
+<h2 id=implementation>Implementation</h2>
+<p>To support the move of <code>MediaDrm</code> and <code>MediaCrypto</code>
+components into the new <code>mediadrmserver</code> process, vendors must change
+the allocation method for secure buffers to allow buffers to be shared between
+processes.</p>
+<p>In previous Android releases, secure buffers are allocated in
+<code>mediaserver</code> by <code>OMX::allocateBuffer</code> and used during
+decryption in the same process, as shown below:</p>
+
+<p><img src="images/ape_media_buffer_alloc_pren.png"></p>
+<p class="img-caption"><strong>Figure 2.</strong> Android 6.0 and lower buffer
+allocation in mediaserver.</p>
+
+<p>In Android 7.0, the buffer allocation process has changed to a new mechanism
+that provides flexibility while minimizing the impact on existing
+implementations. With <code>MediaDrm</code> and <code>MediaCrypto</code> stacks
+in the new <code>mediadrmserver</code> process, buffers are allocated
+differently and vendors must update the secure buffer handles so they can be
+transported across binder when <code>MediaCodec</code> invokes a decrypt
+operation on <code>MediaCrypto</code>.</p>
+
+<p><img src="images/ape_media_buffer_alloc_n.png"></p>
+<p class="img-caption"><strong>Figure 3.</strong> Android 7.0 and higher buffer
+allocation in mediaserver.</p>
+
+<h3 id=native_handles>Using native handles</h3>
+<p>The <code>OMX::allocateBuffer</code> method must return a pointer to a
+<code>native_handle</code> struct, which contains file descriptors (FDs) and
+additional integer data. A <code>native_handle</code> has all of the advantages
+of using FDs, including existing binder support for
+serialization/deserialization, while allowing more flexibility for vendors who
+don't currently use FDs.</p>
+<p>Use <code>native_handle_create()</code> to allocate the native handle.
+Framework code takes ownership of the allocated <code>native_handle</code>
+struct and is responsible for releasing resources in both the process where
+the <code>native_handle</code> is originally allocated and in the process where
+it is deserialized. The framework releases native handles with
+<code>native_handle_close()</code> followed by
+<code>native_handle_delete()</code> and serializes/deserializes the
+<code>native_handle</code> using
+<code>Parcel::writeNativeHandle()/readNativeHandle()</code>.
+</p>
+<p>SoC vendors who use FDs to represent secure buffers can populate the FD
+field in the <code>native_handle</code>. Vendors who don't use FDs can
+represent secure buffers using additional integer fields in the
+<code>native_handle</code>.</p>
+
+<h3 id=decrypt_location>Setting decryption location</h3>
+<p>Vendors must update the OEMCrypto decrypt method that operates on the
+<code>native_handle</code> to perform any vendor-specific operations necessary
+to make the <code>native_handle</code> usable in the new process space (changes
+typically include updates to OEMCrypto libraries).</p>
+<p>As <code>allocateBuffer</code> is a standard OMX operation, Android 7.0
+includes a new OMX extension
+(<code>OMX.google.android.index.allocateNativeHandle</code>) to query for this
+support and an <code>OMX_SetParameter</code> call that notifies the OMX
+implementation it should use native handles.</p>
diff --git a/src/devices/media/images/ape_media_buffer_alloc_n.png b/src/devices/media/images/ape_media_buffer_alloc_n.png
new file mode 100644
index 0000000..54f93a7
--- /dev/null
+++ b/src/devices/media/images/ape_media_buffer_alloc_n.png
Binary files differ
diff --git a/src/devices/media/images/ape_media_buffer_alloc_pren.png b/src/devices/media/images/ape_media_buffer_alloc_pren.png
new file mode 100644
index 0000000..e0e6e75
--- /dev/null
+++ b/src/devices/media/images/ape_media_buffer_alloc_pren.png
Binary files differ
diff --git a/src/devices/media/images/ape_media_split.png b/src/devices/media/images/ape_media_split.png
new file mode 100644
index 0000000..85b4a5d
--- /dev/null
+++ b/src/devices/media/images/ape_media_split.png
Binary files differ
diff --git a/src/devices/media/index.jd b/src/devices/media/index.jd
index 6d2359d..b7d2a8d 100644
--- a/src/devices/media/index.jd
+++ b/src/devices/media/index.jd
@@ -24,101 +24,107 @@
</div>
</div>
-<img style="float: right; margin: 0px 15px 15px 15px;" src="images/ape_fwk_hal_media.png" alt="Android Media HAL icon"/>
+<img style="float: right; margin: 0px 15px 15px 15px;"
+src="images/ape_fwk_hal_media.png" alt="Android Media HAL icon"/>
-<p>
- Android provides a media playback engine at the native level called
-Stagefright that comes built-in with software-based codecs for several popular
-media formats. Stagefright features for audio and video playback include
-integration with OpenMAX codecs, session management, time-synchronized
-rendering, transport control, and DRM.</p>
+<p>Android includes Stagefright, a media playback engine at the native level
+that has built-in software-based codecs for popular media formats.</p>
-<p class="note"><strong>Note:</strong> The Stagefright media playback engine
-had been updated through our <a
-href="{@docRoot}security/bulletin/index.html">monthly security update</a>
-process.</p>
+<p>Stagefright audio and video playback features include integration with
+OpenMAX codecs, session management, time-synchronized rendering, transport
+control, and DRM.</p>
- <p>In addition, Stagefright supports integration with custom hardware codecs
-that you provide. There actually isn't a HAL to implement for custom codecs,
-but to provide a hardware path to encode and decode media, you must implement
-your hardware-based codec as an OpenMax IL (Integration Layer) component.</p>
+<p>Stagefright also supports integration with custom hardware codecs that you
+provide. To provide a hardware path to encode and decode media, you must
+implement a hardware-based codec as an OpenMax IL (Integration Layer)
+component.</p>
+
+<p class="note"><strong>Note:</strong> Stagefright updates can occur through the
+Android <a href="{@docRoot}security/bulletin/index.html">monthly security
+update</a> process and as part of an Android OS release.</p>
<h2 id="architecture">Architecture</h2>
-<p>The following diagram shows how media applications interact with the Android native multimedia framework.</p>
- <img src="images/ape_fwk_media.png" alt="Android media architecture" id="figure1" />
-<p class="img-caption">
- <strong>Figure 1.</strong> Media architecture
-</p>
+<p>Media applications interact with the Android native multimedia framework
+according to the following architecture.</p>
+<img src="images/ape_fwk_media.png" alt="Android media architecture"
+id="figure1" />
+<p class="img-caption"><strong>Figure 1.</strong> Media architecture</p>
+
<dl>
<dt>Application Framework</dt>
- <dd>At the application framework level is the app's code, which utilizes the
- <a href="http://developer.android.com/reference/android/media/package-summary.html">android.media</a>
- APIs to interact with the multimedia hardware.</dd>
- <dt>Binder IPC</dt>
- <dd>The Binder IPC proxies facilitate communication over process boundaries. They are located in
- the <code>frameworks/av/media/libmedia</code> directory and begin with the letter "I".</dd>
- <dt>Native Multimedia Framework</dt>
- <dd>At the native level, Android provides a multimedia framework that utilizes the Stagefright engine for
- audio and video recording and playback. Stagefright comes with a default list of supported software codecs
- and you can implement your own hardware codec by using the OpenMax integration layer standard. For more
- implementation details, see the various MediaPlayer and Stagefright components located in
- <code>frameworks/av/media</code>.
- </dd>
- <dt>OpenMAX Integration Layer (IL)</dt>
- <dd>The OpenMAX IL provides a standardized way for Stagefright to recognize and use custom hardware-based
- multimedia codecs called components. You must provide an OpenMAX plugin in the form of a shared library
- named <code>libstagefrighthw.so</code>. This plugin links your custom codec components to Stagefright.
- Your custom codecs must be implemented according to the OpenMAX IL component standard.
- </dd>
+<dd>At the application framework level is application code that utilizes
+<a href="http://developer.android.com/reference/android/media/package-summary.html">android.media</a>
+APIs to interact with the multimedia hardware.</dd>
+
+<dt>Binder IPC</dt>
+<dd>The Binder IPC proxies facilitate communication over process boundaries.
+They are located in the <code>frameworks/av/media/libmedia</code> directory and
+begin with the letter "I".</dd>
+
+<dt>Native Multimedia Framework</dt>
+<dd>At the native level, Android provides a multimedia framework that utilizes
+the Stagefright engine for audio and video recording and playback. Stagefright
+comes with a default list of supported software codecs and you can implement
+your own hardware codec by using the OpenMax integration layer standard. For
+more implementation details, see the MediaPlayer and Stagefright components
+located in <code>frameworks/av/media</code>.</dd>
+
+<dt>OpenMAX Integration Layer (IL)</dt>
+<dd>The OpenMAX IL provides a standardized way for Stagefright to recognize and
+use custom hardware-based multimedia codecs called components. You must provide
+an OpenMAX plugin in the form of a shared library named
+<code>libstagefrighthw.so</code>. This plugin links Stagefright with your custom
+codec components, which must be implemented according to the OpenMAX IL
+component standard.</dd>
</dl>
+<h2 id="codecs">Implementing custom codecs</h2>
+<p>Stagefright comes with built-in software codecs for common media formats, but
+you can also add your own custom hardware codecs as OpenMAX components. To do
+this, you must create the OMX components and an OMX plugin that hooks together
+your custom codecs with the Stagefright framework. For example components, see
+the <code>hardware/ti/omap4xxx/domx/</code> directory; for an example plugin
+for the Galaxy Nexus, see <code>hardware/ti/omap4xx/libstagefrighthw</code>.</p>
-<h2 id="codecs">
-Implementing Custom Codecs
-</h2>
-<p>Stagefright comes with built-in software codecs for common media formats, but you can also add your
- own custom hardware codecs as OpenMAX components. To do this, you need to create OMX components and also an
- OMX plugin that hooks together your custom codecs with the Stagefright framework. For an example, see
- the <code>hardware/ti/omap4xxx/domx/</code> for example components and <code>hardware/ti/omap4xx/libstagefrighthw</code>
- for an example plugin for the Galaxy Nexus.
-</p>
- <p>To add your own codecs:</p>
+<p>To add your own codecs:</p>
<ol>
-<li>Create your components according to the OpenMAX IL component standard. The component interface is located in the
- <code>frameworks/native/include/media/OpenMAX/OMX_Component.h</code> file. To learn more about the
- OpenMAX IL specification, see the <a href="http://www.khronos.org/openmax/">OpenMAX website</a>.</li>
-<li>Create a OpenMAX plugin that links your components with the Stagefright service.
- See the <code>frameworks/native/include/media/hardware/OMXPluginBase.h</code> and <code>HardwareAPI.h</code> header
- files for the interfaces to create the plugin.
-</li>
-<li>Build your plugin as a shared library with the name <code>libstagefrighthw.so</code> in your product Makefile. For example:
-<pre>LOCAL_MODULE := libstagefrighthw</pre>
-
-<p>In your device's Makefile, ensure that you declare the module as a product package:</p>
+<li>Create your components according to the OpenMAX IL component standard. The
+component interface is located in the
+<code>frameworks/native/include/media/OpenMAX/OMX_Component.h</code> file. To
+learn more about the OpenMAX IL specification, refer to the
+<a href="http://www.khronos.org/openmax/">OpenMAX website</a>.</li>
+<li>Create an OpenMAX plugin that links your components with the Stagefright
+service. For the interfaces to create the plugin, see
+<code>frameworks/native/include/media/hardware/OMXPluginBase.h</code> and
+<code>HardwareAPI.h</code> header files.</li>
+<li>Build your plugin as a shared library with the name
+<code>libstagefrighthw.so</code> in your product Makefile. For example:
+<pre>LOCAL_MODULE := libstagefrighthw</pre>
+<p>In your device's Makefile, ensure you declare the module as a product
+package:</p>
<pre>
PRODUCT_PACKAGES += \
libstagefrighthw \
...
-</pre>
-</li>
-</ol>
+</pre></li></ol>
-<h2 id="expose">Exposing Codecs to the Framework</h2>
-<p>The Stagefright service parses the <code>system/etc/media_codecs.xml</code> and <code>system/etc/media_profiles.xml</code>
- to expose the supported codecs and profiles on the device to app developers via the <code>android.media.MediaCodecList</code> and
- <code>android.media.CamcorderProfile</code> classes. You need to create both files in the
- <code>device/<company_name>/<device_name>/</code> directory
- and copy this over to the system image's <code>system/etc</code> directory in your device's Makefile.
- For example:</p>
-
- <pre>
+<h2 id="expose">Exposing codecs to the framework</h2>
+<p>The Stagefright service parses the <code>system/etc/media_codecs.xml</code>
+and <code>system/etc/media_profiles.xml</code> to expose the supported codecs
+and profiles on the device to app developers via the
+<code>android.media.MediaCodecList</code> and
+<code>android.media.CamcorderProfile</code> classes. You must create both files
+in the <code>device/&lt;company&gt;/&lt;device&gt;/</code> directory
+and copy them to the system image's <code>system/etc</code> directory in
+your device's Makefile. For example:</p>
+<pre>
PRODUCT_COPY_FILES += \
device/samsung/tuna/media_profiles.xml:system/etc/media_profiles.xml \
device/samsung/tuna/media_codecs.xml:system/etc/media_codecs.xml \
</pre>
-<p>See the <code>device/samsung/tuna/media_codecs.xml</code> and
- <code>device/samsung/tuna/media_profiles.xml</code> file for complete examples.</p>
+<p>For complete examples, see <code>device/samsung/tuna/media_codecs.xml</code>
+and <code>device/samsung/tuna/media_profiles.xml</code>.</p>
-<p class="note"><strong>Note:</strong> The <code><Quirk></code> element for media codecs is no longer supported
- by Android starting in Jelly Bean.</p>
+<p class="note"><strong>Note:</strong> As of Android 4.1, the
+<code><Quirk></code> element for media codecs is no longer supported.</p>
diff --git a/src/devices/sensors/images/axis_auto.png b/src/devices/sensors/images/axis_auto.png
new file mode 100644
index 0000000..dd6b187
--- /dev/null
+++ b/src/devices/sensors/images/axis_auto.png
Binary files differ
diff --git a/src/devices/sensors/sensor-types.jd b/src/devices/sensors/sensor-types.jd
index 3007b4a..3697aba 100644
--- a/src/devices/sensors/sensor-types.jd
+++ b/src/devices/sensors/sensor-types.jd
@@ -24,71 +24,93 @@
</div>
</div>
-<h2 id="sensor_axis_definition">Sensor axis definition</h2>
-<p>Sensor event values from many sensors are expressed in a specific frame that is
- static relative to the phone. This API is relative only to the NATURAL
- orientation of the screen. In other words, the axes are not swapped when the
- device's screen orientation changes.</p>
+<p>This section describes sensor axes, base sensors, and composite sensors
+(activity, attitude, uncalibrated, and interaction).</p>
-<div class="figure" style="width:269px">
- <img src="http://developer.android.com/images/axis_device.png"
-alt="Coordinate system of sensor API" height="225" />
- <p class="img-caption">
- <strong>Figure 1.</strong> Coordinate system (relative to a device) that's
- used by the Sensor API.
- </p>
-</div>
+<h2 id="sensor_axis_definition">Sensor axes</h2>
+<p>Sensor event values from many sensors are expressed in a specific frame that
+is static relative to the device.</p>
+
+<h3 id=phone_axes>Mobile device axes</h3>
+<p>The Sensor API is relative only to the natural orientation of the screen
+(axes are not swapped when the device's screen orientation changes).</p>
+
+<img src="http://developer.android.com/images/axis_device.png" alt="Coordinate
+system of sensor API for mobile devices"/>
+<p class="img-caption"><strong>Figure 1.</strong> Coordinate system (relative to
+a mobile device) used by the Sensor API.</p>
+
+<h3 id=auto_axes>Automotive axes</h3>
+<p>In Android Automotive implementations, axes are defined with respect to the
+vehicle body frame:</p>
+
+<img src="images/axis_auto.png" alt="Coordinate system of sensor API for
+automotive devices"/>
+<p class="img-caption"><strong>Figure 2.</strong> Coordinate system (relative to
+an automotive device) used by the Sensor API.</p>
+
+<ul>
+<li>X increases towards the right of the vehicle</li>
+<li>Y increases towards the nose of the body frame</li>
+<li>Z increases towards the roof of the body frame</li>
+</ul>
+
+<p>When looking from the positive direction of an axis, positive rotations are
+counterclockwise. Thus, when a vehicle is making a left turn, the z-axis
+gyroscope rate of turn is expected to be a positive value.</p>
<h2 id="base_sensors">Base sensors</h2>
-<p>Some sensor types are named directly after the physical sensors they represent.
- Sensors with such types are called “base” sensors, referring to the fact they
- relay data from a single physical sensor, contrary to “composite” sensors, for
- which the data is generated out of other sensors.</p>
-<p>Examples of base sensor types:</p>
+<p>Base sensor types are named after the physical sensors they represent. These
+sensors relay data from a single physical sensor (as opposed to composite
+sensors that generate data out of other sensors). Examples of base sensor types
+include:</p>
<ul>
<li><code>SENSOR_TYPE_ACCELEROMETER</code></li>
<li><code>SENSOR_TYPE_GYROSCOPE</code></li>
<li><code>SENSOR_TYPE_MAGNETOMETER</code></li>
</ul>
- <p> See the list of Android sensor types below for more details on each
-<h3 id="base_sensors_=_not_equal_to_physical_sensors">Base sensors != (not equal to) physical sensors</h3>
-<p>Base sensors are not to be confused with their underlying physical sensor. The
- data from a base sensor is not the raw output of the physical sensor:
- corrections are be applied, such as bias compensation and temperature
- compensation.</p>
-<p>The characteristics of a base sensor might be different from the
- characteristics of its underlying physical sensor.</p>
+
+<p class="note"><strong>Note:</strong> For details on each Android sensor type,
+review the following sections.</p>
+
+<p>However, base sensors are not equal to and should not be confused with their
+underlying physical sensor. The data from a base sensor is <strong>not</strong>
+the raw output of the physical sensor because corrections (such as bias
+compensation and temperature compensation) are applied.</p>
+
+<p>For example, the characteristics of a base sensor might be different from the
+characteristics of its underlying physical sensor in the following use cases:</p>
<ul>
- <li> For example, a gyroscope chip might be rated to have a bias range of 1 deg/sec.
- <ul>
- <li> After factory calibration, temperature compensation and bias compensation are
- applied, the actual bias of the Android sensor will be reduced, may be to a
- point where the bias is guaranteed to be below 0.01deg/sec. </li>
- <li> In this situation, we say that the Android sensor has a bias below 0.01
- deg/sec, even though the data sheet of the underlying sensor said 1 deg/sec. </li>
- </ul>
- </li>
- <li> As another example, a barometer might have a power consumption of 100uW.
- <ul>
- <li> Because the generated data needs to be transported from the chip to the SoC,
- the actual power cost to gather data from the barometer Android sensor might be
- much higher, for example 1000uW. </li>
- <li> In this situation, we say that the Android sensor has a power consumption of
- 1000uW, even though the power consumption measured at the barometer chip leads
- is 100uW. </li>
- </ul>
- </li>
- <li> As a third example, a magnetometer might consume 100uW when calibrated, but
- consume more when calibrating.
- <ul>
- <li> Its calibration routine might require activating the gyroscope, consuming
- 5000uW, and running some algorithm, costing another 900uW. </li>
- <li> In this situation, we say that the maximum power consumption of the
- (magnetometer) Android sensor is 6000uW. </li>
- <li> In this case, the average power consumption is the more useful measure, and it
- is what is reported in the sensor static characteristics through the HAL. </li>
- </ul>
- </li>
+<li>A gyroscope chip rated to have a bias range of 1 deg/sec.
+ <ul>
  <li>After factory calibration, temperature compensation, and bias
  compensation are applied, the actual bias of the Android sensor is reduced,
  possibly to a point where the bias is guaranteed to be below 0.01 deg/sec.</li>
+ <li>In this situation, we say that the Android sensor has a bias below 0.01
+ deg/sec, even though the data sheet of the underlying sensor said 1 deg/sec.</li>
+ </ul>
+</li>
+<li>A barometer with a power consumption of 100uW.
+ <ul>
+ <li>Because the generated data needs to be transported from the chip to the SoC,
+ the actual power cost to gather data from the barometer Android sensor might be
+ much higher, for example 1000uW.</li>
+ <li>In this situation, we say that the Android sensor has a power consumption of
+ 1000uW, even though the power consumption measured at the barometer chip leads
+ is 100uW.</li>
+ </ul>
+</li>
+<li>A magnetometer that consumes 100uW when calibrated, but consumes more when
+calibrating.
+ <ul>
+ <li>Its calibration routine might require activating the gyroscope, consuming
+ 5000uW, and running some algorithm, costing another 900uW.</li>
  <li>In this situation, we say that the maximum power consumption of the
+ (magnetometer) Android sensor is 6000uW.</li>
+ <li>In this case, the average power consumption is the more useful measure, and it
+ is what is reported in the sensor static characteristics through the HAL.</li>
+ </ul>
+</li>
</ul>
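The magnetometer example above is simple arithmetic; a minimal sketch in plain Java follows. The figures (100, 5000, and 900 uW) are the hypothetical values from the text, and the class and method names are illustrative only, not part of any framework API.

```java
// Power-budget arithmetic from the magnetometer example above.
// The figures (100, 5000, and 900 uW) are the hypothetical values used in
// the text; the class and method names are illustrative only.
public class SensorPowerBudget {
    // Maximum power draw: all contributors active during calibration.
    static int maxPowerUw(int magnetometerUw, int gyroAssistUw, int algorithmUw) {
        return magnetometerUw + gyroAssistUw + algorithmUw;
    }

    public static void main(String[] args) {
        // 100 uW (magnetometer) + 5000 uW (gyroscope) + 900 uW (algorithm)
        System.out.println(maxPowerUw(100, 5000, 900)); // prints 6000
    }
}
```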
<h3 id="accelerometer">Accelerometer</h3>
<p>Reporting-mode: <em><a href="report-modes.html#continuous">Continuous</a></em></p>
@@ -227,45 +249,44 @@
<p><code>getDefaultSensor(SENSOR_TYPE_RELATIVE_HUMIDITY)</code> <em>returns a non-wake-up sensor</em></p>
<p>A relative humidity sensor measures relative ambient air humidity and returns a
value in percent.</p>
+
<h2 id="composite_sensor_types">Composite sensor types</h2>
-<p>Any sensor that is not a base sensor is called a composite sensor. Composite
- sensors generate their data by processing and/or fusing data from one or
- several physical sensors.</p>
-<p>Examples of composite sensor types:</p>
+<p>A composite sensor generates data by processing and/or fusing data from one
+or several physical sensors. (Any sensor that is not a base sensor is called a
+composite sensor.) Examples of composite sensors include:</p>
<ul>
- <li><a href="#step_detector">Step detector</a> and <a href="#significant_motion">Significant motion</a>, which are usually based on an accelerometer, but could be based on other
- sensors as well, if the power consumption and accuracy was acceptable. </li>
- <li><a href="#game_rotation_vector">Game rotation vector</a>, based on an
- accelerometer and a gyroscope. </li>
- <li><a href="#gyroscope_uncalibrated">Uncalibrated gyroscope</a>, which is
- similar to the gyroscope base sensor, but with
- the bias calibration being reported separately instead of being corrected in
- the measurement. </li>
+<li><a href="#step_detector">Step detector</a> and
+<a href="#significant_motion">Significant motion</a>, which are usually based on
+an accelerometer, but could be based on other sensors as well, if the power
+consumption and accuracy are acceptable.</li>
+<li><a href="#game_rotation_vector">Game rotation vector</a>, based on an
+accelerometer and a gyroscope.</li>
+<li><a href="#gyroscope_uncalibrated">Uncalibrated gyroscope</a>, which is
+similar to the gyroscope base sensor, but with the bias calibration being
+reported separately instead of being corrected in the measurement.</li>
</ul>
-<p>Just like base sensors, the characteristics of the composite sensors come from
- the characteristics of their final data.</p>
-<ul>
- <li> For example, the power consumption of a game rotation vector is probably equal
- to the sum of the power consumptions of: the accelerometer chip, the gyroscope
- chip, the chip processing the data, and the buses transporting the data. </li>
- <li> As another example, the drift of a game rotation vector will depend as much on
- the quality of the calibration algorithm as on the physical sensor
- characteristics. </li>
-</ul>
-<h2 id="composite_sensor_type_summary">Composite sensor type summary</h2>
-<p>The following table lists the composite sensor types. Each composite sensor
- relies on data from one or several physical sensors. Choosing other underlying
- physical sensors to approximate results should be avoided as they will provide
- a poor user experience.</p>
-<p>When there is no gyroscope on the device, and only when there is no gyroscope,
- you may implement the rotation vector, linear acceleration and gravity sensors
- without using the gyroscope.</p>
+<p>As with base sensors, the characteristics of the composite sensors come from
+the characteristics of their final data. For example, the power consumption of a
+game rotation vector is probably equal to the sum of the power consumptions of
+the accelerometer chip, the gyroscope chip, the chip processing the data, and
+the buses transporting the data. As another example, the drift of a game
+rotation vector depends as much on the quality of the calibration algorithm as
+on the physical sensor characteristics.</p>
+
+<p>The following table lists available composite sensor types. Each composite
+sensor relies on data from one or several physical sensors. Avoid choosing other
+underlying physical sensors to approximate results as they provide a poor user
+experience.</p>
+<p class="note"><strong>Note:</strong> When there is no gyroscope on the device
+(and only when there is no gyroscope), you may implement the rotation vector,
+linear acceleration, and gravity sensors without using the gyroscope.</p>
+
<table>
<tr>
- <th><p>Sensor type</p></th>
- <th><p>Category</p></th>
- <th><p>Underlying physical sensors</p></th>
- <th><p>Reporting mode</p></th>
+ <th width=34%>Sensor type</th>
+ <th width=10%>Category</th>
+ <th width=34%>Underlying physical sensors</th>
+ <th width=19%>Reporting mode</th>
</tr>
<tr>
<td><p><a href="#game_rotation_vector">Game rotation vector</a></p></td>
@@ -627,7 +648,7 @@
<img src="images/axis_positive_roll.png" alt="Depiction of orientation
relative to a device" height="253" />
<p class="img-caption">
- <strong>Figure 2.</strong> Orientation relative to a device.
+ <strong>Figure 3.</strong> Orientation relative to a device.
</p>
</div>
<p>This definition is different from yaw, pitch and roll used in aviation where
diff --git a/src/devices/tech/admin/enterprise-telephony.jd b/src/devices/tech/admin/enterprise-telephony.jd
new file mode 100644
index 0000000..8a81e76
--- /dev/null
+++ b/src/devices/tech/admin/enterprise-telephony.jd
@@ -0,0 +1,124 @@
+page.title=Implementing Enterprise Telephony
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>
+This document outlines the changes made to the telephony-related parts of the
+Android framework in the 7.0 release to support enterprise use cases. It is
+targeted at manufacturers and focuses entirely on framework-related telephony
+changes, including the changes that OEMs must make to their preloaded
+applications that handle telephony-related functions.
+</p>
+
+<p>
+Android 7.0 introduces several new features to support enterprise telephony use
+cases, in particular:
+</p>
+
+<ul>
+<li>Cross profile contact search - Allows applications in the personal profile
+to search for contacts that are supplied by the managed profile contacts
+provider, which can be backed by any datastore, for example local to the device
+or within an enterprise directory</li>
+<li>Cross profile contact badging - Allows work contacts to be clearly
+distinguished from personal contacts</li>
+<li>Making ConnectionService managed profile aware - Allows applications within
+the managed profile to offer telephony features, such as a separate work dialer
+and work ConnectionService</li>
+</ul>
+
+<h2 id="examples-and-source">Examples and source</h2>
+
+<p>
+The Android Open Source Project (AOSP) implementations of Dialer, Contacts, and
+Messaging apps have integrated the cross profile contact search and badging
+capability.
+</p>
+
+<p>
+Examples:
+</p><ul>
+<li><strong>Adding badge to work contacts</strong>: see
+<code>packages/apps/ContactsCommon</code> <em>f3eb5a207bfe0ff3b4ed2350ae5865ed8bc59798</em>
+<li><strong>Cross profile search</strong>: see <code>packages/apps/ContactsCommon</code> <em>cd0b29ddbf3648e48f048196c62245d545bc6122</em></li>
+</ul>
+
+<h2 id="implementation">Implementation</h2>
+
+<p>
+Partners must implement cross-profile search, lookup, and badging for contacts
+in their Dialer, Contacts, and SMS/MMS Messaging apps.
+</p>
+
+<h3 id="cross-profile-contact-search">Cross-profile contact search</h3>
+
+<p>
+Cross profile contact search should be implemented using the Enterprise
+Contacts API
+(<code>ContactsContract.Contacts.ENTERPRISE_CONTENT_FILTER_URI</code>, etc.).
+For details, see <a
+href="http://developer.android.com/preview/features/afw.html#contacts">http://developer.android.com/preview/features/afw.html#contacts</a>.
+</p>
+
+<h3 id="work-profile-contact-badging">Work profile contact badging</h3>
+
+<p>
+Work profile contact badging can be implemented by checking
+<code>ContactsContract.Directory.isEnterpriseDirectoryId()</code> (if
+available) or <a
+href="http://developer.android.com/reference/android/provider/ContactsContract.Contacts.html#isEnterpriseContactId(long)">http://developer.android.com/reference/android/provider/ContactsContract.Contacts.html#isEnterpriseContactId(long)</a>.
+</p>
+
+<h3 id="managed-profile-aware-connectionservice">Managed Profile Aware
+ConnectionService</h3>
+
+<p>
+Manufacturers should not need to modify the framework code to support this
+functionality, but should be aware of its impact on the Telecomm service and
+other telephony features.
+</p>
+
+<h2 id="validation">Validation</h2>
+
+<p>
+The cross profile contact search and badging feature can be validated by:
+</p>
+
+<ol>
+<li>Setting up a managed profile on a test device using <a
+href="https://github.com/googlesamples/android-testdpc">TestDPC</a>.
+<li>Enabling cross profile contact search.
+<li>Adding a local work contact within the managed profile.
+<li>Searching for that contact within the system Dialer, Contacts, and SMS/MMS
+Messaging apps within the personal profile, and checking that the contact is
+found and correctly badged.</li>
+</ol>
+
+<p>
+CTS tests in <code>com/android/cts/managedprofile/ContactsTest.java</code>
+ensure the underlying cross profile contact search API has been implemented.
+</p>
diff --git a/src/devices/tech/admin/implement.jd b/src/devices/tech/admin/implement.jd
index 03ce93c..8c4580c 100644
--- a/src/devices/tech/admin/implement.jd
+++ b/src/devices/tech/admin/implement.jd
@@ -24,45 +24,92 @@
</div>
</div>
-<p>This page walks you through the many features in Android 5.0 and higher
-platform release that need to be enabled and validated on devices to make them
-ready for managed profile and device owner user cases that are essential to using
-them in a corporate environment. In addition to the related Android Open Source
-Project (AOSP) code, there are a number of additional components required for a
-device to function with managed profiles.</p>
+<p>This section describes how to enable and validate device administration
+features required to prepare devices for managed profiles. It also covers device
+owner user cases that are essential in a corporate environment.</p>
-<h2 id=requirements>Requirements</h2>
+<p>In addition to Android Open Source Project (AOSP) code, a device requires the
+following components to function with managed profiles.</p>
-<p>The following uses-feature need to be defined:</p>
+<h2 id=requirements>General requirements</h2>
+<p>Devices intending to support device administration must meet the following
+general requirements.</p>
+
+<h3 id=HAL_values>Thermal HAL values</h3>
+<p>Android 7.0 includes support for the HardwarePropertiesManager API, a new device
+monitoring and health reporting API that enables applications to query the state
+of device hardware. This API is exposed via
+<code>android.os.HardwarePropertiesManager</code> and makes calls through
+<code>HardwarePropertiesManagerService</code> to the hardware thermal HAL
+(<code>hardware/libhardware/include/hardware/thermal.h</code>). It is a
+protected API, meaning only device/profile owner Device Policy Controller (DPC)
+applications and the current <code>VrListenerService</code> can call it.</p>
+
+<p>To support the HardwarePropertiesManager API, the device thermal HAL
+implementation must be able to report the following values:</p>
+
+<table>
+<tr>
+<th width="32%">Value</th>
+<th>Reporting Scale</th>
+<th>Enables</th>
+</tr>
+
+<tr>
+ <td>Temperature of [CPU|GPU|Battery|Device Skin]</td>
+ <td>Temperature of component in degrees Celsius</td>
+ <td>Apps can check device temperatures and component throttling/shutdown
+ temperatures</td>
+</tr>
+
+<tr>
+ <td>CPU active/total enabled times</td>
+ <td>Time in milliseconds</td>
+ <td>Apps can check CPU usage per core</td>
+</tr>
+
+<tr>
+ <td>Fan speed</td>
+ <td>RPM</td>
+ <td>Apps can check fan speed</td>
+</tr>
+
+</table>
+
+<p>Implementations should correctly handle reporting values in situations when
+a core (or GPU, battery, fan) goes offline or is plugged/unplugged.</p>
+
+
+<h3 id=low_ram>No low-RAM</h3>
+<p>The device must not be a low-RAM device, meaning <code>ro.config.low_ram</code>
+should not be defined. The framework automatically limits the number of users
+to 1 when the <code>low_ram</code> flag is defined.</p>
+
+<h3 id=uses-feature>Uses-feature</h3>
+<p>Devices must define the following <code>uses-feature</code>:</p>
<pre>
android.software.managed_users
android.software.device_admin
</pre>
-<p>Confirm with: <code>adb shell pm list features</code></p>
+<p>To confirm these <code>uses-feature</code> values have been defined on a
+device, run: <code>adb shell pm list features</code>.</p>
-<p>It should not be a low-RAM device, meaning <code>ro.config.low_ram</code>
-should not be defined. The framework automatically limits the number of users
-to 1 when the <code>low_ram</code> flag is defined.</p>
+<h3 id=required_apps>Essential apps only</h3>
+<p>By default, only applications essential for correct operation of the profile
+should be enabled as part of provisioning a managed device. OEMs must ensure the
+managed profile or device has all required applications by modifying:</p>
-<p>By default, only applications that are essential for correct operation of the
-profile should be enabled as part of provisioning a managed device.</p>
-
-<p>OEMs must ensure the managed profile or device has all required applications by
-modifying:</p>
-
-<pre>
-vendor_required_apps_managed_profile.xml
+<pre>vendor_required_apps_managed_profile.xml
vendor_required_apps_managed_device.xml
</pre>
-<p>Here are examples from a Nexus device:</p>
+<p>Examples from a Nexus device:</p>
-<code>packages/apps/ManagedProvisioning/res/values/vendor_required_apps_managed_device.xml</code>
+<p><code>packages/apps/ManagedProvisioning/res/values/vendor_required_apps_managed_device.xml</code></p>
-<pre>
-<resources>
+<pre><resources>
<!-- A list of apps to be retained on the managed device -->
<string-array name="vendor_required_apps_managed_device">
<item>com.android.vending</item> <!--Google Play -->
@@ -75,9 +122,9 @@
</resources>
</pre>
-<code>
+<p><code>
packages/apps/ManagedProvisioning/res/values/vendor_required_apps_managed_profile.xml
-</code>
+</code></p>
<pre>
<resources>
@@ -90,42 +137,38 @@
</resources>
</pre>
-<h3 id=launcher>Launcher</h3>
+<h2 id=launcher>Launcher requirements</h2>
-<p>The launcher must support badging applications with the icon badge provided
-in the Android Open Source Project (AOSP) to represent the managed applications
-and other badge user interface elements such as recents and notifications.</p>
+<p>You must update the Launcher to support badging applications with the icon
+badge (provided in AOSP to represent the managed applications) and other badge
+user interface elements such as recents and notifications. If you use
+<a href="https://android.googlesource.com/platform/packages/apps/Launcher3/">launcher3</a>
+in AOSP without modifications, then you likely already support this badging
+feature.</p>
-<p>Update the Launcher to support badging. If you use <a
-href="https://android.googlesource.com/platform/packages/apps/Launcher3/">launcher3</a>
-in AOSP as-is, then you likely already support this badging feature.
-</p>
+<h2 id=nfc>NFC requirements</h2>
-<h3 id=nfc>NFC</h3>
+<p>Devices with NFC must enable NFC during the out-of-the-box experience (i.e.,
+setup wizard) and be configured to accept managed provisioning intents:</p>
-<p>On devices with NFC, NFC must be enabled in the Android Setup Wizard and
-configured to accept managed provisioning intents:</p>
-
-<code>packages/apps/Nfc/res/values/provisioning.xml</code>
-
-<pre>
-<bool name="enable_nfc_provisioning">true</bool>
+<p><code>packages/apps/Nfc/res/values/provisioning.xml</code></p>
+<pre><bool name="enable_nfc_provisioning">true</bool>
<item>application/com.android.managedprovisioning</item>
</pre>
-<h3 id=setup_wizard>Setup Wizard</h3>
+<h2 id=setup_wizard>Setup requirements</h2>
-<p>The Android Setup Wizard needs to support device owner provisioning. When it
-opens, it needs to check if another process (such as device owner provisioning)
-has already finished the user setup. If this is the case, it needs to fire a
-home intent and finish the setup wizard. </p>
+<p>Devices that include an out-of-box experience (i.e., setup wizard)
+should implement device owner provisioning. When the out-of-box experience
+opens, it should check if another process (such as device owner provisioning)
+has already finished the user setup and, if so, it should fire a home intent
+and finish the setup. This intent is caught by the provisioning application,
+which then hands control to the newly-set device owner.</p>
-<p>This intent will be caught by the provisioning application, which will then
-hand over control to the newly set device owner. This can be achieved by adding
-the following to your setup wizard’s main activity:</p>
+<p>To meet setup requirements, add the following code to the device setup's main
+activity:</p>
-<pre>
-@Override
+<pre>@Override
protected void onStart() {
super.onStart();
@@ -133,7 +176,7 @@
// has intervened and, if so, complete an orderly exit
boolean completed = Settings.Secure.getInt(getContentResolver(),
Settings.Secure.USER_SETUP_COMPLETE, 0) != 0;
- if (completed) {
+ if (completed) {
startActivity(new Intent(Intent.ACTION_MAIN, null)
.addCategory(Intent.CATEGORY_HOME)
.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK
diff --git a/src/devices/tech/admin/managed-profiles.jd b/src/devices/tech/admin/managed-profiles.jd
index 8951166..72463f5 100644
--- a/src/devices/tech/admin/managed-profiles.jd
+++ b/src/devices/tech/admin/managed-profiles.jd
@@ -25,49 +25,40 @@
</div>
<p>A <em>managed profile</em> or <em>work profile</em> is an Android <a
-href="multi-user.html">user</a> with some additional special properties around
+href="multi-user.html">user</a> with additional special properties around
management and visual aesthetic.</p>
-<h2 id=purpose>DevicePolicyManager APIs</h2>
-
-<p>Android 5.x or newer offers a greatly improved DevicePolicyManager with dozens of new
-APIs to support both corporate-owned and bring your own device (BYOD)
-administration use cases. Examples include app restrictions, silent
-installation of certificates, and cross-profile sharing intent access control.
-You may use the sample Device Policy Client (DPC) app, <a
-href="https://developer.android.com/samples/BasicManagedProfile/index.html">BasicManagedProfile.apk</a>,
-as a starting point. See <a
-href="https://developer.android.com/training/enterprise/work-policy-ctrl.html">Building
-a Work Policy Controller</a> for additional details.
-
-<h2 id=purpose>Purpose</h2>
-
<p>The primary goal of a managed profile is to create a segregated and secure
-space for managed (for example, corporate) data to reside. The administrator of
+space for managed data (such as corporate data) to reside. The administrator of
the profile has full control over scope, ingress, and egress of data as well as
its lifetime. These policies offer great powers and therefore fall upon the
managed profile instead of the device administrator.</p>
<ul>
- <li><strong>Creation</strong> - Managed profiles can be created by any application in the primary user. The
-user is notified of managed profile behaviors and policy enforcement before
-creation.
- <li><strong>Management</strong> - Management is performed by applications that programmatically invoke APIs in
-the <a href="http://developer.android.com/reference/android/app/admin/DevicePolicyManager.html">DevicePolicyManager</a> class to restrict use. Such applications are referred to as <em>profile owners</em> and are defined at initial profile setup. Policies unique to managed profile
-involve app restrictions, updatability, and intent behaviors.
- <li><strong>Visual treatment</strong> - Applications, notifications, and widgets from the managed profile are always
-badged and typically made available inline with user interface (UI) elements
-from the primary user.
+ <li><strong>Creation</strong>. Managed profiles can be created by any
+ application in the primary user. The user is notified of managed profile
+ behaviors and policy enforcement before creation.</li>
+ <li><strong>Management</strong>. Management is performed by applications that
+ programmatically invoke APIs in the
+ <a href="http://developer.android.com/reference/android/app/admin/DevicePolicyManager.html">DevicePolicyManager</a>
+ class to restrict use. Such applications are referred to as <em>profile
+ owners</em> and are defined at initial profile setup. Policies unique to
+ managed profile involve app restrictions, updatability, and intent behaviors.
+ </li>
+ <li><strong>Visual treatment</strong>. Applications, notifications, and
+ widgets from the managed profile are always badged and typically made
+ available inline with user interface (UI) elements from the primary user.</li>
</ul>
-<h2 id=data_segregation>Data Segregation </h2>
+<h2 id=data_segregation>Data segregation</h2>
+<p>Managed profiles use the following data segregation rules.</p>
<h3 id=applications>Applications</h3>
-<p>Applications are scoped with their own segregated data when the same app exists
-in the primary user and managed profile. Generally, applications cannot
-communicate directly with one another across the profile-user boundary and act
-independently of one another.</p>
+<p>Applications are scoped with their own segregated data when the same app
+exists in the primary user and managed profile. Generally, applications act
+independently of one another and cannot communicate directly with one another
+across the profile-user boundary.</p>
<h3 id=accounts>Accounts</h3>
@@ -83,63 +74,63 @@
<h3 id=settings>Settings</h3>
-<p>Enforcement of settings is generally scoped to the managed profile with a few
-exceptions. Specifically, lockscreen and encryption settings are still scoped
+<p>Enforcement of settings is generally scoped to the managed profile, with
+exceptions for lockscreen and encryption settings that are still scoped
to the device and shared between the primary user and managed profile.
Otherwise, a profile owner does not have any device administrator privileges
outside the managed profile.</p>
<p>Managed profiles are implemented as a new kind of secondary user, such that:</p>
-<pre>
-uid = 100000 * userid + appid
-</pre>
-
+<pre>uid = 100000 * userid + appid</pre>
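The uid formula above can be sketched in plain Java; the constant mirrors the 100000 multiplier, and the class and helper names are illustrative, not the actual framework API.

```java
// Sketch of the per-user uid scheme: uid = 100000 * userId + appId.
// PER_USER_RANGE mirrors the 100000 multiplier in the formula above;
// names are illustrative, not the real framework API.
public class UserUid {
    static final int PER_USER_RANGE = 100000;

    static int composeUid(int userId, int appId) {
        return PER_USER_RANGE * userId + appId;
    }

    // Recover the userId and appId components from a uid.
    static int userIdOf(int uid) { return uid / PER_USER_RANGE; }
    static int appIdOf(int uid)  { return uid % PER_USER_RANGE; }

    public static void main(String[] args) {
        int uid = composeUid(10, 1234); // e.g., a managed profile with userId 10
        System.out.println(uid);           // prints 1001234
        System.out.println(userIdOf(uid)); // prints 10
        System.out.println(appIdOf(uid));  // prints 1234
    }
}
```

This is why all system state keyed by uid is automatically separated per user: the user component is recoverable from any calling uid.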
<p>They have separate app data like regular users:</p>
-<pre>
-/data/user/<userid>
-</pre>
+<pre>/data/user/<userid></pre>
-<p>The UserId is calculated for all system requests using <code>Binder.getCallingUid()</code>, and all system state and responses are separated by userId. You may consider
-instead using <code>Binder.getCallingUserHandle</code> rather than <code>getCallingUid</code> to avoid confusion between uid and userId.</p>
+<p>The userId is calculated for all system requests using
+<code>Binder.getCallingUid()</code>, and all system state and responses are
+separated by userId. Consider using
+<code>Binder.getCallingUserHandle</code> rather than <code>getCallingUid</code>
+to avoid confusion between uid and userId.</p>
-<p>The AccountManagerService maintains a separate list of accounts for each user.</p>
-
-<p>The main differences between a managed profile and a regular secondary user are
-as follows:</p>
+<p>The AccountManagerService maintains a separate list of accounts for each
+user. The main differences between a managed profile and a regular secondary
+user are as follows:</p>
<ul>
- <li> The managed profile is associated with its parent user and started alongside
-the primary user at boot time.
- <li> Notifications for managed profiles are enabled by ActivityManagerService
-allowing the managed profile to share the activity stack with the primary user.
- <li> Some other system services shared are: IME, A11Y services, Wi-Fi, and NFC.
- <li> New Launcher APIs allow launchers to display badged apps and whitelisted
-widgets from the managed profile alongside apps in the primary profile without
-switching users.
+ <li>The managed profile is associated with its parent user and started
+ alongside the primary user at boot time.</li>
+ <li>Notifications for managed profiles are enabled by ActivityManagerService
+ allowing the managed profile to share the activity stack with the primary
+ user.</li>
+ <li>Other shared system services include IME, A11Y services, Wi-Fi, and NFC.
+ </li>
+ <li>New Launcher APIs allow launchers to display badged apps and whitelisted
+ widgets from the managed profile alongside apps in the primary profile without
+ switching users.</li>
</ul>
<h2 id=device_administration>Device administration</h2>
-<p>Android device administration includes two new types of device administrators for
-enterprises:</p>
+<p>Android device administration includes the following types of device
+administrators for enterprises:</p>
<ul>
- <li><em>Profile owner</em>—Designed for bring your own device (BYOD) environments
- <li><em>Device Owner</em>—Designed for corp-liable environments
+ <li><em>Profile owner</em>. Designed for bring your own device (BYOD)
+ environments</li>
+ <li><em>Device Owner</em>. Designed for corp-liable environments</li>
</ul>
-<p>The majority of the new device administrator APIs that have been added for
-Android 5.0 are available only to profile or device owners. Traditional device
-administrators remain but are applicable to the simpler consumer-only case
-(e.g. find my device).</p>
+<p>The majority of the new device administrator APIs added for Android 5.0 are
+available only to profile or device owners. Traditional device administrators
+remain but are applicable to the simpler consumer-only case (e.g., find my
+device).</p>
<h3 id=profile_owners>Profile owners</h3>
-<p>A Device Policy Client (DPC) app typically functions as the profile owner. The
-DPC app is typically provided by an enterprise mobility management (EMM)
+<p>A Device Policy Client (DPC) app typically functions as the profile owner.
+The DPC app is typically provided by an enterprise mobility management (EMM)
partner, such as Google Apps Device Policy.</p>
<p>The profile owner app creates a managed profile on the device by sending the
@@ -148,25 +139,39 @@
apps, as well as personal instances. That badge, or Android device
administration icon, identifies which apps are work apps.</p>
-<p>The EMM has control only over the managed profile (not personal space) with some
-exceptions, such as enforcing the lock screen.</p>
+<p>The EMM has control only over the managed profile (not personal space) with
+some exceptions, such as enforcing the lock screen.</p>
<h3 id=device_owners>Device owners</h3>
<p>The device owner can be set only in an unprovisioned device:</p>
<ul>
- <li>Can be provisioned only at initial device setup
- <li>Enforced disclosure always displayed in quick-settings
+ <li>Can be provisioned only at initial device setup</li>
+ <li>Enforced disclosure always displayed in quick-settings</li>
</ul>
-<p>Device owners can conduct some tasks profile owners cannot, and here are a few examples:</p>
+<p>Device owners can conduct some tasks profile owners cannot, such as:</p>
<ul>
- <li>Wipe device data
- <li>Disable Wi-Fi/ BT
- <li>Control <code>setGlobalSetting</code>
- <li><code>setLockTaskPackages</code> (the ability to whitelist packages that can pin themselves to the foreground)
- <li>Set <code>DISALLOW_MOUNT_PHYSICAL_MEDIA</code> (<code>FALSE</code> by default.
-When <code>TRUE</code>, physical media, both portable and adoptable, cannot be mounted.)
+ <li>Wipe device data</li>
+ <li>Disable Wi-Fi/Bluetooth</li>
+ <li>Control <code>setGlobalSetting</code></li>
+ <li><code>setLockTaskPackages</code> (the ability to whitelist packages that
+ can pin themselves to the foreground)</li>
+ <li>Set <code>DISALLOW_MOUNT_PHYSICAL_MEDIA</code> (<code>FALSE</code> by
+ default). When <code>TRUE</code>, physical media, both portable and adoptable,
+ cannot be mounted.</li>
</ul>
+
+<h3 id=dpm_api>DevicePolicyManager APIs</h3>
+
+<p>Android 5.0 and higher offers a greatly improved DevicePolicyManager with
+dozens of new APIs to support both corporate-owned and bring your own device
+(BYOD) administration use cases. Examples include app restrictions, silent
+installation of certificates, and cross-profile sharing intent access control.
+Use the sample Device Policy Client (DPC) app
+<a href="https://developer.android.com/samples/BasicManagedProfile/index.html">BasicManagedProfile.apk</a>
+as a starting point. For details, refer to
+<a href="https://developer.android.com/training/enterprise/work-policy-ctrl.html">Building
+a Work Policy Controller</a>.</p>
diff --git a/src/devices/tech/admin/multi-user.jd b/src/devices/tech/admin/multi-user.jd
index 8319be0..24f4fec 100644
--- a/src/devices/tech/admin/multi-user.jd
+++ b/src/devices/tech/admin/multi-user.jd
@@ -24,139 +24,169 @@
</div>
</div>
-<p>This document describes the Android multi-user feature. It allows more than one
-user on a single Android device by separating their accounts and application
-data. For instance, parents may let their children use the family tablet. Or a
-critical team might share a mobile device for on-call duty.</p>
+<p>Android supports multiple users on a single Android device by separating user
+accounts and application data. For instance, parents may allow their children to
+use the family tablet, or a critical response team might share a mobile device
+for on-call duty.</p>
-<h1 id=definitions>Definitions</h1>
+<h2 id=definitions>Terminology</h2>
+<p>Android uses the following terms when describing Android users and accounts.</p>
-<p>Before supporting multiple Android users, you should understand the basic
-concepts involved. Here are the primary terms used when describing Android
-users and accounts:</p>
+<h3 id=general_defs>General</h3>
+<p>Android device administration uses the following general terms.</p>
<ul>
- <li><em>User</em> - Each user is intended to be used by a different physical person. Each user
-has distinct application data and some unique settings, as well as a user
-interface to explicitly switch between users. A user can run in the background
-when another user is active; the system manages shutting down users to conserve
-resources when appropriate. Secondary users can be created either directly via
-the primary user interface or from a <a
-href="https://developer.android.com/guide/topics/admin/device-admin.html">Device
-Administration</a> application.
- <li><em>Account</em> - Accounts are contained within a user but are not defined by a user. Nor is a
-user defined by or linked to any given account. Users and profiles contain
-their own unique accounts but are not required to have accounts to be
-functional. The list of accounts differs by user. See the <a href="https://developer.android.com/reference/android/accounts/Account.html">Account class</a> definition.
- <li><em>Profile<strong></em> </strong>- A profile has separated app data but shares some system-wide settings (for
-example, Wi-Fi and Bluetooth). A profile is a subset of and tied to the
-existence of a user. A user can have multiple profiles. They are created
-through a <a href="https://developer.android.com/guide/topics/admin/device-admin.html">Device
-Administration</a> application. A profile always has an immutable
-association to a ‘parent’ user, defined by the user that created the profile.
-Profiles do not live beyond the lifetime of the creating user.
- <li><em>App</em> - An application’s data exists within each associated user. App data is
-sandboxed from other applications within the same user. Apps within the same
-user can interact with each other via IPC. See <a href="https://developer.android.com/training/enterprise/index.html">Building Apps for Work</a>.
+ <li><em>User</em>. Each user is intended to be used by a different physical
+ person. Each user has distinct application data and some unique settings, as
+ well as a user interface to explicitly switch between users. A user can run in
+ the background when another user is active; the system manages shutting down
+ users to conserve resources when appropriate. Secondary users can be created
+ either directly via the primary user interface or from a
+ <a href="https://developer.android.com/guide/topics/admin/device-admin.html">Device
+ Administration</a> application.</li>
+ <li><em>Account</em>. Accounts are contained within a user but are not defined
+ by a user, nor is a user defined by or linked to any given account. Users and
+ profiles contain their own unique accounts but are not required to have
+ accounts to be functional. The list of accounts differs by user. For details,
+ refer to the
+ <a href="https://developer.android.com/reference/android/accounts/Account.html">Account
+ class</a> definition.</li>
+ <li><em>Profile</em>. A profile has separated app data but shares some
+ system-wide settings (for example, Wi-Fi and Bluetooth). A profile is a subset
+ of and tied to the existence of a user. A user can have multiple profiles.
+ They are created through a
+ <a href="https://developer.android.com/guide/topics/admin/device-admin.html">Device
+ Administration</a> application. A profile always has an immutable association
+  to a parent user, defined by the user that created the profile. Profiles do
+  not live beyond the lifetime of the creating user.</li>
+ <li><em>App</em>. An application’s data exists within each associated user.
+ App data is sandboxed from other applications within the same user. Apps
+ within the same user can interact with each other via IPC. For details, refer
+ to <a href="https://developer.android.com/training/enterprise/index.html">Building
+ Apps for Work</a>.</li>
</ul>
-<h2 id=user_types>User types</h2>
+<h3 id=user_types>User types</h3>
+<p>Android device administration uses the following user types.</p>
<ul>
- <li><em>Primary</em> - The first user added to a device. The primary user cannot be removed except
-by factory reset. This user also has some special privileges and settings only
-it can set. The primary user is always running even when other users are in the
-foreground.
- <li><em>Secondary</em> - Any user added to the device other than the primary user. They can be
-removed by either themselves or the primary user and cannot impact other users
-on a device. Secondary users can run in the background and will continue to
-have network connectivity when they do.
- <li><em>Guest<strong></em> </strong>- A guest user is a temporary secondary user with an explicit option to quick
-delete the guest user when its usefulness is over. There can be only one guest
-user at a time.
+ <li><em>Primary</em>. First user added to a device. The primary user
+ cannot be removed except by factory reset and is always running even when
+ other users are in the foreground. This user also has special privileges and
+ settings only it can set.</li>
+ <li><em>Secondary</em>. Any user added to the device other than the primary
+ user. Secondary users can be removed (either by themselves or by the primary
+ user) and cannot impact other users on a device. These users can run in the
+ background and continue to have network connectivity.</li>
+  <li><em>Guest</em>. Temporary secondary user with an explicit option for
+  quick deletion when no longer needed. There can be only one guest user at a
+  time.</li>
</ul>
-<h2 id=profile_types>Profile types</h2>
+<h3 id=profile_types>Profile types</h3>
+<p>Android device administration uses the following profile types.</p>
<ul>
- <li><em>Managed<strong></em> </strong>- Managed profiles are created by an application to contain work data and
-apps. They are managed exclusively by the ‘profile owner’, the app who created
-the corp profile. Launcher, notifications and recent tasks are shared by the
-primary user and the corp profile.
- <li><em>Restricted</em> - Restricted profiles use the accounts based off the primary user. The Primary
-user can control what apps are available on the restricted profile. Restricted
-profiles are available only on tablets.
+ <li><em>Managed</em>. Created by an application to contain work data
+ and apps. They are managed exclusively by the profile owner (the app that
+ created the corp profile). Launcher, notifications, and recent tasks are
+ shared by the primary user and the corp profile.</li>
+  <li><em>Restricted</em>. Uses accounts based on the primary user, who can
+ control what apps are available on the restricted profile. Available only on
+ tablets.</li>
</ul>
-<h1 id=effects>Effects</h1>
+<h2 id=applying_the_overlay>Enabling multi-user</h2>
-<p>When users are added to a device, some functionality will be curtailed when
-another user is in the foreground. Since app data is separated by user, the
-state of those apps differs by user. For example, email destined for an account
-of a user not currently in focus won’t be available until that user and account
-are active on the device.</p>
-
-<p>The default state is only the primary user has full access to phone calls and
-texts. The secondary user may receive inbound calls but cannot send or receive
-texts. The primary user must enable these functions for others.</p>
-
- <p class="note"><strong>Note</strong>: To enable or disable the phone and SMS functions for a secondary user, go to
-Settings > Users, select the user, and switch the <em>Allow phone calls and SMS</em> setting to off.</p>
-
-<p>Please note, some restrictions exist when a secondary user is in background.
-For instance, the background secondary user will not be able to display the
-user interface or make Bluetooth services active. Finally, background secondary
-users will be halted by the system process if the device needs additional
-memory for operations in the foreground user.</p>
-
-<p>Here are aspects of behavior to keep in mind when employing multiple users on
-an Android device:</p>
-
-<ul>
- <li>Notifications appear for all accounts of a single user at once.
- <li>Notifications for other users do not appear until they are active.
- <li>Each user gets his or her own workspace to install and place apps.
- <li>No user has access to the app data of another user.
- <li>Any user can affect the installed apps for all users.
- <li>The primary user can remove apps or even the entire workspace established by
-secondary users.
-</ul>
-
-<h1 id=implementation>Implementation</h1>
-
-<h2 id=managing_users>Managing users</h2>
-
-<p>Management of users and profiles (with the exception of restricted profiles) is
-performed by applications that programmatically invoke API in the <code>DevicePolicyManager</code> class to restrict use.</p>
-
-<p>Schools and enterprises may employ users and profiles to manage the lifetime
-and scope of apps and data on devices. They may use the types outlined above in
-conjunction with the <a href="http://developer.android.com/reference/android/os/UserManager.html">UserManager API</a> to build unique solutions tailored to their use cases.</p>
-
-<h2 id=applying_the_overlay>Applying the overlay</h2>
-
-<p>The multi-user feature is disabled by default in the Android 5.0 release. To
+<p>As of Android 5.0, the multi-user feature is disabled by default. To
enable it, device manufacturers must define a resource overlay that replaces
-the following values in frameworks/base/core/res/res/values/config.xml:</p>
+the following values in <code>frameworks/base/core/res/res/values/config.xml</code>:
+</p>
-<pre>
-<!-- Maximum number of supported users -->
+<pre><!-- Maximum number of supported users -->
<integer name="config_multiuserMaximumUsers">1</integer>
<!-- Whether Multiuser UI should be shown -->
<bool name="config_enableMultiUserUI">false</bool>
</pre>
-<p>To apply this overlay and enable guest and secondary users on the device, use the
-<code>DEVICE_PACKAGE_OVERLAYS</code> feature of the Android build system to:</p>
+<p>To apply this overlay and enable guest and secondary users on the device, use
+the <code>DEVICE_PACKAGE_OVERLAYS</code> feature of the Android build system to:</p>
<ul>
- <li> Replace the value for <code>config_multiuserMaximumUsers</code> with one greater than 1
- <li> Replace the value of <code>config_enableMultiUserUI</code> with: <code>true</code>
+ <li>Replace the value for <code>config_multiuserMaximumUsers</code> with one
+ greater than 1</li>
+ <li>Replace the value of <code>config_enableMultiUserUI</code> with:
+ <code>true</code></li>
</ul>
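+
+<p>As a sketch, an overlay making both replacements might contain the
+following (the maximum of 8 users is an illustrative value, not a
+requirement):</p>
+
+<pre><!-- Example overlay replacing the defaults in
+     frameworks/base/core/res/res/values/config.xml.
+     The maximum of 8 users is an illustrative choice. -->
+<integer name="config_multiuserMaximumUsers">8</integer>
+<bool name="config_enableMultiUserUI">true</bool>
+</pre>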
-<p>Device manufacturers may decide upon the maximum number of users.</p>
+<p>Device manufacturers may decide upon the maximum number of users. If device
+manufacturers or others have modified settings, they must ensure SMS and
+telephony work as defined in the
+<a href="{@docRoot}compatibility/android-cdd.pdf">Android Compatibility
+Definition Document</a> (CDD).</p>
-<p>That said, if device manufacturers or others have modified settings, they need
-to ensure SMS and telephony work as defined in the <a
-href="{@docRoot}compatibility/android-cdd.pdf">Android Compatibility Definition
-Document</a> (CDD).</p>
+<h2 id=managing_users>Managing multiple users</h2>
+
+<p>Management of users and profiles (with the exception of restricted profiles)
+is performed by applications that programmatically invoke API in the
+<code>DevicePolicyManager</code> class to restrict use.</p>
+
+<p>Schools and enterprises may employ users and profiles to manage the lifetime
+and scope of apps and data on devices, using the types outlined above in
+conjunction with the
+<a href="http://developer.android.com/reference/android/os/UserManager.html">UserManager
+API</a> to build unique solutions tailored to their use cases.</p>
+
+
+<h2 id=effects>Multi-user system behavior</h2>
+
+<p>When users are added to a device, some functionality is curtailed when
+another user is in the foreground. Since app data is separated by user, the
+state of those apps differs by user. For example, email destined for an account
+of a user not currently in focus won’t be available until that user and account
+are active on the device.</p>
+
+<p>By default, only the primary user has full access to phone calls and texts.
+The secondary user may receive inbound calls but cannot send or receive texts.
+The primary user must enable these functions for others.</p>
+
+<p class="note"><strong>Note</strong>: To enable or disable the phone and SMS
+functions for a secondary user, go to <em>Settings > Users</em>, select the
+user, and switch the <em>Allow phone calls and SMS</em> setting to off.</p>
+
+<p>Some restrictions exist when a secondary user is in the background. For instance,
+the background secondary user cannot display the user interface or make
+Bluetooth services active. In addition, the system process will halt background
+secondary users if the device needs additional memory for operations in the
+foreground user.</p>
+
+<p>When employing multiple users on an Android device, keep the following
+behavior in mind:</p>
+
+<ul>
+ <li>Notifications appear for all accounts of a single user at once.</li>
+  <li>Notifications for other users do not appear until those users are
+  active.</li>
+ <li>Each user gets a workspace to install and place apps.</li>
+ <li>No user has access to the app data of another user.</li>
+ <li>Any user can affect the installed apps for all users.</li>
+ <li>The primary user can remove apps or even the entire workspace established
+ by secondary users.</li>
+</ul>
+
+<p>Android 7.0 adds several enhancements, including:</p>
+
+<ul>
+ <li><em>Toggle work profile</em>. Users can disable their managed profile
+ (such as when not at work). This functionality is achieved by stopping the
+ user; UserManagerService calls <code>ActivityManagerNative#stopUser()</code>.
+ </li>
+ <li><em>Always-on VPN</em>. VPN applications can now be set to always-on by
+ the user, Device DPC, or Managed Profile DPC (applies only to Managed Profile
+ applications). When enabled, applications cannot access the public network
+ (access to network resources is stopped until the VPN has connected and
+ connections can be routed over it). Devices that report
+ <code>device_admin</code> must implement always-on VPN.</li>
+</ul>
+
+<p>For more details on Android 7.0 device administration features, refer to
+<a href="https://developer.android.com/preview/features/afw.html">Android
+for Work Updates</a>.</p>
diff --git a/src/devices/tech/admin/provision.jd b/src/devices/tech/admin/provision.jd
index a1b20bc..b69e3cf 100644
--- a/src/devices/tech/admin/provision.jd
+++ b/src/devices/tech/admin/provision.jd
@@ -24,34 +24,35 @@
</div>
</div>
-<p>This page describes the process for deploying devices to corporate users.</p>
+<p>This page describes the process for deploying devices to corporate users
+using NFC or with an activation code (for a complete list of requirements, see
+<a href="{@docRoot}devices/tech/admin/implement.html">Implementing Device
+Administration</a>).</p>
-<p>Device owner provisioning can be accomplished over NFC or with an activation
-code. See <a href="implement.html">Implementing Device Administration</a> for
-the complete list of requirements.</p>
-
-<p>Download the <a
-href="https://github.com/googlesamples/android-NfcProvisioning">NfcProvisioning
-APK</a> and <a
-href="https://github.com/googlesamples/android-DeviceOwner">Android-DeviceOwner
-APK</a>.</p>
+<p>To get started, download the
+<a href="https://github.com/googlesamples/android-NfcProvisioning">NfcProvisioning
+APK</a>
+and
+<a href="https://github.com/googlesamples/android-DeviceOwner">Android-DeviceOwner
+APK</a>.
+</p>
<p class="caution"><strong>Caution:</strong> If provisioning has already
-started, affected devices will first need to be factory reset.</p>
+started, affected devices must be factory reset first.</p>
-<h2 id=managed_provisioning>Managed Provisioning</h2>
+<h2 id=managed_provisioning>Managed provisioning</h2>
<p>Managed Provisioning is a framework UI flow to ensure users are adequately
-informed of the implications of setting a device owner or managed profile. You can
-think of it as a setup wizard for managed profiles.</p>
+informed of the implications of setting a device owner or managed profile. It is
+designed to act as a setup wizard for managed profiles.</p>
-<p class="note"><strong>Note:</strong> Remember, the device owner can be set
-only from an unprovisioned device. If
-<code>Settings.Secure.USER_SETUP_COMPLETE</code> has ever been set, then the
-device is considered provisioned & device owner cannot be set.</p>
+<p class="note"><strong>Note:</strong> The device owner can be set only from an
+unprovisioned device. If <code>Settings.Secure.USER_SETUP_COMPLETE</code> has
+ever been set, the device is considered provisioned and the device owner cannot
+be set.</p>
-<p>Please note, devices that enable default encryption offer considerably
-simpler/quicker device administration provisioning flow. The managed provisioning
+<p>Devices that enable default encryption offer a considerably simpler and
+quicker device administration provisioning flow. The managed provisioning
component:</p>
<ul>
@@ -70,46 +71,52 @@
</ul>
<p>In this flow, managed provisioning triggers device encryption. The framework
- copies the EMM app into the managed profile as part of managed provisioning.
- The instance of the EMM app inside of the managed profile gets a callback from the
-framework when provisioning is done.</p>
+copies the EMM app into the managed profile as part of managed provisioning. The
+instance of the EMM app inside of the managed profile gets a callback from the
+framework when provisioning is done. The EMM can then add accounts and enforce
+policies; it then calls <code>setProfileEnabled()</code>, which makes the
+launcher icons visible.</p>
-<p>The EMM can then add accounts and enforce policies; it then calls
-<code>setProfileEnabled()</code>, which makes the launcher icons visible.</p>
+<h2 id=profile_owner_provisioning>Profile owner provisioning</h2>
-<h2 id=profile_owner_provisioning>Profile Owner Provisioning</h2>
+<p>Profile owner provisioning assumes the user of the device (and not a company
+IT department) oversees device management. To enable profile owner provisioning,
+you must send an intent with appropriate extras. For an example, use the TestDPC
+application
+(<a href="https://play.google.com/store/apps/details?id=com.afwsamples.testdpc&hl=en">Download
+from Google Play</a> or <a href="https://github.com/googlesamples/android-testdpc/">Build
+from GitHub</a>). Install TestDPC on the device, launch the app from the
+launcher, then follow the app instructions. Provisioning is complete when badged
+icons appear in the launcher drawer.</p>
-<p>Profile owner provisioning assumes the user of the device oversees its
-management (and not a company IT department). To enable, profile owner
-provisioning, you must send an intent with appropriate extras. See the <a href="https://developer.android.com/samples/BasicManagedProfile/index.html">BasicManagedProfile.apk</a> for an example.</p>
+<p>Mobile Device Management (MDM) applications trigger the creation of the
+managed profile by sending an intent with action:
+<a href="https://android.googlesource.com/platform/frameworks/base/+/master/core/java/android/app/admin/DevicePolicyManager.java">DevicePolicyManager.ACTION_PROVISION_MANAGED_PROFILE</a>.
+Below is a sample intent that triggers the creation of the managed profile
+and sets the DeviceAdminSample as the profile owner:</p>
-<p>Mobile Device Management (MDM) applications trigger the creation of the managed
-profile by sending an intent with action:</p>
-
-<p><a href="https://android.googlesource.com/platform/frameworks/base/+/master/core/java/android/app/admin/DevicePolicyManager.java">DevicePolicyManager.ACTION_PROVISION_MANAGED_PROFILE</a></p>
-
-<p>Here is a sample intent that will trigger the creation of the managed profile
-and set the DeviceAdminSample as the profile owner:</p>
-
-<pre>
-adb shell am start -a android.app.action.PROVISION_MANAGED_PROFILE \
+<pre>adb shell am start -a android.app.action.PROVISION_MANAGED_PROFILE \
-c android.intent.category.DEFAULT \
- -e wifiSsid $(printf '%q' \"GoogleGuest\") \
+ -e wifiSsid $(printf '%q' \"WifiSSID\") \
-e deviceAdminPackage "com.google.android.deviceadminsample" \
-e android.app.extra.deviceAdminPackageName $(printf '%q'
.DeviceAdminSample\$DeviceAdminSampleReceiver) \
-e android.app.extra.DEFAULT_MANAGED_PROFILE_NAME "My Organisation"
</pre>
-<h2 id=device_owner_provisioning_via_nfc>Device Owner Provisioning via NFC</h2>
+<h2 id=device_owner_provisioning_via_nfc>Device owner provisioning</h2>
+<p>Use one of the following methods to set up device owner (DO)
+provisioning.</p>
-<p>Device owner provisioning via NFC is similar to the profile owner method but
-requires more bootstrapping before managed provisioning.</p>
+<h3 id=do_provision_nfc>Provisioning via NFC</h3>
+<p>DO provisioning via NFC is similar to the profile owner method but requires
+more bootstrapping. To use this method,
+<a href="http://developer.android.com/guide/topics/connectivity/nfc/nfc.html">NFC
+bump</a> the device during the initial setup step (i.e., first page of the setup
+wizard). This low-touch flow configures Wi-Fi, installs the DPC, and sets the
+DPC as device owner.</p>
-<p>To use this method, <a href="http://developer.android.com/guide/topics/connectivity/nfc/nfc.html">NFC bump</a> the device from the first page of setup wizard (SUW). This offers a low-touch
-flow and configures Wi-Fi, installs the DPC, and sets the DPC as device owner.</p>
-
-<p>Here is the typical NFC bundle:</p>
+<p>A typical NFC bundle includes the following:</p>
<pre>
EXTRA_PROVISIONING_DEVICE_ADMIN_PACKAGE_NAME
@@ -119,53 +126,59 @@
EXTRA_PROVISIONING_WIFI_SECURITY_TYPE
</pre>
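+
+<p>As an illustrative sketch, the provisioning properties carried in the NFC
+record might look like the following (these keys are the string values of the
+<code>EXTRA_PROVISIONING_*</code> constants above; the package name, download
+URL, and Wi-Fi values are hypothetical):</p>
+
+<pre>android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_NAME=com.example.dpc
+android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_DOWNLOAD_LOCATION=https://example.com/dpc.apk
+android.app.extra.PROVISIONING_WIFI_SSID=ExampleCorpWifi
+android.app.extra.PROVISIONING_WIFI_SECURITY_TYPE=WPA
+</pre>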
-<p>The device must have NFC configured to accept the managed provisioning mimetype
-from SUW:</p>
+<p>Devices must have NFC configured to accept the managed provisioning
+MIME type from the setup experience:</p>
-<pre>
-/packages/apps/Nfc/res/values/provisioning.xml
+<pre>/packages/apps/Nfc/res/values/provisioning.xml
<bool name="enable_nfc_provisioning">true</bool>
<item>application/com.android.managedprovisioning</item>
</pre>
-<h2 id=device_owner_provisioning_with_activation_code>Device Owner Provisioning with Activation Code</h2>
-
-<p>Select <em>Add Work Account</em> from the setup wizard. This triggers a
-lookup of the EMM from Android servers.</p>
-
-<p>The device installs the EMM app and starts provisioning flow. As an extra
-option, Android device administration supports the option of using email
-address with a six-digit activation code to bootstrap the process as part of
-setup wizard.</p>
+<h3 id=do_provision_cs>Provisioning via cloud services</h3>
+<p>Device owner provisioning via cloud services enables provisioning a device
+in device owner mode during out-of-the-box setup. The device can collect
+credentials (or tokens) and use them to perform a lookup against a cloud
+service, which can then initiate the device owner provisioning process.</p>
<h2 id=emm_benefits>EMM benefits</h2>
-<p>An EMM can help by conducting these tasks for you:</p>
+<p>An enterprise mobility management (EMM) app can help by conducting the
+following tasks:</p>
<ul>
- <li>Provision managed profile
+ <li>Provision managed profile</li>
<li>Apply security policies
<ul>
- <li>Set password complexity
- <li>Lockdowns: disable screenshots, sharing from managed profile, etc.
- </ul>
+ <li>Set password complexity</li>
+ <li>Lockdowns: disable screenshots, sharing from managed profile, etc.</li>
+ </ul></li>
<li>Configure enterprise connectivity
<ul>
- <li>Use WifiEnterpriseConfig to configure corporate Wi-Fi
- <li>Configure VPN on the device
- <li>Use DPM.setApplicationRestrictions() to configure corporate VPN
- </ul>
+ <li>Use WifiEnterpriseConfig to configure corporate Wi-Fi</li>
+ <li>Configure VPN on the device</li>
+ <li>Use <code>DPM.setApplicationRestrictions()</code> to configure corporate
+ VPN</li>
+ </ul></li>
<li>Enable corporate app Single Sign-On (SSO)
<ul>
<li>Install desired corporate apps
- <li>Use DPM.installKeyPair()to silently install corp client certs
- <li>Use DPM.setApplicationRestrictions() to configure hostnames, cert alias’ of
-corporate apps
- </ul>
+ <li>Use <code>DPM.installKeyPair()</code> to silently install corp client
+ certs</li>
+  <li>Use <code>DPM.setApplicationRestrictions()</code> to configure
+  hostnames and cert aliases of corporate apps</li>
+ </ul></li>
</ul>
-<p>Managed provisioning is just one piece of the EMM end-to-end workflow, with the
- end goal being to make corporate data accessible to apps in the managed profile.</p>
+<p>Managed provisioning is just one part of the EMM end-to-end workflow, with
+the end goal of making corporate data accessible to apps in the managed
+profile. For testing guidance, see
+<a href="{@docRoot}devices/tech/admin/testing-setup.html">Setting up Device
+Testing</a>.</p>
-<p>See <a href="testing-setup.html">Setting up Device Testing</a> for testing instructions.</p>
+<h2 id=automate>Automated provisioning testing</h2>
+<p>To automate the testing of enterprise provisioning processes, use
+the Android for Work (AfW) Test Harness. For details, see
+<a href="{@docRoot}devices/tech/admin/testing-provision.html">Testing Device
+Provisioning</a>.</p>
diff --git a/src/devices/tech/admin/testing-provision.jd b/src/devices/tech/admin/testing-provision.jd
new file mode 100644
index 0000000..dd66a3b
--- /dev/null
+++ b/src/devices/tech/admin/testing-provision.jd
@@ -0,0 +1,323 @@
+page.title=Testing Device Provisioning
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>The Android for Work (AfW) Test Harness is a test suite for validating the
+AfW compatibility of Android devices. It includes support apps, test cases,
+configuration files, and a test runner (<code>afw-test-tradefed</code>) built on
+<code>cts-tradefed</code>. Set up and run the AfW Test Harness after
+completing <a href="{@docRoot}devices/tech/admin/provision.html">Provisioning
+for Device Administration</a>.</p>
+
+<p class=note><strong>Note:</strong> Building and running the AfW Test Harness
+is similar to building and running the Android
+<a href="http://source.android.com/compatibility/cts/index.html">Compatibility
+Test Suite (CTS)</a>.</p>
+
+<h2 id=setup_env>Setting up a development environment</h2>
+<p>The development environment for the AfW Test Harness is similar to Android
+OS. Follow the steps in
+<a href="{@docRoot}source/requirements.html">Requirements</a> to set up a
+development machine.</p>
+
+<h2 id=download_source>Downloading source code</h2>
+<p>Download the AfW Test Harness source code using the steps in
+<a href="{@docRoot}source/downloading.html">Downloading the Source</a>. The AfW
+Test Harness source code is in the <code>./test/AfwTestHarness</code> project.
+The branch name determines the version of AfW Test Harness to download (each
+Android platform has a separate version of AfW Test Harness). For Android 6.0,
+the branch name is <code>afw-test-harness-marshmallow-dev</code>. To initialize
+the repo and download source code for this branch, use:</p>
+
+<pre>
+$ mkdir WORKING_DIRECTORY
+$ cd WORKING_DIRECTORY
+$ git config --global user.name "Your Name"
+$ git config --global user.email "you@example.com"
+$ repo init -u https://android.googlesource.com/platform/manifest -b afw-test-harness-marshmallow-dev
+$ repo sync -j24
+</pre>
+
+<p>To check out the source code for a different version, specify the branch with
+the corresponding tag. Available branches include:</p>
+
+<table>
+<tr>
+<th>Branch Name</th>
+<th>Supported Android Platform</th>
+</tr>
+<tr>
+<td>afw-test-harness-marshmallow-dev</td>
+<td>Android 6.0</td>
+</tr>
+<tr>
+<td>afw-test-harness-1.5</td>
+<td>Android 6.0</td>
+</tr>
+</table>
+
+<p>Other dependency projects required to build the harness are also downloaded
+with the source code.</p>
+
+<h3 id=view_studio>Viewing in Android Studio</h3>
+<p>To view and edit AfW source code in Android Studio:</p>
+<ol>
+<li>Run the following commands
+<pre>
+$ make idegen
+$ development/tools/idegen/idegen.sh
+</pre>
+</li>
+<li>In Android Studio, open <code>android.ipr</code>.</li>
+</ol>
+
+<p>The AfW Test Harness source code is in <code>test/AfwTestHarness</code>.</p>
+
+<h2 id=config_harness>Configuring the AfW Test Harness</h2>
+<p>You can customize the harness by configuring
+<code>test/AfwTestHarness/afw-test.props</code>. To run the harness
+successfully, complete the following steps:</p>
+<ol>
+<li>Configure the Wi-Fi network in <code>afw-test.props</code> with the
+following properties:
+<pre>wifi_ssid
+wifi_password (optional)
+wifi_security_type (optional, available options are: NONE, WEP or WPA)
+</pre>
+</li>
+<li>Obtain at least one account from a domain that is bound to Test DPC as
+its device policy controller. Specify the details in <code>afw-test.props</code>
+with the following properties:
+<pre>
+work_account_username
+work_account_password
+</pre>
+<p>The AfW Test Harness uses Test DPC to test provisioning flows, so accounts
+<strong>must</strong> bind to Test DPC to run the test harness.</p>
+</li>
+</ol>
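Taken together, a minimal <code>afw-test.props</code> might look like the following sketch, assuming the file follows standard key=value properties syntax (the SSID, account, and passwords are placeholder values):

```
# Wi-Fi network used during provisioning (placeholder values)
wifi_ssid=TestNetwork
wifi_security_type=WPA
wifi_password=test-wifi-password

# Work account from a domain bound to Test DPC (placeholder values)
work_account_username=afwtest@example.com
work_account_password=test-account-password
```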
+
+<h2 id=build_harness>Building the AfW Test Harness</h2>
+<p>Initialize the build configuration using:</p>
+<pre>
+$ source build/envsetup.sh
+$ lunch
+</pre>
+
+<p>Select a device type and press <strong>Enter</strong>.</p>
+
+<p>Build the harness using:</p>
+<pre>$ make afw-test-harness -j32</pre>
+<p>This creates a directory (<code>out/host/linux-x86/afw-th/android-cts</code>)
+with all necessary binaries, configuration files, and tools to run the test
+harness. This directory is also zipped into a file
+(<code>out/host/linux-x86/afw-th/android-afw-test-harness.zip</code>)
+for distribution.</p>
+
+<h2 id=run_harness>Running the AfW Test Harness</h2>
+<p>Use the following steps to run the AfW Test Harness:</p>
+<ol>
+<li>In your build environment, launch the test runner using:
+<pre>$ afw-test-tradefed</pre>
+This starts the <code>cts-tf</code> console, loads test plans, test cases,
+and <code>afw-test.props</code> from
+<code>out/host/linux-x86/afw-th/android-cts</code>.</li>
+<li>From the unzipped folder of <code>android-afw-test-harness.zip</code>,
+launch the test runner using:
+<pre>$ ./android-cts/tools/afw-test-tradefed</pre>
+This loads test plans, test cases, and <code>afw-test.props</code> from the
+<code>android-cts</code> directory. Ensure
+<code>./android-cts/repository/testcases/afw-test.props</code> has the work
+account and Wi-Fi configuration.</li>
+
+<li>Run a test plan. Each test plan is an XML file that contains a set of test
+packages from the <code>AfwTestHarness/tests</code> test package directory.
+Common plans include:
+
+<ul>
+<li><strong><code>afw-userdebug-build</code></strong>. Contains all test
+packages that require a userdebug build.</li>
+<li><strong><code>afw-user-build</code></strong>. Runs on a user build but
+requires the test device to be set up properly, including completing the initial
+setup and enabling USB debugging.</li>
+</ul>
+
+<br>To run the test plan <code>afw-userdebug-build</code>, use:
+<pre>$ cts-tf > run cts --plan afw-userdebug-build</pre>
+To see all test plans, use the command <code>list plans</code>. To view plan
+definitions, refer to
+<code>out/host/linux-x86/afw-th/android-cts/repository/plans</code>.
+<br>
+</li>
+<li>Run a test package. To run a single test package, use
+<pre>$ cts-tf > run cts --package com.android.afwtest.NfcProvisioning
+</pre>
+To view all packages, use the command <code>list packages</code>. For more
+options, use the command <code>run cts --help</code>.</li>
+</ol>
+
+<h2 id=debug_harness>Debugging the AfW Test Harness</h2>
+<p>Run all commands in the afw-test-tradefed console (<code>cts-tf</code>),
+which you can launch by running <code>afw-test-tradefed</code>.</p>
+<ul>
+
+<li>Display more information with the <code>-l INFO</code> or <code>-l
+DEBUG</code> flags. Example:
+<pre>$ cts-tf > run cts --plan afw-userdebug-build -l DEBUG</pre></li>
+
+<li>Run the test harness on a specific device with the <code>-s</code> flag.
+Example:
+<pre>$ cts-tf > run cts --plan afw-userdebug-build -l DEBUG -s device_sn</pre>
+</li>
+
+<li>Run test harness on all connected devices with the
+<code>--all-devices</code> flag. Example:
+<pre>$ cts-tf > run cts --plan afw-userdebug-build -l DEBUG --all-devices</pre>
+</li>
+
+<li>View current running executions using <code>list invocations</code> or
+<code>l i</code>.</li>
+
+<li>View summary of past test executions using <code>list results</code> or
+<code>l r</code>.</li>
+
+<li>View other <code>list</code> commands using <code>help list</code>.</li>
+
+<li>Monitor real-time logcat with an <code>afwtest</code> filter: open another
+terminal and start logcat using <code>adb logcat | grep afwtest</code>.
+After a test completes:
+<ul>
+<li>View logs in
+<code>out/host/linux-x86/afw-th/android-cts/repository/logs/<em>start-time</em></code>.
+The full device logcat and host log (<code>afw-test-tradefed</code> logs) are
+saved in separate zip files.</li>
+
+<li>Find relevant information by searching the device logcat for
+<strong>afwtest</strong>. Example: <code>zless
+out/host/linux-x86/afw-th/android-cts/repository/logs/<em>start-time</em>/device_logcat_<em>random-number</em>.zip
+| grep afwtest</code></li>
+
+<li>To view the full afw-test-tradefed log, use: <code>zless
+out/host/linux-x86/afw-th/android-cts/repository/logs/<em>start-time</em>/host_log_<em>random-number</em>.zip</code>
+</li>
+</ul>
+</li>
+<li>A test package automates an AfW provisioning flow by going through UI pages
+and recording a navigation log in the device logcat file for each page.
+Example: <code>afwtest.AutomationDriver:
+Navigating:com.android.afwtest.uiautomator.pages.gms.AddAccountPage</code>
+<br>UI pages for test package
+<code>com.android.afwtest.NfcProvisioning</code> include:<ul>
+<li>
+<code>com.android.afwtest.uiautomator.pages.managedprovisioning.NfcProvisioningPage</code>
+</li>
+<li><code>com.android.afwtest.uiautomator.pages.PageSkipper</code></li>
+<li><code>com.android.afwtest.uiautomator.pages.LandingPage</code></li>
+</ul>
+</li>
+<li>If a test failed during the provisioning process, logcat contains an error
+similar to:
+<pre>TestRunner: java.lang.RuntimeException: Failed to load page: com.android.afwtest.uiautomator.pages.packageinstaller.DeviceAccessPage
+</pre>
+This is typically caused by an error in a previous UI page or in the page that
+failed to load. Look for earlier error messages in logcat, then try to
+reproduce the issue manually by following the provisioning flow.</li>
+<li>If a test package fails:
+<ul>
+<li>A screenshot is saved to
+<code>out/host/linux-x86/afw-th/android-cts/repository/logs/<em>start-time</em></code>
+using the following syntax:
+<code>screenshot-test_<em>test_class_full_name</em>_<em>test_case_name</em>-<em>random_number</em>.png</code>.
+This information is also logged in the host log.</li>
+<li>A bug report is saved to
+<code>out/host/linux-x86/afw-th/android-cts/repository/logs/<em>start-time</em></code>
+as:
+<code>bug-<em>test_class_full_name</em>_<em>test_case_name</em>-<em>random_number</em>.zip</code>.
+</li>
+</ul>
+</li>
+<li>After all test packages execute, a screenshot is taken and saved to
+<code>out/host/linux-x86/afw-th/android-cts/repository/logs/<em>start-time</em></code>
+as: <code>screenshot-<em>random_number</em>.png</code>.
+This information is also logged in the host log.</li>
+</ul>
+
+<h2 id=faq>FAQ</h2>
+<p>For help with questions not answered below, contact
+<a href="mailto:afw-testharness-support@google.com">afw-testharness-support@google.com</a>.
+</p>
+
+<p><strong>Can I run test plan <code>afw-userdebug-build</code> on a device
+flashed with user build?</strong></p>
+<p><em>No. Test packages in the <code>afw-userdebug-build</code> plan factory
+reset the testing device before running the actual test flow and require
+<code>adb</code> debugging to be auto-enabled. With a user build,
+<code>adb</code> debugging can be enabled only by manually changing the
+setting in Developer options.</em></p>
+
+<p><strong>Can I run test plan <code>afw-user-build</code> on a device flashed
+with userdebug build?</strong></p>
+<p><em>Yes, but we recommend that you run this test plan on a user build.</em></p>
+
+<p><strong>Sometimes my test fails because UI loading takes too much time. How
+can I fix this?</strong></p>
+<p><em>Configure the <code>timeout_size</code> setting
+in <code>./android-cts/repository/testcases/afw-test.props</code>. Valid
+settings are: S, M, L, XL, XXL.</em></p>
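For example, assuming the file follows standard key=value properties syntax (as the other settings in this document do), a larger timeout could be configured as:

```
timeout_size=XL
```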
+
+<p><strong>The test package
+<code>com.android.afwtest.NfcProvisioning</code> (or
+<code>SuwDoProvisioning</code>) fails on my device because the installed initial
+setup (i.e., Setup Wizard) shows customized UI (such as Terms &amp; Conditions)
+after provisioning is complete. How can I skip this customized UI?</strong></p>
+<p><em>There should be minimal UI after the provisioning process. The test
+harness automatically skips such UI if it has a button whose text or content
+description contains any of the following words: Skip, Finish, Done, Accept,
+Agree, Next, Continue, or Proceed. Alternatively,
+you can define a button in <code>afw-test.props</code> to configure the test
+harness to skip your UI. Example:</em></p>
+<pre>
+oem_widgets=your_btn
+your_btn.text=your_customized_text
+your_btn.package=your_package
+your_btn.action=click
+</pre>
+<p><em>To define multiple widgets, separate them using commas.</em></p>
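For instance, two hypothetical OEM buttons could be declared as follows (all widget names, text, and package names are illustrative):

```
oem_widgets=skip_btn,agree_btn
skip_btn.text=Skip anyway
skip_btn.package=com.example.oem.setup
skip_btn.action=click
agree_btn.text=I agree
agree_btn.package=com.example.oem.terms
agree_btn.action=click
```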
+
+<p><strong>The test package
+<code>com.android.afwtest.NfcProvisioning</code> (or
+<code>SuwDoProvisioning</code>) failed and the last UI screen is "Verify your
+account." Why does this happen and how can I recover the testing device?
+</strong></p>
+<p><em>This failure occurs because the previous test package failed to clear
+Factory Reset Protection at the end of the test. You must manually enter the
+account to unlock the device.</em></p>
+
+<p><strong>My device needs more time to factory reset. Can I extend the factory
+reset timeout?</strong></p>
+<p><em>Yes. Configure the <code>factory_reset_timeout_min</code> setting in
+<code>afw-test.props</code>. The value is in minutes; set it to any number of
+minutes that works for your device.</em></p>
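For example, to allow ten minutes for factory reset (the value is illustrative; choose one that matches your device):

```
factory_reset_timeout_min=10
```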
diff --git a/src/devices/tech/admin/testing-setup.jd b/src/devices/tech/admin/testing-setup.jd
index 678c04b..727f685 100644
--- a/src/devices/tech/admin/testing-setup.jd
+++ b/src/devices/tech/admin/testing-setup.jd
@@ -1,4 +1,4 @@
-page.title=Setting up Device Testing
+page.title=Testing Device Administration
@jd:body
<!--
@@ -24,70 +24,90 @@
</div>
</div>
-<p>These are the essential elements that must exist for OEM devices to ensure
-minimal support for managed profiles:</p>
+<p>To ensure minimal support for managed profiles, OEM devices must contain the
+following essential elements:</p>
<ul>
- <li>Profile Owner as described in <a
-href="https://developer.android.com/training/enterprise/app-compatibility.html">Ensuring
-Compatibility with Managed Profiles</a>
- <li>Device Owner
- <li>Activation Code Provisioning
+ <li>Profile owner (as described in
+ <a href="https://developer.android.com/training/enterprise/app-compatibility.html">Ensuring
+ Compatibility with Managed Profiles</a>)</li>
+ <li>Device owner</li>
+ <li>Activation code provisioning</li>
</ul>
-<p>See <a href="implement.html">Implementing Device Administration</a> for the complete list of requirements.</p>
-<h2 id=summary>Summary</h2>
-<p>To test your device administration features:</p>
+<p>For a complete list of requirements, see
+<a href="{@docRoot}devices/tech/admin/implement.html">Implementing Device
+Administration</a>.</p>
+
+<p>To test device administration features, use the TestDPC application
+(described below) as the device owner; consider also working directly with
+other enterprise mobility management (EMM) providers.</p>
+
+<h2 id=set_up_the_device_owner_for_testing>Set up device owner for testing</h2>
+<p>Use the following instructions to set up a device owner testing environment.</p>
<ol>
- <li>For device owner, use the <a
-href="https://developer.android.com/samples/BasicManagedProfile/index.html">BasicManagedProfile.apk</a>
-test app.
- <li>Consider working with other enterprise mobility management (EMM) providers
-directly.
-</ol>
-
-<h2 id=set_up_the_device_owner_for_testing>Set up the device owner for testing</h2>
-<ol>
- <li>Device MUST be built with <strong>userdebug</strong> or <strong>eng</strong> build.
+ <li>Set up the device:
+ <ol>
+ <li style="list-style-type: lower-alpha">Ensure the device uses a
+ <strong>userdebug</strong> or <strong>eng</strong> build.</li>
+ <li style="list-style-type: lower-alpha">Factory reset the target device.</li>
+ </ol></li>
+ <li>Set up the testing application using one of the following methods:
+ <ul>
+ <li><a href="https://play.google.com/store/apps/details?id=com.afwsamples.testdpc&hl=en">Download
+ the TestDPC application</a> (available from Google Play).</li>
+ <li><a href="https://github.com/googlesamples/android-testdpc/">Build
+ the TestDPC application</a> (available from GitHub).</li>
+ </ul>
</li>
- <li>Factory reset the target device (and continue with the next steps in the
- meantime).
- </li>
- <li>Download <a
- href="http://developer.android.com/downloads/samples/BasicManagedProfile.zip">BasicManagedProfile.zip</a>. (Also see the <a
- href="http://developer.android.com/samples/BasicManagedProfile/index.html">BasicManagedProfile</a> documentation.)</li>
- <li>Unzip the file.
- <li>Navigate (<code>cd</code>) to the unzipped directory.</li>
- <li>If you don't have it, download the <a href="http://developer.android.com/sdk/index.html#Other">Android SDK Tools</a> package.</li>
- <li>Create a file with the name <code>local.properties</code> containing the following single
- line:<br>
- <code>sdk.dir=<em><path to your android SDK folder></em></code><br>
- <li>On Linux and Mac OS, run:<br>
- <code>./gradlew assembleDebug</code><br>
- Or on windows run:<br>
- <code>gradlew.bat assembleDebug</code></li>
- <li>If the build is unsuccessful because you have an outdated android SDK, run:<br>
- <code><em><your android sdk folder></em>/tools/android update sdk -u -a</code></li>
- <li>Wait for factory reset to complete if it hasn’t yet.<br>
- <p class="Caution"><strong>Caution</strong>: Stay on the first screen
- after factory reset and do not finish the setup wizard.</li>
- <li>Install the BasicManagedProfile app by running the following command:<br>
- <code>adb install ./Application/build/outputs/apk/Application-debug.apk </code>
- </li>
- <li>Set this app as the device owner by running this command:<br><code>$ adb shell am start -a
- com.android.managedprovisioning.ACTION_PROVISION_MANAGED_DEVICE --es
- android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_NAME
- com.example.android.basicmanagedprofile</code>
+ <li>Set the TestDPC app as the device owner using the following command:<br>
+ <pre>$ adb shell dpm set-device-owner "com.afwsamples.testdpc/.DeviceAdminReceiver"</pre>
</li>
<li>Go through device owner setup on the device (encrypt, select Wi-Fi, etc.)</li>
</ol>
-<h2 id=verify_the_device_owner_was_correctly_setup>Verify the device owner was correctly setup</h2>
-<ol>
- <li>Go to <em>Settings > Security > Device Administrators</em>.
- </li>
- <li>Confirm the BasicManagedProfile is in the list and verify it cannot be
- disabled. (This signifies it is a device owner.)
- </li>
-</ol>
+<h2 id=verify_the_device_owner_was_correctly_setup>Verify device owner setup</h2>
+<p>To verify the device owner was correctly set up, go to <em>Settings >
+Security > Device Administrators</em> and confirm TestDPC is in the
+list. Verify it cannot be disabled (this signifies it is a device owner).</p>
+
+<h2 id=automate>Automated provisioning testing</h2>
+<p>To automate the testing of enterprise provisioning processes, use
+the Android for Work (AfW) Test Harness. For details, see
+<a href="{@docRoot}devices/tech/admin/testing-provision.html">Testing Device
+Provisioning</a>.</p>
+
+<h2 id="troubleshooting">Bug reports and logs</h2>
+<p>In Android 7.0, device owner Device Policy Clients (DPCs) can get bug reports
+and view logs for enterprise processes on a managed device.</p>
+
+<p>To trigger a bug report (i.e., the equivalent data collected by <code>adb
+bugreport</code> containing dumpsys, dumpstate, and logcat data), use
+<code>DevicePolicyManager.requestBugreport()</code>. After the bug report is
+collected, the user is prompted to give consent to send the bug report data.
+Results are received by
+<code>DeviceAdminReceiver.onBugreport[Failed|Shared|SharingDeclined]</code>. For
+details on bug report contents, see
+<a href="{@docRoot}source/read-bug-reports.html">Reading Bug Reports</a>.</p>
+
+<p>In addition, device owner DPCs can collect logs related to actions a user
+has taken on a managed device. Enterprise process logging is required for all
+devices that report <code>device_admin</code> and is enabled by a new security
+log buffer readable only by the system server (i.e., <code>adb logcat -b
+security</code> cannot read the buffer). The ActivityManager service and
+Keyguard components log the following events to the security buffer:</p>
+
+<ul>
+<li>Application processes starting</li>
+<li>Keyguard actions (e.g., unlock failure and success)</li>
+<li><code>adb</code> commands issued to the device</li>
+</ul>
+
+<p>To optionally retain logs across reboots (not cold boot) and make these logs
+available to device owner DPCs, a device must have a kernel with
+<code>pstore</code> and <code>pmsg</code> enabled, and DRAM powered and
+refreshed through all stages of reboot to avoid corruption of the logs retained
+in memory. To enable support, use the
+<code>config_supportPreRebootSecurityLogs</code> setting in
+<code>frameworks/base/core/res/res/values/config.xml</code>.</p>
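For example, a device overlay might enable the setting as follows (assuming the resource is a boolean, as the name suggests):

```
<!-- In frameworks/base/core/res/res/values/config.xml -->
<bool name="config_supportPreRebootSecurityLogs">true</bool>
```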
diff --git a/src/devices/tech/config/images/namespace-libraries.png b/src/devices/tech/config/images/namespace-libraries.png
new file mode 100644
index 0000000..9152fa1
--- /dev/null
+++ b/src/devices/tech/config/images/namespace-libraries.png
Binary files differ
diff --git a/src/devices/tech/config/namespaces_libraries.jd b/src/devices/tech/config/namespaces_libraries.jd
new file mode 100644
index 0000000..1839d71
--- /dev/null
+++ b/src/devices/tech/config/namespaces_libraries.jd
@@ -0,0 +1,79 @@
+page.title=Namespaces for Native Libraries
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>
+Android 7.0 introduces namespaces for native libraries to limit internal API
+visibility and resolve situations in which apps accidentally end up using
+platform libraries instead of their own. For application-specific changes, see
+the <a
+href="http://android-developers.blogspot.com/2016/06/improving-stability-with-private-cc.html">Improving
+Stability with Private C/C++ Symbol Restrictions in Android 7.0</a> Android
+Developers blog post.
+</p>
+
+<h2 id="architecture">Architecture</h2>
+
+<p>
+The change separates system libraries from application libraries and makes it
+hard to use internal system libraries by accident (and vice versa).
+</p>
+
+<img src="images/namespace-libraries.png" alt="Namespaces for native libraries" width="466" id="namespace-libraries" />
+<p class="img-caption">
+ <strong>Figure 1.</strong> Namespaces for native libraries
+</p>
+
+<p>
+Namespaces for native libraries prevent apps from using private-platform native
+APIs (as was done with OpenSSL). It also removes situations where apps
+accidentally end up using platform libraries instead of their own (as witnessed
+with <code>libpng</code>).
+</p>
+
+<h2 id="adding-additional-native-libraries">Adding additional native
+libraries</h2>
+
+<p>
+In addition to standard public native libraries, vendors may choose to provide
+additional native libraries accessible to apps by placing them under the
+<code>/vendor</code> library folder (<code>/vendor/lib</code> for 32-bit
+libraries and <code>/vendor/lib64</code> for 64-bit libraries) and listing them
+in <code>/vendor/etc/public.libraries.txt</code>.
+</p>
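For example, a vendor exposing two additional libraries would list one library name per line (the library names below are illustrative):

```
libvendorfoo.so
libvendorbar.so
```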
+
+<h2 id="updating-app-non-public">Updating apps to not use non-public native libraries</h2>
+
+<p>
+This feature is enabled only for applications targeting SDK version 24 or
+later; for backward compatibility, see <a
+href="http://android-developers.blogspot.com/2016/06/improving-stability-with-private-cc.html">Table
+1. What to expect if your app is linking against private native libraries</a>.
+The list of Android native libraries accessible to apps (also known as public
+native libraries) appears in CDD section 3.1.1. Apps targeting SDK version 24
+or later that use any non-public libraries should be updated. For details, see
+<a href="https://developer.android.com/preview/behavior-changes.html#ndk">NDK
+Apps Linking to Platform Libraries</a>.
+</p>
diff --git a/src/devices/tech/config/uicc.jd b/src/devices/tech/config/uicc.jd
index 762e25d..c93b91c 100644
--- a/src/devices/tech/config/uicc.jd
+++ b/src/devices/tech/config/uicc.jd
@@ -24,55 +24,60 @@
</div>
</div>
-<p>Android 5.1 introduced a new mechanism to grant special privileges for APIs relevant
-to the Universal Integrated Circuit Card (UICC) owner’s apps. The Android platform will load
-certificates stored on a UICC and grant
-permission to apps signed by these certificates to make calls to a handful of
-special APIs. Since carriers have full control of the UICC, this mechanism
-provides a secure and flexible way to manage apps from the Mobile Network
-Operator (MNO) hosted on generic application distribution channels such as
-Google Play but still have special privileges on devices without the need for
-the apps to be signed by the per-device platform certificate or be
-pre-installed as a system app.</p>
+<p>Android 5.1 introduced a mechanism to grant special privileges for APIs
+relevant to the Universal Integrated Circuit Card (UICC) owner’s apps. The
+Android platform loads certificates stored on a UICC and grants permission to
+apps signed by these certificates to make calls to a handful of special APIs.
+</p>
+<p>Android 7.0 extends this feature to support other storage sources, such as
+the Access Rule File (ARF), for UICC carrier privilege rules, dramatically
+increasing the number of carriers that can use the APIs. For an API reference,
+see <a href="#carrierconfigmanager">CarrierConfigManager</a>; for instructions,
+see <a href="{@docRoot}devices/tech/config/carrier.html">Carrier
+Configuration</a>.</p>
+
+<p>Since carriers have full control of the UICC, this mechanism provides a
+secure and flexible way to manage apps from the Mobile Network Operator (MNO)
+hosted on generic application distribution channels (such as Google Play) while
+retaining special privileges on devices and without the need to sign apps with
+the per-device platform certificate or pre-install as a system app.</p>
<h2 id=rules_on_uicc>Rules on UICC</h2>
-<p>Storage on the UICC is compatible with the <a
-href="http://www.globalplatform.org/specificationsdevice.asp">GlobalPlatform
+<p>Storage on the UICC is compatible with the
+<a href="http://www.globalplatform.org/specificationsdevice.asp">GlobalPlatform
Secure Element Access Control specification</a>. The application identifier
-(AID) on card is A00000015141434C00, and the standard GET DATA command is used
-to fetch rules stored on the card. You may update these rules via card
-over-the-air (OTA) update. Data hierarchy is as follows (noting the
-two-character letter and number combination in parentheses is the object tag).
-(An extension to spec is under review.)</p>
+(AID) on card is <code>A00000015141434C00</code>, and the standard GET DATA
+command is used to fetch rules stored on the card. You may update these rules
+via card over-the-air (OTA) update.</p>
-<p>Each rule is a REF-AR-DO (E2) and consists of a concatenation of a REF-DO and
-an AR-DO:</p>
+<h3 id=data_hierarchy>Data hierarchy</h3>
+<p>UICC rules use the following data hierarchy (the two-character letter and
+number combination in parentheses is the object tag). Each rule is a REF-AR-DO
+(E2) and consists of a concatenation of a REF-DO and an AR-DO:</p>
<ul>
<li>REF-DO (E1) contains a DeviceAppID-REF-DO or a concatenation of a
-DeviceAppID-REF-DO and a PKG-REF-DO.
+ DeviceAppID-REF-DO and a PKG-REF-DO.
<ul>
- <li>DeviceAppID-REF-DO (C1) stores the SHA1 (20 bytes) or SHA256 (32 bytes)
-signature of the certificate.
- <li>PKG-REF-DO (CA) is the full package name string defined in manifest, ASCII
-encoded, max length 127 bytes.
- </ul>
- <li>AR-DO (E3) is extended to include PERM-AR-DO (DB), which is an 8-byte bit mask
-representing 64 separate permissions.
+ <li>DeviceAppID-REF-DO (C1) stores the SHA-1 (20 bytes) or SHA-256 (32 bytes)
+ signature of the certificate.
+ <li>PKG-REF-DO (CA) is the full package name string defined in the manifest,
+ ASCII encoded, max length 127 bytes.
+ </ul></li>
+ <li>AR-DO (E3) is extended to include PERM-AR-DO (DB), which is an 8-byte bit
+ mask representing 64 separate permissions.</li>
</ul>
-<p>If PKG-REF-DO is not present, any app signed by the certificate will be granted
+<p>If PKG-REF-DO is not present, any app signed by the certificate is granted
access; otherwise both certificate and package name need to match.</p>
-<h3 id=example>Example</h3>
+<h3 id=rule_example>Rule example</h3>
+<p>The application name is <code>com.google.android.apps.myapp</code> and the
+SHA-1 certificate in hex string is:</p>
+<pre>AB:CD:92:CB:B1:56:B2:80:FA:4E:14:29:A6:EC:EE:B6:E5:C1:BF:E4</pre>
-<p>App name: com.google.android.apps.myapp<br>
-Sha1 of certificate in hex string:</p>
-<pre>
-AB:CD:92:CB:B1:56:B2:80:FA:4E:14:29:A6:EC:EE:B6:E5:C1:BF:E4</pre>
-
-<p>Rule on UICC in hex string:</p>
+<p>The rule on UICC in hex string is:</p>
<pre>
E243 <= 43 is value length in hex
E135
@@ -82,229 +87,256 @@
DB08 0000000000000001
</pre>
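As a sketch of how the DeviceAppID-REF-DO portion of such a rule is assembled, the following shell fragment strips the separators from the SHA-1 fingerprint and prepends the C1 tag and length byte (the helper itself is illustrative, not part of any Android tooling):

```shell
# Illustrative helper: build the DeviceAppID-REF-DO (tag C1) hex string
# for the certificate hash used in the example above.
SHA1="AB:CD:92:CB:B1:56:B2:80:FA:4E:14:29:A6:EC:EE:B6:E5:C1:BF:E4"
HASH=$(printf '%s' "$SHA1" | tr -d ':')   # raw hex, separators removed
LEN=$(( ${#HASH} / 2 ))                   # 20 bytes for SHA-1, 32 for SHA-256
DO=$(printf 'C1%02X%s' "$LEN" "$HASH")    # tag, length byte, then the hash
echo "$DO"
```

The C114 prefix in the output matches the C114 line of the rule example above (tag C1, length 0x14 = 20 bytes).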
+<h2 id=arf>Access Rule File (ARF) support</h2>
+<p>Android 7.0 adds support for reading carrier privilege rules from the Access
+Rule File (ARF).</p>
+<p>The Android platform first attempts to select the Access Rule Applet (ARA)
+application identifier (AID) <code>A00000015141434C00</code>. If it doesn't find
+the AID on the Universal Integrated Circuit Card (UICC), it falls back to ARF by
+selecting PKCS15 AID <code>A000000063504B43532D3135</code>. Android then reads
+Access Control Rules File (ACRF) at <code>0x4300</code> and looks for entries
+with AID <code>FFFFFFFFFFFF</code>. Entries with different AIDs are ignored, so
+rules for other use cases can co-exist.</p>
+<p>Example ACRF content in hex string:</p>
+<pre>30 10 A0 08 04 06 FF FF FF FF FF FF 30 04 04 02 43 10</pre>
+
+<p>Example Access Control Conditions File (ACCF) content:</p>
+<pre>30 16 04 14 61 ED 37 7E 85 D3 86 A8 DF EE 6B 86 4B D8 5B 0B FA A5 AF 81
+</pre>
+
+<p>In the above example, <code>0x4310</code> is the ACCF address, which contains
+the certificate hash
+<code>61:ED:37:7E:85:D3:86:A8:DF:EE:6B:86:4B:D8:5B:0B:FA:A5:AF:81</code>. Apps
+signed by this certificate are granted carrier privileges.</p>
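The certificate hash can be read straight out of the ACCF record; as an illustrative check, the following shell fragment drops the four header bytes (30 16 04 14, per the example above) and reformats the remaining 20 bytes with colon separators:

```shell
# Illustrative: extract the certificate hash from the example ACCF record.
# Assumed layout, from the example above: 30 <len> 04 <hash-len> <hash bytes>.
ACCF="30 16 04 14 61 ED 37 7E 85 D3 86 A8 DF EE 6B 86 4B D8 5B 0B FA A5 AF 81"
HASH=$(printf '%s' "$ACCF" | cut -d' ' -f5- | tr ' ' ':')
echo "$HASH"
```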
+
<h2 id=enabled_apis>Enabled APIs</h2>
-<p>Currently we support the following APIs, listed below (refer to
-developer.android.com for more details).</p>
+<p>Android supports the following APIs.</p>
<h3 id=telephonymanager>TelephonyManager</h3>
-<p>API to check whether calling application has been granted carrier privileges:</p>
+<ul>
+<li>API to allow the carrier application to ask UICC for a challenge/response:
+<a href="https://developer.android.com/reference/android/telephony/TelephonyManager.html#getIccAuthentication(int,%20int,%20java.lang.String)"><code>getIccAuthentication</code></a>.
+</li>
-<pre>
-<a
-href="http://developer.android.com/reference/android/telephony/TelephonyManager.html#hasCarrierPrivileges()">hasCarrierPrivileges</a>
-</pre>
+<li>API to check whether the calling application has been granted carrier
+privileges:
+<a href="http://developer.android.com/reference/android/telephony/TelephonyManager.html#hasCarrierPrivileges()"><code>hasCarrierPrivileges</code></a>.
+</li>
-<p>APIs for brand and number override:</p>
+<li>APIs to override brand and number:
+<ul>
+ <li><code>setOperatorBrandOverride</code></li>
+ <li><code>setLine1NumberForDisplay</code></li>
+ <li><code>setVoiceMailNumber</code></li>
+</ul></li>
-<pre>
-setOperatorBrandOverride
-setLine1NumberForDisplay
-setVoiceMailNumber
-</pre>
+<li>APIs for direct UICC communication:
+<ul>
+ <li><code>iccOpenLogicalChannel</code></li>
+ <li><code>iccCloseLogicalChannel</code></li>
+ <li><code>iccExchangeSimIO</code></li>
+ <li><code>iccTransmitApduLogicalChannel</code></li>
+ <li><code>iccTransmitApduBasicChannel</code></li>
+ <li><code>sendEnvelopeWithStatus</code></li>
+</ul></li>
-<p>APIs for direct UICC communication:</p>
-
-<pre>
-iccOpenLogicalChannel
-iccCloseLogicalChannel
-iccExchangeSimIO
-iccTransmitApduLogicalChannel
-iccTransmitApduBasicChannel
-sendEnvelopeWithStatus
-</pre>
-
-<p>API to set device mode to global:</p>
-
-<pre>
-setPreferredNetworkTypeToGlobal
-</pre>
+<li>API to set device mode to global:
+<code>setPreferredNetworkTypeToGlobal</code>.</li>
+</ul>
<h3 id=smsmanager>SmsManager</h3>
-<p>API allows caller to create new incoming SMS messages:</p>
+<p>API to allow the caller to create new incoming SMS messages:
+<code>injectSmsPdu</code>.</p>
-<pre>
-injectSmsPdu
-</pre>
+<h3 id=carrierconfigmanager>CarrierConfigManager</h3>
-<h4 id=carriermessagingservice>CarrierMessagingService</h4>
+<p>API to notify that the configuration has changed:
+<code>notifyConfigChangedForSubId</code>. For instructions, see
+<a href="{@docRoot}devices/tech/config/carrier.html">Carrier Configuration</a>.
+</p>
-<p>A service that receives calls from the system when new SMS and MMS are
-sent or
-received. To extend this class, you must declare the service in your manifest
-file with the android.Manifest.permission#BIND_CARRIER_MESSAGING_SERVICE
-permission and include an intent filter with the #SERVICE_INTERFACE action.</p>
+<h3 id=carriermessagingservice>CarrierMessagingService</h3>
-<pre>
-onFilterSms
-onSendTextSms
-onSendDataSms
-onSendMultipartTextSms
-onSendMms
-onDownloadMms
-</pre>
+<p>Service that receives calls from the system when new SMS and MMS messages
+are sent or received. To extend this class, declare the service in your manifest file
+with the <code>android.Manifest.permission#BIND_CARRIER_MESSAGING_SERVICE</code>
+permission and include an intent filter with the <code>#SERVICE_INTERFACE</code>
+action. APIs include:</p>
+<ul>
+ <li><code>onFilterSms</code></li>
+ <li><code>onSendTextSms</code></li>
+ <li><code>onSendDataSms</code></li>
+ <li><code>onSendMultipartTextSms</code></li>
+ <li><code>onSendMms</code></li>
+ <li><code>onDownloadMms</code></li>
+</ul>
-<h4 id=telephonyprovider>TelephonyProvider</h4>
+<h3 id=telephonyprovider>TelephonyProvider</h3>
-<p>Content provider APIs that allow modification to the telephony database, value
-fields are defined at Telephony.Carriers:</p>
-
-<pre>
-insert, delete, update, query
-</pre>
-
-<p>See the <a
-href="https://developer.android.com/reference/android/provider/Telephony.html">Telephony
-reference on developer.android.com</a> for additional information.</p>
+<p>Content provider APIs to allow modifications (insert, delete, update, query)
+to the telephony database. Value fields are defined in
+<a href="https://developer.android.com/reference/android/provider/Telephony.Carriers.html"><code>Telephony.Carriers</code></a>;
+for more details, refer to the
+<a href="https://developer.android.com/reference/android/provider/Telephony.html">Telephony</a>
+API reference on developer.android.com.</p>
<h2 id=android_platform>Android platform</h2>
<p>On a detected UICC, the platform will construct internal UICC objects that
-include carrier privilege rules as part of the UICC. <a
-href="https://android.googlesource.com/platform/frameworks/opt/telephony/+/master/src/java/com/android/internal/telephony/uicc/UiccCarrierPrivilegeRules.java">UiccCarrierPrivilegeRules.java</a>
-will load rules, parse them from the UICC card, and cache them in memory. When
-a privilege check is needed, UiccCarrierPrivilegeRules will compare the caller
-certificate with its own rules one by one. If the UICC is removed, rules will
-be destroyed along with the UICC object.</p>
+include carrier privilege rules as part of the UICC.
+<a href="https://android.googlesource.com/platform/frameworks/opt/telephony/+/master/src/java/com/android/internal/telephony/uicc/UiccCarrierPrivilegeRules.java"><code>UiccCarrierPrivilegeRules.java</code></a>
+loads rules, parses them from the UICC card, and caches them in memory. When
+a privilege check is needed, <code>UiccCarrierPrivilegeRules</code> compares the
+caller certificate with its own rules one by one. If the UICC is removed, rules
+are destroyed along with the UICC object.</p>
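The one-by-one comparison described above can be modeled as a linear scan. This is a deliberately simplified, hypothetical sketch, not the actual <code>UiccCarrierPrivilegeRules</code> code, which also handles rule loading, package names, and access rule formats:

```java
import java.util.Arrays;
import java.util.List;

// Simplified model of the privilege check: compare the caller's certificate
// hash against each cached rule in turn.
public class PrivilegeCheckSketch {
    private final List<byte[]> ruleHashes;  // hashes cached from the UICC

    PrivilegeCheckSketch(List<byte[]> ruleHashes) {
        this.ruleHashes = ruleHashes;
    }

    // O(n) in the number of rules, which is why a very large rule set
    // adds latency to every check (see the FAQ below).
    boolean hasCarrierPrivileges(byte[] callerCertHash) {
        for (byte[] rule : ruleHashes) {
            if (Arrays.equals(rule, callerCertHash)) {
                return true;
            }
        }
        return false;
    }
}
```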
+
+<h2 id=validation>Validation</h2>
+<p>The Android 7.0 CTS includes tests for carrier APIs in
+<code>CtsCarrierApiTestCases.apk</code>. Because this feature depends on
+certificates on the UICC, you must prepare the UICC to pass these tests.</p>
+
+<h3 id=prepare_uicc>Preparing the UICC</h3>
+<p>By default, <code>CtsCarrierApiTestCases.apk</code> is signed by the Android
+developer key, with hash value
+<code>61:ED:37:7E:85:D3:86:A8:DF:EE:6B:86:4B:D8:5B:0B:FA:A5:AF:81</code>. The
+tests also print the expected certificate hash if the certificates on the UICC
+do not match.</p>
+<p>Example output:</p>
+<pre>
+junit.framework.AssertionFailedError: This test requires a SIM card with carrier privilege rule on it.
+Cert hash: 61ed377e85d386a8dfee6b864bd85b0bfaa5af81
+</pre>
+
+<p>You may need a developer UICC to update it with the correct applet and
+certificate rules. However, the UICC does not require active cellular service to
+pass CTS tests.</p>
+
+<h3 id=run_tests>Running tests</h3>
+<p>For convenience, the Android 7.0 CTS supports a device token that restricts
+tests to run only on devices configured with the same token. Carrier API CTS tests
+support the device token <code>sim-card-with-certs</code>. For example, the
+following device token restricts carrier API tests to run only on device
+<code>abcd1234</code>:</p>
+<pre>cts-tradefed run cts --device-token abcd1234:sim-card-with-certs</pre>
+
+<p>When running a test without using a device token, the test runs on all
+devices.</p>
<h2 id=faq>FAQ</h2>
-<p><strong>How can certificates be updated on the UICC?
-</strong></p>
+<p><strong>How can certificates be updated on the UICC?</strong></p>
-<p><em>A: Use existing card OTA update mechanism.
-</em></p>
+<p><em>A: Use existing card OTA update mechanism.</em></p>
-<p><strong>Can it co-exist with other rules?
-</strong></p>
+<p><strong>Can it co-exist with other rules?</strong></p>
<p><em>A: It’s fine to have other security rules on the UICC under same AID; the
-platform will filter them out automatically.
-</em></p>
+platform will filter them out automatically.</em></p>
<p><strong>What happens when the UICC is removed for an app that relies on the
-certificates on it?
-</strong></p>
+certificates on it?</strong></p>
-<p><em>A: The app will lose its privileges because the rules associated with the UICC
-are destroyed on UICC removal.
-</em></p>
+<p><em>A: The app will lose its privileges because the rules associated with the
+UICC are destroyed on UICC removal.</em></p>
-<p><strong>Is there a limit on the number of certificates on the UICC?
-</strong></p>
+<p><strong>Is there a limit on the number of certificates on the UICC?</strong>
+</p>
-<p><em>A: The platform doesn’t limit the number of certificates; but because the check
-is linear, too many rules may incur a latency for check.
-</em></p>
+<p><em>A: The platform doesn’t limit the number of certificates, but because the
+check is linear, a large number of rules may add latency to each check.</em></p>
<p><strong>Is there a limit to number of APIs we can support via this method?
</strong></p>
-<p><em>A: No, but we limit the scope of APIs to carrier related.
-</em></p>
+<p><em>A: No, but we limit the scope of APIs to carrier-related functionality.</em></p>
-<p><strong>Are there some APIs prohibited from using this method? If so, how do you
-enforce them? (ie. Will you have tests to validate which APIs are supported via
-this method?)
-</strong></p>
+<p><strong>Are there some APIs prohibited from using this method? If so, how do
+you enforce them? (i.e. Will you have tests to validate which APIs are supported
+via this method?)</strong></p>
-<p><em>A: Please refer to the "API Behavioral Compatibility" section of the <a
-href="{@docRoot}compatibility/android-cdd.pdf">Android Compatibility Definition
-Document CDD)</a>. We have some CTS tests to make sure the permission model of
-the APIs is not changed.
-</em></p>
+<p><em>A: See the "API Behavioral Compatibility" section of the
+<a href="{@docRoot}compatibility/cdd.html">Android Compatibility Definition
+Document (CDD)</a>. We have some CTS tests to make sure the permission model of
+the APIs is not changed.</em></p>
-<p><strong>How does this work with the multi-SIM feature?
-</strong></p>
+<p><strong>How does this work with the multi-SIM feature?</strong></p>
<p><em>A: The default SIM that gets set by the user will be used.</em></p>
-<p><strong>Does this in any way interact or overlap with other SE access technologies e.g.
-SEEK?
-<em>A: As an example, SEEK uses the same AID as on the UICC. So the rules co-exist
-and are filtered by either SEEK or UiccCarrierPrivileges.</em>
-</strong></p>
+<p><strong>Does this in any way interact or overlap with other SE access
+technologies, e.g. SEEK?</strong></p>
+<p><em>A: As an example, SEEK uses the same AID as on the UICC. So the rules
+co-exist and are filtered by either SEEK or UiccCarrierPrivileges.</em></p>
-<p><strong>When is it a good time to check carrier privileges?
-<em>A: After the SIM state loaded broadcast.</em>
-</strong></p>
+<p><strong>When is it a good time to check carrier privileges?</strong></p>
+<p><em>A: After the SIM state loaded broadcast.</em></p>
-<p><strong>Can OEMs disable part of carrier APIs?
-</strong></p>
+<p><strong>Can OEMs disable part of carrier APIs?</strong></p>
-<p><em>A: No. We believe current APIs are the minimal set, and we plan to use the bit
-mask for finer granularity control in the future.
+<p><em>A: No. We believe current APIs are the minimal set, and we plan to use
+the bit mask for finer granularity control in the future.</em></p>
+
+<p><strong>Does setOperatorBrandOverride override ALL other forms of operator
+name strings? For example, SE13, UICC SPN, network based NITZ, etc.?</strong>
+</p>
+
+<p><em>A: Refer to the SPN entry in
+<a href="http://developer.android.com/reference/android/telephony/TelephonyManager.html">TelephonyManager</a>.
</em></p>
-<p><strong>Does setOperatorBrandOverride override ALL other forms of operator name
-strings? For example, SE13, UICC SPN, network based NITZ, etc.?
-</strong></p>
+<p><strong>What does the injectSmsPdu method call do?</strong></p>
-<p><em>A: See the SPN entry within TelephonyManager:
-<a
-href="http://developer.android.com/reference/android/telephony/TelephonyManager.html">http://developer.android.com/reference/android/telephony/TelephonyManager.html</a>
-</em></p>
+<p><em>A: This facilitates SMS backup/restore in the cloud. The injectSmsPdu
+call enables the restore function.</em></p>
-<p><strong>What does the injectSmsPdu method call do?
-</strong></p>
-
-<p><em>A: This facilitates SMS backup/restore in the cloud. The injectSmsPdu call
-enables the restore function.
-</em></p>
-
-<p><strong>For SMS filtering, is the onFilterSms call based on SMS UDH port filtering? Or
-would carrier apps have access to ALL incoming SMS?
-</strong></p>
+<p><strong>For SMS filtering, is the onFilterSms call based on SMS UDH port
+filtering? Or would carrier apps have access to ALL incoming SMS?</strong></p>
<p><em>A: Carriers have access to all SMS data.</em></p>
<p><strong>Since the extension of DeviceAppID-REF-DO to support 32 bytes appears
incompatible with the current GP spec (which allows 0 or 20 bytes only) why are
you introducing this change? Do you not consider SHA-1 to be good enough to
-avoid collisions? Have you proposed this change to GP already, as this could
-be backwards incompatible with existing ARA-M / ARF?
-</strong></p>
+avoid collisions? Have you proposed this change to GP already, as this could
+be backwards incompatible with existing ARA-M/ARF?</strong></p>
-<p><em>A: For providing future proof security this extension introduces SHA-256 for
-DeviceAppID-REF-DO in addition to SHA-1 which is currently the only option in
-the GP SEAC standard. It is highly recommended to use SHA-256.</em></p>
+<p><em>A: To provide future-proof security, this extension introduces SHA-256
+for DeviceAppID-REF-DO in addition to SHA-1, which is currently the only option
+in the GP SEAC standard. Using SHA-256 is highly recommended.</em></p>
-<p><strong>If DeviceAppID is 0 (empty), would you really apply the rule to all device
-applications not covered by a specific rule?
-</strong></p>
+<p><strong>If DeviceAppID is 0 (empty), would you really apply the rule to all
+device applications not covered by a specific rule?</strong></p>
<p><em>A: Carrier apis require deviceappid-ref-do be non-empty. Being empty is
-intended for test purpose and is not recommended for operational deployments.</em></p>
+intended for test purposes and is not recommended for operational deployments.
+</em></p>
<p><strong>According to your spec, PKG-REF-DO used just by itself, without
DeviceAppID-REF-DO, should not be accepted. But it is still described in Table
6-4 as extending the definition of REF-DO. Is this on purpose? What will be the
-behavior of the code when only a PKG-REF-DO is used in a REF-DO?
-</strong></p>
+behavior of the code when only a PKG-REF-DO is used in a REF-DO?</strong></p>
-<p><em>A: The option of having PKG-REF-DO as a single value item in REF-DO was removed
-in the latest version. PKG-REF-DO should only occur in combination with
-DeviceAppID-REF-DO.
-</em></p>
+<p><em>A: The option of having PKG-REF-DO as a single value item in REF-DO was
+removed in the latest version. PKG-REF-DO should only occur in combination with
+DeviceAppID-REF-DO.</em></p>
-<p><strong>We assume we can grant access to all carrier-based permissions or have a
-finer-grained control. What will define the mapping between the bit mask and
-the actual permissions then? One permission per class? One permission per
+<p><strong>We assume we can grant access to all carrier-based permissions or
+have a finer-grained control. What will define the mapping between the bit mask
+and the actual permissions then? One permission per class? One permission per
method specifically? Will 64 separate permissions be enough in the long run?
</strong></p>
<p><em>A: This is reserved for the future, and we welcome suggestions.</em></p>
-<p><strong>Can you further define the DeviceAppID for Android specifically? Since this is
-the SHA-1 (20 bytes) hash value of the Publisher certificate used to signed the
-given app, shouldn't the name reflect that purpose? (The name could be
-confusing to many readers as the rule will be applicable then to all apps
-signed with that same Publisher certificate.)
-</strong></p>
+<p><strong>Can you further define the DeviceAppID for Android specifically?
+Since this is the SHA-1 (20 bytes) hash value of the Publisher certificate used
+to sign the given app, shouldn't the name reflect that purpose? (The name
+could be confusing to many readers as the rule will be applicable then to all
+apps signed with that same Publisher certificate.)</strong></p>
-<p><em>A: See the <a
-href="#rules_on_uicc">Rules on UICC</a> section for details. The deviceAppID storing
-certificates is already supported by the existing spec. We tried to minimize
-spec changes to lower barrier for adoption. </em></p>
+<p><em>A: The deviceAppID storing certificates is already supported by the
+existing spec. We tried to minimize spec changes to lower the barrier to adoption.
+For details, see <a href="#rules_on_uicc">Rules on UICC</a>.</em></p>
diff --git a/src/devices/tech/config/voicemail.jd b/src/devices/tech/config/voicemail.jd
index 609e75d..d13d2ff 100644
--- a/src/devices/tech/config/voicemail.jd
+++ b/src/devices/tech/config/voicemail.jd
@@ -24,19 +24,28 @@
</div>
</div>
-<p>Android 6.0 (Marshmallow) brings an implementation of visual voicemail (VVM)
+<p>Android 6.0 (Marshmallow) brought an implementation of visual voicemail (VVM)
support integrated into the Dialer, allowing compatible Carrier VVM services to
hook into the Dialer with minimal configuration. Visual voicemail lets users
easily check voicemail without making any phone calls. Users can view a list of
messages in an inbox-like interface, listen to them in any order, and can
delete them as desired.</p>
+<p>Android 7.0 added the following configuration parameters to visual voicemail:
+<ul>
+ <li>Prefetching of voicemails controlled by <code>KEY_VVM_PREFETCH_BOOLEAN</code>
+ <li>Control of whether a cellular data connection is required by
+ <code>KEY_VVM_CELLULAR_DATA_REQUIRED_BOOLEAN</code>
+ <li>Fetching of voicemail transcriptions
+ <li>Fetching of voicemail quota
+</ul>
+
<p>This article gives an overview of what is provided, how carriers can integrate
with it, and some details of the implementation.</p>
<h2 id=visual_voicemail_vvm_client>Visual voicemail (VVM) client</h2>
-<p>Android 6.0 includes a OMTP VVM client, which (when provided with the correct
+<p>Android 6.0 and above includes an OMTP VVM client, which (when provided with the correct
configuration) will connect to Carrier VVM servers and populate visual
voicemail messages within the Android Open Source Project (AOSP) Dialer. The VVM client:</p>
@@ -46,6 +55,8 @@
subscriber's mailbox
<li>Syncs the mailbox with the IMAP server
<li>Downloads the voicemails when the user chooses to listen to them
+ <li>Fetches voicemail transcriptions
+ <li>Fetches details of voicemail quota (total mailbox size and occupied size)
<li>Integrates into the Dialer for user functionality such as calling back, viewing
unread messages, deleting messages, etc.
</ul>
@@ -54,14 +65,19 @@
<h3 id=implementation>Implementation</h3>
-<p>The Carrier must provide a visual voicemail server implementing the <a href="http://www.gsma.com/newsroom/wp-content/uploads/2012/07/OMTP_VVM_Specification_1_3.pdf">OMTP VVM specifications</a>. The current implementation of the AOSP VVM client supports the core
+<p>The Carrier must provide a visual voicemail server implementing the
+<a href="http://www.gsma.com/newsroom/wp-content/uploads/2012/07/OMTP_VVM_Specification_1_3.pdf">OMTP
+VVM specifications</a>. The current implementation of the AOSP VVM client supports the core
features (read/delete voicemails, download/sync/listen) but the additional TUI
features (password change, voicemail greeting, languages) are not implemented.
At this time, we only support OMTP version 1.1 and do not use encryption for
-IMAP authentication. </p>
+IMAP authentication.</p>
-<p><strong>Note</strong> that server-originated SMS messages to the device (e.g. STATUS or SYNC) must
-not be class 0 messages.</p>
+<p>To support transcriptions, carriers must support the transcription attachment
+format (MIME type <code>text/plain</code>) specified in the OMTP 1.3 spec, item 2.1.3.</p>
+
+<p class="note"><strong>Note</strong>: Server-originated SMS messages to the device
+(e.g. STATUS or SYNC) must be data SMS messages.</p>
<h3 id=configuration>Configuration</h3>
@@ -71,13 +87,14 @@
<ul>
<li>Destination number and port number for SMS
- <li>Authentication security type for IMAP (SSL, TLS, none, etc.)
<li>The package name of the carrier-provided visual voicemail app (if one is
provided), so that the platform implementation can be disabled if that package
is installed
</ul>
-<p>These values are provided through the <a href="https://developer.android.com/reference/android/telephony/CarrierConfigManager.html">Carrier Config API</a>. This functionality, launched in Android 6.0, allows an application to
+<p>These values are provided through the
+<a href="https://developer.android.com/reference/android/telephony/CarrierConfigManager.html">Carrier Config API</a>.
+This functionality, launched in Android 6.0, allows an application to
dynamically provide telephony-related configuration to the various platform
components that need it. In particular the following keys must have values
defined:</p>
@@ -87,24 +104,32 @@
<li><code>KEY_VVM_PORT_NUMBER_INT</code>
<li><code>KEY_VVM_TYPE_STRING</code>
<li><code>KEY_CARRIER_VVM_PACKAGE_NAME_STRING</code>
+ <li><code>KEY_VVM_PREFETCH_BOOLEAN</code>
+ <li><code>KEY_VVM_CELLULAR_DATA_REQUIRED_BOOLEAN</code>
</ul>
-<p>Please see the <a href="{@docRoot}devices/tech/config/carrier.html">Carrier Configuration</a> article for more detail.</p>
+<p>Please see the <a href="{@docRoot}devices/tech/config/carrier.html">Carrier Configuration</a>
+article for more detail.</p>
<h2 id=implementation>Implementation</h2>
-<p>The OMTP VVM client is implemented within <code>packages/services/Telephony</code>, in particular within <code>src/com/android/phone/vvm/</code></p>
+<p>The OMTP VVM client is implemented within <code>packages/services/Telephony</code>,
+in particular within <code>src/com/android/phone/vvm/</code>.</p>
<h3 id=setup>Setup</h3>
<ol>
- <li>The VVM client listens for <code>TelephonyIntents#ACTION_SIM_STATE_CHANGED</code> or <code>CarrierConfigManager#ACTION_CARRIER_CONFIG_CHANGED</code>.
- <li>When a SIM is added that has the right Carrier Config values (<code>KEY_VVM_TYPE_STRING</code> set to <code>TelephonyManager.VVM_TYPE_OMTP</code> or <code>TelephonyManager.VVM_TYPE_CVVM</code>), the VVM client sends an ACTIVATE SMS to the value specified in <code>KEY_VVM_DESTINATION_NUMBER_STRING</code>.
+ <li>The VVM client listens for <code>TelephonyIntents#ACTION_SIM_STATE_CHANGED</code>
+ or <code>CarrierConfigManager#ACTION_CARRIER_CONFIG_CHANGED</code>.
+ <li>When a SIM is added that has the right Carrier Config values
+ (<code>KEY_VVM_TYPE_STRING</code> set to <code>TelephonyManager.VVM_TYPE_OMTP</code>
+ or <code>TelephonyManager.VVM_TYPE_CVVM</code>), the VVM client sends an
+ ACTIVATE SMS to the value specified in <code>KEY_VVM_DESTINATION_NUMBER_STRING</code>.
<li>The server activates the visual voicemail service and sends the OMTP
-credentials via STATUS sms. When the VVM client receives the STATUS sms, it
-registers the voicemail source and displays the voicemail tab on the device.
+ credentials via a STATUS SMS. When the VVM client receives the STATUS SMS, it
+ registers the voicemail source and displays the voicemail tab on the device.
<li>The OMTP credentials are saved locally and the device begins a full sync, as
-described below.
+ described below.
</ol>
<h3 id=syncing>Syncing</h3>
@@ -113,34 +138,45 @@
server and vice versa.</p>
<ul>
- <li><strong>Full syncs</strong> occur upon initial download. The VVM client only fetches voicemail metadata
-like date and time, origin number, duration, etc. Full syncs can be triggered
-by a:
- <ul>
- <li>new SIM
- <li>device reboot
- <li>device coming back in service
- </ul>
- <li><strong>Upload sync</strong> happens when a user interacts with a voicemail to read or delete it. Upload
-syncs result in the server changing its data to match the data on the device.
-For example, if the user reads a voicemail, it's marked as read on the server;
-if a user deletes a voicemail, it's deleted on the server.
- <li><strong>Download sync</strong> occurs when the VVM client receives a "MBU" (mailbox update) SYNC sms from the
-carrier. A SYNC message contains the metadata for a new message so that it can
-be stored in the voicemail content provider.
+ <li><strong>Full syncs</strong> occur upon initial download. The VVM client
+ fetches voicemail metadata like date and time; origin number; duration;
+ voicemail transcriptions, if available; and audio data if
+ <code>KEY_VVM_PREFETCH_BOOLEAN</code> is True. Full syncs can be
+ triggered by:
+ <ul>
+ <li>Inserting a new SIM
+ <li>Rebooting the device
+ <li>Coming back in service
+ <li>Receiving the <code>VoicemailContract.ACTION_SYNC_VOICEMAIL</code> broadcast
+ </ul>
+ <li><strong>Upload sync</strong> happens when a user interacts with a voicemail
+ to read or delete it. Upload syncs result in the server changing its data to
+ match the data on the device. For example, if the user reads a voicemail,
+ it's marked as read on the server; if a user deletes a voicemail, it's
+ deleted on the server.
+ <li><strong>Download sync</strong> occurs when the VVM client receives an "MBU"
+ (mailbox update) SYNC SMS from the carrier. A SYNC message contains the
+ metadata for a new message so that it can be stored in the voicemail
+ content provider.
</ul>
+<p class="note"><strong>Note</strong>: The voicemail inbox quota values are
+retrieved during every sync.</p>
+
<h3 id=voicemail_download>Voicemail download</h3>
<p>When a user presses play to listen to a voicemail, the corresponding audio file
is downloaded. If the user chooses to listen to the voicemail, the Dialer can
-broadcast <code>VoicemailContract.ACTION_FETCH_VOICEMAIL</code>, which the voicemail client will receive, initiate the download of the
+broadcast <code>VoicemailContract.ACTION_FETCH_VOICEMAIL</code>, which the
+voicemail client will receive, initiate the download of the
content, and update the record in the platform voicemail content provider.</p>
<h3 id=disabling_vvm>Disabling VVM</h3>
<p>The VVM service can be disabled or deactivated by user interaction, removal of
-a valid SIM, or replacement by a carrier VVM app. <em>Disabled</em> means that the local device no longer displays visual voicemail. <em>Deactivated</em> means that the service is turned off for the subscriber. User interaction can
+a valid SIM, or replacement by a carrier VVM app. <em>Disabled</em> means that the
+local device no longer displays visual voicemail. <em>Deactivated</em> means that
+the service is turned off for the subscriber. User interaction can
deactivate the service, SIM removal temporarily disables the service because
it's no longer present, and carrier VVM replacement disables the AOSP VVM client.</p>
@@ -154,7 +190,9 @@
<h4 id=sim_removal>SIM removal</h4>
-<p>If there are changes to the device's SIM state (<code>ACTION_SIM_STATE_CHANGED</code>) or Carrier Config values (<code>ACTION_CARRIER_CONFIG_CHANGED</code>), and a valid configuration for the given SIM no longer exists, then the
+<p>If there are changes to the device's SIM state (<code>ACTION_SIM_STATE_CHANGED</code>)
+or Carrier Config values (<code>ACTION_CARRIER_CONFIG_CHANGED</code>), and
+a valid configuration for the given SIM no longer exists, then the
voicemail source is unregistered locally and the voicemail tab disappears. If
the SIM is replaced, VVM will be re-enabled.</p>
diff --git a/src/devices/tech/connect/block-numbers.jd b/src/devices/tech/connect/block-numbers.jd
new file mode 100644
index 0000000..aa20729
--- /dev/null
+++ b/src/devices/tech/connect/block-numbers.jd
@@ -0,0 +1,254 @@
+page.title=Implementing Block Phone Numbers
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>
+Because telephony is an open communications channel (anyone may call or
+text any number at any time), Android users need the ability to easily block
+unwanted calls and texts.
+</p>
+
+<p>
+Before Android 7.0, users had to rely on downloaded apps to restrict calls and
+texts from bothersome phone numbers. Many of those apps either do not work as
+desired or provide a less-than-ideal experience because there are no proper APIs
+for blocking calls and messages.
+</p>
+
+<p>
+Some manufacturers might ship their own blocking solutions out-of-the-box, but
+if users switch devices, they may lose the blocked list completely due to lack
+of interoperability. Finally, even if users are employing dialing apps and
+messaging clients that provide such functionality, they likely still have to
+perform the block action in each app for the block to take effect for both
+calling and texting.
+</p>
+
+<h2 id="features">Features</h2>
+
+<p>
+The Android 7.0 release introduces a <code>BlockedNumberProvider</code> content
+provider that stores a list of phone numbers the user has specified should not
+be able to contact them via telephony communications (calls, SMS, MMS). The
+system will respect the numbers in the blocked list by restricting calls and
+texts from those numbers. Android 7.0 displays the list of blocked numbers and
+allows the user to add and remove numbers.
+</p>
+
+<p>
+Further, the number-blocking feature enables the system and the relevant apps on
+the platform to work together to help protect the user and to simplify the
+experience. The default dialer, default messaging client, UICC-privileged app,
+and apps with the same signature as the system can all directly read from and
+write to the blocked list. Because the blocked numbers are stored on the system,
+no matter what dialing or messaging apps the user employs, the numbers stay
+blocked. Finally, the blocked numbers list may be restored on any new device,
+regardless of the manufacturer.
+</p>
+
+<ul>
+<li>Users are guaranteed a blocking feature that works out-of-the-box
+and will not lose their block list when they switch apps or get a new phone. All
+relevant apps on the system can share the same list to provide the user with the
+most streamlined experience.
+<li>App developers do not need to develop their own way to manage a block list
+and the calls and messages that come in. They can simply use the
+platform-provided feature.
+<li>Dialer / messenger apps that are selected as the default by the user can
+read and write to the provider. Other apps can launch the block list management
+user interface by using <code>createManageBlockedNumbersIntent()</code>.
+<li>OEMs can use the platform-provided feature to ship a blocking feature
+out-of-the-box. OEMs can rest assured that when users switch from another
+OEM’s device, they have a better onboarding experience because the block list
+is transferred as well.
+<li>If a carrier has its own dialer or messenger app, it can reuse the
+platform feature to allow the user to maintain a block list. The carrier can
+rest assured that the user’s block list stays with the user, even on a new
+device. Finally, all carrier-privileged apps can read the block list, so if the
+carrier wants to provide additional, more powerful blocking for the user
+based on the block list, that is now possible with this feature.</li></ul>
+
+<h2 id="data-flow">Data flow</h2>
+
+<img src="images/block-numbers-flow.png" alt="block numbers data flow" width="642" id="block-numbers-flow" />
+<p class="img-caption">
+ <strong>Figure 1.</strong> Block phone numbers data flow
+</p>
+
+<h2 id="examples-and-source">Examples and source</h2>
+
+<p>
+Here are example calls using the new number-blocking feature:
+</p>
+
+<h3 id="launch-from-app">Launch blocked number manager from app</h3>
+
+<pre>
+Context.startActivity(telecomManager.createManageBlockedNumbersIntent(), null);
+</pre>
+
+<h3 id="query-blocked-numbers">Query blocked numbers</h3>
+
+<pre>
+Cursor c = getContentResolver().query(BlockedNumbers.CONTENT_URI,
+ new String[]{BlockedNumbers.COLUMN_ID,
+ BlockedNumbers.COLUMN_ORIGINAL_NUMBER,
+ BlockedNumbers.COLUMN_E164_NUMBER}, null, null, null);
+</pre>
+
+<h3 id="put-blocked-number">Put blocked number</h3>
+
+<pre>
+ContentValues values = new ContentValues();
+values.put(BlockedNumbers.COLUMN_ORIGINAL_NUMBER, "1234567890");
+Uri uri = getContentResolver().insert(BlockedNumbers.CONTENT_URI, values);
+</pre>
+
+<h3 id="delete-blocked-number">Delete blocked number</h3>
+
+<pre>
+ContentValues values = new ContentValues();
+values.put(BlockedNumbers.COLUMN_ORIGINAL_NUMBER, "1234567890");
+Uri uri = getContentResolver().insert(BlockedNumbers.CONTENT_URI, values);
+getContentResolver().delete(uri, null, null);
+</pre>
+
+<h2 id="implementation">Implementation</h2>
+
+<p>
+These are the high-level tasks that must be completed to put the number-blocking
+feature to use:
+</p>
+
+<ul>
+<li>OEMs implement call/message-restriction features on their devices by using
+<code>BlockedNumberProvider</code>
+<li>If a carrier has a dialer or messenger application, it implements
+call/message restriction features by using <code>BlockedNumberProvider</code>
+<li>Third-party dialer and messenger app vendors use
+<code>BlockedNumberProvider</code> for their blocking features</li>
+</ul>
+
+<h3 id="recommendations-for-oems">Recommendations for OEMs</h3>
+
+<p>
+If the device has never shipped with additional call/message
+restriction features, use the number-blocking feature in the Android Open Source
+Project (AOSP) on all such devices. It is recommended to support reasonable
+entry points for blocking, such as blocking a number directly from the call
+log or within a message thread.
+</p>
+
+<p>
+If the device previously shipped with call/message restriction features,
+adapt those features so that all <em>strict-match phone numbers</em> that are
+blocked are stored in <code>BlockedNumberProvider</code>, and that the behavior
+around the provider satisfies the requirements for this feature outlined in the
+Android Compatibility Definition Document (CDD).
+</p>
+
+<p>
+Any other advanced feature can be implemented via custom providers and custom
+UI/controls, as long as the CDD requirements are satisfied with regard to
+blocking strict-match phone numbers. It is recommended that those other features
+be labeled as “advanced” features to avoid confusion with the basic
+number-blocking feature.
+</p>
+
+<h3 id="apis">APIs</h3>
+
+<p>
+Here are the APIs in use:
+</p>
+<ul>
+<li><code><a
+href="http://developer.android.com/reference/android/telecom/TelecomManager.html">TelecomManager</a></code>
+  <ul>
+  <li><code>Intent createManageBlockedNumbersIntent()</code>
+  </ul>
+</li>
+<li><code><a
+href="http://developer.android.com/reference/android/telephony/CarrierConfigManager.html">CarrierConfigManager</a></code>
+  <ul>
+  <li><code>KEY_DURATION_BLOCKING_DISABLED_AFTER_EMERGENCY_INT</code>
+  </ul>
+</li>
+<li><code><a
+href="https://developer.android.com/reference/android/provider/BlockedNumberContract.html">BlockedNumberContract</a></code>
+  <ul>
+  <li><code>boolean isBlocked(Context context, String phoneNumber)</code></li>
+  <li><code>int unblock(Context context, String phoneNumber)</code></li>
+  <li><code>boolean canCurrentUserBlockNumbers(Context context)</code></li>
+  </ul>
+</li>
+</ul>
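<p>The blocking check these APIs support can be modeled in plain Java. The
following is an illustrative sketch only; the class and method names are
hypothetical stand-ins for the real <code>BlockedNumberContract</code> calls,
which query <code>BlockedNumberProvider</code>.</p>

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical in-memory model of the block-list check a default dialer
// performs before ringing. The real implementation queries
// BlockedNumberProvider through BlockedNumberContract.isBlocked().
public class BlockListModel {
    private final Set<String> blocked = new HashSet<>();

    public void block(String number) {
        blocked.add(number);
    }

    public void unblock(String number) {
        blocked.remove(number);
    }

    // Mirrors the decision: reject the incoming call only when the
    // caller's number is on the block list.
    public boolean shouldRejectIncomingCall(String number) {
        return blocked.contains(number);
    }
}
```

<p>In the real system, the default Dialer performs this check through the
provider rather than an in-memory set, so the list survives app switches and
device restores.</p>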
+
+<h3 id="user-interface">User interface</h3>
+<p>
+The <code>BlockedNumbersActivity.java</code> user interface provided in AOSP can
+be used as is. Partners may also implement their own version of the UI, as long as it
+satisfies related CDD requirements.
+</p>
+
+<p>
+Note that the partner&rsquo;s PC application for backup and restore may need
+to implement restoration of the block list by using
+<code>BlockedNumberProvider</code>. See the images below for the blocked
+numbers interface supplied in AOSP.
+</p>
+
+<img src="images/block-numbers-ui.png" alt="block numbers user interface" width="665" id="block-numbers-ui" />
+<p class="img-caption">
+ <strong>Figure 2.</strong> Block phone numbers user interface
+</p>
+
+<h2 id="validation">Validation</h2>
+
+<p>
+Implementers can ensure their version of the feature works as intended by
+running the following CTS tests:
+</p>
+
+<pre>
+android.provider.cts.BlockedNumberContractTest
+com.android.cts.numberblocking.hostside.NumberBlockingTest
+android.telecom.cts.ExtendedInCallServiceTest#testIncomingCallFromBlockedNumber_IsRejected
+android.telephony.cts.SmsManagerTest#testSmsBlocking
+</pre>
+
+<p>
+The <code>BlockedNumberProvider</code> can be manipulated using <code>adb</code> commands
+after running <code>$ adb root</code>. For example:
+</p>
+<pre>
+$ adb root
+$ adb shell content query --uri content://com.android.blockednumber/blocked
+$ adb shell content insert --uri content://com.android.blockednumber/blocked --bind original_number:s:'6501002000'
+$ adb shell content delete --uri content://com.android.blockednumber/blocked/1
+</pre>
diff --git a/src/devices/tech/connect/call-notification.jd b/src/devices/tech/connect/call-notification.jd
new file mode 100644
index 0000000..e0365ec
--- /dev/null
+++ b/src/devices/tech/connect/call-notification.jd
@@ -0,0 +1,118 @@
+page.title=Call Notifications
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>Android 7.0 moves functionality related to call notifications from the
+Telecom system service in the Android platform to the Dialer application.
+Previously, the responsibility for displaying call-related notifications was
+split between Telecom and the default Dialer app, creating inconsistencies in
+behavior. In Android 7.0, the Dialer assumes all responsibility for handling
+call notifications.</p>
+
+<h2 id=android_6>Behavior in Android 6.x and earlier</h2>
+<p>In earlier Android releases, Telecom and Dialer split responsibilities as
+described below:</p>
+
+<table>
+<tr>
+<th>Functionality</th>
+<th>Done by Telecom</th>
+<th>Done by Dialer</th>
+</tr>
+
+<tr>
+<td>Incoming call notification</td>
+<td>Yes (ringing, vibrate)</td>
+<td>Yes (notification display, caller ID)</td>
+</tr>
+
+<tr>
+<td>Send to voicemail</td>
+<td>Yes</td>
+<td>No</td>
+</tr>
+
+<tr>
+<td>Custom ringtone</td>
+<td>Yes</td>
+<td>No</td>
+</tr>
+
+<tr>
+<td>Missed call notifications</td>
+<td>Yes</td>
+<td>No</td>
+</tr>
+
+<tr>
+<td>Message Waiting Indicator (call voicemail)</td>
+<td>Yes (telephony)</td>
+<td>No</td>
+</tr>
+
+<tr>
+<td>Visual voicemail notifications</td>
+<td>No</td>
+<td>Yes</td>
+</tr>
+
+</table>
+
+<p>Examples of inconsistent behavior caused by this responsibility split
+included:</p>
+<ul>
+<li>Telecom was responsible for starting the ringer/vibrator, but the Dialer was
+responsible for displaying the incoming call notification. If the Dialer was
+slow to start up, ringing could begin several seconds before the
+incoming call notification was displayed.</li>
+<li>Telecom was responsible for displaying missed call notifications. Because
+proprietary features (such as Google caller ID) do not work in these
+notifications, this could cause inconsistencies between Telecom
+notifications and the Dialer UI (such as the call log).</li>
+</ul>
+
+<h2 id=android_7>Behavior in Android 7.0 and later</h2>
+<p>The Android Open Source Project (AOSP) Dialer implements the new
+functionality. For details, refer to the following documentation:</p>
+<ul>
+<li>Missed call notifications<br>
+<a href="https://android.googlesource.com/platform/packages/services/Telecomm/+/nougat-release/src/com/android/server/telecom/ui/MissedCallNotifierImpl.java">Telecom/src/com/android/server/telecom/ui/MissedCallNotifierImpl.java</a><br>
+<a href="https://android.googlesource.com/platform/packages/apps/Dialer/+/nougat-release/src/com/android/dialer/calllog/MissedCallNotificationReceiver.java">Dialer/src/com/android/dialer/calllog/MissedCallNotificationReceiver.java</a><br>
+<a href="https://android.googlesource.com/platform/packages/apps/Dialer/+/nougat-release/src/com/android/dialer/calllog/MissedCallNotifier.java">Dialer/src/com/android/dialer/calllog/MissedCallNotifier.java</a></li>
+<li>Playing ringtones:<br>
+<a href="https://android.googlesource.com/platform/frameworks/base/+/nougat-release/telecomm/java/android/telecom/InCallService.java">frameworks/base/telecomm/java/android/telecom/InCallService.java</a><br>
+<a href="https://android.googlesource.com/platform/packages/services/Telecomm/+/nougat-release/src/com/android/server/telecom/InCallController.java">Telecom/src/com/android/server/telecom/InCallController.java</a><br>
+<a href="https://android.googlesource.com/platform/packages/apps/Dialer/+/nougat-release/InCallUI/src/com/android/incallui/ringtone/">Dialer/InCallUI/src/com/android/incallui/ringtone</a><br>
+<a href="https://android.googlesource.com/platform/packages/apps/Dialer/+/nougat-release/InCallUI/src/com/android/incallui/StatusBarNotifier.java">Dialer/InCallUI/src/com/android/incallui/StatusBarNotifier.java</a></li>
+<li>VVM notifications<br>
+<a href="https://android.googlesource.com/platform/frameworks/base/+/nougat-release/telephony/java/android/telephony/TelephonyManager.java">frameworks/base/telephony/java/android/telephony/TelephonyManager.java</a><br>
+<a href="https://android.googlesource.com/platform/packages/services/Telephony/+/nougat-release/src/com/android/phone/PhoneInterfaceManager.java">Telephony/src/com/android/phone/PhoneInterfaceManager.java</a><br>
+<a href="https://android.googlesource.com/platform/packages/apps/Dialer/+/nougat-release/src/com/android/dialer/calllog/DefaultVoicemailNotifier.java">Dialer/src/com/android/dialer/calllog/DefaultVoicemailNotifier.java</a></li>
+</ul>
+
+<h2 id=implement>Implementation</h2>
+<p>Partners may need to update Telecom/Telephony components that expose APIs
+available for use by the default Dialer.</p>
diff --git a/src/devices/tech/connect/data-saver.jd b/src/devices/tech/connect/data-saver.jd
new file mode 100644
index 0000000..5d2a717
--- /dev/null
+++ b/src/devices/tech/connect/data-saver.jd
@@ -0,0 +1,149 @@
+page.title=Data Saver mode
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>
+Mobile data is costly, and in many markets data plans are not
+affordable for everyone. Android users need the ability to reduce data use, or
+to block it for apps altogether. The Data Saver feature in the Android 7.0
+release provides this functionality to the user.
+</p>
+
+<p>
+The <a href="https://developer.android.com/preview/features/data-saver.html">Data Saver</a>
+feature can be turned on or off by the user. App developers
+should use a new API to check whether Data Saver mode is on; if it is, they
+can handle the situation gracefully by tuning their applications for
+low- or no-data access.
+</p>
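<p>As a sketch of how an app might act on that check: the three status
constants below mirror the <code>ConnectivityManager.RESTRICT_BACKGROUND_STATUS_*</code>
values but are redeclared locally so the example stays self-contained, and the
policy strings are hypothetical.</p>

```java
// Self-contained sketch of Data Saver handling. The three status values
// mirror ConnectivityManager.RESTRICT_BACKGROUND_STATUS_DISABLED /
// _WHITELISTED / _ENABLED, redeclared locally for illustration.
public class DataSaverPolicy {
    public static final int STATUS_DISABLED = 1;    // Data Saver is off
    public static final int STATUS_WHITELISTED = 2; // on, but this app is exempt
    public static final int STATUS_ENABLED = 3;     // on; background data blocked

    // Decide whether a background fetch may proceed, and at what quality.
    // The returned policy names are illustrative only.
    public static String backgroundFetchPolicy(int status, boolean meteredNetwork) {
        switch (status) {
            case STATUS_ENABLED:
                return "skip";          // background data is blocked
            case STATUS_WHITELISTED:
                return "fetch-reduced"; // exempt, but still be frugal
            default:
                // Even with Data Saver off, respect metered networks.
                return meteredNetwork ? "fetch-reduced" : "fetch-full";
        }
    }
}
```

<p>An app would obtain the status from the platform API and re-check it when
notified that Data Saver state has changed.</p>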
+
+<p>
+End users benefit because they can control which apps can access data in
+the background and which can access data only while in the foreground. This
+keeps background data exchange under user control when Data Saver is on.
+</p>
+
+<h2 id="implementation">Implementation</h2>
+
+<p>
+Because Data Saver is a platform feature, device manufacturers gain its
+functionality by default with the Android 7.0 release.
+</p>
+
+<h3 id="settings-interface">Settings interface</h3>
+
+<p>
+A default Data Saver settings user interface is supplied in the Android Open
+Source Project (AOSP). See the screenshots below for examples.
+</p>
+
+<p>
+These screenshots show the Data Saver mode in use.
+</p>
+
+<img src="images/data-saver-use.png" width="397" alt="Toggling Data Saver off/on" />
+<p class="img-caption">
+ <strong>Figure 1.</strong> Toggling Data Saver off/on
+ </p>
+
+<img src="images/data-battery-saver.png" width="641" alt="Battery saver and Data Saver are on" />
+<p class="img-caption">
+ <strong>Figure 2.</strong> When both battery saver and Data Saver are on
+ </p>
+
+<img src="images/data-saver-app.png" width="376" alt="App-specific data usage screen" />
+<p class="img-caption">
+ <strong>Figure 3.</strong> App-specific data usage screen: Settings > Apps > Data usage
+ </p>
+
+<img src="images/data-saver-quick-settings.png" width="446" alt="Data saver in the Quick Settings" />
+<p class="img-caption">
+ <strong>Figure 4.</strong> Data saver states on the Quick Settings menu
+ </p>
+
+<h3 id="apps">Apps</h3>
+
+<p>
+<strong>Important</strong>: Partners should not whitelist apps;
+even if they do, users may remove them. Whitelisting additional
+apps forces users to decide which apps Data Saver applies to.
+</p>
+
+<p>
+All app developers must act to implement Data Saver, including OEMs and carrier
+partners with preloaded apps. See <a
+href="https://developer.android.com/preview/features/data-saver.html">Data Saver
+on developer.android.com</a> for app developer instructions on detecting and
+monitoring Data Saver states. See the sections below for additional details
+helpful to partners.
+</p>
+
+<p>
+To optimize for Data Saver mode, apps should:
+</p>
+
+<ul>
+ <li>Remove unnecessary images
+ <li>Use lower resolution for remaining images
+ <li>Use lower bitrate video
+ <li>Trigger existing “lite” experiences
+ <li>Compress data
+ <li>Respect metered vs. unmetered network status even when Data Saver is
+off
+</ul>
+
+<p>
+Conversely, to work well with Data Saver, apps should not:
+</p>
+
+<ul>
+ <li>Autoplay videos
+ <li>Prefetch content/attachments
+ <li>Download updates / code
+ <li>Ask to be whitelisted unless background data is truly part of core
+ functionality
+ <li>Treat whitelisting as a license to use more bandwidth
+</ul>
+
+<h2 id="validation">Validation</h2>
+
+<p>
+Implementers can ensure their version of the feature works as intended by
+running the following CTS test:
+</p>
+
+<pre>
+com.android.cts.net.HostsideRestrictBackgroundNetworkTests
+</pre>
+
+<p>
+In addition, <code>adb</code> commands can be used to conduct tests manually by
+first running this command to see all available options:<br>
+<code>$ adb shell cmd netpolicy</code>
+</p>
+
+<p>
+For example, this command returns the UIDs of the whitelisted apps:<br>
+<code>$ adb shell cmd netpolicy list restrict-background-whitelist</code>
+</p>
diff --git a/src/devices/tech/connect/felica.jd b/src/devices/tech/connect/felica.jd
new file mode 100644
index 0000000..d44a6a1
--- /dev/null
+++ b/src/devices/tech/connect/felica.jd
@@ -0,0 +1,63 @@
+page.title=Host Card Emulation of FeliCa
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>Felicity Card, or FeliCa, an RFID smart card system, is the NFC standard in
+Japan, Hong Kong, and other markets in the Asia-Pacific (APAC) region. Its
+adoption in that region has been expanding, and it is widely used for transit,
+retail, and loyalty services. Adding support for FeliCa in Android devices
+destined for that region improves their usefulness.</p>
+
+<h2 id="implementation">Implementation</h2>
+
+<p>HCE FeliCa requires NFC hardware that supports the NFC-F (JIS X 6319-4) standard.</p>
+
+<p>Host Card Emulation (HCE) of FeliCa is essentially a parallel implementation to
+the existing HCE implementation on Android; it creates new classes for FeliCa
+where it makes sense and merges with the existing HCE implementation where
+possible.</p>
+
+<p>The following Android components are included in the Android Open Source Project
+(AOSP):</p>
+
+<ul>
+  <li>Framework classes
+  <ul>
+    <li>Public HostNfcFService (convenience service class)
+    <li>@hide NfcFServiceInfo
+  </ul>
+  </li>
+  <li>Modifications to core NFC framework</li>
+</ul>
+
+<p>As with most Android platform features, manufacturers write the drivers to
+make the hardware work with the API.</p>
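<p>As an illustration of NFC-F packet handling, the sketch below models the
FeliCa framing convention (a leading length byte that counts itself, followed
by a command code). It is not the <code>HostNfcFService</code> API; a real
service extends that class and overrides its packet callback, and the response
builder here is purely illustrative.</p>

```java
// Illustrative NFC-F frame handling. In a FeliCa frame the first byte is
// the total length (including itself) and the second is the command code.
// A real card-emulation service extends HostNfcFService and handles raw
// packets in its callback; this sketch only models the framing.
public class NfcFPacket {
    public static int length(byte[] packet) {
        return packet[0] & 0xFF;
    }

    public static int commandCode(byte[] packet) {
        return packet[1] & 0xFF;
    }

    // Build a minimal response frame: length byte, response code, payload.
    public static byte[] response(int responseCode, byte[] payload) {
        byte[] frame = new byte[2 + payload.length];
        frame[0] = (byte) frame.length; // length byte counts itself
        frame[1] = (byte) responseCode;
        System.arraycopy(payload, 0, frame, 2, payload.length);
        return frame;
    }
}
```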
+
+<h2 id="validation">Validation</h2>
+
+<p>Use the <a href="{@docRoot}compatibility/cts/index.html">Android Compatibility
+Test Suite</a> to ensure this feature works as intended. CTS Verifier
+(NfcTestActivity) tests this implementation for devices reporting the
+<code>android.hardware.nfc.hcef</code> feature constant.</p>
diff --git a/src/devices/tech/connect/images/block-numbers-flow.png b/src/devices/tech/connect/images/block-numbers-flow.png
new file mode 100644
index 0000000..a5eb265
--- /dev/null
+++ b/src/devices/tech/connect/images/block-numbers-flow.png
Binary files differ
diff --git a/src/devices/tech/connect/images/block-numbers-ui.png b/src/devices/tech/connect/images/block-numbers-ui.png
new file mode 100644
index 0000000..093d299
--- /dev/null
+++ b/src/devices/tech/connect/images/block-numbers-ui.png
Binary files differ
diff --git a/src/devices/tech/connect/images/data-battery-saver.png b/src/devices/tech/connect/images/data-battery-saver.png
new file mode 100644
index 0000000..d416183
--- /dev/null
+++ b/src/devices/tech/connect/images/data-battery-saver.png
Binary files differ
diff --git a/src/devices/tech/connect/images/data-saver-app.png b/src/devices/tech/connect/images/data-saver-app.png
new file mode 100644
index 0000000..a67a91a
--- /dev/null
+++ b/src/devices/tech/connect/images/data-saver-app.png
Binary files differ
diff --git a/src/devices/tech/connect/images/data-saver-quick-settings.png b/src/devices/tech/connect/images/data-saver-quick-settings.png
new file mode 100644
index 0000000..89dde02
--- /dev/null
+++ b/src/devices/tech/connect/images/data-saver-quick-settings.png
Binary files differ
diff --git a/src/devices/tech/connect/images/data-saver-use.png b/src/devices/tech/connect/images/data-saver-use.png
new file mode 100644
index 0000000..6ffc58b
--- /dev/null
+++ b/src/devices/tech/connect/images/data-saver-use.png
Binary files differ
diff --git a/src/devices/tech/connect/images/host_card.png b/src/devices/tech/connect/images/host_card.png
new file mode 100755
index 0000000..315c5f5
--- /dev/null
+++ b/src/devices/tech/connect/images/host_card.png
Binary files differ
diff --git a/src/devices/tech/connect/images/ril-refactor-scenario-1-solution-1.png b/src/devices/tech/connect/images/ril-refactor-scenario-1-solution-1.png
new file mode 100644
index 0000000..0456311
--- /dev/null
+++ b/src/devices/tech/connect/images/ril-refactor-scenario-1-solution-1.png
Binary files differ
diff --git a/src/devices/tech/connect/images/ril-refactor-scenario-1-solution-2.png b/src/devices/tech/connect/images/ril-refactor-scenario-1-solution-2.png
new file mode 100644
index 0000000..dd16acf
--- /dev/null
+++ b/src/devices/tech/connect/images/ril-refactor-scenario-1-solution-2.png
Binary files differ
diff --git a/src/devices/tech/connect/images/ril-refactor-scenario-1.png b/src/devices/tech/connect/images/ril-refactor-scenario-1.png
new file mode 100644
index 0000000..6634c77
--- /dev/null
+++ b/src/devices/tech/connect/images/ril-refactor-scenario-1.png
Binary files differ
diff --git a/src/devices/tech/connect/images/ril-refactor-scenario-2-solution.png b/src/devices/tech/connect/images/ril-refactor-scenario-2-solution.png
new file mode 100644
index 0000000..c2aaf8a
--- /dev/null
+++ b/src/devices/tech/connect/images/ril-refactor-scenario-2-solution.png
Binary files differ
diff --git a/src/devices/tech/connect/images/ril-refactor-scenario-2.png b/src/devices/tech/connect/images/ril-refactor-scenario-2.png
new file mode 100644
index 0000000..c0c8a17
--- /dev/null
+++ b/src/devices/tech/connect/images/ril-refactor-scenario-2.png
Binary files differ
diff --git a/src/devices/tech/connect/index.jd b/src/devices/tech/connect/index.jd
new file mode 100644
index 0000000..7e9fbb1
--- /dev/null
+++ b/src/devices/tech/connect/index.jd
@@ -0,0 +1,21 @@
+page.title=Ensuring Network Connectivity
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<p>Follow the instructions in this section to ensure your Android devices are
+connected properly.</p>
diff --git a/src/devices/tech/connect/ril.jd b/src/devices/tech/connect/ril.jd
new file mode 100644
index 0000000..0c4c1f2
--- /dev/null
+++ b/src/devices/tech/connect/ril.jd
@@ -0,0 +1,291 @@
+page.title=RIL Refactoring
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<h2 id="introduction">Introduction</h2>
+
+<p>The Radio Interface Layer (RIL) refactoring feature
+of the Android 7.0 release is a set of subfeatures
+that improves RIL functionality. Implementing the features is optional but
+encouraged. Partner code changes are required to implement these features. The
+refactoring changes are backward compatible, so prior implementations of
+the refactored features will still work.</p>
+
+<p>The following subfeatures are included in the RIL refactoring feature. You
+can implement any or all of the subfeatures:</p>
+
+<ul>
+<li>Enhanced RIL error codes: Code can return more specific error codes
+than the existing <code>GENERIC_FAILURE</code> code. This enhances error
+troubleshooting by providing more specific information about the cause
+of errors.</li>
+
+<li>Enhanced RIL versioning: The RIL versioning mechanism is enhanced to
+provide more accurate and easier-to-configure version information.</li>
+
+<li>Redesigned RIL communication using wakelocks: RIL communication using
+wakelocks is enhanced to improve device battery performance.</li>
+</ul>
+
+<h2 id="examples">Examples and source</h2>
+
+<p>Documentation for RIL versioning is also in code comments in <a
+href="https://android.googlesource.com/platform/hardware/ril/+/master/include/telephony/ril.h"><code>https://android.googlesource.com/platform/hardware/ril/+/master/include/telephony/ril.h</code></a>.</p>
+
+<h2 id="implementation">Implementation</h2>
+
+<p>The following sections describe how to implement the subfeatures of the
+RIL refactoring feature.</p>
+
+<h3 id="errorcodes">Implementing enhanced RIL error codes</h3>
+
+<h4 id="errorcodes-problem">Problem</h4>
+
+<p>Almost all RIL request calls can return the <code>GENERIC_FAILURE</code>
+error code in response to an error. This is an issue with all solicited
+responses returned by the OEMs. It is difficult to debug an issue from
+the bug report if the same <code>GENERIC_FAILURE</code> error code is
+returned by RIL calls for different reasons. It can take considerable time
+for vendors to even identify what part of the code could have returned a
+<code>GENERIC_FAILURE</code> code.</p>
+
+<h4 id="errorcodes-solution">Solution</h4>
+
+<p>OEMs should return a distinct error code value associated
+with each of the different errors that are currently categorized as
+<code>GENERIC_FAILURE</code>.</p>
+
+<p>If OEMs do not want to publicly reveal their custom error codes, they may
+return errors as a distinct set of integers (for example, from 1 to x) that
+are mapped as <code>OEM_ERROR_1</code> to <code>OEM_ERROR_X</code>. The
+vendor should make sure each such masked error code maps to a unique
+error reason in their code. The purpose of doing this is
+to speed up debugging RIL issues whenever generic errors are returned
+by the OEM. It can take too much time to identify what exactly caused a
+<code>GENERIC_FAILURE</code>, and sometimes it is impossible to figure out.</p>
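<p>The masking scheme can be sketched as a stable mapping from vendor-internal
failure reasons to <code>OEM_ERROR_n</code>-style codes. The reason strings
below are hypothetical; the point is that each internal reason maps to one
unique, repeatable code, so a given masked code in a bug report always points
at the same code path.</p>

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of masking vendor-internal failure reasons as OEM_ERROR_n codes.
// Each distinct internal reason gets a unique, stable number on first use,
// so OEM_ERROR_3 in a bug report always means the same failure.
public class OemErrorMask {
    private final Map<String, Integer> codes = new LinkedHashMap<>();

    // Returns the masked code for an internal reason, assigning the next
    // free number the first time the reason is seen.
    public int codeFor(String internalReason) {
        return codes.computeIfAbsent(internalReason, r -> codes.size() + 1);
    }

    public String nameFor(String internalReason) {
        return "OEM_ERROR_" + codeFor(internalReason);
    }
}
```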
+
+<p>In <code>ril.h</code>, more error codes are
+added for enums <code>RIL_LastCallFailCause</code> and
+<code>RIL_DataCallFailCause</code> so that vendor code avoids returning
+generic errors like <code>CALL_FAIL_ERROR_UNSPECIFIED</code> and
+<code>PDP_FAIL_ERROR_UNSPECIFIED</code>.</p>
+
+<h3 id="version">Implementing enhanced RIL versioning</h3>
+
+<h4 id="version-problem">Problem</h4>
+
+<p>RIL versioning is not accurate enough. The mechanism for vendors to
+report their RIL version is not clear, causing vendors to report an incorrect
+version. A workaround method of estimating the version is used, but it can
+be inaccurate.</p>
+
+<h4 id="version-solution">Solution</h4>
+
+<p>There is a documented section in <code>ril.h</code> describing what a
+particular RIL version value corresponds to. Each
+RIL version is documented, including what changes correspond
+to that version. Vendors must update their version in code when making
+changes corresponding to that version, and return that version while doing
+<code>RIL_REGISTER</code>.</p>
+
+<h3 id="wakelocks">Implementing redesigned RIL communication using
+wakelocks</h3>
+
+<h4 id="wakelocks-prob-sum">Problem summary</h4>
+
+<p>Timed wakelocks are used in RIL communication in an imprecise way,
+which negatively affects battery performance. RIL requests can be either
+solicited or unsolicited. Solicited requests should be classified as one of
+the following:</p>
+
+<ul>
+<li>Synchronous: requests that do not take considerable time to return a
+response (for example, <code>RIL_REQUEST_GET_SIM_STATUS</code>).</li>
+
+<li>Asynchronous: requests that take considerable time to return a
+response (for example, <code>RIL_REQUEST_QUERY_AVAILABLE_NETWORKS</code>).</li>
+</ul>
+
+<p>Follow these steps to implement redesigned wakelocks:</p>
+
+<ol>
+<li>
+Classify solicited RIL commands as either synchronous or asynchronous
+depending on how much time they take to respond.
+<p>Here are some things to consider while making
+that decision:</p>
+
+<ul>
+<li>As explained in the solution of asynchronous solicited RIL requests,
+because the requests take considerable time, RIL Java releases the wakelock
+after receiving ack from vendor code. This might cause the application
+processor to go from idle to suspend state. When the response is available
+from vendor code, RIL Java (the application processor) will re-acquire the
+wakelock and process the response, and later go to idle state again. This
+process of moving from idle to suspend state and back to idle can consume
+a lot of power.</li>
+
+<li>If the response time isn't long, holding the wakelock and
+staying in idle state for the entire response time can be more
+power efficient than entering suspend state by releasing the wakelock and
+then waking up when the response arrives. Vendors should therefore use
+platform-specific power measurement to find the threshold time 't' at which
+staying in idle state for the entire time 't' consumes
+more power than moving from idle to suspend and back to idle in the same time
+'t'. When that time 't' is known, RIL commands that take more than time
+'t' can be classified as asynchronous, and the rest
+can be classified as synchronous.</li>
+</ul>
+</li>
+
+<li>Understand the RIL communications scenarios described in the <a
+href="#ril-comm-scenarios">RIL communication scenarios</a> section.</li>
+
+<li>Follow the solutions in the scenarios by modifying your code to handle
+RIL solicited and unsolicited requests.</li>
+</ol>
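<p>Step 1 above can be sketched as a threshold comparison. The threshold value
used here is a placeholder; in practice it comes from the platform-specific
power measurement described above.</p>

```java
// Sketch of step 1: classify a solicited RIL command by its typical
// response time against the measured power break-even threshold 't'.
// The threshold is a placeholder; real values come from platform-specific
// power measurement.
public class RilCommandClassifier {
    private final long thresholdMillis;

    public RilCommandClassifier(long thresholdMillis) {
        this.thresholdMillis = thresholdMillis;
    }

    // Commands slower than 't' are asynchronous (release the wakelock and
    // allow suspend); the rest are synchronous (hold the wakelock until
    // the response arrives).
    public String classify(long typicalResponseMillis) {
        return typicalResponseMillis > thresholdMillis ? "asynchronous" : "synchronous";
    }
}
```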
+
+<h4 id="ril-comm-scenarios">RIL communication scenarios</h4>
+
+<p>For implementation details of the functions used in the
+following diagrams, see the source code of <code>ril.cpp</code>:
+<code>acquireWakeLock()</code>, <code>decrementWakeLock()</code>,
+<code>clearWakeLock()</code>.</p>
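<p>The counting semantics of those functions can be modeled in plain Java, with
the kernel wakelock stubbed out as a boolean. This is a sketch of the scheme,
not the <code>ril.cpp</code> implementation itself.</p>

```java
// Plain-Java model of ril.cpp's counted wakelock: each outstanding request
// increments the counter, each completed response decrements it, and the
// underlying (stubbed) wakelock is released only when the count hits zero.
public class CountedWakeLock {
    private int count = 0;
    private boolean held = false; // stands in for the real kernel wakelock

    public synchronized void acquireWakeLock() {
        count++;
        held = true;
    }

    public synchronized void decrementWakeLock() {
        if (count > 0) count--;
        if (count == 0) held = false;
    }

    // Force-release, e.g. on error cleanup.
    public synchronized void clearWakeLock() {
        count = 0;
        held = false;
    }

    public synchronized boolean isHeld() {
        return held;
    }
}
```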
+
+<h5>Scenario 1: RIL request from Java APIs and solicited asynchronous response
+to that request</h5>
+
+<p><img src="images/ril-refactor-scenario-1.png"></p>
+
+<h6>Problem</h6>
+
+<p>If the RIL solicited response is expected to take considerable time (for
+example, <code>RIL_REQUEST_QUERY_AVAILABLE_NETWORKS</code>), then the wakelock
+is held for a long time on the application processor side, which is a
+problem. Also, modem problems result in a long wait.</p>
+
+<h6>Solution part 1</h6>
+
+<p>In this scenario, wakelock equivalent is held by Modem code (RIL request
+and asynchronous response back).</p>
+
+<p><img src="images/ril-refactor-scenario-1-solution-1.png"></p>
+
+<p>As shown in the above sequence diagram:</p>
+
+<ol>
+<li>RIL request is sent, and the modem needs to acquire wakelock to process
+the request.</li>
+
+<li>The modem code sends acknowledgement that causes the Java side to decrement
+the wakelock counter and release it if the wakelock counter value is 0.</li>
+
+<li>After the modem processes the request, it sends an interrupt to the
+vendor code, which acquires a wakelock and sends a response to <code>ril.cpp</code>.
+<code>ril.cpp</code> then acquires a wakelock and sends a response to the Java side.</li>
+
+<li>When the response reaches the Java side, wakelock is acquired and response
+is sent back to caller.</li>
+
+<li>After that response is processed by all modules, acknowledgement is
+sent back to <code>ril.cpp</code> over a socket. <code>ril.cpp</code> then
+releases the wakelock that was acquired in step 3.</li>
+</ol>
+
+<p>Note that the wakelock timeout duration for the request-ack sequence
+would be smaller than the currently used timeout duration because the ack
+should be received back fairly quickly.</p>
+
+<h6>Solution part 2</h6>
+
+<p>In this scenario, wakelock is not held by modem and response is quick
+(synchronous RIL request and response).</p>
+
+<p><img src="images/ril-refactor-scenario-1-solution-2.png"></p>
+
+<p>As shown in the above sequence diagram:</p>
+
+<ol>
+<li>RIL request is sent by calling <code>acquireWakeLock()</code> on the
+Java side.</li>
+
+<li>Vendor code doesn't need to acquire wakelock and can process the request
+and respond quickly.</li>
+
+<li>When the response is received by the Java side,
+<code>decrementWakeLock()</code> is called, which decreases wakelock counter
+and releases wakelock if the counter value is 0.</li>
+</ol>
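+<p>A minimal sketch of the Java-side wakelock counting described above,
+assuming a simple counter class (the class is illustrative; only the
+<code>acquireWakeLock()</code>/<code>decrementWakeLock()</code> names mirror
+the actual helpers):</p>

```java
// Illustrative sketch of reference-counted wakelock handling; the class and
// its state model are invented for explanation, not the actual RIL code.
class WakeLockCounter {
    private int count = 0;
    private boolean held = false;

    // Called when a request or response enters the Java side.
    public synchronized void acquireWakeLock() {
        count++;
        held = true; // acquire the underlying wakelock on the 0 -> 1 transition
    }

    // Called when an ack/response has been fully processed; the wakelock is
    // released only when the counter returns to 0.
    public synchronized void decrementWakeLock() {
        if (count > 0 && --count == 0) {
            held = false;
        }
    }

    public synchronized boolean isHeld() {
        return held;
    }
}
```

+<p>The key property is that the wakelock is released only when every
+outstanding request has been acknowledged, i.e. when the counter returns
+to 0.</p>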
+
+<p>Note that this synchronous vs. asynchronous behavior is hardcoded for a
+particular RIL command and decided on a call-by-call basis.</p>
+
+<h5>Scenario 2: RIL unsolicited response</h5>
+
+<p><img src="images/ril-refactor-scenario-2.png"></p>
+
+<p>As shown in the above diagram, RIL unsolicited responses have a wakelock
+type flag in the response that indicates whether a wakelock needs to be
+acquired or not for the particular response received from the vendor. If
+the flag is set, then a timed wakelock is set and response is sent over a
+socket to the Java side. When the timer expires, the wakelock is released.</p>
+
+<h6>Problem</h6>
+
+<p>The timed wakelock illustrated in Scenario 2 could be too long or too
+short for different RIL unsolicited responses.</p>
+
+<h6>Solution</h6>
+
+<p><img src="images/ril-refactor-scenario-2-solution.png"></p>
+
+<p>As shown, the problem can be solved by sending an acknowledgement from
+the Java code to the native side (<code>ril.cpp</code>), instead of holding
+a timed wakelock on the native side while sending an unsolicited response.</p>
+
+<h2 id="validation">Validation</h2>
+
+<p>The following sections describe how to validate the implementation of
+the RIL refactoring feature's subfeatures.</p>
+
+<h3 id="validate-error">Validating enhanced RIL error codes</h3>
+
+<p>After adding new error codes to replace the <code>GENERIC_FAILURE</code>
+code, verify that the new error codes are returned by the RIL call instead
+of <code>GENERIC_FAILURE</code>.</p>
+
+<h3 id="validate-version">Validating enhanced RIL versioning</h3>
+
+<p>Verify that the RIL version corresponding to your RIL code is returned
+during <code>RIL_REGISTER</code> rather than the <code>RIL_VERSION</code>
+defined in <code>ril.h</code>.</p>
+
+<h3 id="validate-wakelocks">Validating redesigned wakelocks</h3>
+
+<p>Verify that RIL calls are identified as synchronous or asynchronous.</p>
+
+<p>Because battery power consumption can be hardware/platform dependent,
+vendors should do some internal testing to find out if using the new wakelock
+semantics for asynchronous calls leads to battery power savings.</p>
diff --git a/src/devices/tech/dalvik/dex-format.jd b/src/devices/tech/dalvik/dex-format.jd
index 4b03270..d3cc34f 100644
--- a/src/devices/tech/dalvik/dex-format.jd
+++ b/src/devices/tech/dalvik/dex-format.jd
@@ -286,7 +286,7 @@
document.</p>
<p class="note"><strong>Note:</strong> Support for version <code>037</code> of
-the format was added in the Android N release. Prior to this release most
+the format was added in the Android 7.0 release. Prior to this release most
versions of Android have used version <code>035</code> of the format. The only
difference between versions <code>035</code> and <code>037</code> is the
addition of default methods and the adjustment of the <code>invoke</code>
diff --git a/src/devices/tech/dalvik/images/jit-arch.png b/src/devices/tech/dalvik/images/jit-arch.png
new file mode 100644
index 0000000..de6177b
--- /dev/null
+++ b/src/devices/tech/dalvik/images/jit-arch.png
Binary files differ
diff --git a/src/devices/tech/dalvik/images/jit-daemon.png b/src/devices/tech/dalvik/images/jit-daemon.png
new file mode 100644
index 0000000..60098b9
--- /dev/null
+++ b/src/devices/tech/dalvik/images/jit-daemon.png
Binary files differ
diff --git a/src/devices/tech/dalvik/images/jit-profile-comp.png b/src/devices/tech/dalvik/images/jit-profile-comp.png
new file mode 100644
index 0000000..0001bdc
--- /dev/null
+++ b/src/devices/tech/dalvik/images/jit-profile-comp.png
Binary files differ
diff --git a/src/devices/tech/dalvik/images/jit-workflow.png b/src/devices/tech/dalvik/images/jit-workflow.png
new file mode 100644
index 0000000..57365eb
--- /dev/null
+++ b/src/devices/tech/dalvik/images/jit-workflow.png
Binary files differ
diff --git a/src/devices/tech/dalvik/jit-compiler.jd b/src/devices/tech/dalvik/jit-compiler.jd
new file mode 100644
index 0000000..d341450
--- /dev/null
+++ b/src/devices/tech/dalvik/jit-compiler.jd
@@ -0,0 +1,267 @@
+page.title=Implementing ART Just-In-Time (JIT) Compiler
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+
+<div id="qv-wrapper">
+<div id="qv">
+ <h2 id="Contents">In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+</div>
+</div>
+
+<p>
+Android 7.0 adds a just-in-time (JIT) compiler with code profiling to Android
+runtime (ART) that constantly improves the performance of Android apps as they
+run. The JIT compiler complements ART's current ahead-of-time (AOT) compiler and
+improves runtime performance, saves storage space, and speeds app updates and
+system updates.
+</p>
+
+<p>
+The JIT compiler also improves upon the AOT compiler by avoiding system slowdown
+during automatic application updates or recompilation of applications during
+OTAs. This feature should require minimal device integration on the part of
+manufacturers.
+</p>
+
+<p>
+JIT and AOT use the same compiler with an almost identical set of optimizations.
+The generated code is not necessarily the same: JIT makes use of runtime
+type information and can do better inlining. Also, JIT sometimes performs
+on-stack replacement (OSR) compilation, which again generates slightly
+different code.
+</p>
+
+<p>
+See <a
+href="https://developer.android.com/preview/api-overview.html#jit_aot">Profile-guided
+JIT/AOT Compilation</a> on developer.android.com for a more thorough overview.
+</p>
+
+<h2 id="architectural-overview">Architectural Overview</h2>
+
+<img src="images/jit-arch.png" alt="JIT architecture" width="633" id="JIT-architecture" />
+<p class="img-caption">
+ <strong>Figure 1.</strong> JIT architecture - how it works
+</p>
+
+<h2 id="flow">Flow</h2>
+
+<p>
+JIT compilation works in this manner:
+</p>
+
+<ol>
+<li>The user runs the app, which then triggers ART to load the .dex file.
+<li>If the .oat file (the AOT binary for the .dex file) is available, ART uses
+it directly. Note that .oat files are generated regularly; however, that does
+not imply they contain compiled code (an AOT binary).
+<li>If no .oat file is available, ART uses either the JIT or an interpreter to
+execute the .dex file. ART always uses the .oat files if available; otherwise,
+it uses the APK and extracts it in memory to get to the .dex files, incurring
+a large memory overhead (equal to the size of the .dex files).
+<li>The JIT is enabled for any application that is not compiled according to
+the "speed" compilation filter (which compiles as much of the app as
+possible).
+<li>The JIT profile data is dumped to a file in a system directory. Only the
+application has access to the directory.
+<li>The AOT compilation (dex2oat) daemon parses that file to drive its
+compilation.</li>
+</ol>
+
+<img src="images/jit-profile-comp.png" alt="Profile-guided comp" width="452" id="JIT-profile-comp" />
+<p class="img-caption">
+ <strong>Figure 2.</strong> Profile-guided compilation
+</p>
+
+<img src="images/jit-daemon.png" alt="JIT daemon" width="718" id="JIT-daemon" />
+<p class="img-caption">
+ <strong>Figure 3.</strong> How the daemon works
+</p>
+
+<p>
+Google Play services is an example of an app used by other apps; such
+applications tend to behave more like shared libraries.
+</p>
+
+<h2 id="jit-workflow">JIT Workflow</h2>
+<p>
+The following diagram provides a high-level overview of how the JIT works.
+</p>
+
+<img src="images/jit-workflow.png" alt="JIT architecture" width="707" id="JIT-workflow" />
+<p class="img-caption">
+ <strong>Figure 4.</strong> JIT data flow
+</p>
+
+<p>
+This means:
+</p>
+
+<ul>
+<li>Profiling information is stored in the code cache and subjected to garbage
+collection under memory pressure.
+<li>As a result, there’s no guarantee the snapshot taken when the application is
+in the background will contain the complete data (i.e. everything that was
+JITed).
+<li>There is no attempt to make sure we record everything as that will impact
+runtime performance.
+<li>Methods can be in three different states:
+<ul>
+ <li>interpreted (dex code)</li>
+ <li>JIT compiled</li>
+ <li>AOT compiled</li>
+</ul></li>
+<li>If both JIT and AOT code exist (e.g. due to repeated de-optimizations),
+the JITed code will be preferred.
+<li>The memory requirement to run JIT without impacting foreground app
+performance depends upon the app in question. Large apps will require more
+memory than small apps. In general, big apps stabilize around 4 MB.</li>
+</ul>
+
+<h2 id="system-properties">System Properties</h2>
+
+<p>
+These system properties control JIT behavior:
+</p><ul>
+<li><code>dalvik.vm.usejit <true|false></code> - Whether or not the JIT is
+enabled.
+<li><code>dalvik.vm.jitinitialsize</code> (default 64K) - The initial capacity
+of the code cache. The code cache will regularly GC and increase if needed. It
+is possible to view the size of the code cache for your app with:<br>
+<code> $ adb shell dumpsys meminfo -d <pid></code>
+<li><code>dalvik.vm.jitmaxsize</code> (default 64M) - The maximum capacity of
+the code cache.
+<li><code>dalvik.vm.jitthreshold <integer></code> (default 10000) - This
+is the threshold that the "hotness" counter of a method needs to pass in order
+for the method to be JIT compiled. The "hotness" counter is a metric internal
+to the runtime. It includes the number of calls, backward branches, and other
+factors.
+<li><code>dalvik.vm.usejitprofiles <true|false></code> - Whether or not
+JIT profiles are enabled; this may be used even if usejit is false.
+<li><code>dalvik.vm.jitprithreadweight <integer></code> (defaults to
+<code>dalvik.vm.jitthreshold</code> / 20) - The weight of the JIT "samples"
+(see jitthreshold) for the application UI thread. Used to speed up compilation
+of methods that directly affect the user's experience when interacting with
+the app.
+<li><code>dalvik.vm.jittransitionweight <integer></code> (defaults to
+<code>dalvik.vm.jitthreshold</code> / 10) - The weight of a method
+invocation that transitions between compiled code and the interpreter. This
+helps ensure the methods involved are compiled to minimize transitions
+(which are expensive).
+</li>
+</ul>
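+<p>To illustrate how <code>dalvik.vm.jitthreshold</code> and
+<code>dalvik.vm.jitprithreadweight</code> interact, the weighted sample
+counting can be sketched as follows. This is a hypothetical simplification;
+the class and counting model are invented for illustration, not the actual
+runtime code:</p>

```java
// Hypothetical sketch of ART's weighted "hotness" counting; names and the
// counting model are illustrative, not the real runtime implementation.
class HotnessCounter {
    private final int threshold;       // dalvik.vm.jitthreshold
    private final int priThreadWeight; // dalvik.vm.jitprithreadweight
    private int hotness = 0;

    public HotnessCounter(int threshold, int priThreadWeight) {
        this.threshold = threshold;
        this.priThreadWeight = priThreadWeight;
    }

    // Records one sample; samples on the UI thread carry extra weight so
    // user-visible methods reach the compilation threshold sooner.
    public boolean sample(boolean onUiThread) {
        hotness += onUiThread ? priThreadWeight : 1;
        return hotness >= threshold; // true: method is ready for JIT compilation
    }
}
```

+<p>With the defaults (threshold 10000, UI-thread weight 10000 / 20 = 500),
+a method sampled 20 times on the UI thread reaches the threshold, while a
+background-only method would need 10000 samples.</p>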
+
+<h2 id="tuning">Tuning</h2>
+
+<p>
+Partners may precompile (some of) the system apps if they want to. Initial JIT
+performance vs. pre-compiled performance depends on the app, but in general
+they are quite close. It might be worth noting that precompiled apps are not
+profiled and as such take more space and may miss out on other optimizations.
+</p>
+
+<p>
+In Android 7.0, there's a generic way to specify the level of
+compilation/verification based on the different use cases. For example, the
+default option for install time is to do only verification (and postpone
+compilation to a later stage). The compilation levels can be configured via
+system properties with the defaults being:
+</p>
+
+<pre>
+pm.dexopt.install=interpret-only
+pm.dexopt.bg-dexopt=speed-profile
+pm.dexopt.ab-ota=speed-profile
+pm.dexopt.nsys-library=speed
+pm.dexopt.shared-apk=speed
+pm.dexopt.forced-dexopt=speed
+pm.dexopt.core-app=speed
+pm.dexopt.first-boot=interpret-only
+pm.dexopt.boot=verify-profile
+</pre>
+
+<p>
+Note the reference to A/B over-the-air (OTA) updates here.
+</p>
+
+<p>
+Check <code>$ adb shell cmd package compile</code> for usage. Note that all
+commands are preceded by a dollar ($) sign that should be excluded when
+copying and pasting. A few common use cases:
+</p>
+
+<h3 id="turn-on-jit-logging">Turn on JIT logging</h3>
+
+<pre>
+$ adb root
+$ adb shell stop
+$ adb shell setprop dalvik.vm.extra-opts -verbose:jit
+$ adb shell start
+</pre>
+
+<h3 id="disable-jit-and-run-applications-in-interpreter">Disable JIT</h3>
+
+<pre>
+$ adb root
+$ adb shell stop
+$ adb shell setprop dalvik.vm.usejit false
+$ adb shell start
+</pre>
+
+<h3 id="force-compilation-of-a-specific-package">Force compilation of a specific
+package</h3>
+
+<ul>
+<li>Profile-based:
+<code>$ adb shell cmd package compile -m speed-profile -f
+my-package</code>
+<li>Full:
+<code>$ adb shell cmd package compile -m speed -f
+my-package</code></li>
+</ul>
+
+<h3 id="force-compilation-of-all-packages">Force compilation of all
+packages</h3>
+
+<ul>
+<li>Profile-based:
+<code>$ adb shell cmd package compile -m speed-profile -f
+-a</code>
+<li>Full:
+<code>$ adb shell cmd package compile -m speed -f -a</code></li></ul>
+
+<h3 id="clear-profile-data-and-remove-compiled-code">Clear profile data and
+remove compiled code</h3>
+
+<ul>
+<li>One package:
+<code>$ adb shell cmd package compile --reset my-package</code>
+<li>All packages
+<code>$ adb shell cmd package compile --reset
+-a</code></li>
+</ul>
+
+<h2 id="validation">Validation</h2>
+
+<p>
+To ensure their version of the feature works as intended, partners should run
+the ART tests in <code>android/art/test</code>. Also, see the CTS test
+<code>hostsidetests/compilation</code> for userdebug builds.
+</p>
diff --git a/src/devices/tech/display/dnd.jd b/src/devices/tech/display/dnd.jd
new file mode 100644
index 0000000..5588d31
--- /dev/null
+++ b/src/devices/tech/display/dnd.jd
@@ -0,0 +1,70 @@
+page.title=Configuring DND
+@jd:body
+
+<!--
+ Copyright 2015 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>Android 7.0 supports the following do not disturb (DND) configurations.</p>
+
+<h2 id="third_party">Third-party automatic rules</h2>
+<p>Third-party applications can use the DND Access API to control DND rules:</p>
+<ul>
+<li><strong>Applications</strong> can export and list custom DND rules, which
+appear next to built-in Android DND rules in the DND settings.</li>
+<li><strong>Users</strong> can access all DND controls for all rules (both
+automatic and manually-created).</li>
+<li>The <strong>platform</strong> can implement DND rules from different sources
+without creating unexpected states.</li>
+</ul>
+
+<h2 id="control_alarms">Controlling alarms</h2>
+<p>When DND mode is enabled, the Android settings UI presents user options for
+configuring:</p>
+<ul>
+<li><strong>DND end condition as next alarm time</strong>. Enables user to set
+the DND end condition to an alarm. Appears only if an alarm is set for a time
+within a week from now <em>and</em> the day of the week for that alarm is
+<em>not</em> the same day of the week as today. (Not supported for automatic
+rules.)</li>
+<li><strong>Alarm can override end time</strong>. Enables users to configure the
+DND end condition as a specific time or next alarm (whichever comes first).</li>
+</ul>
+
+<h2 id="suppress_vis_distract">Suppressing visual distractions</h2>
+<p>The Android settings UI presents user options for suppressing visual
+distractions such as heads up notifications, fullscreen intents, ambient
+display, and LED notification lights.</p>
+
+<h2 id="implementation">Customizing DND settings</h2>
+<p>When customizing settings, OEMs must preserve the AOSP behavior of the system
+APIs and maintain the behavior of DND settings. Specifically, the DND settings
+page in system settings must include the following:</p>
+<ul>
+<li><strong>Application-provided DND rules</strong>. These automated DND rules
+must include active rule instances and rule listings in the Add Rule menu.</li>
+<li><strong>Pre-loaded application DND rules</strong>. OEMs can provide DND
+rules that appear next to end user manually-created rules.</li>
+</ul>
+<p>For details on new DND APIs, refer to
+<code><a href="https://developer.android.com/reference/android/service/notification/package-summary.html">android.service.notification</a></code>
+reference documentation.</p>
diff --git a/src/devices/tech/display/hdr.jd b/src/devices/tech/display/hdr.jd
new file mode 100644
index 0000000..2062a37
--- /dev/null
+++ b/src/devices/tech/display/hdr.jd
@@ -0,0 +1,700 @@
+page.title=HDR Video Playback
+@jd:body
+
+<!--
+ Copyright 2015 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>High dynamic range (HDR) video is the next frontier in high-quality
+video decoding, bringing unmatched scene reproduction quality. It does
+so by significantly increasing the dynamic range of the luminance component
+(from the current 100 cd/m<sup>2</sup> to 1000s of cd/m<sup>2</sup>) and by using a much wider
+color space (BT 2020). This is now a central element of the 4K UHD evolution
+in the TV space.</p>
+
+<p>In Android 7.0, initial HDR support has been added, which includes the
+creation of proper constants for the discovery and setup of HDR video
+pipelines. That means defining codec types and display modes and specifying
+how HDR data must be passed to MediaCodec and supplied to HDR decoders. HDR
+is only supported in tunneled video playback mode.</p>
+
+<p>The purpose of this document is to help application developers support HDR stream
+playback, and help OEMs and SOCs enable the HDR features on Android 7.0.</p>
+
+<h2 id="technologies">Supported HDR technologies</h2>
+
+<p>As of the Android 7.0 release, the following HDR technologies are
+supported.</p>
+
+<table>
+<tbody>
+<tr>
+<th>Technology
+</th>
+<th>Dolby-Vision
+</th>
+<th>HDR10
+</th>
+<th>VP9-HLG
+</th>
+<th>VP9-PQ
+</th>
+</tr>
+<tr>
+<th>Codec
+</th>
+<td>AVC/HEVC
+</td>
+<td>HEVC
+</td>
+<td>VP9
+</td>
+<td>VP9
+</td>
+</tr>
+<tr>
+<th>Transfer Function
+</th>
+<td>ST-2084
+</td>
+<td>ST-2084
+</td>
+<td>HLG
+</td>
+<td>ST-2084
+</td>
+</tr>
+<tr>
+<th>HDR Metadata Type
+</th>
+<td>Dynamic
+</td>
+<td>Static
+</td>
+<td>None
+</td>
+<td>Static
+</td>
+</tr>
+</tbody>
+</table>
+
+<p>In Android 7.0, <b>only HDR playback via tunneled mode is defined</b>,
+but devices may add support for playback of HDR on SurfaceViews using opaque
+video buffers. In other words:</p>
+<ul>
+<li>There is no standard Android API to check if HDR playback is supported
+using non-tunneled decoders.</li>
+<li>Tunneled video decoders that advertise HDR playback capability must
+support HDR playback when connected to HDR-capable displays.</li>
+<li>GL composition of HDR content is not supported by the AOSP Android
+7.0 release.</li>
+</ul>
+
+<h2 id="discovery">Discovery</h2>
+
+<p>HDR Playback requires an HDR-capable decoder and a connection to an
+HDR-capable display. Optionally, some technologies require a specific
+extractor.</p>
+
+<h3 id="display">Display</h3>
+
+<p>Applications shall use the new <code>Display.getHdrCapabilities</code>
+API to query the HDR technologies supported by the specified display. This is
+basically the information in the EDID Static Metadata Data Block as defined
+in CTA-861.3:</p>
+
+<ul>
+<li><code>public Display.HdrCapabilities getHdrCapabilities()</code><br>
+Returns the display's HDR capabilities.</li>
+
+<li><code>Display.HdrCapabilities</code><br>
+Encapsulates the HDR capabilities of a given display. For example, what HDR
+types it supports and details about the desired luminance data.</li>
+</ul>
+
+<p><b>Constants:</b></p>
+
+<ul>
+<li><code>int HDR_TYPE_DOLBY_VISION</code><br>
+Dolby Vision support.</li>
+
+<li><code>int HDR_TYPE_HDR10</code><br>
+HDR10 / PQ support.</li>
+
+<li><code>int HDR_TYPE_HLG</code><br>
+Hybrid Log-Gamma support.</li>
+
+<li><code>float INVALID_LUMINANCE</code><br>
+Invalid luminance value.</li>
+</ul>
+
+<p><b>Public Methods:</b></p>
+
+<ul>
+<li><code>float getDesiredMaxAverageLuminance()</code><br>
+Returns the desired content max frame-average luminance data in cd/m<sup>2</sup> for
+this display.</li>
+
+<li><code>float getDesiredMaxLuminance()</code><br>
+Returns the desired content max luminance data in cd/m<sup>2</sup> for this display.</li>
+
+<li><code>float getDesiredMinLuminance()</code><br>
+Returns the desired content min luminance data in cd/m<sup>2</sup> for this display.</li>
+
+<li><code>int[] getSupportedHdrTypes()</code><br>
+Gets the supported HDR types of this display (see constants). Returns empty
+array if HDR is not supported by the display.</li>
+</ul>
+
+<h3 id="decoder">Decoder</h3>
+
+<p>Applications shall use the existing
+<a href="https://developer.android.com/reference/android/media/MediaCodecInfo.CodecCapabilities.html#profileLevels">
+<code>CodecCapabilities.profileLevels</code></a> API to verify support for the
+new HDR capable profiles:</p>
+
+<h4>Dolby-Vision</h4>
+
+<p><code>MediaFormat</code> mime constant:</p>
+<blockquote><pre>
+String MIMETYPE_VIDEO_DOLBY_VISION
+</pre></blockquote>
+
+<p><code>MediaCodecInfo.CodecProfileLevel</code> profile constants:</p>
+<blockquote><pre>
+int DolbyVisionProfileDvavPen
+int DolbyVisionProfileDvavPer
+int DolbyVisionProfileDvheDen
+int DolbyVisionProfileDvheDer
+int DolbyVisionProfileDvheDtb
+int DolbyVisionProfileDvheDth
+int DolbyVisionProfileDvheDtr
+int DolbyVisionProfileDvheStn
+</pre></blockquote>
+
+<p>Dolby Vision video layers and metadata must be concatenated into a single
+buffer per frame by video applications. This is done automatically by the
+Dolby-Vision capable MediaExtractor.</p>
+
+<h4>HEVC HDR 10</h4>
+
+<p><code>MediaCodecInfo.CodecProfileLevel</code> profile constants:</p>
+<blockquote><pre>
+int HEVCProfileMain10HDR10
+</pre></blockquote>
+
+<h4>VP9 HLG &amp; PQ</h4>
+
+<p><code>MediaCodecInfo.CodecProfileLevel</code> profile
+constants:</p>
+<blockquote><pre>
+int VP9Profile2HDR
+int VP9Profile3HDR
+</pre></blockquote>
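+<p>Conceptually, verifying support is a scan of
+<code>CodecCapabilities.profileLevels</code> for one of the profile constants
+above. The sketch below uses plain ints as stand-ins for the real
+<code>MediaCodecInfo.CodecProfileLevel</code> entries, since the actual check
+runs against the Android media framework:</p>

```java
// Illustrative stand-in for scanning CodecCapabilities.profileLevels; the
// int values used by callers are hypothetical, not framework constants.
class HdrProfileCheck {
    // Returns true if any profile advertised by the codec matches one of the
    // wanted HDR-capable profiles (e.g. VP9Profile2HDR or VP9Profile3HDR).
    public static boolean supportsAnyProfile(int[] advertised, int[] wanted) {
        for (int a : advertised) {
            for (int w : wanted) {
                if (a == w) return true;
            }
        }
        return false;
    }
}
```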
+
+<p>If a platform supports an HDR-capable decoder, it shall also support an
+HDR-capable extractor.</p>
+
+<p>Only tunneled decoders are guaranteed to play back HDR content. Playback
+by non-tunneled decoders may result in the HDR information being lost and
+the content being flattened into an SDR color volume.</p>
+
+<h3 id="extractor">Extractor</h3>
+
+<p>The following containers are supported for the various HDR technologies
+on Android 7.0:</p>
+
+<table>
+<tbody>
+<tr>
+<th>Technology
+</th>
+<th>Dolby-Vision
+</th>
+<th>HDR10
+</th>
+<th>VP9-HLG
+</th>
+<th>VP9-PQ
+</th>
+</tr>
+<tr>
+<th>Container
+</th>
+<td>MP4
+</td>
+<td>MP4
+</td>
+<td>WebM
+</td>
+<td>WebM
+</td>
+</tr>
+</tbody>
+</table>
+
+<p>The platform does not support discovering whether a track in a file
+requires HDR support. Applications may parse the codec-specific data
+to determine if a track requires a specific HDR profile.</p>
+
+<h3 id="summary">Summary</h3>
+
+<p>Component requirements for each HDR technology are shown in the following table:</p>
+
+<div style="overflow:auto">
+<table>
+<tbody>
+<tr>
+<th>Technology
+</th>
+<th>Dolby-Vision
+</th>
+<th>HDR10
+</th>
+<th>VP9-HLG
+</th>
+<th>VP9-PQ
+</th>
+</tr>
+<tr>
+<th>Supported HDR type (Display)
+</th>
+<td>HDR_TYPE_DOLBY_VISION
+</td>
+<td>HDR_TYPE_HDR10
+</td>
+<td>HDR_TYPE_HLG
+</td>
+<td>HDR_TYPE_HDR10
+</td>
+</tr>
+<tr>
+<th>Container (Extractor)
+</th>
+<td>MP4
+</td>
+<td>MP4
+</td>
+<td>WebM
+</td>
+<td>WebM
+</td>
+</tr>
+<tr>
+<th>Decoder
+</th>
+<td>MIMETYPE_VIDEO_DOLBY_VISION
+</td>
+<td>MIMETYPE_VIDEO_HEVC
+</td>
+<td>MIMETYPE_VIDEO_VP9
+</td>
+<td>MIMETYPE_VIDEO_VP9
+</td>
+</tr>
+<tr>
+<th>Profile (Decoder)
+</th>
+<td>One of the Dolby profiles
+</td>
+<td>HEVCProfileMain10HDR10
+</td>
+<td>VP9Profile2HDR or
+VP9Profile3HDR
+</td>
+<td>VP9Profile2HDR or
+VP9Profile3HDR
+</td>
+</tr>
+</tbody>
+</table>
+</div>
+<br>
+
+<p>Notes:</p>
+<ul>
+<li>Dolby-Vision bitstreams are packaged in an MP4 container in a way defined
+by Dolby. Applications may implement their own Dolby-capable extractors as
+long as they package the access units from the corresponding layers into a
+single access unit for the decoder as defined by Dolby.</li>
+<li>A platform may support an HDR-capable extractor, but no corresponding
+HDR-capable decoder.</li>
+</ul>
+
+<h2 id="playback">Playback</h2>
+
+<p>After an application has verified support for HDR playback, it can play
+back HDR content nearly the same way as it plays back non-HDR content,
+with the following caveats:</p>
+
+<ul>
+<li>For Dolby-Vision, whether or not a specific media file/track requires
+an HDR capable decoder is not immediately available. The application must
+have this information in advance or be able to obtain this information by
+parsing the codec-specific data section of the MediaFormat.</li>
+<li><code>CodecCapabilities.isFormatSupported</code> does not consider whether
+the tunneled decoder feature is required for supporting such a profile.</li>
+</ul>
+
+<h2 id="enablinghdr">Enabling HDR platform support</h2>
+
+<p>SoC vendors and OEMs must do additional work to enable HDR platform
+support for a device.</p>
+
+<h3 id="platformchanges">Platform changes in Android 7.0 for HDR</h3>
+
+<p>Here are some key changes in the platform (Application/Native layer)
+that OEMs and SOCs need to be aware of.</p>
+
+<h3 id="display">Display</h3>
+
+<h4>Hardware composition</h4>
+
+<p>HDR-capable platforms must support blending HDR content with non-HDR
+content. The exact blending characteristics and operations are not defined
+by Android as of release 7.0, but the process generally follows these steps:</p>
+<ol>
+<li>Determine a linear color space/volume that contains all layers to be
+composited, based on the layers' color, mastering, and potential dynamic
+metadata.
+<br>If compositing directly to a display, this could be the linear space
+that matches the display's color volume.</li>
+<li>Convert all layers to the common color space.</li>
+<li>Perform the blending.</li>
+<li>If displaying through HDMI:
+<ol style="list-style-type: lower-alpha">
+<li>Determine the color, mastering, and potential dynamic metadata for the
+blended scene.</li>
+<li>Convert the resulting blended scene to the derived color
+space/volume.</li>
+</ol></li>
+<li>If displaying directly to the display, convert the resulting blended
+scene to the required display signals to produce that scene.</li>
+</ol>
+
+<h4>Display discovery</h4>
+
+<p>HDR display discovery is only supported via HWC2. Partners must selectively
+enable the HWC2 adapter that is released with Android 7.0 for this feature
+to work. Therefore, platforms must add support for HWC2 or extend the AOSP
+framework to allow a way to provide this information. HWC2 exposes a new
+API to propagate HDR Static Data to the framework and the application.</p>
+
+<h4>HDMI</h4>
+
+<ul>
+<li>A connected HDMI display advertises
+its HDR capability through HDMI EDID as defined in CTA-861.3
+section 4.2.</li>
+<li>The following EOTF mapping shall be used:
+<ul>
+<li>ET_0 Traditional gamma - SDR Luminance Range: not mapped to any HDR
+type</li>
+<li>ET_1 Traditional gamma - HDR Luminance Range: not mapped to any HDR
+type</li>
+<li>ET_2 SMPTE ST 2084 - mapped to HDR type HDR10</li>
+</ul></li>
+<li>The signaling of Dolby Vision or HLG support over HDMI is done as defined
+by their relevant bodies.</li>
+<li>Note that the HWC2 API uses float desired luminance values, so the 8-bit
+EDID values must be translated in a suitable fashion.</li>
+</ul>
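+<p>As a sketch of that 8-bit-to-float translation: CTA-861.3 encodes the
+desired content max luminance as a code value CV decoded as
+50 &times; 2<sup>CV/32</sup> cd/m<sup>2</sup>, and the desired min luminance
+relative to the decoded max. The helper below is illustrative; verify the
+formulas against the CTA-861.3 specification before relying on them:</p>

```java
// Illustrative translation of CTA-861.3 8-bit EDID luminance code values to
// the float cd/m^2 values expected by the HWC2 HDR capability API.
class EdidLuminance {
    // Desired content max luminance: 50 * 2^(CV/32) cd/m^2.
    public static double maxLuminance(int codeValue) {
        return 50.0 * Math.pow(2.0, codeValue / 32.0);
    }

    // Desired content min luminance, derived from the decoded max:
    // maxLuminance * (CV/255)^2 / 100 cd/m^2.
    public static double minLuminance(double maxLuminance, int codeValue) {
        double frac = codeValue / 255.0;
        return maxLuminance * frac * frac / 100.0;
    }
}
```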
+
+<h3 id="decoders">Decoders</h3>
+
+<p>Platforms must add HDR-capable tunneled decoders and advertise their HDR
+support. Generally, HDR-capable decoders must:</p>
+<ul>
+<li>Support tunneled decoding (<code>FEATURE_TunneledPlayback</code>).</li>
+<li>Support HDR static metadata
+(<code>OMX.google.android.index.describeHDRColorInfo</code>) and its
+propagation to the display/hardware composition. For HLG, appropriate metadata
+must be submitted to the display.</li>
+<li>Support color description
+(<code>OMX.google.android.index.describeColorAspects</code>) and its
+propagation to the display/hardware composition.</li>
+<li>Support HDR embedded metadata as defined by the relevant standard.</li>
+</ul>
+
+<h4>Dolby Vision decoder support</h4>
+
+<p>To support Dolby Vision, platforms must add a Dolby-Vision capable
+HDR OMX decoder. Given the specifics of Dolby Vision, this is normally a
+wrapper decoder around one or more AVC and/or HEVC decoders as well as a
+compositor. Such decoders must:</p>
+<ul>
+<li>Support mime type "video/dolby-vision."</li>
+<li>Advertise supported Dolby Vision profiles/levels.</li>
+<li>Accept access units that contain the sub-access-units of all layers as
+defined by Dolby.</li>
+<li>Accept codec-specific data defined by Dolby. For example, data containing
+Dolby Vision profile/level and possibly the codec-specific data for the
+internal decoders.</li>
+<li>Support adaptive switching between Dolby Vision profiles/levels as
+required by Dolby.</li>
+</ul>
+
+<p>When configuring the decoder, the actual Dolby profile is not communicated
+to the codec. This is only done via codec-specific data after the decoder
+has been started. A platform could choose to support multiple Dolby Vision
+decoders: one for AVC profiles, and another for HEVC profiles to be able to
+initialize underlying codecs during configure time. If a single Dolby Vision
+decoder supports both types of profiles, it must also support switching
+between those dynamically in an adaptive fashion.</p>
+<p>If a platform provides a Dolby-Vision capable decoder in addition to the
+general HDR decoder support, it must:</p>
+
+<ul>
+<li>Provide a Dolby-Vision aware extractor, even if it does not support
+HDR playback.</li>
+<li>Provide a decoder that supports at least Dolby Vision profile X/level
+Y.</li>
+</ul>
+
+<h4>HDR10 decoder support</h4>
+
+<p>To support HDR10, platforms must add an HDR10-capable OMX decoder. This
+is normally a tunneled HEVC decoder that also supports parsing and handling
+HDMI related metadata. Such a decoder (in addition to the general HDR decoder
+support) must:</p>
+<ul>
+<li>Support mime type "video/hevc."</li>
+<li>Advertise support for the HEVCMain10HDR10 profile. HEVCMain10HDR10 profile
+support also requires supporting the HEVCMain10 profile, which requires
+supporting the HEVCMain profile at the same levels.</li>
+<li>Support parsing the mastering metadata SEI blocks, as well as other HDR
+related info contained in SPS.</li>
+</ul>
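<p>From the application side, whether the platform advertises this profile can
be checked with the <code>MediaCodecInfo</code> APIs; a framework-API sketch
(runs only on a device; <code>HEVCProfileMain10HDR10</code> is the Android 7.0
SDK counterpart of the HEVCMain10HDR10 profile named above):</p>

```java
// Look for a video/hevc decoder that advertises Main10HDR10 support.
MediaCodecList list = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
for (MediaCodecInfo info : list.getCodecInfos()) {
    if (info.isEncoder()) continue;
    for (String type : info.getSupportedTypes()) {
        if (!type.equalsIgnoreCase("video/hevc")) continue;
        for (MediaCodecInfo.CodecProfileLevel pl
                : info.getCapabilitiesForType(type).profileLevels) {
            if (pl.profile
                    == MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10HDR10) {
                // The platform advertises an HDR10-capable HEVC decoder.
            }
        }
    }
}
```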
+
+<h4>VP9 decoder support</h4>
+
+<p>To support VP9 HDR, platforms must add a VP9 Profile2-capable HDR OMX
+decoder. This is normally a tunneled VP9 decoder that also supports handling
+HDMI-related metadata. Such decoders (in addition to the general HDR decoder
+support) must:</p>
+<ul>
+<li>Support the MIME type "video/x-vnd.on2.vp9".</li>
+<li>Advertise support for the VP9Profile2HDR profile. VP9Profile2HDR profile
+support also requires supporting the VP9Profile2 profile at the same level.</li>
+</ul>
+
+<h3 id="extractors">Extractors</h3>
+
+<h4>Dolby Vision extractor support</h4>
+
+<p>Platforms that support Dolby Vision decoders must add a Dolby extractor
+(called the DolbyExtractor) to support Dolby Vision content.</p>
+<ul>
+<li>A regular MP4 extractor can only extract the base layer from a file,
+but not the enhancement or metadata layers. So a special Dolby extractor is
+needed to extract the data from the file.</li>
+<li>The Dolby extractor must expose 1 to 2 tracks for each Dolby video track
+(group):
+<ul>
+<li>A Dolby Vision HDR track with the type of "video/dolby-vision" for the
+combined 2/3-layers Dolby stream. The HDR track's access-unit format, which
+defines how to package the access units from the base/enhancement/metadata
+layers into a single buffer to be decoded into a single HDR frame, is to be
+defined by Dolby.</li>
+<li>If a Dolby Vision video track contains a separate (backward compatible)
+base-layer (BL), the extractor must also expose this as a separate "video/avc"
+or "video/hevc" track. The extractor must provide regular AVC/HEVC access
+units for this track.</li>
+<li>The BL track must have the same track-unique-ID ("track-ID") as the
+HDR track so the app understands that these are two encodings of the same
+video.</li>
+<li>The application can decide which track to choose based on the platform's
+capability.</li>
+</ul>
+</li>
+<li>The Dolby Vision profile/level must be exposed in the track format of
+the HDR track.</li>
+<li>If a platform provides a Dolby-Vision capable decoder, it must also provide
+a Dolby-Vision aware extractor, even if it does not support HDR playback.</li>
+</ul>
+
+<h4>HDR10 and VP9 HDR extractor support</h4>
+
+<p>There are no additional extractor requirements to support HDR10 or VP9
+HLG. Platforms must extend the MP4 extractor to support VP9 PQ in MP4. HDR
+static metadata must be propagated in the VP9 PQ bitstream, such that this
+metadata is passed to the VP9 PQ decoder and to the display via the normal
+MediaExtractor => MediaCodec pipeline.</p>
+
+<h3 id="stagefright">Stagefright extensions for Dolby Vision support</h3>
+
+<p>Platforms must add Dolby Vision format support to Stagefright:</p>
+<ul>
+<li>Support port definition queries for the compressed port.</li>
+<li>Support profile/level enumeration for the DV decoder.</li>
+<li>Support exposing the DV profile/level for DV HDR tracks.</li>
+</ul>
+
+<h2 id="implementationnotes">Technology-specific implementation details</h2>
+
+<h3 id="hdr10decoder">HDR10 decoder pipeline</h3>
+
+<p><img src="../images/hdr10_decoder_pipeline.png"></p>
+
+<p class="img-caption"><strong>Figure 1.</strong> HDR10 pipeline</p>
+
+<p>HDR10 bitstreams are packaged in MP4 containers. Applications use a regular
+MP4 extractor to extract the frame data and send it to the decoder.</p>
+
+<ul>
+<li><b>MPEG4 Extractor</b><br>
+The MPEG4Extractor recognizes an HDR10 bitstream as a normal HEVC stream and
+extracts the HDR track with the type "video/hevc". The framework picks an HEVC
+video decoder that supports the Main10HDR10 profile to decode that track.</li>
+
+<li><b>HEVC Decoder</b><br>
+HDR information is carried in either the SEI or the SPS. The HEVC decoder first
+receives frames that contain the HDR information, then extracts it and notifies
+the application that it is decoding an HDR video. The HDR information is
+bundled into the decoder output format, which is later propagated to the
+surface.</li>
+</ul>
+
+<h4>Vendor actions</h4>
+<ol>
+<li>Advertise supported HDR decoder profile and level OMX type. Example:<br>
+<code>OMX_VIDEO_HEVCProfileMain10HDR10</code> (and <code>Main10</code>)</li>
+<li>Implement support for index:
+'<code>OMX.google.android.index.describeHDRColorInfo</code>'</li>
+<li>Implement support for index:
+'<code>OMX.google.android.index.describeColorAspects</code>'</li>
+<li>Implement support for SEI parsing of mastering metadata.</li>
+</ol>
+
+<h3 id="dvdecoder">Dolby Vision decoder pipeline</h3>
+
+<p><img src="../images/dolby_vision_decoder_pipleline.png"></p>
+
+<p class="img-caption"><strong>Figure 2.</strong> Dolby Vision pipeline</p>
+
+<p>Dolby bitstreams are packaged in MP4 containers as defined by
+Dolby. Applications could, in theory, use a regular MP4 extractor to extract
+the base layer, enhancement layer, and metadata layer independently; however,
+this does not fit the current Android MediaExtractor/MediaCodec model.</p>
+
+<ul>
+<li>DolbyExtractor:
+<ul>
+<li>Dolby bitstreams are recognized by the DolbyExtractor, which exposes the
+various layers as 1 to 2 tracks for each Dolby video track (group):
+<ul>
+<li>An HDR track with the type of "video/dolby-vision" for the combined
+2/3-layers Dolby stream. The HDR track's access-unit format, which defines
+how to package the access units from the base/enhancement/metadata layers
+into a single buffer to be decoded into a single HDR frame, is to be defined
+by Dolby.</li>
+<li>(Optional, only if the BL is backward compatible) A BL track contains
+only the base layer, which must be decodable by regular MediaCodec decoder,
+for example, AVC/HEVC decoder. The extractor should provide regular AVC/HEVC
+access units for this track. This BL track must have the same track-unique-ID
+("track-ID") as the Dolby track so the application understands that these
+are two encodings of the same video.</li>
+</ul>
+<li>The application can decide which track to choose based on the platform's
+capability.</li>
+<li>Because an HDR track has a specific HDR type, the framework will pick
+a Dolby video decoder to decode that track. The BL track will be decoded by
+a regular AVC/HEVC video decoder.</li>
+</ul>
+
+<li>DolbyDecoder:
+<ul>
+<li>The DolbyDecoder receives access units that contain the required access
+units for all layers (EL+BL+MD or BL+MD).</li>
+<li>CSD (codec-specific data, such as SPS+PPS+VPS) information for the
+individual layers must be packaged into a single CSD frame, in a format to be
+defined by Dolby. A single CSD frame is required.</li>
+</ul>
+</ul>
+
+<h4>Dolby actions</h4>
+<ol>
+<li>Define the packaging of access units for the various Dolby container
+schemes (e.g. BL+EL+MD) for the abstract Dolby decoder (i.e. the buffer
+format expected by the HDR decoder).</li>
+<li>Define the packaging of CSD for the abstract Dolby decoder.</li>
+</ol>
+
+<h4>Vendor actions</h4>
+<ol>
+<li>Implement the Dolby extractor. This can also be done by Dolby.</li>
+<li>Integrate DolbyExtractor into the framework. The entry point is
+<code>frameworks/av/media/libstagefright/MediaExtractor.cpp</code>.</li>
+<li>Declare the HDR decoder profile and level OMX
+type. Example: <code>OMX_VIDEO_DOLBYPROFILETYPE</code> and
+<code>OMX_VIDEO_DOLBYLEVELTYPE</code>.</li>
+<li>Implement support for index:
+'<code>OMX.google.android.index.describeColorAspects</code>'</li>
+<li>Propagate the dynamic HDR metadata to the app and surface in each
+frame. Typically this information must be packaged into the decoded frame
+as defined by Dolby, because the HDMI standard does not provide a way to
+pass this to the display.</li>
+</ol>
+
+<h3 id="v9decoder">VP9 decoder pipeline</h3>
+
+<p><img src="../images/vp9-pq_decoder_pipleline.png"></p>
+
+<p class="img-caption"><strong>Figure 3.</strong> VP9-PQ pipeline</p>
+
+<p>VP9 bitstreams are packaged in WebM containers in a way defined by the WebM
+team. Applications need to use a WebM extractor to extract HDR metadata from
+the bitstream before sending frames to the decoder.</p>
+
+<ul>
+<li>WebM Extractor:
+<ul>
+<li>The WebM extractor extracts the HDR <a
+href="http://www.webmproject.org/docs/container/#colour">metadata</a>
+and frames from the <a
+href="http://www.webmproject.org/docs/container/#location-of-the-colour-element-in-an-mkv-file">
+container</a>.</li>
+</ul>
+
+<li>VP9 Decoder:
+<ul>
+<li>The decoder receives Profile2 bitstreams and decodes them as normal VP9
+streams.</li>
+<li>The decoder receives any HDR static metadata from the framework.</li>
+<li>The decoder receives static metadata via the bitstream access units for
+VP9 PQ streams.</li>
+<li>The VP9 decoder must be able to propagate the HDR static/dynamic metadata
+to the display.</li>
+</ul>
+</ul>
+
+<h4>Vendor actions</h4>
+
+<ol>
+<li>Implement support for index:
+<code>OMX.google.android.index.describeHDRColorInfo</code></li>
+<li>Implement support for index:
+<code>OMX.google.android.index.describeColorAspects</code></li>
+<li>Propagate the HDR static metadata.</li>
+</ol>
diff --git a/src/devices/tech/display/index.jd b/src/devices/tech/display/index.jd
new file mode 100644
index 0000000..e3e0da9
--- /dev/null
+++ b/src/devices/tech/display/index.jd
@@ -0,0 +1,44 @@
+page.title=Configuring Display Settings
+@jd:body
+
+<!--
+ Copyright 2015 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>This section covers AOSP implementation of various Android display
+settings such as do not disturb (DND) configurations and multi-window
+(split-screen, free-form, and picture-in-picture) options.</p>
+
+<h2 id="settingshome">Settings Home screen enhancements</h2>
+
+<p>In Android 7.0, the Settings Home page is enhanced with suggested
+settings and customizable status notifications. The feature is implemented
+automatically, and partners can configure it.</p>
+
+
+<p>The source code for these enhancements is in these files:</p>
+
+<ul>
+<li><code>frameworks/base/packages/SettingsLib/src/com/android/settingslib/SuggestionParser.java</code></li>
+<li><code>frameworks/base/packages/SettingsLib/src/com/android/settingslib/drawer/TileUtils.java</code></li>
+</ul>
diff --git a/src/devices/tech/display/multi-window.jd b/src/devices/tech/display/multi-window.jd
new file mode 100644
index 0000000..41aa0e9
--- /dev/null
+++ b/src/devices/tech/display/multi-window.jd
@@ -0,0 +1,123 @@
+page.title=Supporting Multi-Window
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>
+In Android 7.0, the new multi-window platform feature lets users display
+multiple apps on the device screen simultaneously. In addition to the default
+implementation of multi-window, Android 7.0 supports a few varieties of
+multi-window: split-screen, freeform, and picture-in-picture.
+</p>
+
+<ul>
+<li><strong>Split-screen</strong> is the base implementation of multi-window and
+provides two activity panes for users to place apps.</li>
+<li><strong>Freeform</strong> allows users to dynamically resize the activity
+panes and have more than two apps visible on their screen.</li>
+<li><strong>Picture-in-picture (PIP)</strong> allows Android devices to continue
+playing video content in a small window while the user interacts with other
+applications.</li>
+</ul>
+
+<p>
+To implement the multi-window feature, device manufacturers set a flag in the
+config file on their devices to enable or disable multi-window support.
+</p>
+
+<h2 id="implementation">Implementation</h2>
+<p>
+Multi-window support is enabled by default in Android 7.0. To disable it, set
+the <code>config_supportsMultiWindow</code> flag to false in the <a
+href="https://android.googlesource.com/platform/frameworks/base/+/master/core/res/res/values/config.xml">config.xml</a>
+file.
+</p>
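<p>For example, a device overlay that opts out of multi-window would carry the
following entry (the flag name comes from the <code>config.xml</code>
referenced above):</p>

```xml
<bool name="config_supportsMultiWindow">false</bool>
```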
+<p>
+For devices that report low RAM (<code>ActivityManager.isLowRamDevice()</code>
+returns true), multi-window is disabled regardless of the value of the
+<code>config_supportsMultiWindow</code> flag.
+</p>
+<h3 id="split-screen">Split-screen</h3>
+<p>
+The default multi-window experience is split-screen mode, where the screen is
+divided directly down the middle of the device in portrait or landscape
+orientation. Users can resize the windows by dragging the dividing line
+side-to-side or top-to-bottom, depending on the device orientation.
+</p>
+<p>
+Device manufacturers can then choose whether to enable freeform or PIP.
+</p>
+<h3 id="freeform">Freeform</h3>
+<p>
+After enabling standard multi-window mode with the flag
+<code>config_supportsMultiWindow</code>, device manufacturers can optionally
+allow freeform windowing. This mode is most useful for manufacturers of larger
+devices, like tablets.
+</p>
+<p>
+To support freeform mode, enable the
+<code>PackageManager#FEATURE_FREEFORM_WINDOW_MANAGEMENT</code> system feature in <a
+href="https://android.googlesource.com/platform/frameworks/base/+/master/core/java/android/content/pm/PackageManager.java">/android/frameworks/base/core/java/android/content/pm/PackageManager.java</a>
+and set <code>config_freeformWindowManagement</code> to true in <a
+href="https://android.googlesource.com/platform/frameworks/base/+/master/core/res/res/values/config.xml">config.xml</a>.
+</p>
+
+<pre>
+<bool name="config_freeformWindowManagement">true</bool>
+</pre>
+
+<h3 id="picture-in-picture">Picture-in-picture</h3>
+<p>
+After enabling standard multi-window mode with the flag
+<code>config_supportsMultiWindow</code>, device manufacturers can support <a
+href="http://developer.android.com/preview/features/picture-in-picture.html">picture-in-picture</a>
+to allow users to continue watching video while browsing other activities.
+While this feature is primarily targeted at Android Television devices, other
+device form factors may support it.
+</p>
+<p>
+To support PIP, enable the <code>PackageManager#FEATURE_PICTURE_IN_PICTURE</code>
+system feature in <a
+href="https://android.googlesource.com/platform/frameworks/base/+/master/core/java/android/content/pm/PackageManager.java">/android/frameworks/base/core/java/android/content/pm/PackageManager.java</a>.
+</p>
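<p>System features are typically declared in a device's feature XML files
rather than in code; a sketch of the corresponding declaration (the feature
string matches <code>FEATURE_PICTURE_IN_PICTURE</code>; the file location is
illustrative):</p>

```xml
<!-- e.g. in a feature XML under frameworks/native/data/etc/ -->
<permissions>
    <feature name="android.software.picture_in_picture" />
</permissions>
```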
+<h3 id="system-ui">System UI</h3>
+<p>
+Support all standard System UI features as described in <a
+href="http://developer.android.com/preview/features/multi-window.html#testing">http://developer.android.com/preview/features/multi-window.html#testing</a>.
+</p>
+<h3 id="applications">Applications</h3>
+<p>
+To support multi-window mode for preloaded apps, consult the <a
+href="http://developer.android.com/preview/features/multi-window.html">developer
+preview documentation</a>.
+</p>
+<h2 id="validation">Validation</h2>
+<p>
+To validate their implementation of multi-window, device manufacturers should
+run <a
+href="https://android.googlesource.com/platform/cts/+/master/hostsidetests/services/activitymanager/src/android/server/cts">CTS
+tests</a> and follow the <a
+href="http://developer.android.com/preview/features/multi-window.html#testing">testing
+instructions for multi-window</a>.
+</p>
diff --git a/src/devices/tech/images/dolby_vision_decoder_pipleline.png b/src/devices/tech/images/dolby_vision_decoder_pipleline.png
new file mode 100644
index 0000000..86cda40
--- /dev/null
+++ b/src/devices/tech/images/dolby_vision_decoder_pipleline.png
Binary files differ
diff --git a/src/devices/tech/images/hdr10_decoder_pipeline.png b/src/devices/tech/images/hdr10_decoder_pipeline.png
new file mode 100644
index 0000000..f0cf7dc
--- /dev/null
+++ b/src/devices/tech/images/hdr10_decoder_pipeline.png
Binary files differ
diff --git a/src/devices/tech/images/power_sustained_perf.png b/src/devices/tech/images/power_sustained_perf.png
new file mode 100644
index 0000000..5ed7810
--- /dev/null
+++ b/src/devices/tech/images/power_sustained_perf.png
Binary files differ
diff --git a/src/devices/tech/images/vp9-pq_decoder_pipleline.png b/src/devices/tech/images/vp9-pq_decoder_pipleline.png
new file mode 100644
index 0000000..a6fde12
--- /dev/null
+++ b/src/devices/tech/images/vp9-pq_decoder_pipleline.png
Binary files differ
diff --git a/src/devices/tech/index.jd b/src/devices/tech/index.jd
index 022ba6c..13fcdf2 100644
--- a/src/devices/tech/index.jd
+++ b/src/devices/tech/index.jd
@@ -47,6 +47,14 @@
<p><a href="{@docRoot}devices/tech/config/index.html">» Configuration
Information</a></p>
+<h2 id="connect">Connectivity</h2>
+<p>This section covers Android support for NFC standards (such as FeliCa),
+provides details on the Radio Interface Layer (RIL), describes call notification
+behavior, and gives implementation instructions for user-facing features such as
+Data Saver and phone number blocking.</p>
+<p><a href="{@docRoot}devices/tech/connect/index.html">» Connectivity
+Information</a></p>
+
<h2 id="data-usage-technical-information">Data Usage</h2>
<p>Android's data usage features allow users to understand and control how
their device uses network data. This section is designed for systems
@@ -68,6 +76,13 @@
<p><a href="{@docRoot}devices/tech/admin/index.html">» Device
administration information</a></p>
+<h2 id="display">Display Settings</h2>
+<p>This section covers AOSP implementation of various Android display settings
+such as do not disturb (DND) configurations and multi-window (split-screen,
+free-form, and picture-in-picture) options.</p>
+<p><a href="{@docRoot}devices/tech/display/index.html">» Display settings
+information</a></p>
+
<h2 id="HAL-technical-information">HAL File Reference</h2>
<p>Android's Hardware Abstraction Layer (HAL) provides the interface between
software APIs and hardware drivers. This section contains the commented code
@@ -83,11 +98,13 @@
</p>
<h2 id="power-technical-information">Power</h2>
-<p>Battery usage statistics are tracked by the framework. This involves
-keeping track of time spent by different device components in different states.
-</p>
-<p><a href="{@docRoot}devices/tech/power/index.html">» Power Information</a>
-</p>
+<p>The framework provides battery usage statistics, keeping track of time spent
+by different device components in different states. This section covers power
+management features (such as Doze), gives instructions for accurately measuring
+device and component power (and how to determine power values), and details the
+<code>batterystats</code> command and output.</p>
+<p><a href="{@docRoot}devices/tech/power/index.html">» Power
+Information</a></p>
<h2 id="tradefed-test-infrastructure">Trade Federation Testing Infrastructure
</h2>
diff --git a/src/devices/tech/power/batterystats.jd b/src/devices/tech/power/batterystats.jd
index cd9dd99..9a56d25 100644
--- a/src/devices/tech/power/batterystats.jd
+++ b/src/devices/tech/power/batterystats.jd
@@ -563,14 +563,13 @@
existing chipsets and compatible firmware on new chipsets.</p>
<p>Additionally, OEMs must continue to configure and submit the power profile
-for their devices. However, when the platform detects that Wi-Fi and Bluetooth
-radio power data is available from the chipset, it uses chipset data instead of
-power profile data (cell radio power data is not yet used). For details, see
-<a href="{@docRoot}devices/tech/power/values.html#chipset-data">Devices with
-Bluetooth and Wi-Fi controllers</a>.</p>
+for their devices. However, when the platform detects that Bluetooth, cellular
+(as of Android 7.0), or Wi-Fi radio power data is available from the chipset, it
+uses chipset data instead of power profile data. For details, see
+<a href="{@docRoot}devices/tech/power/values.html#values">Power values</a>.</p>
-<p class="note"><strong>Note</strong>: Prior to Android 6.0, power use for Wi-Fi
-radio, Bluetooth radio, and cellular radio was tracked in the <em>m</em> (Misc)
+<p class="note"><strong>Note</strong>: Prior to Android 6.0, power use for
+Bluetooth radio, cellular radio, and Wi-Fi was tracked in the <em>m</em> (Misc)
section category. In Android 6.0 and higher, power use for these components is
tracked in the <em>pwi</em> (Power Use Item) section with individual labels
-(<em>wifi</em>, <em>blue</em>, <em>cell</em>) for each component.</p>
\ No newline at end of file
+(<em>wifi</em>, <em>blue</em>, <em>cell</em>) for each component.</p>
diff --git a/src/devices/tech/power/mgmt.jd b/src/devices/tech/power/mgmt.jd
index 481e056..fa1147a 100644
--- a/src/devices/tech/power/mgmt.jd
+++ b/src/devices/tech/power/mgmt.jd
@@ -29,19 +29,21 @@
<p>Android includes the following battery life enhancements:</p>
<ul>
-<li><b><a href="#app-standby">App Standby</b></a>. The platform can place
-unused applications in App Standby mode, temporarily restricting network access
-and deferring syncs and jobs for those applications.</li>
-<li><b><a href="#doze">Doze</b></a>. The platform can enter a state of deep
-sleep (periodically resuming normal operations) if users have not actively used
-their device (screen off and stationary) for extended periods of time. Android N
-also enables Doze to trigger a lighter set of optimizations when users turn
-off the device screen yet continue to move around.</li>
-<li><b><a href="#exempt-apps">Exemptions</b></a>. System apps and cloud
-messaging services preloaded on a device are typically exempted from App Standby
-and Doze by default (although app developers can intent their applications into
-this setting). Users can exempt applications via the Settings menu.</li>
+<li><strong><a href="#app-standby">App Standby</a></strong>. The platform can
+place unused applications in App Standby mode, temporarily restricting network
+access and deferring syncs and jobs for those applications.</li>
+<li><strong><a href="#doze">Doze</a></strong>. The platform can enter a state of
+deep sleep (periodically resuming normal operations) if users have not actively
+used their device (screen off and stationary) for extended periods of time.
+Android 7.0 also enables Doze to trigger a lighter set of optimizations when
+users turn off the device screen yet continue to move around.</li>
+<li><strong><a href="#exempt-apps">Exemptions</a></strong>. System apps and
+cloud messaging services preloaded on a device are typically exempted from App
+Standby and Doze by default (although app developers can opt their
+applications into this setting). Users can exempt applications via the Settings
+menu.</li>
</ul>
+
<p>The following sections describe these enhancements.</p>
<h2 id="app-standby">App Standby</h2>
@@ -92,7 +94,7 @@
a period of time.
</p>
-<h3>Testing App Standby</h3>
+<h3 id="testing_app_standby">Testing App Standby</h3>
<p>You can manually test App Standby using the following ADB commands:</p>
<pre>
@@ -115,14 +117,14 @@
times, a device in Doze remains aware of motion and immediately leaves Doze
if motion is detected.</p>
-<p>Android N extends Doze to trigger a lighter set of optimizations every time
+<p>Android 7.0 extends Doze to trigger a lighter set of optimizations every time
a user turns off the device screen, even when the user continues to move around,
enabling longer lasting battery life.</p>
<p>System services (such as telephony) may be preloaded and exempted from Doze
by default. Users can also exempt specific applications from Doze in the
-Settings menu. By default, Doze is <b>disabled</b> in the Android Open Source
-Project (AOSP). For details on enabling Doze, see
+Settings menu. By default, Doze is <strong>disabled</strong> in the Android Open
+Source Project (AOSP). For details on enabling Doze, see
<a href="#integrate-doze">Integrating Doze</a>.</p>
<h3 id="doze-reqs">Doze requirements</h3>
@@ -132,7 +134,7 @@
<p>Full Doze support also requires a
<a href="{@docRoot}devices/sensors/sensor-types.html#significant_motion">Significant
Motion Detector (SMD)</a> on the device; however, the lightweight Doze mode in
-Android N does not require an SMD. If Doze is enabled on a device that:</p>
+Android 7.0 does not require an SMD. If Doze is enabled on a device that:</p>
<ul>
<li>Has an SMD, full Doze optimizations occur (includes lightweight
optimizations).</li>
@@ -192,7 +194,7 @@
</tbody>
</table>
-<p>Android N extends Doze by enabling a lightweight sleep mode during screen
+<p>Android 7.0 extends Doze by enabling a lightweight sleep mode during screen
off, before the device is idle.</p>
<p><img src="../images/doze_lightweight.png"></p>
<p class="img-caption">Figure 1. Doze modes for non-stationary and stationary
@@ -255,7 +257,7 @@
<li>Confirm the device has a cloud messaging service installed.</li>
<li>In the device overlay config file
<code>overlay/frameworks/base/core/res/res/values/config.xml</code>, set
-<code>config_enableAutoPowerModes</code> to <b>true</b>:
+<code>config_enableAutoPowerModes</code> to <strong>true</strong>:
<pre>
<bool name="config_enableAutoPowerModes">true</bool>
</pre>
@@ -263,11 +265,11 @@
</li>
<li>Confirm that preloaded apps and services:
<ul>
-<li>Use the new
-<a href="https://developer.android.com/preview/behavior-changes.html#behavior-power">power-saving
+<li>Use the
+<a href="https://developer.android.com/training/monitoring-device-state/doze-standby.html">power-saving
optimization guidelines</a>. For details, see <a href="#test-apps">Testing and
optimizing applications</a>.
-<p><b>OR</b></p>
+<p><strong>OR</strong></p>
<li>Are exempted from Doze and App Standby. For details, see
<a href="#exempt-apps">Exempting applications</a>.</li>
</ul>
@@ -297,10 +299,10 @@
<h4 id="test-apps">Testing and optimizing applications</h4>
<p>Test all applications (especially preloaded applications) in Doze mode. For
details, refer to
-<a href="https://developer.android.com/preview/testing/guide.html#doze-standby">Testing
+<a href="https://developer.android.com/training/monitoring-device-state/doze-standby.html#testing_doze_and_app_standby">Testing
Doze and App Standby</a>.</p>
-<p class="note"><b>Note</b>: MMS/SMS/Telephony services function independently
+<p class="note"><strong>Note</strong>: MMS/SMS/Telephony services function independently
of Doze and will always wake client apps even while the device remains in Doze
mode.</p>
@@ -313,8 +315,8 @@
<li>Third-party application using non-GCM Cloud Messaging platform</li>
</ul>
-<p class="warning"><b>Warning</b>: Do not exempt apps to avoid testing and
-optimizing. Unnecessary exemptions undermine the benefits of Doze and App
+<p class="warning"><strong>Warning</strong>: Do not exempt apps to avoid testing
+and optimizing. Unnecessary exemptions undermine the benefits of Doze and App
Standby and can compromise the user experience, so we strongly suggest
minimizing such exemptions as they allow applications to defeat beneficial
controls the platform has over power use. If users become unhappy about the
@@ -327,7 +329,7 @@
<p>Apps exempted by default are listed in a single view within the Settings >
Battery menu. This list is used for exempting the app from both Doze and App
Standby modes. To provide transparency to the user, the Settings menu
-<b>MUST</b> show all exempted applications.</p>
+<strong>MUST</strong> show all exempted applications.</p>
<p>Users can manually exempt apps via Settings > Battery > Battery optimization
> All apps and then selecting the app to turn off (or back on) optimization.
diff --git a/src/devices/tech/power/performance.jd b/src/devices/tech/power/performance.jd
new file mode 100644
index 0000000..6502127
--- /dev/null
+++ b/src/devices/tech/power/performance.jd
@@ -0,0 +1,127 @@
+page.title=Performance Management
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc"></ol>
+ </div>
+</div>
+
+<p>Managing the power and performance of Android devices can help ensure
+applications run consistently and smoothly on a wide range of hardware. In
+Android 7.0, OEMs can implement support for sustained performance hints that
+enable apps to maintain consistent device performance, and can specify an
+exclusive core to improve performance for CPU-intensive, foreground
+applications.</p>
+
+<h2 id="sustained_performance">Sustained performance</h2>
+<p>For long-running applications (games, camera, RenderScript, audio
+processing), performance can vary dramatically as device temperature limits are
+reached and system on chip (SoC) engines are throttled. App developers creating
+high-performance, long-running apps are limited because the capabilities of the
+underlying platform are a moving target when the device begins to heat up.</p>
+
+<p>To address these limitations, Android 7.0 includes support for sustained
+performance, enabling OEMs to provide hints on device performance capabilities
+for long-running applications. App developers can use these hints to tune
+applications for a predictable, consistent level of device performance over long
+periods of time.</p>
+
+<h3 id="architecture">Architecture</h3>
+<p>An Android application can request the platform to enter a sustained
+performance mode where the Android device can keep a consistent level of
+performance for prolonged periods of time.</p>
+
+<p><img src="../images/power_sustained_perf.png"></p>
+<p class="img-caption"><strong>Figure 1.</strong> Sustained performance mode
+architecture</p>
+
+<h3 id="implementation">Implementation</h3>
+<p>To support sustained performance in Android 7.0, OEMs must:</p>
+<ul>
+<li>Make device-specific changes to the power HAL to either lock the maximum
+CPU/GPU frequencies <strong>or</strong> perform other optimizations to prevent
+thermal throttling.</li>
+<li>Implement the new hint <code>POWER_HINT_SUSTAINED_PERFORMANCE</code> in
+the power HAL.</li>
+<li>Declare support by returning <code>true</code> through the
+<code>isSustainedPerformanceModeSupported()</code> API.</li>
+<li>Implement <code>Window.setSustainedPerformanceMode</code>.</li>
+</ul>
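<p>On the application side, the last two bullets pair up as follows; a
framework-API sketch (runs only on a device, inside an
<code>Activity</code>):</p>

```java
// Request sustained mode only when the device declares support for it.
PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
if (pm.isSustainedPerformanceModeSupported()) {
    getWindow().setSustainedPerformanceMode(true);
}
```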
+
+<p>In the Nexus reference implementation, the power hint caps the
+maximum frequencies of the CPU and GPU at the highest sustainable levels. Keep
+in mind that lowering the MAX bar in CPU/GPU frequency will lower the frame
+rate, but this lower rate is preferred in this mode due to its sustainability.
+For example, a device using normal max clocks might be able to render at 60
+FPS for a few minutes, but after the device heats up, it may throttle to 30 FPS
+by the end of 30 minutes. When using sustained mode, the device can, for
+example, render consistently at 45 FPS for the entire 30 minutes. The goal is a
+frame rate when using the mode that is as high as (or higher than) the frame
+rate when not using the mode, and consistent over time so that developers don't
+have to chase a moving target.</p>
+<p>We strongly recommend implementing sustained mode such that the device
+achieves the highest possible sustained performance, not just the minimum
+values required to pass the test (e.g., choose the highest maximum frequency
+caps that do not cause the device to thermally throttle over time).</p>
+
+<p class="note"><strong>Note</strong>: Capping maximum clock rates is not
+required to implement sustained mode.</p>
+
+<h3 id=validation>Validation</h3>
+<p>OEMs can use a new Android 7.0 CTS test to verify their implementation of the
+sustained performance API. The test runs a workload for approximately 30 minutes
+and benchmarks the performance with and without sustained mode enabled:</p>
+<ul>
+<li>With sustained mode enabled, the frame rate must remain relatively constant
+(test measures the percentage of change in frame rate over time and requires a
+&lt;5% change).</li>
+<li>With sustained mode enabled, the frame rate must not be lower than the frame
+rate at the end of 30 minutes with the mode disabled.</li>
+</ul>
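The stability criterion above can be expressed as a short calculation. This is a hedged sketch of the check, not the actual CTS test code; the function names are illustrative.

```c
/* Sketch of the stability criterion: the percent change between the average
 * frame rate at the start and at the end of the run must stay under 5%. */

double percent_change(double first_avg_fps, double last_avg_fps) {
    double diff = first_avg_fps - last_avg_fps;
    if (diff < 0.0)
        diff = -diff;  /* absolute change */
    return 100.0 * diff / first_avg_fps;
}

/* Returns 1 when the frame rate is stable enough to pass the <5% bar. */
int is_stable(double first_avg_fps, double last_avg_fps) {
    return percent_change(first_avg_fps, last_avg_fps) < 5.0;
}
```

For example, a run that starts at 45 FPS and ends at 44 FPS changes by roughly 2%, well within the bar, while a throttled run from 60 FPS down to 30 FPS changes by 50% and fails.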
+<p>In addition, you can manually test your implementation with several CPU- and
+GPU-intensive workloads to ensure the device does not thermally throttle after
+30 minutes of use. In internal testing, we used sample workloads including
+games and benchmarking apps (e.g.
+<a href="https://gfxbench.com/result.jsp">gfxbench</a>).</p>
+
+<h2 id=exclusive_core>Exclusive cores</h2>
+<p>For CPU-intensive, time-sensitive workloads, getting preempted by another
+thread can be the difference between making frame deadlines or not. For apps
+that have strict latency and frame rate requirements (such as audio or virtual
+reality apps), having an exclusive CPU core can guarantee an acceptable level of
+performance.</p>
+<p>Devices running Android 7.0 can now reserve one core explicitly for the top
+foreground application, improving performance for all foreground apps and giving
+apps with high intensity workloads more control over how their work is allocated
+across CPU cores.</p>
+<p>To support an exclusive core on a device:</p>
+<ul>
+<li>Enable <code>cpusets</code> and configure a <code>cpuset</code> that
+contains only the top foreground application.</li>
+<li>Ensure one core (this is the exclusive core) is reserved for threads from
+this <code>cpuset</code>.</li>
+<li>Implement the <code>getExclusiveCores</code> API to return the core number
+of the exclusive core.</li>
+</ul>
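As a sketch of the first two bullets, an init.rc-style fragment along these lines could create the cpusets; the paths and core assignments are illustrative (on a hypothetical four-core device, giving the top-app group cores 0-3 while every other group is limited to cores 0-2 leaves core 3 effectively exclusive):

```
# Illustrative cpuset configuration (not from a shipping device):
# all other groups are limited to cores 0-2, so core 3 is effectively
# reserved for threads of the top foreground application.
mkdir /dev/cpuset/foreground
write /dev/cpuset/foreground/cpus 0-2
mkdir /dev/cpuset/top-app
write /dev/cpuset/top-app/cpus 0-3
```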
+<p>To determine which processes are scheduled on which cores, use
+<code>systrace</code> while running any workload and verify no userspace threads
+from applications other than the top foreground application are scheduled on the
+exclusive core.</p>
+<p>To view a reference implementation for the Nexus 6P, refer to
+<code>android/device/huawei/angler/power/power.c</code>.</p>
diff --git a/src/devices/tech/power/values.jd b/src/devices/tech/power/values.jd
index 0bc10d6..2e82a15 100644
--- a/src/devices/tech/power/values.jd
+++ b/src/devices/tech/power/values.jd
@@ -39,9 +39,8 @@
computes the mAh value, which is then used to estimate the amount of battery
drained by the application/subsystem.</p>
-<p>Devices with <a href="#chipset-data">Bluetooth and Wi-Fi controllers</a>
-running Android 6.0 and higher can provide additional power values obtained from
-chipset data.</p>
+<p>Devices with Bluetooth, modem, and Wi-Fi controllers running Android 7.0 and
+higher can provide additional power values obtained from chipset data.</p>
<h2 id="multiple-cpus">Devices with heterogeneous CPUs</h2>
@@ -130,20 +129,6 @@
</tr>
<tr>
- <td>bluetooth.active</td>
- <td>Additional power used when playing audio through Bluetooth A2DP.</td>
- <td>14mA</td>
- <td></td>
-</tr>
-
-<tr>
- <td>bluetooth.on</td>
- <td>Additional power used when Bluetooth is turned on but idle.</td>
- <td>1.4mA</td>
- <td></td>
-</tr>
-
-<tr>
<td>wifi.on</td>
<td>Additional power used when Wi-Fi is turned on but not receiving,
transmitting, or scanning.</td>
@@ -231,6 +216,91 @@
</tr>
<tr>
+ <td>bluetooth.controller.idle</td>
+ <td>Average current draw (mA) of the Bluetooth controller when idle.</td>
+ <td> - </td>
+ <td rowspan=4>These values are not estimated, but taken from the data sheet of
+ the controller. If there are multiple receive or transmit states, the average
+ of those states is taken. In addition, the system now collects data for
+ <a href="#le-bt-scans">Low Energy (LE) and Bluetooth scans</a>.<br><br>Android
+ 7.0 and later no longer use the Bluetooth power values for bluetooth.active
+ (used when playing audio via Bluetooth A2DP) and bluetooth.on (used when
+ Bluetooth is on but idle).</td>
+</tr>
+
+<tr>
+ <td>bluetooth.controller.rx</td>
+ <td>Average current draw (mA) of the Bluetooth controller when receiving.</td>
+ <td> - </td>
+</tr>
+
+<tr>
+ <td>bluetooth.controller.tx</td>
+ <td>Average current draw (mA) of the Bluetooth controller when transmitting.</td>
+ <td> - </td>
+</tr>
+
+<tr>
+ <td>bluetooth.controller.voltage</td>
+ <td>Average operating voltage (mV) of the Bluetooth controller.</td>
+ <td> - </td>
+</tr>
+
+<tr>
+ <td>modem.controller.idle</td>
+ <td>Average current draw (mA) of the modem controller when idle.</td>
+ <td> - </td>
+ <td rowspan=4>These values are not estimated, but taken from the data sheet of
+ the controller. If there are multiple receive or transmit states, the average
+ of those states is taken.</td>
+</tr>
+
+<tr>
+ <td>modem.controller.rx</td>
+ <td>Average current draw (mA) of the modem controller when receiving.</td>
+ <td> - </td>
+</tr>
+
+<tr>
+ <td>modem.controller.tx</td>
+ <td>Average current draw (mA) of the modem controller when transmitting.</td>
+ <td> - </td>
+</tr>
+
+<tr>
+ <td>modem.controller.voltage</td>
+ <td>Average operating voltage (mV) of the modem controller.</td>
+ <td> - </td>
+</tr>
+
+<tr>
+ <td>wifi.controller.idle</td>
+ <td>Average current draw (mA) of the Wi-Fi controller when idle.</td>
+ <td> - </td>
+ <td rowspan=4>These values are not estimated, but taken from the data sheet of
+ the controller. If there are multiple receive or transmit states, the average
+ of those states is taken.</td>
+</tr>
+
+<tr>
+ <td>wifi.controller.rx</td>
+ <td>Average current draw (mA) of the Wi-Fi controller when receiving.</td>
+ <td> - </td>
+</tr>
+
+<tr>
+ <td>wifi.controller.tx</td>
+ <td>Average current draw (mA) of the Wi-Fi controller when transmitting.</td>
+ <td> - </td>
+</tr>
+
+<tr>
+ <td>wifi.controller.voltage</td>
+ <td>Average operating voltage (mV) of the Wi-Fi controller.</td>
+ <td> - </td>
+</tr>
+
+<tr>
<td>cpu.speeds</td>
<td>Multi-value entry that lists each possible CPU speed in KHz.</td>
<td>125000KHz, 250000KHz, 500000KHz, 1000000KHz, 1500000KHz</td>
@@ -285,72 +355,15 @@
<td>3000mAh</td>
<td></td>
</tr>
+
</table>
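As a hedged sketch, the controller entries above might be declared in a device's <code>power_profile.xml</code> overlay along these lines; the numeric values are placeholders, not figures from a real controller data sheet:

```xml
<!-- Placeholder values; take real numbers from the controller data sheet. -->
<item name="bluetooth.controller.idle">2</item>
<item name="bluetooth.controller.rx">30</item>
<item name="bluetooth.controller.tx">35</item>
<item name="bluetooth.controller.voltage">3300</item>
<item name="wifi.controller.idle">4</item>
<item name="wifi.controller.rx">100</item>
<item name="wifi.controller.tx">200</item>
<item name="wifi.controller.voltage">3700</item>
```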
-<h2 id="chipset-data">Devices with Bluetooth and Wi-Fi controllers</h2>
-<p>Devices with Bluetooth and Wi-Fi controllers running Android 6.0 and
-higher can be polled for the following energy use data:</p>
-<ul>
-<li>Time spent transmitting (in milliseconds).</li>
-<li>Time spent receiving (in milliseconds).</li>
-<li>Time spent idle (in milliseconds).</li>
-</ul>
-
-<p>Time values are not measured but are instead available from respective chip
-specifications and must be explicitly stated (for details, see
-<a href="{@docRoot}devices/tech/power/batterystats.html#wifi-reqs">Wi-Fi,
-Bluetooth, and cellular usage</a>). To convert time values to power values, the
-framework expects four (4) values for each controller in a resource overlay at
-<code>/frameworks/base/core/res/res/values/config.xml</code>.</p>
-
- <table id="chipset-energy-data">
-
- <tr>
- <th width="10%">Controller</th>
- <th width="40%">Values/Resource Names</th>
- <th width="40%">Description</th>
- </tr>
-
- <tr>
- <td rowspan=4>Bluetooth</td>
- <td>android:integer/config_bluetooth_idle_cur_ma</td>
- <td>Average current draw (mA) of the Bluetooth controller when idle.</td>
- </tr>
-
- <tr>
- <td>android:integer/config_bluetooth_active_rx_cur_ma</td>
- <td>Average current draw (mA) of the Bluetooth controller when receiving.</td>
- </tr>
-
- <tr>
- <td>android:integer/config_bluetooth_tx_cur_ma</td>
- <td>Average current draw (mA) of the Bluetooth controller when transmitting.</td>
- </tr>
-
- <tr>
- <td>android:integer/config_bluetooth_operating_voltage_mv</td>
- <td>Average operating voltage (mV) of the Bluetooth controller.</td>
- </tr>
-
- <tr>
- <td rowspan=4>Wi-Fi</td>
- <td>android:integer/config_wifi_idle_receive_cur_ma</td>
- <td>Average current draw (mA) of the Wi-Fi controller when idle.</td>
- </tr>
-
- <tr>
- <td>android:integer/config_wifi_active_rx_cur_ma</td>
- <td>Average current draw (mA) of the Wi-Fi controller when receiving.</td>
- </tr>
-
- <tr>
- <td>android:integer/config_wifi_tx_cur_ma</td>
- <td>average current draw (mA) of the Wi-Fi controller when transmitting.</td>
- </tr>
-
- <tr>
- <td>android:integer/config_wifi_operating_voltage_mv</td>
- <td>Average operating voltage (mV) of the Wi-Fi controller.</td>
- </tr>
-
- </table>
\ No newline at end of file
+<h2 id="le-bt-scans">Low Energy (LE) and Bluetooth scans</h2>
+<p>For devices running Android 7.0, the system collects data for Low Energy (LE)
+scans and Bluetooth network traffic (such as RFCOMM and L2CAP) and associates
+these activities with the initiating application. Bluetooth scans are associated
+with the application that initiated the scan, but batch scans are not (and
+are instead associated with the Bluetooth application). For an application
+scanning for N milliseconds, the cost of the scan is N milliseconds of rx time
+and N milliseconds of tx time; all leftover controller time is assigned to
+network traffic or the Bluetooth application.</p>
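The attribution rule above can be illustrated with a small worked sketch. The types and function names here are illustrative, not framework code.

```c
/* Sketch of the attribution rule: an app that scans for scan_ms is charged
 * scan_ms of rx time and scan_ms of tx time; leftover controller time goes
 * to network traffic or the Bluetooth application. */

typedef struct {
    long app_rx_ms;      /* rx time charged to the scanning app */
    long app_tx_ms;      /* tx time charged to the scanning app */
    long leftover_rx_ms; /* rx time left for traffic / Bluetooth app */
    long leftover_tx_ms; /* tx time left for traffic / Bluetooth app */
} bt_attribution;

bt_attribution attribute_scan(long controller_rx_ms, long controller_tx_ms,
                              long scan_ms) {
    bt_attribution a;
    a.app_rx_ms = scan_ms;
    a.app_tx_ms = scan_ms;
    a.leftover_rx_ms = controller_rx_ms - scan_ms;
    a.leftover_tx_ms = controller_tx_ms - scan_ms;
    return a;
}
```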
diff --git a/src/index.jd b/src/index.jd
index 7fa7507..9ad6a5e 100644
--- a/src/index.jd
+++ b/src/index.jd
@@ -22,127 +22,188 @@
-->
<div class="wrap">
- <div class="landing-banner">
- <h1 itemprop="name" style="margin-bottom:0;">Welcome to the Android Open Source Project!</h1>
+<div class="landing-banner">
+<h1 itemprop="name" style="margin-bottom:0;">Welcome to the Android Open Source Project!</h1>
-<p>
-Android is an open source software stack for a wide range of mobile devices and
-a corresponding open source project led by Google. This site offers the
+<p>Android is an open source software stack for a wide range of mobile devices
+and a corresponding open source project led by Google. This site offers the
information and source code you need to create custom variants of the Android
stack, port devices and accessories to the Android platform, and ensure your
-devices meet compatibility requirements.
-</p>
- </div>
+devices meet compatibility requirements.</p>
+
+<h2 align="center">Android 7.0 Updates Available</h2>
+
+</div>
</div>
<div class="wrap">
- <div class="landing-docs">
- <div class="col-8">
- <h3>What's New</h3>
-<a href="{@docRoot}security/bulletin/2016-08-01.html">
- <h4>August Android Security Bulletin</h4></a>
- <p>The <strong><a
- href="{@docRoot}security/bulletin/2016-08-01.html">August 2016 Android Security
- Bulletin</a></strong> has been published along with links to associated fixes.
- In addition, new <strong><a
- href="{@docRoot}source/build-numbers.html#source-code-tags-and-builds">build
- numbers</a></strong> have been published for Nexus 5, Nexus 6, Nexus 7,
- Nexus 9, Nexus 5X, Nexus 6P, and Nexus Player running Android 6.0 to
- support the August Android security release.</p>
+<div class="landing-docs">
+ <div class="col-10">
+ <h3>What's New</h3>
-<a href="{@docRoot}compatibility/cts/downloads.html">
+<h4>Key Features</h4>
+<p>
+<a href="{@docRoot}devices/tech/dalvik/jit-compiler.html"><strong>Just-in-time
+(JIT) compiler</strong></a> with code profiling, added to the Android runtime
+(ART), continually improves app performance.
+<a href="{@docRoot}security/encryption/file-based.html"><strong>Direct boot and
+file-based encryption</strong></a> allow certain apps to run securely when the
+device is powered on but not unlocked.
+<a href="{@docRoot}devices/tech/display/multi-window.html"><strong>Multi-window</strong></a>
+support enables simultaneous display of multiple apps on a device screen.
+The <a href="{@docRoot}devices/automotive.html"><strong>Vehicle
+HAL</strong></a> enables Android Automotive implementations.</p>
+
+<h4>Security Improvements</h4>
+<p>
+Android now strictly enforces
+<a href="{@docRoot}security/verifiedboot/verified-boot.html"><strong>verified
+boot</strong></a> and requires a
+<a href="{@docRoot}security/overview/app-security.html#certificate-authorities"><strong>same-system
+trusted CA store</strong></a>.
+<a href="{@docRoot}security/apksigning/index.html#v2"><strong>APK signature
+scheme v2</strong></a> performs signature checking across the entire file.
+Security hardening of <a href="{@docRoot}devices/media/framework-hardening.html"><strong>media
+framework</strong></a> and
+<a href="{@docRoot}devices/camera/versioning.html#hardening"><strong>camera service</strong></a>
+splits mediaserver into multiple processes with restricted permissions and
+capabilities (may require changes to HAL implementations).
+For more changes, see
+<a href="{@docRoot}security/enhancements/enhancements70.html"><strong>7.0
+security enhancements</strong></a>.</p>
+
+<h4>Audio, Camera, & Graphics Updates</h4>
+<p>
+<a href="{@docRoot}devices/audio/implement-policy.html"><strong>Audio policy
+improvements</strong></a> include changes to policy configuration files, a
+reorganization of audio policy code, and new extensions for audio policy routing
+APIs.
+Android now supports
+<a href="{@docRoot}devices/graphics/arch-vulkan.html"><strong>Vulkan</strong></a>
+(a low-overhead, cross-platform API for high-performance 3D graphics) and
+<a href="{@docRoot}devices/graphics/testing.html"><strong>OpenGL ES
+3.2</strong></a>; check with your SoC vendor for driver support.
+Enhancements to
+<a href="{@docRoot}devices/camera/versioning.html"><strong>camera3</strong></a>
+support devices with high-end cameras.</p>
+
+<h4>OEM Customizations</h4>
+<p>Users can now
+<a href="{@docRoot}devices/tech/connect/block-numbers.html"><strong>restrict
+calls and texts</strong></a> (impacts apps using blocking features).
+Do-not-disturb (DND) rules can now
+<a href="{@docRoot}devices/tech/display/dnd.html"><strong>suppress visual
+interruptions</strong></a>.</p>
+
+<h4>Android For Work Updates</h4>
+<p>New device administration features include
+<a href="{@docRoot}devices/tech/admin/enterprise-telephony.html"><strong>enterprise
+telephony</strong></a> (cross-profile contact searching and badging, affects
+preloaded Dialer and Contacts apps),
+<a href="{@docRoot}devices/tech/admin/implement.html#HAL_values"><strong>device
+monitoring and health reporting</strong></a> (APIs for apps to query device
+hardware state; device must report correct values in the HAL implementation),
+and
+<a href="{@docRoot}devices/tech/admin/testing-setup.html#troubleshooting"><strong>enterprise
+process logging and device owner triggered bugreports</strong></a> (collect logs
+for user actions on a managed device).</p>
+
+<h4>Power Improvements</h4>
+<p>
+<a href="{@docRoot}devices/tech/power/mgmt.html#doze"><strong>Doze</strong></a>
+now works on devices in motion.
+<a href="{@docRoot}devices/tech/power/mgmt.html#sustained_performance"><strong>Sustained
+performance</strong></a> hints can inform long-running applications of
+device-performance capabilities (requires power HAL changes).
+<a href="{@docRoot}devices/tech/connect/data-saver.html"><strong>Data
+Saver</strong></a> enables restricting background data when on cellular or
+metered networks.
+Android now collects data for
+<a href="{@docRoot}devices/tech/power/values.html#le-bt-scans"><strong>Low
+Energy (LE) scans and Bluetooth</strong></a> traffic.</p>
+
+<h4>Under the Hood</h4>
+<p>Dialer APIs handle all
+<a href="{@docRoot}devices/tech/connect/call-notification.html"><strong>call
+notification</strong></a> logic (instead of sharing with Telecom).
+<a href="{@docRoot}devices/tech/config/namespaces_libraries.html"><strong>Namespaces
+for native libraries</strong></a> prevent apps from using private platform
+native APIs.
+<a href="{@docRoot}source/running.html#flashing-a-device"><strong>Flash
+unlock</strong></a> reports the bootloader's lock status.
+<a href="{@docRoot}devices/tech/connect/ril.html"><strong>Radio
+Interface Layer (RIL)</strong></a> support includes enhancements to error codes,
+versioning, and wakelocks.
+<a href="{@docRoot}devices/tech/config/uicc.html"><strong>UICC carrier
+privilege</strong></a> rules now support Access Rule File (ARF) storage.
+Android now supports the RFID smart card system
+<a href="{@docRoot}devices/tech/connect/felica.html"><strong>Felicity Card
+(FeliCa)</strong></a>.
+Documentation updated for
+<a href="{@docRoot}devices/halref/index.html"><strong>HAL</strong></a>
+and <a href="{@docRoot}reference/packages.html"><strong>TradeFed</strong></a>
+references.</p>
+
+<!--SAVED for FUTURE REFERENCE
+<a href="{@docRoot}source/build-numbers.html">
+ <h4>Lollipop and Marshmallow Build Numbers</h4></a>
+ <p>New <strong><a
+ href="{@docRoot}source/build-numbers.html#source-code-tags-and-builds">Build
+ Numbers</a></strong> have been published for Lollipop on Nexus 10 and
+ Marshmallow on Nexus 5, Nexus 5X, Nexus 6, Nexus 6P, Nexus 7 (flo/deb), Nexus 9
+ (volantis/volantisg), Nexus Player, and Pixel C.</p>
+
+<a href="{@docRoot}compatibility/downloads.html">
<h4>Android 6.0, 5.1, and 5.0 CTS Downloads</h4></a>
<p>Android 6.0 R8, 5.1 R9, and 5.0 R8 Compatibility Test Suite (CTS)
and CTS Verifier are available for <strong><a
href="{@docRoot}compatibility/cts/downloads.html#android-60">Download</a></strong>.</p>
+-->
-<a href="{@docRoot}security/bulletin/2016-07-01.html">
- <h4>July Android Security Bulletin</h4></a>
- <p>The <strong><a
- href="{@docRoot}security/bulletin/2016-07-01.html">July 2016 Android Security
- Bulletin</a></strong> has been published along with links to associated fixes.
- In addition, new <strong><a
- href="{@docRoot}source/build-numbers.html#source-code-tags-and-builds">build
- numbers</a></strong> have been published for Nexus 5, Nexus 6, Nexus 7,
- Nexus 9, Nexus 5X, Nexus 6P, and Nexus Player running Android 6.0 to
- support the July Android security release.</p>
+</div>
-<a href="{@docRoot}devices/tech/config/connect_tests.html">
- <h4>Network Connectivity Tests</h4></a>
- <p>The <strong><a
- href="{@docRoot}devices/tech/config/connect_tests.html">Android
- Connectivity Testing Suite</a></strong> validates the functionality of various
- aspects of the Bluetooth, Wi-Fi and cellular radios. The suite includes the
- <strong><a
- href="https://android.googlesource.com/platform/external/sl4a/+/master/README.md">Scripting
- Layer For Android</a></strong>, the <strong><a
- href="https://android.googlesource.com/platform/packages/apps/Test/connectivity/+/master/sl4n/README.md">Scripting
- Layer For Native</a></strong> and the <strong><a
- href="https://android.googlesource.com/platform/tools/test/connectivity/+/master/acts/README.md">Android
- Comms Test Suite</a></strong>.</p>
+<div class="col-5">
-<a href="{@docRoot}devices/tech/debug/index.html">
- <h4>Crash Dump and Tomstone Details for debuggerd</h4></a>
- <p><strong><a href="{@docRoot}devices/tech/debug/index.html">Debugging
- Native Android Platform Code</a></strong> now contains detailed
- breakdowns of <strong><a
- href="{@docRoot}devices/tech/debug/index.html#crashdump">crash
- dumps</a></strong> and <strong><a
- href="{@docRoot}devices/tech/debug/index.html#tombstones">tombstones</a></strong>
- to help parse the output of <code>debuggerd</code>.</p>
+<h3>Getting Started</h3>
+<a href="{@docRoot}source/index.html"><h4>Explore the Source</h4></a>
+<p>Get the complete Android platform and modify and build it to suit your needs.
+You can also
+<strong><a href="https://android-review.googlesource.com/#/q/status:open">contribute
+to</a></strong> the <strong><a href="https://android.googlesource.com/">Android
+Open Source Project (AOSP) repository</a></strong> to make your changes
+available to everyone else in the Android ecosystem.</p>
-<a href="{@docRoot}source/requirements.html#binaries">
- <h4>Binaries Highlighted, Building the System Renamed</h4></a>
- <p>To ease use, the former Building the System page has been renamed
- <strong><a href="{@docRoot}source/building.html">Preparing to
- Build</a></strong> with links to proprietary binaries (blobs) added to
- Preparing to Build and the central <strong><a
- href="{@docRoot}source/requirements.html#binaries">Requirements</a></strong>
- list.</p>
- </div>
+<a href="{@docRoot}source/index.html"><img border="0"
+src="images/android_framework_small.png" alt="Android framework summary"
+style="display:inline;float:right;margin:5px 10px;width:42%;height:42%"></a>
- <div class="col-8">
- <h3>Getting Started</h3>
- <a href="{@docRoot}source/index.html">
- <h4>Explore the Source</h4></a>
- <p>Get the complete Android platform and modify and build it to suit your needs. You can
- also <strong><a
- href="https://android-review.googlesource.com/#/q/status:open">contribute
- to</a></strong> the <strong><a
- href="https://android.googlesource.com/">Android Open Source Project (AOSP)
- repository</a></strong> to make your changes available to everyone else
- in the Android ecosystem.</p>
-<a href="{@docRoot}source/index.html"><img border="0" src="images/android_framework_small.png" alt="Android framework summary" style="display:inline;float:right;margin:5px 10px;width:42%;height:42%"></a>
- <a href="{@docRoot}devices/index.html">
- <h4>Port Android to Devices</h4></a>
- <p>Port the latest Android platform and
- create compelling devices that your customers want.</p>
+<a href="{@docRoot}devices/index.html"><h4>Port Android to Devices</h4></a>
+<p>Get help porting the latest Android platform to create compelling devices for
+your customers. Includes documentation for HAL interfaces and details on core
+technologies such as Android runtime (ART) and over-the-air (OTA) updates.</p>
- <a href="{@docRoot}security/index.html">
- <h4>Make Secure</h4></a>
- <p>Follow these examples and instructions to harden your Android
- devices against malicious attacks. Find out how the Android security program
- works and learn to implement the latest features.</p>
+<a href="{@docRoot}security/index.html"><h4>Make Secure</h4></a>
+<p>Follow these examples and instructions to harden your Android devices against
+malicious attacks. Find out how the Android security program works and learn how
+to implement the latest features.</p>
- <a href="{@docRoot}compatibility/index.html">
- <h4>Get Compatible</h4></a>
- <p>Being Android-compatible lets you offer custom features but still give users and developers a consistent
- and standard experience across all Android-powered devices. Android provides guidance
- and a test suite to verify your Android compatibility.</p>
+<a href="{@docRoot}compatibility/index.html"><h4>Get Compatible</h4></a>
+<p>Being Android-compatible lets you offer custom features but still give users
+and developers a consistent and standard experience across all Android-powered
+devices. Android provides guidance and a test suite to verify your Android
+compatibility.</p>
- <a href="https://android.googlesource.com/platform/docs/source.android.com/">
- <h4>Help this Site</h4></a>
- <p>Use the <strong>Site Feedback</strong> button at the bottom of any
- page to request improvements to the content or identify errors. In addition,
- source.android.com is maintained in the Android Open Source Project. See the
- <strong><a
- href="https://android.googlesource.com/platform/docs/source.android.com/+log/master">docs/source.android.com
- project log in AOSP</a></strong> for the complete list of changes to this site.
- Contribute your own updates to that same project and help maintain source.android.com.</p>
- </div>
+<a href="https://android.googlesource.com/platform/docs/source.android.com/">
+<h4>Help this Site</h4></a>
+<p>Use the <strong>Site Feedback</strong> button at the bottom of any page to
+request content improvements or let us know about errors. To contribute your
+own updates to the site or to view a complete list of site changes, use the AOSP
+project
+<strong><a href="https://android.googlesource.com/platform/docs/source.android.com/+log/master">docs/source.android.com</a></strong>.</p>
+</div>
- </div>
+</div>
</div>
diff --git a/src/security/apksigning/index.jd b/src/security/apksigning/index.jd
new file mode 100644
index 0000000..1145191
--- /dev/null
+++ b/src/security/apksigning/index.jd
@@ -0,0 +1,138 @@
+page.title=Application Signing
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>
+Application signing allows developers to identify the author of the application
+and to update their application without creating complicated interfaces and
+permissions. Every application that is run on the Android platform must be <a
+href="https://developer.android.com/studio/publish/app-signing.html">signed by
+the developer</a>. Applications that attempt to install without being signed
+will be rejected by either Google Play or the package installer on the Android
+device.
+</p>
+<p>
+On Google Play, application signing bridges the trust Google has with the
+developer and the trust the developer has with their application. Developers
+know their application is provided, unmodified, to the Android device; and
+developers can be held accountable for behavior of their application.
+</p>
+<p>
+On Android, application signing is the first step to placing an application in
+its Application Sandbox. The signed application certificate defines which user
+ID is associated with which application; different applications run under
+different user IDs. Application signing ensures that one application cannot
+access any other application except through well-defined IPC.
+</p>
+<p>
+When an application (APK file) is installed onto an Android device, the Package
+Manager verifies that the APK has been properly signed with the certificate
+included in that APK. If the certificate (or, more accurately, the public key in
+the certificate) matches the key used to sign any other APK on the device, the
+new APK has the option to specify in the manifest that it will share a UID with
+the other similarly-signed APKs.
+</p>
+<p>
+Applications can be signed by a third party (OEM, operator, alternative market)
+or self-signed. Android provides code signing using self-signed certificates
+that developers can generate without external assistance or permission.
+Applications do not have to be signed by a central authority. Android currently
+does not perform CA verification for application certificates.
+</p>
+<p>
+Applications are also able to declare security permissions at the Signature
+protection level, restricting access only to applications signed with the same
+key while maintaining distinct UIDs and Application Sandboxes. A closer
+relationship with a shared Application Sandbox is allowed via the <a
+href="https://developer.android.com/guide/topics/manifest/manifest-element.html#uid">shared
+UID feature</a> where two or more applications signed with same developer key
+can declare a shared UID in their manifest.
+</p>
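As a hedged manifest sketch of the shared UID feature (the package and <code>sharedUserId</code> names below are hypothetical):

```xml
<!-- Both apps must be signed with the same developer key and declare the
     same (hypothetical) sharedUserId to share a UID and sandbox. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app1"
    android:sharedUserId="com.example.shareduid">
    <application android:label="App 1">
        <!-- ... -->
    </application>
</manifest>
```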
+<h2>APK signing schemes</h2>
+<p>
+Android supports two application signing schemes: one based on JAR signing (v1
+scheme) and <a href="v2.html">APK Signature Scheme v2 (v2 scheme)</a>, which
+was introduced in Android 7.0 (Nougat).
+</p>
+<p>
+For maximum compatibility, applications should be signed with both v1 and v2
+schemes. Android Nougat and newer devices install apps signed with the v2
+scheme more quickly than those signed only with the v1 scheme. Older Android
+platforms ignore v2 signatures and thus need apps to contain v1 signatures.
+</p>
+<h3 id="v1">JAR signing (v1 scheme)</h3>
+<p>
+APK signing has been a part of Android from the beginning. It is based on <a
+href="https://docs.oracle.com/javase/8/docs/technotes/guides/jar/jar.html#Signed_JAR_File">
+signed JAR</a>. For details on using this scheme, see the Android Studio documentation on
+<a href="https://developer.android.com/studio/publish/app-signing.html">Signing
+your app</a>.
+</p>
+<p>
+v1 signatures do not protect some parts of the APK, such as ZIP metadata. The
+APK verifier needs to process many untrusted (not yet verified) data
+structures and then discard data not covered by the signatures. This presents
+a sizeable attack surface. Moreover, the APK verifier must uncompress all
+compressed entries, consuming more time and memory. To address these issues,
+Android 7.0 introduced APK Signature Scheme v2.
+</p>
+<h3 id="v2">APK Signature Scheme v2 (v2 scheme)</h3>
+<p>
+Android 7.0 introduces APK signature scheme v2 (v2 scheme). The contents of the
+APK are hashed and signed, then the resulting APK Signing Block is inserted
+into the APK. For details on applying the v2 scheme to an application, refer to
+<a href="https://developer.android.com/preview/api-overview.html#apk_signature_v2">APK
+Signature Scheme v2</a> in the Android N Developer Preview.
+</p>
+<p>
+During validation, v2 scheme treats the APK file as a blob and performs signature
+checking across the entire file. Any modification to the APK, including ZIP metadata
+modifications, invalidates the APK signature. This form of APK verification is
+substantially faster and enables detection of more classes of unauthorized
+modifications.
+</p>
+<p>
+The new format is backwards compatible, so APKs signed with the new signature
+format can be installed on older Android devices (which simply ignore the extra
+data added to the APK), as long as these APKs are also v1-signed.
+</p>
+<p>
+ <img src="../images/apk-validation-process.png" alt="APK signature verification process" id="figure1" />
+</p>
+<p class="img-caption"><strong>Figure 1.</strong> APK signature verification
+process (new steps in red)</p>
+
+<p>
+Whole-file hash of the APK is verified against the v2 signature stored in the
+APK Signing Block. The hash covers everything except the APK Signing Block,
+which contains the v2 signature. Any modification to the APK outside of the APK
+Signing Block invalidates the APK's v2 signature. APKs with a stripped v2
+signature are rejected as well, because their v1 signature indicates that the
+APK was v2-signed, which makes Android Nougat and newer refuse to verify APKs
+using their v1 signatures.
+</p>
+
+<p>For details on the APK signature verification process, see the <a href="v2.html#verification">
+Verification section</a> of APK Signature Scheme v2.</p>
diff --git a/src/security/apksigning/v2.jd b/src/security/apksigning/v2.jd
new file mode 100644
index 0000000..6a2e7a9
--- /dev/null
+++ b/src/security/apksigning/v2.jd
@@ -0,0 +1,368 @@
+page.title=APK Signature Scheme v2
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>
+APK Signature Scheme v2 is a whole-file signature scheme that increases
+verification speed and <a
+href="#integrity-protected-contents">strengthens integrity guarantees</a> by
+detecting any changes to the protected parts of the APK.
+</p>
+
+<p>
+Signing using APK Signature Scheme v2 inserts an <a
+href="#apk-signing-block">APK Signing Block</a> into the APK file immediately
+before the ZIP Central Directory section. Inside the APK Signing Block, v2
+signatures and signer identity information are stored in an <a
+href="#apk-signature-scheme-v2-block">APK Signature Scheme v2 Block</a>.
+</p>
+
+<p>
+ <img src="../images/apk-before-after-signing.png" alt="APK before and after signing" id="figure1" />
+</p>
+<p class="img-caption"><strong>Figure 1.</strong> APK before and after signing</p>
+
+<p>
+APK Signature Scheme v2 was introduced in Android 7.0 (Nougat). To make
+an APK installable on Android 6.0 (Marshmallow) and older devices, the
+APK should be signed using <a href="index.html#v1">JAR signing</a> before being
+signed with the v2 scheme.
+</p>
+
+
+<h2 id="apk-signing-block">APK Signing Block</h2>
+<p>
+To maintain backward-compatibility with the current APK format, v2 and newer APK
+signatures are stored inside an APK Signing Block, a new container introduced to
+support APK Signature Scheme v2. In an APK file, the APK Signing Block is located
+immediately before the ZIP Central Directory, which is located at the end of the file.
+</p>
+
+<p>
+The block contains ID-value pairs wrapped in a way that makes it easier to
+locate the block in the APK. The v2 signature of the APK is stored as an ID-value
+pair with ID 0x7109871a.
+</p>
+
+<h3 id="apk-signing-block-format">Format</h3>
+<p>
+The format of the APK Signing Block is as follows (all numeric fields are
+little-endian):
+</p>
+
+<ul>
+ <li><code>size of block</code> in bytes (excluding this field) (uint64)</li>
+ <li>Sequence of uint64-length-prefixed ID-value pairs:
+ <ul>
+ <li><code>ID</code> (uint32)</li>
+ <li><code>value</code> (variable-length: length of the pair - 4 bytes)</li>
+ </ul>
+ </li>
+ <li><code>size of block</code> in bytes—same as the very first field (uint64)</li>
+ <li><code>magic</code> “APK Sig Block 42” (16 bytes)</li>
+</ul>
+
+<p>
+The APK is parsed by first finding the start of the ZIP Central Directory (by
+finding the ZIP End of Central Directory record at the end of the file, then
+reading the start offset of the Central Directory from the record). The
+<code>magic</code> value provides a quick way to establish that what precedes
+the Central Directory is likely the APK Signing Block. The <code>size of
+block</code> value then efficiently points to the start of the block in the
+file.
+</p>
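The lookup just described can be sketched in a few lines of Python (a hypothetical minimal parser, not AOSP code; only the field layout, the magic string, and the 0x7109871a ID come from this page):

```python
import struct

APK_SIG_BLOCK_MAGIC = b"APK Sig Block 42"
EOCD_SIGNATURE = b"PK\x05\x06"

def find_apk_signing_block(apk: bytes) -> dict:
    """Locate the APK Signing Block and return its ID-value pairs."""
    # 1. Find the ZIP End of Central Directory record (search backwards).
    eocd_off = apk.rfind(EOCD_SIGNATURE)
    if eocd_off < 0:
        raise ValueError("not a ZIP file")
    # 2. Read the start offset of the Central Directory (uint32 LE at EOCD+16).
    cd_off = struct.unpack_from("<I", apk, eocd_off + 16)[0]
    # 3. Check the 16-byte magic immediately preceding the Central Directory.
    if apk[cd_off - 16:cd_off] != APK_SIG_BLOCK_MAGIC:
        raise ValueError("no APK Signing Block")
    # 4. The trailing "size of block" (uint64 LE) sits just before the magic.
    #    It excludes the leading size field, so the block starts 8 bytes earlier.
    size = struct.unpack_from("<Q", apk, cd_off - 24)[0]
    block_start = cd_off - size - 8
    # 5. Walk the uint64-length-prefixed ID-value pairs; the value length is
    #    the pair length minus the 4-byte ID.
    pairs, pos, pairs_end = {}, block_start + 8, cd_off - 24
    while pos < pairs_end:
        pair_len = struct.unpack_from("<Q", apk, pos)[0]
        pair_id = struct.unpack_from("<I", apk, pos + 8)[0]
        pairs[pair_id] = apk[pos + 12:pos + 8 + pair_len]
        pos += 8 + pair_len
    return pairs
```

A verifier would then look up ID 0x7109871a in the returned pairs to obtain the APK Signature Scheme v2 Block.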
+
+<p>
+ID-value pairs with unknown IDs should be ignored when interpreting the
+block.
+</p>
+
+
+<h2 id="apk-signature-scheme-v2-block">APK Signature Scheme v2 Block</h2>
+<p>
+An APK is signed by one or more signers/identities, each represented by a signing
+key. This information is stored as an APK Signature Scheme v2 Block. For each
+signer, the following information is stored:
+</p>
+
+<ul>
+ <li>(signature algorithm, digest, signature) tuples. The digest is stored to
+decouple signature verification from integrity checking of the APK’s contents.</li>
+ <li>X.509 certificate chain representing the signer’s identity.</li>
+ <li>Additional attributes as key-value pairs.</li>
+</ul>
+
+<p>
+For each signer, the APK is verified using a supported signature from the
+provided list. Signatures with unknown signature algorithms are ignored. It is
+up to each implementation to choose which signature to use when multiple
+supported signatures are encountered. This enables the introduction of stronger
+signing methods in the future in a backward-compatible way. The suggested
+approach is to verify the strongest signature.
+</p>
+
+<h3 id="apk-signature-scheme-v2-block-format">Format</h3>
+<p>
+APK Signature Scheme v2 Block is stored inside the APK Signing Block under ID
+<code>0x7109871a</code>.
+</p>
+
+<p>
+The format of the APK Signature Scheme v2 Block is as follows (all numeric
+values are little-endian, all length-prefixed fields use uint32 for length):
+</p>
+<ul>
+ <li>length-prefixed sequence of length-prefixed <code>signer</code>:
+ <ul>
+ <li>length-prefixed <code>signed data</code>:
+ <ul>
+ <li>length-prefixed sequence of length-prefixed <code>digests</code>:
+ <ul>
+ <li><code>signature algorithm ID</code> (uint32)</li>
+ <li>(length-prefixed) <code>digest</code>—see
+ <a href="#integrity-protected-contents">Integrity-protected Contents</a></li>
+ </ul>
+ </li>
+ <li>length-prefixed sequence of X.509 <code>certificates</code>:
+ <ul>
+ <li>length-prefixed X.509 <code>certificate</code> (ASN.1 DER form)</li>
+ </ul>
+ </li>
+ <li>length-prefixed sequence of length-prefixed <code>additional attributes</code>:
+ <ul>
+ <li><code>ID</code> (uint32)</li>
+ <li><code>value</code> (variable-length: length of the additional
+ attribute - 4 bytes)</li>
+ </ul>
+ </li>
+ </ul>
+ </li>
+ <li>length-prefixed sequence of length-prefixed <code>signatures</code>:
+ <ul>
+ <li><code>signature algorithm ID</code> (uint32)</li>
+ <li>length-prefixed <code>signature</code> over <code>signed data</code></li>
+ </ul>
+ </li>
+ <li>length-prefixed <code>public key</code> (SubjectPublicKeyInfo, ASN.1 DER form)</li>
+ </ul>
+ </li>
+</ul>
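The uint32 length-prefix convention used throughout this block can be captured by a small helper (a hypothetical Python sketch, not AOSP code):

```python
import struct

def read_len_prefixed(buf: bytes, pos: int) -> tuple:
    """Read one uint32-length-prefixed field; return (payload, next position)."""
    (length,) = struct.unpack_from("<I", buf, pos)
    start = pos + 4
    return buf[start:start + length], start + length

def iter_sequence(seq: bytes):
    """Yield each length-prefixed element of a length-prefixed sequence,
    e.g. the sequence of signers, digests, certificates, or signatures."""
    pos = 0
    while pos < len(seq):
        elem, pos = read_len_prefixed(seq, pos)
        yield elem
```

Parsing a signer is then just nested applications of these helpers: read `signed data`, `signatures`, and `public key` from the signer, then the digests, certificates, and additional attributes from `signed data`.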
+
+<h4 id="signature-algorithm-ids">Signature Algorithm IDs</h4>
+<ul>
+ <li>0x0101—RSASSA-PSS with SHA2-256 digest, SHA2-256 MGF1, 32 bytes of salt,
+ trailer: 0xbc</li>
+ <li>0x0102—RSASSA-PSS with SHA2-512 digest, SHA2-512 MGF1, 64 bytes of salt,
+ trailer: 0xbc</li>
+ <li>0x0103—RSASSA-PKCS1-v1_5 with SHA2-256 digest. This is for build systems
+ which require deterministic signatures.</li>
+ <li>0x0104—RSASSA-PKCS1-v1_5 with SHA2-512 digest. This is for build systems
+ which require deterministic signatures.</li>
+ <li>0x0201—ECDSA with SHA2-256 digest</li>
+ <li>0x0202—ECDSA with SHA2-512 digest</li>
+ <li>0x0301—DSA with SHA2-256 digest</li>
+</ul>
+
+<p>
+All of the above signature algorithms are supported by the Android platform.
+Signing tools may support a subset of the algorithms.
+</p>
+
+<p>
+<strong>Supported key sizes and EC curves:</strong>
+</p>
+
+<ul>
+ <li>RSA: 1024, 2048, 4096, 8192, 16384</li>
+ <li>EC: NIST P-256, P-384, P-521</li>
+ <li>DSA: 1024, 2048, 3072</li>
+</ul>
+
+<h2 id="integrity-protected-contents">Integrity-protected contents</h2>
+
+<p>
+For the purposes of protecting APK contents, an APK consists of four sections:
+</p>
+
+<ol>
+ <li>Contents of ZIP entries (from offset 0 until the start of APK Signing Block)</li>
+ <li>APK Signing Block</li>
+ <li>ZIP Central Directory</li>
+ <li>ZIP End of Central Directory</li>
+</ol>
+
+<p>
+ <img src="../images/apk-sections.png" alt="APK sections after signing" id="figure2" />
+</p>
+<p class="img-caption"><strong>Figure 2.</strong> APK sections after signing</p>
+
+<p>
+APK Signature Scheme v2 protects the integrity of sections 1, 3, 4, and the
+<code>signed data</code> blocks of the APK Signature Scheme v2 Block contained
+inside section 2.
+</p>
+
+<p>
+The integrity of sections 1, 3, and 4 is protected by one or more digests of
+their contents stored in <code>signed data</code> blocks which are, in
+turn, protected by one or more signatures.
+</p>
+
+<p>
+The digest over sections 1, 3, and 4 is computed as follows, similar to a
+two-level <a href="https://en.wikipedia.org/wiki/Merkle_tree">Merkle tree</a>.
+Each section is split into consecutive 1 MB (2<sup>20</sup> bytes) chunks. The last chunk
+in each section may be shorter. The digest of each chunk is computed over the
+concatenation of byte <code>0xa5</code>, the chunk’s length in bytes
+(little-endian uint32), and the chunk’s contents. The top-level digest is
+computed over the concatenation of byte <code>0x5a</code>, the number of chunks
+(little-endian uint32), and the concatenation of digests of the chunks in the
+order the chunks appear in the APK. The digest is computed in chunked fashion
+so that the computation can be sped up by parallelizing it.
+</p>
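A sketch of this digest computation in Python (hypothetical code; the 0xa5/0x5a prefixes, little-endian uint32 lengths, and 1 MB chunk size come from this page, and SHA2-256 stands in for whichever digest the chosen signature algorithm uses):

```python
import hashlib
import struct

CHUNK = 1 << 20  # 1 MB (2**20 bytes)

def chunk_digests(section: bytes) -> list:
    """Digest each 1 MB chunk: SHA2-256 over 0xa5 || length (uint32 LE) || contents."""
    out = []
    for off in range(0, len(section), CHUNK):
        chunk = section[off:off + CHUNK]  # last chunk may be shorter
        h = hashlib.sha256(b"\xa5" + struct.pack("<I", len(chunk)) + chunk)
        out.append(h.digest())
    return out

def top_level_digest(sections: list) -> bytes:
    """Top level: SHA2-256 over 0x5a || chunk count (uint32 LE) || chunk digests,
    with chunks taken in the order the sections/chunks appear in the APK."""
    digests = [d for s in sections for d in chunk_digests(s)]
    return hashlib.sha256(
        b"\x5a" + struct.pack("<I", len(digests)) + b"".join(digests)
    ).digest()
```

In a real verifier, `sections` would be sections 1, 3, and 4 of the APK, with the Central Directory offset in the End of Central Directory rewritten as described below before digesting.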
+
+<p>
+ <img src="../images/apk-integrity-protection.png" alt="APK digest" id="figure3" />
+</p>
+<p class="img-caption"><strong>Figure 3.</strong> APK digest</p>
+
+<p>
+Protection of section 4 (ZIP End of Central Directory) is complicated by the
+section containing the offset of ZIP Central Directory. The offset changes when
+the size of the APK Signing Block changes, for instance, when a new signature is
+added. Thus, when computing digest over the ZIP End of Central Directory, the
+field containing the offset of ZIP Central Directory must be treated as
+containing the offset of the APK Signing Block.
+</p>
+
+<h2 id="rollback-protections">Rollback Protections</h2>
+<p>
+An attacker could attempt to have a v2-signed APK verified as a v1-signed APK on
+Android platforms that support verifying v2-signed APK. To mitigate this attack,
+v2-signed APKs that are also v1-signed must contain an X-Android-APK-Signed
+attribute in the main section of their META-INF/*.SF files. The value of the
+attribute is a comma-separated set of APK signature scheme IDs (the ID of this
+scheme is 2). When verifying the v1 signature, the APK verifier is required to
+reject APKs which do not have a signature for the APK signature scheme the
+verifier prefers from this set (e.g., the v2 scheme). This protection relies on
+the fact that the contents of META-INF/*.SF files are protected by v1
+signatures. See the
+section on
+<a href="#v1-verification">JAR signed APK verification</a>.
+</p>
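A v1 verifier's rollback check can be sketched as follows (hypothetical Python; only the attribute name and the scheme ID 2 come from this page):

```python
def check_stripping_protection(sf_main_section: str, v2_verified: bool) -> None:
    """Reject v1-only verification when the .SF file promises a v2 signature.

    sf_main_section is the main section of a META-INF/*.SF file, whose
    contents are protected by the v1 signature.
    """
    for line in sf_main_section.splitlines():
        if line.startswith("X-Android-APK-Signed:"):
            # The value is a comma-separated set of APK signature scheme IDs.
            ids = {s.strip() for s in line.split(":", 1)[1].split(",")}
            if "2" in ids and not v2_verified:
                raise ValueError(
                    "v2 signature stripped: .SF promises scheme 2 "
                    "but no v2 signature was verified"
                )
```

If the attribute is absent, the APK was never v2-signed and plain v1 verification proceeds as usual.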
+
+<p>
+An attacker could attempt to strip stronger signatures from the APK Signature
+Scheme v2 Block. To mitigate this attack, the list of signature algorithm IDs
+with which the APK was signed is stored in the <code>signed data</code>
+block which is protected by each signature.
+</p>
+
+<h2 id="verification">Verification</h2>
+
+<p>
+In Android 7.0, APKs can be verified according to the APK Signature Scheme v2
+(v2 scheme) or JAR signing (v1 scheme). Older platforms ignore v2 signatures
+and only verify v1 signatures.
+</p>
+
+<p>
+ <img src="../images/apk-validation-process.png" alt="APK signature verification process" id="figure4" />
+</p>
+<p class="img-caption"><strong>Figure 4.</strong> APK signature verification
+process (new steps in red)</p>
+
+<h3 id="v2-verification">APK Signature Scheme v2 verification</h3>
+<ol>
+ <li>Locate the APK Signing Block and verify that:
+ <ol>
+ <li>Two size fields of APK Signing Block contain the same value.</li>
+ <li>ZIP Central Directory is immediately followed by ZIP End of Central
+ Directory record.</li>
+ <li>ZIP End of Central Directory is not followed by more data.</li>
+ </ol>
+ </li>
+ <li>Locate the first APK Signature Scheme v2 Block inside the APK Signing Block.
+ If the v2 Block is present, proceed to step 3. Otherwise, fall back to
+ verifying the APK
+ <a href="#v1-verification">using v1 scheme</a>.</li>
+ <li>For each <code>signer</code> in the APK Signature Scheme v2 Block:
+ <ol>
+ <li>Choose the strongest supported <code>signature algorithm ID</code> from
+ <code>signatures</code>. The strength ordering is up to each
+ implementation/platform version.</li>
+ <li>Verify the corresponding <code>signature</code> from
+ <code>signatures</code> against <code>signed data</code> using <code>public
+ key</code>. (It is now safe to parse <code>signed data</code>.)</li>
+ <li>Verify that the ordered list of signature algorithm IDs in
+ <code>digests</code> and <code>signatures</code> is identical. (This is to
+ prevent signature stripping/addition.)</li>
+ <li><a href="#integrity-protected-contents">Compute the digest of APK
+ contents</a> using the same digest algorithm as the digest algorithm used by the
+ signature algorithm.</li>
+ <li>Verify that the computed digest is identical to the corresponding
+ <code>digest</code> from <code>digests</code>.</li>
+ <li>Verify that SubjectPublicKeyInfo of the first <code>certificate</code> of
+ <code>certificates</code> is identical to <code>public key</code>.</li>
+ </ol>
+ </li>
+ <li>Verification succeeds if at least one <code>signer</code> was found and
+ step 3 succeeded for each found <code>signer</code>.</li>
+</ol>
+
+<p class="note"><strong>Note</strong>: APK must not be verified using
+the v1 scheme if a failure occurs in step 3 or 4.
+</p>
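The third per-signer check — that <code>digests</code> and <code>signatures</code> advertise the same ordered algorithm list — can be sketched as (hypothetical code, not AOSP's verifier):

```python
def check_algorithm_lists(digests, signatures) -> None:
    """digests and signatures are sequences of (algorithm_id, payload) tuples
    parsed from one signer of the APK Signature Scheme v2 Block.

    The ordered algorithm ID lists must be identical; otherwise a signature
    could have been stripped from (or added to) the block after signing,
    since digests live inside signed data but signatures do not.
    """
    digest_algs = [alg for alg, _ in digests]
    signature_algs = [alg for alg, _ in signatures]
    if digest_algs != signature_algs:
        raise ValueError("signature stripping/addition detected")
```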
+
+<h3 id="v1-verification">JAR-signed APK verification (v1 scheme)</h3>
+<p>
+The JAR-signed APK is a
+<a href="https://docs.oracle.com/javase/8/docs/technotes/guides/jar/jar.html#Signed_JAR_File">standard
+signed JAR</a>, which must contain exactly the entries listed in
+META-INF/MANIFEST.MF and where all entries must be signed by the same set of
+signers. Its integrity is verified as follows:
+</p>
+
+<ol>
+ <li>Each signer is represented by a META-INF/<signer>.SF and
+ META-INF/<signer>.(RSA|DSA|EC) JAR entry.</li>
+ <li><signer>.(RSA|DSA|EC) is a
+ <a href="https://tools.ietf.org/html/rfc5652">PKCS #7 CMS ContentInfo
+ with SignedData structure</a> whose signature is verified over the
+ <signer>.SF file.</li>
+ <li><signer>.SF file contains a whole-file digest of the META-INF/MANIFEST.MF
+ and digests of each section of META-INF/MANIFEST.MF. The whole-file digest of
+ the MANIFEST.MF is verified. If that fails, the digest of each MANIFEST.MF
+ section is verified instead.</li>
+ <li>META-INF/MANIFEST.MF contains, for each integrity-protected JAR entry, a
+ correspondingly named section containing the digest of the entry’s uncompressed
+ contents. All these digests are verified.</li>
+ <li>APK verification fails if the APK contains JAR entries which are not listed
+ in the MANIFEST.MF and are not part of the JAR signature.</li>
+</ol>
+
+<p>
+The protection chain is thus <signer>.(RSA|DSA|EC) -> <signer>.SF -> MANIFEST.MF
+-> contents of each integrity-protected JAR entry.
+</p>
+
diff --git a/src/security/encryption/file-based.jd b/src/security/encryption/file-based.jd
new file mode 100644
index 0000000..e21a8e4
--- /dev/null
+++ b/src/security/encryption/file-based.jd
@@ -0,0 +1,500 @@
+page.title=File-Based Encryption
+@jd:body
+
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>
+Android 7.0 and above supports file-based encryption (FBE). File-based
+encryption allows different files to be encrypted with different keys that can
+be unlocked independently.
+</p>
+<p>
+This article describes how to enable file-based encryption on new devices
+and how system applications can be updated to take full advantage of the new
+Direct Boot APIs and offer users the best, most secure experience possible.
+</p>
+<h2 id="direct-boot">Direct Boot</h2>
+<p>
+File-based encryption enables a new feature introduced in Android 7.0 called <a
+href="https://developer.android.com/preview/features/direct-boot.html">Direct
+Boot</a>. Direct Boot allows encrypted devices to boot straight to the lock
+screen. Previously, on encrypted devices using <a href="full-disk.html">full disk
+encryption</a> (FDE), users needed to provide credentials before any data could
+be accessed, preventing the phone from performing all but the most basic of
+operations. For example, alarms could not operate, accessibility services were
+unavailable, and phones could not receive calls but were limited to only basic
+emergency dialer operations.
+</p>
+<p>
+With the introduction of file-based encryption (FBE) and new APIs to make
+applications aware of encryption, it is possible for these apps to operate
+within a limited context. This can happen before users have provided their
+credentials while still protecting private user information.
+</p>
+<p>
+On an FBE-enabled device, each user of the device has two storage locations
+available to applications:
+</p><ul>
+<li>Credential Encrypted (CE) storage, which is the default storage location and
+only available after the user has unlocked the device.
+<li>Device Encrypted (DE) storage, which is a storage location available both
+during Direct Boot mode and after the user has unlocked the device.</li></ul>
+<p>
+This separation makes work profiles more secure because it allows more than one
+user to be protected at a time as the encryption is no longer based solely on a
+boot-time password.
+</p>
+<p>
+The Direct Boot API allows encryption-aware applications to access each of these
+areas. There are changes to the application lifecycle to accommodate the need to
+notify applications when a user’s CE storage is <em>unlocked</em> in response to
+first entering credentials at the lock screen or, in the case of a work profile,
+providing a <a
+href="https://developer.android.com/preview/api-overview.html#android_for_work">work
+challenge</a>. Devices running Android 7.0 must support these new APIs and
+lifecycles regardless of whether or not they implement FBE. However, without
+FBE, DE and CE storage are always in the unlocked state.
+</p>
+<p>
+A complete implementation of file-based encryption on an Ext4 file system is
+provided in the Android Open Source Project (AOSP) and needs only to be enabled on
+devices that meet the requirements. Manufacturers electing to use FBE may wish
+to explore ways of optimizing the feature based on the system on chip (SoC)
+used.
+</p>
+<p>
+All the necessary packages in AOSP have been updated to be direct-boot aware.
+However, where device manufacturers use customized versions of these apps, they
+will want to ensure at a minimum there are direct-boot aware packages providing
+the following services:
+</p>
+
+<ul>
+<li>Telephony Services and Dialer
+<li>Input method for entering passwords into the lock screen
+</ul>
+
+<h2 id="examples-and-source">Examples and source</h2>
+
+<p>
+Android provides a reference implementation of file-based encryption, in which
+vold (system/vold) provides the functionality for managing storage devices and
+volumes on Android. The addition of FBE provides vold with several new commands
+to support key management for the CE and DE keys of multiple users. In addition
+to the core changes to use the <a href="#kernel-support">ext4 Encryption</a>
+capabilities in the kernel, many system packages, including the lockscreen and
+SystemUI, have been modified to support the FBE and Direct Boot features. These
+include:
+</p>
+
+<ul>
+<li>AOSP Dialer (packages/apps/Dialer)
+<li>Desk Clock (packages/apps/DeskClock)
+<li>LatinIME (packages/inputmethods/LatinIME)*
+<li>Settings App (packages/apps/Settings)*
+<li>SystemUI (frameworks/base/packages/SystemUI)*</li></ul>
+<p>
+<em>* System applications that use the <code><a
+href="#supporting-direct-boot-in-system-applications">defaultToDeviceProtectedStorage</a></code>
+manifest attribute</em>
+</p>
+<p>
+More examples of applications and services that are encryption aware can be
+found by running the command <code>mangrep directBootAware</code> in the
+frameworks or packages directory of the AOSP
+source tree.
+</p>
+<h2 id="dependencies">Dependencies</h2>
+<p>
+To use the AOSP implementation of FBE securely, a device needs to meet the
+following dependencies:
+</p>
+
+<ul>
+<li><strong>Kernel Support</strong> for ext4 encryption (Kernel config option:
+EXT4_FS_ENCRYPTION)
+<li><strong><a
+href="{@docRoot}security/keystore/index.html">Keymaster
+Support</a></strong> with HAL version 1.0 or 2.0. There is no support for
+Keymaster 0.3 as it does not provide the necessary capabilities or assure
+sufficient protection for encryption keys.
+<li><strong>Keymaster/<a
+href="{@docRoot}security/keystore/index.html">Keystore</a> and
+Gatekeeper</strong> must be implemented in a <a
+href="{@docRoot}security/trusty/index.html">Trusted Execution
+Environment</a> (TEE) to provide protection for the DE keys so that an
+unauthorized OS (custom OS flashed onto the device) cannot simply request the
+DE keys.
+<li><strong>Encryption performance</strong> in the kernel of at least 50MB/s
+using AES XTS to ensure a good user experience.
+<li><strong>Hardware Root of Trust</strong> and <strong>Verified Boot</strong>
+bound to the Keymaster initialization is required to ensure that Device
+Encryption credentials are not accessible by an unauthorized operating
+system.</li>
+</ul>
+
+<p class="note">
+<strong>Note</strong>: Storage policies are applied to a folder and all of its
+subfolders. Manufacturers should limit the contents that go unencrypted to the
+OTA folder and the folder that holds the key that decrypts the system. Most
+contents should reside in credential-encrypted storage rather than
+device-encrypted storage.
+</p>
+
+<h2 id="implementation">Implementation</h2>
+<p>
+First and foremost, apps such as alarm clocks, phone, and accessibility features
+should be made android:directBootAware according to <a
+href="https://developer.android.com/preview/features/direct-boot.html">Direct
+Boot</a> developer documentation.
+</p>
+<h3 id="kernel-support">Kernel Support</h3>
+<p>
+The AOSP implementation of file-based encryption uses the ext4 encryption
+features in the Linux 4.4 kernel. The recommended solution is to use a kernel
+based on 4.4 or later. Ext4 encryption has also been backported to a 3.10 kernel
+in the Android common repositories and for the supported Nexus kernels.
+</p>
+<p>
+The android-3.10.y branch in the AOSP kernel/common git repository may
+provide a good starting point for device manufacturers that want to import this
+capability into their own device kernels. However, it is necessary to apply
+the most recent patches from the latest stable Linux kernel (currently <a
+href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/log/?id=refs/tags/v4.6">linux-4.6</a>)
+of the ext4 and jbd2 projects. The Nexus device kernels already include many of
+these patches.
+</p>
+<table>
+ <tr>
+ <th>Device</th>
+ <th>Kernel</th>
+ </tr>
+ <tr>
+ <td>Android Common
+ </td>
+ <td><strong>kernel/common</strong> android-3.10.y (<a
+href="https://android.googlesource.com/kernel/common/+/android-3.10.y">git</a>)
+ </td>
+ </tr>
+ <tr>
+ <td>Nexus 5X (bullhead)
+ </td>
+ <td><strong>kernel/msm</strong> android-msm-bullhead-3.10-n-preview-2 (<a
+href="https://android.googlesource.com/kernel/msm/+/android-msm-bullhead-3.10-n-preview-2">git</a>)
+ </td>
+ </tr>
+ <tr>
+ <td>Nexus 6P (angler)
+ </td>
+ <td><strong>kernel/msm</strong> android-msm-angler-3.10-n-preview-2 (<a
+href="https://android.googlesource.com/kernel/msm/+/android-msm-angler-3.10-n-preview-2">git</a>
+ )
+ </td>
+ </tr>
+</table>
+<p>
+Note that each of these kernels uses a backport to 3.10. The ext4
+and jbd2 drivers from Linux 3.18 were transplanted into existing kernels based
+on 3.10. Due to interdependencies between parts of the kernel, this backport
+breaks support for a number of features that are not used by Nexus devices.
+These include:
+</p>
+
+<ul>
+<li>The ext3 driver, although ext4 can still mount and use ext3 filesystems
+<li>Global File System (GFS) support
+<li>ACL support in ext4</li>
+</ul>
+
+<p>
+In addition to functional support for ext4 encryption, device manufacturers may
+also consider implementing cryptographic acceleration to speed up file-based
+encryption and improve the user experience.
+</p>
+<h3 id="enabling-file-based-encryption">Enabling file-based encryption</h3>
+<p>
+FBE is enabled by adding the flag <code>fileencryption</code> with no parameters
+to the <code>fstab</code> line in the final column for the <code>userdata</code>
+partition. You can see an example at:
+<a href="https://android.googlesource.com/device/lge/bullhead/+/nougat-release/fstab_fbe.bullhead">
+https://android.googlesource.com/device/lge/bullhead/+/nougat-release/fstab_fbe.bullhead</a>
+</p>
+<p>
+While testing the FBE implementation on a device, it is possible to specify the
+following flag:
+<code>forcefdeorfbe="<path/to/metadata/partition>"</code>
+</p>
+<p>
+This sets the device up with FDE but allows conversion to FBE for developers. By
+default, this behaves like <code>forceencrypt</code>, putting the device into
+FDE mode. However, it will expose a debug option allowing a device to be put
+into FBE mode as is the case in the developer preview. It is also possible to
+enable FBE from fastboot using this command:
+</p>
+<p>
+<code>$ fastboot --wipe-and-use-fbe</code>
+</p>
+<p>
+This is intended solely for development purposes as a platform for demonstrating
+the feature before actual FBE devices are released. This flag may be deprecated
+in the future.
+</p>
+<h3 id="integrating-with-keymaster">Integrating with Keymaster</h3>
+<p>
+The generation of keys and management of the kernel keyring is handled by
+<code>vold</code>. The AOSP implementation of FBE requires that the device
+support Keymaster HAL version 1.0 or later. There is no support for earlier
+versions of the Keymaster HAL.
+</p>
+<p>
+On first boot, user 0’s keys are generated and installed early in the boot
+process. By the time the <code>on-post-fs</code> phase of <code>init</code>
+completes, the Keymaster must be ready to handle requests. On Nexus devices,
+this is handled by having a script block:
+</p>
+
+<ul>
+<li>Ensure Keymaster is started before <code>/data</code> is mounted
+<li>Specify the file encryption cipher suite: AOSP implementation of file-based
+encryption uses AES-256 in XTS mode
+<p class="note">
+<strong>Note</strong>: All encryption is based on AES-256 in
+XTS mode. Due to the way XTS is defined, it needs two 256-bit keys; so in
+effect, both CE and DE keys are 512-bit keys.
+</p>
+</li>
+</ul>
+
+<h3 id="encryption-policy">Encryption policy</h3>
+<p>
+Ext4 encryption applies the encryption policy at the directory level. When a
+device’s <code>userdata</code> partition is first created, the basic structures
+and policies are applied by the <code>init</code> scripts. These scripts will
+trigger the creation of the first user’s (user 0’s) CE and DE keys as well as
+define which directories are to be encrypted with these keys. When additional
+users and profiles are created, the necessary additional keys are generated and
+stored in the keystore; their credential- and device-encrypted storage locations are
+created and the encryption policy links these keys to those directories.
+</p>
+<p>
+In the current AOSP implementation, the encryption policy is hardcoded into this
+location:
+</p>
+<p>
+<code>/system/extras/ext4_utils/ext4_crypt_init_extensions.cpp</code>
+</p>
+<p>
+It is possible to add exceptions in this file to prevent certain directories
+from being encrypted at all, by adding to the <code>directories_to_exclude</code>
+list. If modifications of this sort are made, the device
+manufacturer should include <a href="{@docRoot}security/selinux/device-policy.html">
+SELinux policies</a> that only grant access to the
+applications that need to use the unencrypted directory. This should exclude all
+untrusted applications.
+</p>
+<p>
+The only known acceptable use case for this is in support of legacy OTA
+capabilities.
+</p>
+<h3 id="supporting-direct-boot-in-system-applications">
+Supporting Direct Boot in system applications</h3>
+
+<h4 id="making-applications-direct-boot-aware">
+Making applications Direct Boot aware</h4>
+<p>
+To facilitate rapid migration of system apps, there are two new attributes that
+can be set at the application level. The
+<code>defaultToDeviceProtectedStorage</code> attribute is available only to
+system apps. The <code>directBootAware</code> attribute is available to all.
+</p>
+
+<pre>
+<application
+ android:directBootAware="true"
+ android:defaultToDeviceProtectedStorage="true">
+</pre>
+
+<p>
+The <code>directBootAware</code> attribute at the application level is shorthand for marking
+all components in the app as being encryption aware.
+</p>
+<p>
+The <code>defaultToDeviceProtectedStorage</code> attribute redirects the default
+app storage location to point at DE storage instead of pointing at CE storage.
+System apps using this flag must carefully audit all data stored in the default
+location, and change the paths of sensitive data to use CE storage. Device
+manufacturers using this option should carefully inspect the data that they are
+storing to ensure that it contains no personal information.
+</p>
+<p>
+When running in this mode, the following System APIs are
+available to explicitly manage a Context backed by CE storage when needed, which
+are equivalent to their Device Protected counterparts.
+</p>
+
+<ul>
+<li><code>Context.createCredentialProtectedStorageContext()</code>
+<li><code>Context.isCredentialProtectedStorage()</code></li>
+</ul>
+<h4 id="supporting-multiple-users">Supporting multiple users</h4>
+<p>
+Each user in a multi-user environment gets a separate encryption key. Every user
+gets two keys: a DE and a CE key. User 0 must log into the device first as it is
+a special user. This is pertinent for <a
+href="{@docRoot}devices/tech/admin/index.html">Device
+Administration</a> uses.
+</p>
+<p>
+Crypto-aware applications interact across users in this manner:
+<code>INTERACT_ACROSS_USERS</code> and <code>INTERACT_ACROSS_USERS_FULL</code>
+allow an application to act across all the users on the device. However, those
+apps will be able to access only CE-encrypted directories for users that are
+already unlocked.
+</p>
+<p>
+An application may be able to interact freely across the DE areas, but the fact
+that one user is unlocked does not mean all users on the device are unlocked. The
+application should check this status before trying to access these areas.
+</p>
+<p>
+Each work profile user ID also gets two keys: DE and CE. When the work challenge
+is met, the profile user is unlocked and the Keymaster (in TEE) can provide the
+profile’s TEE key.
+</p>
+<h3 id="handling-updates">Handling updates</h3>
+<p>
+The recovery partition is unable to access the DE protected storage on the
+userdata partition. Devices implementing FBE are strongly recommended to support
+OTA using the upcoming A/B system updates. As the OTA can be applied during
+normal operation, there is no need for recovery to access data on the encrypted drive.
+</p>
+<p>
+If a legacy OTA solution that requires recovery to access the OTA file on the
+userdata partition is used:
+</p>
+
+<ul>
+<li>Create a top level directory (for example “misc_ne”) in the userdata
+partition.
+<li>Add this top level directory to the encryption policy exception (see <a
+href="#encryption-policy">Encryption policy</a> above).
+<li>Create a directory within this folder to hold OTA packages.
+<li>Add an SELinux rule and file contexts to control access to this folder and
+its contents. Only the process or applications receiving OTA updates should be
+able to read and write to this folder.
+<li>No other application or process should have access to this folder.</li>
+</ul>
+
+<h2 id="validation">Validation</h2>
+<p>
+To ensure the implemented version of the feature works as intended, employ the
+many <a href="https://android.googlesource.com/platform/cts/+/nougat-cts-release/hostsidetests/appsecurity/src/android/appsecurity/cts/DirectBootHostTest.java">
+CTS encryption tests</a>.
+</p>
+<p>
+Once the kernel builds for your board, also test it by building an x86 kernel
+that can be run under QEMU. This allows the implementation to be tested using
+<a href="https://git.kernel.org/cgit/fs/ext2/xfstests-bld.git/plain/quick-start?h=META">
+xfstest</a>. Test the crypto support using:
+</p>
+<pre>
+$ kvm-xfstests -c encrypt -g auto
+</pre>
+<p>
+In addition, device manufacturers may perform these manual tests. On a device
+with FBE enabled:
+</p>
+
+<ul>
+ <li>Check that <code>ro.crypto.state</code> exists
+ <ul>
+    <li>Ensure <code>ro.crypto.state</code> is set to <code>encrypted</code></li>
+ </ul>
+ </li>
+ <li>Check that <code>ro.crypto.type</code> exists
+ <ul>
+ <li>Ensure <code>ro.crypto.type</code> is set to <code>file</code></li>
+ </ul>
+ </li>
+</ul>
+
+<p>
+Additionally, testers can boot a <code>userdebug</code> instance with a lockscreen set on the
+primary user. Then <code>adb</code> shell into the device and use
+<code>su</code> to become root. Make sure <code>/data/data</code> contains
+encrypted filenames; if it does not, something is wrong.
+</p>
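<p>
The property checks above can be scripted. The sketch below assumes the output
of <code>adb shell getprop</code> has already been collected into a dictionary;
<code>check_fbe</code> is a hypothetical helper, not part of AOSP.
</p>

```python
# Sketch: validate FBE-related system properties collected from a device,
# e.g. via `adb shell getprop`. check_fbe is a hypothetical helper.

def check_fbe(props):
    """Return a list of problems found in the crypto properties."""
    problems = []
    if props.get("ro.crypto.state") != "encrypted":
        problems.append("ro.crypto.state is missing or not 'encrypted'")
    if props.get("ro.crypto.type") != "file":
        problems.append("ro.crypto.type is missing or not 'file'")
    return problems

# A correctly configured FBE device reports no problems:
print(check_fbe({"ro.crypto.state": "encrypted", "ro.crypto.type": "file"}))
```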
+<h2 id="aosp-implementation-details">AOSP implementation details</h2>
+<p>
+This section provides details on the AOSP implementation and describes how
+file-based encryption works. It should not be necessary for device manufacturers
+to make any changes here to use FBE and Direct Boot on their devices.
+</p>
+<h3 id="ext4-encryption">ext4 encryption</h3>
+<p>
+The AOSP implementation uses ext4 encryption in the kernel and is configured to:
+</p><ul>
+<li>Encrypt file contents with AES-256 in XTS mode
+<li>Encrypt file names with AES-256 in CBC-CTS mode</li></ul>
+<h3 id="key-derivation">Key derivation</h3>
+<p>
+Disk encryption keys, which are 512-bit AES-XTS keys, are stored encrypted
+by another key (a 256-bit AES-GCM key) held in the TEE. To use this TEE key,
+three inputs must be supplied:
+</p><ul>
+<li>The auth token
+<li>The stretched credential
+<li>The “secdiscardable hash”</li></ul>
+<p>
+The <em>auth token</em> is a cryptographically authenticated token generated by
+<a
+href="{@docRoot}security/authentication/gatekeeper.html">Gatekeeper</a>
+when a user successfully logs in. The TEE will refuse to use the key unless the
+correct auth token is supplied. If the user has no credential, then no auth
+token is used or needed.
+</p>
+<p>
+The <em>stretched credential</em> is the user credential after salting and
+stretching with the <code>scrypt</code> algorithm. The credential is actually
+hashed once in the lock settings service before being passed to
+<code>vold</code> for passing to <code>scrypt</code>. This is cryptographically
+bound to the key in the TEE with all the guarantees that apply to
+<code>KM_TAG_APPLICATION_ID</code>. If the user has no credential, then no
+stretched credential is used or needed.
+</p>
+<p>
+The <em>secdiscardable hash</em> is a 512-bit hash of a random 16 KB file
+stored alongside other information used to reconstruct the key, such as the
+seed. This file is securely deleted when the key is deleted or encrypted in a
+new way; this added protection ensures an attacker must recover every bit
+of this securely deleted file to recover the key. This is cryptographically
+bound to the key in the TEE with all the guarantees that apply to
+<code>KM_TAG_APPLICATION_ID</code>. See the <a
+href="{@docRoot}security/keystore/implementer-ref.html">Keystore
+Implementer's Reference</a>.
diff --git a/src/security/encryption/full-disk.jd b/src/security/encryption/full-disk.jd
new file mode 100644
index 0000000..8a59825
--- /dev/null
+++ b/src/security/encryption/full-disk.jd
@@ -0,0 +1,630 @@
+page.title=Full-Disk Encryption
+@jd:body
+
+<!--
+ Copyright 2014 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<div id="qv-wrapper">
+ <div id="qv">
+ <h2>In this document</h2>
+ <ol id="auto-toc">
+ </ol>
+ </div>
+</div>
+
+<p>Full-disk encryption is the process of encoding all user data on an Android device using an
+encrypted key. Once a device is encrypted, all user-created data is
+automatically encrypted before committing it to disk and all reads
+automatically decrypt data before returning it to the calling process.</p>
+
+<p>
+Full-disk encryption was introduced in Android 4.4; Android 5.0 introduced
+these new features:</p>
+<ul>
+ <li>Created fast encryption, which only encrypts used blocks on the data partition
+to avoid first boot taking a long time. Only ext4 and f2fs filesystems
+currently support fast encryption.
+ <li>Added the <a href="{@docRoot}devices/storage/config.html"><code>forceencrypt</code>
+ fstab flag</a> to encrypt on first boot.
+ <li>Added support for patterns and encryption without a password.
+ <li>Added hardware-backed storage of the encryption key using Trusted
+ Execution Environment’s (TEE) signing capability (such as in a TrustZone).
+ See <a href="#storing_the_encrypted_key">Storing the encrypted key</a> for more
+ details.
+</ul>
+
+<p class="caution"><strong>Caution:</strong> Devices upgraded to Android 5.0 and then
+encrypted may be returned to an unencrypted state by factory data reset. New Android 5.0
+devices encrypted at first boot cannot be returned to an unencrypted state.</p>
+
+<h2 id=how_android_encryption_works>How Android full-disk encryption works</h2>
+
+<p>Android full-disk encryption is based on <code>dm-crypt</code>, which is a kernel
+feature that works at the block device layer. Because of
+this, encryption works with embedded MultiMediaCard (eMMC) and
+similar flash devices that present themselves to the kernel as block
+devices. Encryption is not possible with YAFFS, which talks directly to a raw
+NAND flash chip.</p>
+
+<p>The encryption algorithm is 128-bit Advanced Encryption Standard (AES) with
+cipher-block chaining (CBC) and ESSIV:SHA256. The master key is encrypted with
+128-bit AES via calls to the OpenSSL library. You must use 128 bits or more for
+the key (with 256 being optional).</p>
+
+<p class="note"><strong>Note:</strong> OEMs can use 128-bit or higher to encrypt the master key.</p>
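<p>
For reference, a <code>dm-crypt</code> mapping using this cipher spec looks
like the following table line. This is an illustrative sketch only: the sector
count, key, and device path are placeholders, and on Android the mapping is
created by <code>vold</code>, not by hand.
</p>

```
# <start> <size-in-sectors> crypt <cipher> <key-hex> <iv-offset> <device> <offset>
0 4194304 crypt aes-cbc-essiv:sha256 0123456789abcdef0123456789abcdef 0 /dev/block/mmcblk0p30 0
```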
+
+<p>In the Android 5.0 release, there are four kinds of encryption states: </p>
+
+<ul>
+ <li>default
+ <li>PIN
+ <li>password
+ <li>pattern
+</ul>
+
+<p>Upon first boot, the device creates a randomly generated 128-bit master key
+and then hashes it with a default password and stored salt. The default password
+is "default_password". The resultant hash is also signed through a TEE (such as
+TrustZone), which uses a hash of the signature to encrypt the master key.</p>
+
+<p>You can find the default password defined in the Android Open Source Project <a
+href="https://android.googlesource.com/platform/system/vold/+/master/cryptfs.c">cryptfs.c</a>
+file.</p>
+
+<p>When the user sets the PIN/pass or password on the device, only the 128-bit key
+is re-encrypted and stored. (That is, user PIN/pass/pattern changes do NOT cause
+re-encryption of userdata.) Note that
+<a href="http://developer.android.com/guide/topics/admin/device-admin.html">managed devices</a>
+may be subject to PIN, pattern, or password restrictions.</p>
+
+<p>Encryption is managed by <code>init</code> and <code>vold</code>.
+<code>init</code> calls <code>vold</code>, and <code>vold</code> sets properties
+to trigger events in <code>init</code>. Other parts of the system
+also look at the properties to conduct tasks such as report status, ask for a
+password, or prompt to factory reset in the case of a fatal error. To invoke
+encryption features in <code>vold</code>, the system uses the command line tool
+<code>vdc</code>’s <code>cryptfs</code> commands: <code>checkpw</code>,
+<code>restart</code>, <code>enablecrypto</code>, <code>changepw</code>,
+<code>cryptocomplete</code>, <code>verifypw</code>, <code>setfield</code>,
+<code>getfield</code>, <code>mountdefaultencrypted</code>, <code>getpwtype</code>,
+<code>getpw</code>, and <code>clearpw</code>.</p>
+
+<p>In order to encrypt, decrypt or wipe <code>/data</code>, <code>/data</code>
+must not be mounted. However, in order to show any user interface (UI), the
+framework must start and the framework requires <code>/data</code> to run. To
+resolve this conundrum, a temporary filesystem is mounted on <code>/data</code>.
+This allows Android to prompt for passwords, show progress, or suggest a data
+wipe as needed. It does impose the limitation that in order to switch from the
+temporary filesystem to the true <code>/data</code> filesystem, the system must
+stop every process with open files on the temporary filesystem and restart those
+processes on the real <code>/data</code> filesystem. To do this, all services
+must be in one of three groups: <code>core</code>, <code>main</code>, and
+<code>late_start</code>.</p>
+
+<ul>
+ <li><code>core</code>: Never shut down after starting.
+ <li><code>main</code>: Shut down and then restart after the disk password is entered.
+ <li><code>late_start</code>: Does not start until after <code>/data</code> has been decrypted and mounted.
+</ul>
+
+<p>To trigger these actions, the <code>vold.decrypt</code> property is set to
+<a href="https://android.googlesource.com/platform/system/vold/+/master/cryptfs.c">various strings</a>.
+To kill and restart services, the <code>init</code> commands are:</p>
+
+<ul>
+ <li><code>class_reset</code>: Stops a service but allows it to be restarted with class_start.
+ <li><code>class_start</code>: Restarts a service.
+ <li><code>class_stop</code>: Stops a service and adds a <code>SVC_DISABLED</code> flag.
+ Stopped services do not respond to <code>class_start</code>.
+</ul>
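<p>
As an illustrative <code>init.rc</code>-style fragment (paraphrased, not copied
from AOSP), property triggers of this shape connect <code>vold.decrypt</code>
values to the class commands above:
</p>

```
# Hypothetical init.rc fragment showing the wiring described above
on property:vold.decrypt=trigger_reset_main
    class_reset main

on property:vold.decrypt=trigger_restart_framework
    class_start main
    class_start late_start
```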
+
+<h2 id=flows>Flows</h2>
+
+<p>There are four flows for an encrypted device. A device is encrypted just once
+and then follows a normal boot flow. </p>
+
+<ul>
+ <li>Encrypt a previously unencrypted device:
+ <ul>
+ <li>Encrypt a new device with <code>forceencrypt</code>: Mandatory encryption
+ at first boot (starting in Android L).
+ <li>Encrypt an existing device: User-initiated encryption (Android K and earlier).
+ </ul>
+ <li>Boot an encrypted device:
+ <ul>
+ <li>Starting an encrypted device with no password: Booting an encrypted device that
+ has no set password (relevant for devices running Android 5.0 and later).
+ <li>Starting an encrypted device with a password: Booting an encrypted device that
+ has a set password.
+ </ul>
+</ul>
+
+<p>In addition to these flows, the device can also fail to encrypt <code>/data</code>.
+Each of these flows is explained in detail below.</p>
+
+
+<h3 id=encrypt_a_new_device_with_forceencrypt>Encrypt a new device with forceencrypt</h3>
+
+<p>This is the normal first boot for an Android 5.0 device.</p>
+
+<ol>
+ <li><strong>Detect unencrypted filesystem with <code>forceencrypt</code> flag</strong>
+
+<p>
+<code>/data</code> is not encrypted but needs to be because <code>forceencrypt</code> mandates it.
+Unmount <code>/data</code>.</p>
+
+ <li><strong>Start encrypting <code>/data</code></strong>
+
+<p><code>vold.decrypt = "trigger_encryption"</code> triggers <code>init.rc</code>,
+which will cause <code>vold</code> to encrypt <code>/data</code> with no password.
+(None is set because this should be a new device.)</p>
+
+
+ <li><strong>Mount tmpfs</strong>
+
+
+<p><code>vold</code> mounts a tmpfs <code>/data</code> (using the tmpfs options from
+<code>ro.crypto.tmpfs_options</code>) and sets the property <code>vold.encrypt_progress</code> to 0.
+<code>vold</code> prepares the tmpfs <code>/data</code> for booting an encrypted system and sets the
+property <code>vold.decrypt</code> to <code>trigger_restart_min_framework</code>.
+</p>
+
+ <li><strong>Bring up framework to show progress</strong>
+
+
+<p>Because the device has virtually no data to encrypt, the progress bar will
+often not actually appear because encryption happens so quickly. See
+<a href="#encrypt_an_existing_device">Encrypt an existing device</a> for more
+details about the progress UI.</p>
+
+ <li><strong>When <code>/data</code> is encrypted, take down the framework</strong>
+
+<p><code>vold</code> sets <code>vold.decrypt</code> to
+<code>trigger_default_encryption</code> which starts the
+<code>defaultcrypto</code> service. (This starts the flow below for mounting a
+default encrypted userdata.) <code>trigger_default_encryption</code> checks the
+encryption type to see if <code>/data</code> is encrypted with or without a
+password. Because Android 5.0 devices are encrypted on first boot, there should
+be no password set; therefore we decrypt and mount <code>/data</code>.</p>
+
+ <li><strong>Mount <code>/data</code></strong>
+
+<p><code>init</code> then mounts <code>/data</code> on a tmpfs RAMDisk using
+parameters it picks up from <code>ro.crypto.tmpfs_options</code>, which is set
+in <code>init.rc</code>.</p>
+
+ <li><strong>Start framework</strong>
+
+<p>Set <code>vold</code> to <code>trigger_restart_framework</code>, which
+continues the usual boot process.</p>
+</ol>
+
+<h3 id=encrypt_an_existing_device>Encrypt an existing device</h3>
+
+<p>This is what happens when you encrypt an unencrypted Android K or earlier
+device that has been migrated to L.</p>
+
+<p>This process is user-initiated and is referred to as “inplace encryption” in
+the code. When a user selects to encrypt a device, the UI makes sure the
+battery is fully charged and the AC adapter is plugged in so there is enough
+power to finish the encryption process.</p>
+
+<p class="warning"><strong>Warning:</strong> If the device runs out of power and shuts down before it has finished
+encrypting, file data is left in a partially encrypted state. The device must
+be factory reset and all data is lost.</p>
+
+<p>To enable inplace encryption, <code>vold</code> starts a loop to read each
+sector of the real block device and then write it
+to the crypto block device. <code>vold</code> checks to see if a sector is in
+use before reading and writing it, which makes
+encryption much faster on a new device that has little to no data. </p>
+
+<p><strong>State of device</strong>: Set <code>ro.crypto.state = "unencrypted"</code>
+and execute the <code>on nonencrypted</code> <code>init</code> trigger to continue booting.</p>
+
+<ol>
+ <li><strong>Check password</strong>
+
+<p>The UI calls <code>vold</code> with the command <code>cryptfs enablecrypto inplace
+passwd</code>, where <code>passwd</code> is the user's lock screen password.</p>
+
+ <li><strong>Take down the framework</strong>
+
+<p><code>vold</code> checks for errors, returns -1 if it can't encrypt, and
+prints a reason in the log. If it can encrypt, it sets the property <code>vold.decrypt</code>
+to <code>trigger_shutdown_framework</code>. This causes <code>init.rc</code> to
+stop services in the classes <code>late_start</code> and <code>main</code>. </p>
+
+ <li><strong>Create a crypto footer</strong></li>
+ <li><strong>Create a breadcrumb file</strong></li>
+ <li><strong>Reboot</strong></li>
+ <li><strong>Detect breadcrumb file</strong></li>
+ <li><strong>Start encrypting <code>/data</code></strong>
+
+<p><code>vold</code> then sets up the crypto mapping, which creates a virtual crypto block device
+that maps onto the real block device but encrypts each sector as it is written,
+and decrypts each sector as it is read. <code>vold</code> then creates and writes
+out the crypto metadata.</p>
+
+ <li><strong>While it’s encrypting, mount tmpfs</strong>
+
+<p><code>vold</code> mounts a tmpfs <code>/data</code> (using the tmpfs options
+from <code>ro.crypto.tmpfs_options</code>) and sets the property
+<code>vold.encrypt_progress</code> to 0. <code>vold</code> prepares the tmpfs
+<code>/data</code> for booting an encrypted system and sets the property
+<code>vold.decrypt</code> to <code>trigger_restart_min_framework</code>.</p>
+
+ <li><strong>Bring up framework to show progress</strong>
+
+<p><code>trigger_restart_min_framework</code> causes <code>init.rc</code> to
+start the <code>main</code> class of services. When the framework sees that
+<code>vold.encrypt_progress</code> is set to 0, it brings up the progress bar
+UI, which queries that property every five seconds and updates a progress bar.
+The encryption loop updates <code>vold.encrypt_progress</code> every time it
+encrypts another percent of the partition.</p>
+
+  <li><strong>When <code>/data</code> is encrypted, update the crypto footer</strong>
+
+<p>When <code>/data</code> is successfully encrypted, <code>vold</code> clears
+the flag <code>ENCRYPTION_IN_PROGRESS</code> in the metadata.</p>
+
+<p>When the device is successfully unlocked, the password is then used to
+encrypt the master key and the crypto footer is updated.</p>
+
+<p> If the reboot fails for some reason, <code>vold</code> sets the property
+<code>vold.encrypt_progress</code> to <code>error_reboot_failed</code> and
+the UI should display a message asking the user to press a button to
+reboot. This is not expected to ever occur.</p>
+</ol>
+
+<h3 id=starting_an_encrypted_device_with_default_encryption>
+Starting an encrypted device with default encryption</h3>
+
+<p>This is what happens when you boot up an encrypted device with no password.
+Because Android 5.0 devices are encrypted on first boot, there should be no set
+password and therefore this is the <em>default encryption</em> state.</p>
+
+<ol>
+ <li><strong>Detect encrypted <code>/data</code> with no password</strong>
+
+<p>Detect that the Android device is encrypted because <code>/data</code>
+cannot be mounted and one of the flags <code>encryptable</code> or
+<code>forceencrypt</code> is set.</p>
+
+<p><code>vold</code> sets <code>vold.decrypt</code> to
+<code>trigger_default_encryption</code>, which starts the
+<code>defaultcrypto</code> service. <code>trigger_default_encryption</code>
+checks the encryption type to see if <code>/data</code> is encrypted with or
+without a password. </p>
+
+ <li><strong>Decrypt /data</strong>
+
+<p>Creates the <code>dm-crypt</code> device over the block device so the device
+is ready for use.</p>
+
+ <li><strong>Mount /data</strong>
+
+<p><code>vold</code> then mounts the decrypted real <code>/data</code> partition
+and then prepares the new partition. It sets the property
+<code>vold.post_fs_data_done</code> to 0 and then sets <code>vold.decrypt</code>
+to <code>trigger_post_fs_data</code>. This causes <code>init.rc</code> to run
+its <code>post-fs-data</code> commands. They will create any necessary directories
+or links and then set <code>vold.post_fs_data_done</code> to 1.</p>
+
+<p>Once <code>vold</code> sees the 1 in that property, it sets the property
+<code>vold.decrypt</code> to: <code>trigger_restart_framework.</code> This
+causes <code>init.rc</code> to start services in class <code>main</code>
+again and also start services in class <code>late_start</code> for the first
+time since boot.</p>
+
+ <li><strong>Start framework</strong>
+
+<p>Now the framework boots all its services using the decrypted <code>/data</code>,
+and the system is ready for use.</p>
+</ol>
+
+<h3 id=starting_an_encrypted_device_without_default_encryption>
+Starting an encrypted device without default encryption</h3>
+
+<p>This is what happens when you boot up an encrypted device that has a set
+password. The device’s password can be a PIN, pattern, or password.</p>
+
+<ol>
+ <li><strong>Detect encrypted device with a password</strong>
+
+<p>Detect that the Android device is encrypted because the flag
+<code>ro.crypto.state = "encrypted"</code> is set.</p>
+
+<p><code>vold</code> sets <code>vold.decrypt</code> to
+<code>trigger_restart_min_framework</code> because <code>/data</code> is
+encrypted with a password.</p>
+
+ <li><strong>Mount tmpfs</strong>
+
+<p><code>init</code> sets five properties to save the initial mount options
+given for <code>/data</code> with parameters passed from <code>init.rc</code>.
+<code>vold</code> uses these properties to set up the crypto mapping:</p>
+
+<ol>
+ <li><code>ro.crypto.fs_type</code>
+ <li><code>ro.crypto.fs_real_blkdev</code>
+ <li><code>ro.crypto.fs_mnt_point</code>
+ <li><code>ro.crypto.fs_options</code>
+  <li><code>ro.crypto.fs_flags</code> (ASCII 8-digit hex number preceded by 0x)
+ </ol>
+
+ <li><strong>Start framework to prompt for password</strong>
+
+<p>The framework starts up and sees that <code>vold.decrypt</code> is set to
+<code>trigger_restart_min_framework</code>. This tells the framework that it is
+booting on a tmpfs <code>/data</code> disk and it needs to get the user password.</p>
+
+<p>First, however, it needs to make sure that the disk was properly encrypted. It
+sends the command <code>cryptfs cryptocomplete</code> to <code>vold</code>.
+<code>vold</code> returns 0 if encryption was completed successfully, -1 on internal error, or
+-2 if encryption was not completed successfully. <code>vold</code> determines
+this by looking in the crypto metadata for the <code>CRYPTO_ENCRYPTION_IN_PROGRESS</code>
+flag. If it's set, the encryption process was interrupted, and there is no
+usable data on the device. If <code>vold</code> returns an error, the UI should
+display a message to the user to reboot and factory reset the device, and give
+the user a button to press to do so.</p>
+
+ <li><strong>Decrypt data with password</strong>
+
+<p>Once <code>cryptfs cryptocomplete</code> is successful, the framework
+displays a UI asking for the disk password. The UI checks the password by
+sending the command <code>cryptfs checkpw</code> to <code>vold</code>. If the
+password is correct (which is determined by successfully mounting the
+decrypted <code>/data</code> at a temporary location, then unmounting it),
+<code>vold</code> saves the name of the decrypted block device in the property
+<code>ro.crypto.fs_crypto_blkdev</code> and returns status 0 to the UI. If the
+password is incorrect, it returns -1 to the UI.</p>
+
+ <li><strong>Stop framework</strong>
+
+<p>The UI puts up a crypto boot graphic and then calls <code>vold</code> with
+the command <code>cryptfs restart</code>. <code>vold</code> sets the property
+<code>vold.decrypt</code> to <code>trigger_reset_main</code>, which causes
+<code>init.rc</code> to do <code>class_reset main</code>. This stops all services
+in the main class, which allows the tmpfs <code>/data</code> to be unmounted. </p>
+
+ <li><strong>Mount <code>/data</code></strong>
+
+<p><code>vold</code> then mounts the decrypted real <code>/data</code> partition
+and prepares the new partition (which may never have been prepared if
+it was encrypted with the wipe option, which is not supported on first
+release). It sets the property <code>vold.post_fs_data_done</code> to 0 and then
+sets <code>vold.decrypt</code> to <code>trigger_post_fs_data</code>. This causes
+<code>init.rc</code> to run its <code>post-fs-data</code> commands. They will
+create any necessary directories or links and then set
+<code>vold.post_fs_data_done</code> to 1. Once <code>vold</code> sees the 1 in
+that property, it sets the property <code>vold.decrypt</code> to
+<code>trigger_restart_framework</code>. This causes <code>init.rc</code> to start
+services in class <code>main</code> again and also start services in class
+<code>late_start</code> for the first time since boot.</p>
+
+ <li><strong>Start full framework</strong>
+
+<p>Now the framework boots all its services using the decrypted <code>/data</code>
+filesystem, and the system is ready for use.</p>
+</ol>
+
+<h3 id=failure>Failure</h3>
+
+<p>A device can fail to decrypt for a few reasons. The device
+starts with the normal series of steps to boot:</p>
+
+<ol>
+ <li>Detect encrypted device with a password
+ <li>Mount tmpfs
+ <li>Start framework to prompt for password
+</ol>
+
+<p>But after the framework opens, the device can encounter some errors:</p>
+
+<ul>
+ <li>Password matches but cannot decrypt data
+ <li>User enters wrong password 30 times
+</ul>
+
+<p>If these errors are not resolved, <strong>prompt user to factory wipe</strong>:</p>
+
+<p>If <code>vold</code> detects an error during the encryption process, and if
+no data has been destroyed yet and the framework is up, <code>vold</code> sets
+the property <code>vold.encrypt_progress</code> to <code>error_not_encrypted</code>.
+The UI prompts the user to reboot and alerts them that the encryption process
+never started. If the error occurs after the framework has been torn down, but
+before the progress bar UI is up, <code>vold</code> will reboot the system. If
+the reboot fails, it sets <code>vold.encrypt_progress</code> to
+<code>error_shutting_down</code> and returns -1; but there will not be anything
+to catch the error. This is not expected to happen.</p>
+
+<p>If <code>vold</code> detects an error during the encryption process, it sets
+<code>vold.encrypt_progress</code> to <code>error_partially_encrypted</code>
+and returns -1. The UI should then display a message saying the encryption
+failed and provide a button for the user to factory reset the device. </p>
+
+<h2 id=storing_the_encrypted_key>Storing the encrypted key</h2>
+
+<p>The encrypted key is stored in the crypto metadata. Hardware backing is
+implemented by using Trusted Execution Environment’s (TEE) signing capability.
+Previously, we encrypted the master key with a key generated by applying scrypt
+to the user's password and the stored salt. In order to make the key resilient
+against off-box attacks, we extend this algorithm by signing the resultant key
+with a stored TEE key. The resultant signature is then turned into an appropriate
+length key by one more application of scrypt. This key is then used to encrypt
+and decrypt the master key. To store this key:</p>
+
+<ol>
+ <li>Generate random 16-byte disk encryption key (DEK) and 16-byte salt.
+ <li>Apply scrypt to the user password and the salt to produce 32-byte intermediate
+key 1 (IK1).
+ <li>Pad IK1 with zero bytes to the size of the hardware-bound private key (HBK).
+Specifically, we pad as: 00 || IK1 || 00..00; one zero byte, 32 IK1 bytes, 223
+zero bytes.
+ <li>Sign padded IK1 with HBK to produce 256-byte IK2.
+ <li>Apply scrypt to IK2 and salt (same salt as step 2) to produce 32-byte IK3.
+ <li>Use the first 16 bytes of IK3 as KEK and the last 16 bytes as IV.
+ <li>Encrypt DEK with AES_CBC, with key KEK, and initialization vector IV.
+</ol>
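<p>
The seven steps above can be sketched in Python. <code>hashlib.scrypt</code> is
used with illustrative cost parameters, and <code>hbk_sign</code> is a stand-in
for the hardware-bound signing operation; the real parameters and signing live
in <code>vold</code> and the TEE.
</p>

```python
import hashlib
import os

def derive_kek_iv(password, salt, hbk_sign):
    """Derive the 16-byte KEK and IV per the steps above (illustrative)."""
    # Step 2: scrypt(password, salt) -> 32-byte IK1 (cost params illustrative)
    ik1 = hashlib.scrypt(password, salt=salt, n=16384, r=8, p=1, dklen=32)
    # Step 3: pad as 00 || IK1 || 00..00 to the 256-byte HBK block size
    padded = b"\x00" + ik1 + b"\x00" * 223
    # Step 4: sign padded IK1 with the hardware-bound key -> 256-byte IK2
    ik2 = hbk_sign(padded)
    # Step 5: scrypt(IK2, same salt) -> 32-byte IK3
    ik3 = hashlib.scrypt(ik2, salt=salt, n=16384, r=8, p=1, dklen=32)
    # Step 6: first 16 bytes = KEK, last 16 bytes = IV (for AES-CBC in step 7)
    return ik3[:16], ik3[16:]

# Stand-in for the TEE signing operation (a real device signs with the HBK):
fake_sign = lambda msg: hashlib.sha512(msg).digest() * 4  # 256 bytes
kek, iv = derive_kek_iv(b"user-password", os.urandom(16), fake_sign)
```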
+
+<h2 id=changing_the_password>Changing the password</h2>
+
+<p>When a user elects to change or remove their password in settings, the UI sends
+the command <code>cryptfs changepw</code> to <code>vold</code>, and
+<code>vold</code> re-encrypts the disk master key with the new password.</p>
+
+<h2 id=encryption_properties>Encryption properties</h2>
+
+<p><code>vold</code> and <code>init</code> communicate with each other by
+setting properties. Here is a list of available properties for encryption.</p>
+
+<h3 id=vold_properties>Vold properties</h3>
+
+<table>
+ <tr>
+ <th>Property</th>
+ <th>Description</th>
+ </tr>
+ <tr>
+ <td><code>vold.decrypt trigger_encryption</code></td>
+ <td>Encrypt the drive with no
+ password.</td>
+ </tr>
+ <tr>
+ <td><code>vold.decrypt trigger_default_encryption</code></td>
+ <td>Check the drive to see if it is encrypted with no password.
+If it is, decrypt and mount it,
+else set <code>vold.decrypt</code> to <code>trigger_restart_min_framework</code>.</td>
+ </tr>
+ <tr>
+ <td><code>vold.decrypt trigger_reset_main</code></td>
+    <td>Set by <code>vold</code> to shut down the UI asking for the disk password.</td>
+ </tr>
+ <tr>
+ <td><code>vold.decrypt trigger_post_fs_data</code></td>
+    <td>Set by <code>vold</code> to prepare <code>/data</code> with necessary directories and other items.</td>
+ </tr>
+ <tr>
+ <td><code>vold.decrypt trigger_restart_framework</code></td>
+    <td>Set by <code>vold</code> to start the real framework and all services.</td>
+ </tr>
+ <tr>
+ <td><code>vold.decrypt trigger_shutdown_framework</code></td>
+    <td>Set by <code>vold</code> to shut down the full framework to start encryption.</td>
+ </tr>
+ <tr>
+ <td><code>vold.decrypt trigger_restart_min_framework</code></td>
+    <td>Set by <code>vold</code> to start the progress bar UI for encryption or
+    prompt for the password, depending on the value of <code>ro.crypto.state</code>.</td>
+ </tr>
+ <tr>
+ <td><code>vold.encrypt_progress</code></td>
+ <td>When the framework starts up,
+if this property is set, enter
+the progress bar UI mode.</td>
+ </tr>
+ <tr>
+ <td><code>vold.encrypt_progress 0 to 100</code></td>
+ <td>The progress bar UI should
+display the percentage value set.</td>
+ </tr>
+ <tr>
+ <td><code>vold.encrypt_progress error_partially_encrypted</code></td>
+ <td>The progress bar UI should display a message that the encryption failed, and
+give the user an option to
+factory reset the device.</td>
+ </tr>
+ <tr>
+ <td><code>vold.encrypt_progress error_reboot_failed</code></td>
+ <td>The progress bar UI should display a message saying encryption
+ completed, and give the user a button to reboot the device. This error
+ is not expected to happen.</td>
+ </tr>
+ <tr>
+ <td><code>vold.encrypt_progress error_not_encrypted</code></td>
+ <td>The progress bar UI should
+display a message saying an error
+occurred, no data was encrypted or
+lost, and give the user a button to reboot the system.</td>
+ </tr>
+ <tr>
+ <td><code>vold.encrypt_progress error_shutting_down</code></td>
+    <td>The progress bar UI is not running, so it is unclear what will respond
+    to this error, and it is not expected to occur.</td>
+ </tr>
+ <tr>
+ <td><code>vold.post_fs_data_done 0</code></td>
+ <td>Set by <code>vold</code> just before setting <code>vold.decrypt</code>
+ to <code>trigger_post_fs_data</code>.</td>
+ </tr>
+ <tr>
+ <td><code>vold.post_fs_data_done 1</code></td>
+    <td>Set by <code>init.rc</code> just after finishing the task
+    <code>post-fs-data</code>.</td>
+ </tr>
+</table>
+<h3 id=init_properties>init properties</h3>
+
+<table>
+ <tr>
+ <th>Property</th>
+ <th>Description</th>
+ </tr>
+ <tr>
+ <td><code>ro.crypto.fs_crypto_blkdev</code></td>
+ <td>Set by the <code>vold</code> command <code>checkpw</code> for later use
+ by the <code>vold</code> command <code>restart</code>.</td>
+ </tr>
+ <tr>
+    <td><code>ro.crypto.state unencrypted</code></td>
+    <td>Set by <code>init</code> to say this system is running with an unencrypted
+    <code>/data</code>.</td>
+  </tr>
+  <tr>
+    <td><code>ro.crypto.state encrypted</code></td>
+    <td>Set by <code>init</code> to say this system is running with an encrypted
+    <code>/data</code>.</td>
+  </tr>
+ <tr>
+ <td><p><code>ro.crypto.fs_type<br>
+ ro.crypto.fs_real_blkdev <br>
+ ro.crypto.fs_mnt_point<br>
+ ro.crypto.fs_options<br>
+ ro.crypto.fs_flags <br>
+ </code></p></td>
+    <td>These five properties are set by
+    <code>init</code> when it tries to mount <code>/data</code> with parameters passed in from
+    <code>init.rc</code>. <code>vold</code> uses these to set up the crypto mapping.</td>
+ </tr>
+ <tr>
+ <td><code>ro.crypto.tmpfs_options</code></td>
+    <td>Set by <code>init.rc</code> with the options <code>init</code> should use when
+    mounting the tmpfs <code>/data</code> filesystem.</td>
+ </tr>
+</table>
+<h2 id=init_actions>Init actions</h2>
+
+<pre>
+on post-fs-data
+on nonencrypted
+on property:vold.decrypt=trigger_reset_main
+on property:vold.decrypt=trigger_post_fs_data
+on property:vold.decrypt=trigger_restart_min_framework
+on property:vold.decrypt=trigger_restart_framework
+on property:vold.decrypt=trigger_shutdown_framework
+on property:vold.decrypt=trigger_encryption
+on property:vold.decrypt=trigger_default_encryption
+</pre>
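The property tables and init actions above form a handshake between <code>vold</code> and <code>init</code>. A toy Python model of the <code>post-fs-data</code> exchange (a plain dict stands in for the Android property store; this is an illustration, not real system code):

```python
# Toy model of the vold <-> init property handshake described above.
# A dict replaces the Android property store; the property names and
# values come from the tables in this section.
props = {}

def vold_request_post_fs_data():
    # vold: about to hand /data preparation over to init.rc.
    props["vold.post_fs_data_done"] = "0"
    props["vold.decrypt"] = "trigger_post_fs_data"

def init_rc_on_post_fs_data():
    # init.rc: run post-fs-data commands (create directories and links
    # on /data), then acknowledge by flipping the property to 1.
    if props.get("vold.decrypt") == "trigger_post_fs_data":
        props["vold.post_fs_data_done"] = "1"

def vold_continue_boot():
    # vold: once the ack is visible, ask init to start the framework.
    if props.get("vold.post_fs_data_done") == "1":
        props["vold.decrypt"] = "trigger_restart_framework"

vold_request_post_fs_data()
init_rc_on_post_fs_data()
vold_continue_boot()
print(props["vold.decrypt"])
```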
diff --git a/src/security/encryption/index.jd b/src/security/encryption/index.jd
index 0cad3ec..4e15495 100644
--- a/src/security/encryption/index.jd
+++ b/src/security/encryption/index.jd
@@ -1,4 +1,4 @@
-page.title=Full Disk Encryption
+page.title=Encryption
@jd:body
<!--
@@ -25,496 +25,47 @@
</div>
</div>
-<h2 id=what_is_encryption>What is full disk encryption?</h2>
-
-<p>Full disk encryption is the process of encoding all user data on an Android device using an
-encrypted key. Once a device is encrypted, all user-created data is
-automatically encrypted before committing it to disk and all reads
-automatically decrypt data before returning it to the calling process.</p>
-
-<h2 id=what_we’ve_added_for_android_l>What we’ve added for Android 5.0</h2>
-
-<ul>
- <li>Created fast encryption, which only encrypts used blocks on the data partition
-to avoid first boot taking a long time. Only ext4 and f2fs filesystems
-currently support fast encryption.
- <li>Added the <code>forceencrypt</code> flag to encrypt on first boot.
- <li>Added support for patterns and encryption without a password.
- <li>Added hardware-backed storage of the encryption key using Trusted
- Execution Environment’s (TEE) signing capability (such as in a TrustZone).
- See <a href="#storing_the_encrypted_key">Storing the encrypted key</a> for more
- details.
-</ul>
-
-<p class="caution"><strong>Caution:</strong> Devices upgraded to Android 5.0 and then
-encrypted may be returned to an unencrypted state by factory data reset. New Android 5.0
-devices encrypted at first boot cannot be returned to an unencrypted state.</p>
-
-<h2 id=how_android_encryption_works>How Android full disk encryption works</h2>
-
-<p>Android full disk encryption is based on <code>dm-crypt</code>, which is a kernel
-feature that works at the block device layer. Because of
-this, encryption works with Embedded MultiMediaCard<strong> (</strong>eMMC) and
-similar flash devices that present themselves to the kernel as block
-devices. Encryption is not possible with YAFFS, which talks directly to a raw
-NAND flash chip. </p>
-
-<p>The encryption algorithm is 128 Advanced Encryption Standard (AES) with
-cipher-block chaining (CBC) and ESSIV:SHA256. The master key is encrypted with
-128-bit AES via calls to the OpenSSL library. You must use 128 bits or more for
-the key (with 256 being optional). </p>
-
-<p class="note"><strong>Note:</strong> OEMs can use 128-bit or higher to encrypt the master key.</p>
-
-<p>In the Android 5.0 release, there are four kinds of encryption states: </p>
-
-<ul>
- <li>default
- <li>PIN
- <li>password
- <li>pattern
-</ul>
-
-<p>Upon first boot, the device creates a randomly generated 128-bit master key
-and then hashes it with a default password and stored salt. The default password is: "default_password"
-However, the resultant hash is also signed through a TEE (such as TrustZone),
-which uses a hash of the signature to encrypt the master key.</p>
-
-<p>You can find the default password defined in the Android Open Source Project <a
-href="https://android.googlesource.com/platform/system/vold/+/master/cryptfs.c">cryptfs.c</a>
-file.</p>
-
-<p>When the user sets the PIN/pass or password on the device, only the 128-bit key
-is re-encrypted and stored. (ie. user PIN/pass/pattern changes do NOT cause
-re-encryption of userdata.) Note that
-<a href="http://developer.android.com/guide/topics/admin/device-admin.html">managed device</a>
-may be subject to PIN, pattern, or password restrictions.</p>
-
-<p>Encryption is managed by <code>init</code> and <code>vold</code>. <code>init</code> calls <code>vold</code>, and vold sets properties to trigger events in init. Other parts of the system
-also look at the properties to conduct tasks such as report status, ask for a
-password, or prompt to factory reset in the case of a fatal error. To invoke
-encryption features in <code>vold</code>, the system uses the command line tool <code>vdc</code>’s <code>cryptfs</code> commands: <code>checkpw</code>, <code>restart</code>, <code>enablecrypto</code>, <code>changepw</code>, <code>cryptocomplete</code>, <code>verifypw</code>, <code>setfield</code>, <code>getfield</code>, <code>mountdefaultencrypted</code>, <code>getpwtype</code>, <code>getpw</code>, and <code>clearpw</code>.</p>
-
-<p>In order to encrypt, decrypt or wipe <code>/data</code>, <code>/data</code> must not be mounted. However, in order to show any user interface (UI), the
-framework must start and the framework requires <code>/data</code> to run. To resolve this conundrum, a temporary filesystem is mounted on <code>/data</code>. This allows Android to prompt for passwords, show progress, or suggest a data
-wipe as needed. It does impose the limitation that in order to switch from the
-temporary filesystem to the true <code>/data</code> filesystem, the system must stop every process with open files on the
-temporary filesystem and restart those processes on the real <code>/data</code> filesystem. To do this, all services must be in one of three groups: <code>core</code>, <code>main</code>, and <code>late_start</code>.</p>
-
-<ul>
- <li><code>core</code>: Never shut down after starting.
- <li><code>main</code>: Shut down and then restart after the disk password is entered.
- <li><code>late_start</code>: Does not start until after <code>/data</code> has been decrypted and mounted.
-</ul>
-
-<p>To trigger these actions, the <code>vold.decrypt</code> property is set to <a href="https://android.googlesource.com/platform/system/vold/+/master/cryptfs.c">various strings</a>. To kill and restart services, the <code>init</code> commands are:</p>
-
-<ul>
- <li><code>class_reset</code>: Stops a service but allows it to be restarted with class_start.
- <li><code>class_start</code>: Restarts a service.
- <li><code>class_stop</code>: Stops a service and adds a <code>SVC_DISABLED</code> flag. Stopped services do not respond to <code>class_start</code>.
-</ul>
-
-<h2 id=flows>Flows</h2>
-
-<p>There are four flows for an encrypted device. A device is encrypted just once
-and then follows a normal boot flow. </p>
-
-<ul>
- <li>Encrypt a previously unencrypted device:
- <ul>
- <li>Encrypt a new device with <code>forceencrypt</code>: Mandatory encryption at first boot (starting in Android L).
- <li>Encrypt an existing device: User-initiated encryption (Android K and earlier).
- </ul>
- <li>Boot an encrypted device:
- <ul>
- <li>Starting an encrypted device with no password: Booting an encrypted device that
-has no set password (relevant for devices running Android 5.0 and later).
- <li> Starting an encrypted device with a password: Booting an encrypted device that
-has a set password.
- </ul>
-</ul>
-
-<p>In addition to these flows, the device can also fail to encrypt <code>/data</code>. Each of the flows are explained in detail below.</p>
-
-<h3 id=encrypt_a_new_device_with_forceencrypt>Encrypt a new device with /forceencrypt</h3>
-
-<p>This is the normal first boot for an Android 5.0 device. </p>
-
-<ol>
- <li><strong>Detect unencrypted filesystem with <code>/forceencrypt</code> flag</strong>
-
<p>
-<code>/data</code> is not encrypted but needs to be because <code>/forceencrypt</code> mandates it.
-Unmount <code>/data</code>.</p>
-
- <li><strong>Start encrypting <code>/data</code></strong>
-
-<p><code>vold.decrypt = "trigger_encryption"</code> triggers <code>init.rc</code>, which will cause <code>vold</code> to encrypt <code>/data</code> with no password. (None is set because this should be a new device.)</p>
-
-
- <li><strong>Mount tmpfs</strong>
-
-
-<p><code>vold</code> mounts a tmpfs <code>/data</code> (using the tmpfs options from
-<code>ro.crypto.tmpfs_options</code>) and sets the property <code>vold.encrypt_progress</code> to 0.
-<code>vold</code> prepepares the tmpfs <code>/data</code> for booting an encrypted system and sets the
-property <code>vold.decrypt</code> to: <code>trigger_restart_min_framework</code>
+Encryption is the process of encoding all user data on an Android device using
+symmetric encryption keys. Once a device is encrypted, all user-created data is
+automatically encrypted before committing it to disk and all reads automatically
+decrypt data before returning it to the calling process. Encryption ensures that
+even if an unauthorized party tries to access the data, they won’t be able to
+read it.
</p>
-
- <li><strong>Bring up framework to show progress</strong>
-
-
-<p>Because the device has virtually no data to encrypt, the progress bar will
-often not actually appear because encryption happens so quickly. See <a href="#encrypt_an_existing_device">Encrypt an existing device</a> for more details about the progress UI. </p>
-
- <li><strong>When <code>/data</code> is encrypted, take down the framework</strong>
-
-<p><code>vold</code> sets <code>vold.decrypt</code> to
-<code>trigger_default_encryption</code> which starts the
-<code>defaultcrypto</code> service. (This starts the flow below for mounting a
-default encrypted userdata.) <code>trigger_default_encryption</code> checks the
-encryption type to see if <code>/data</code> is encrypted with or without a
-password. Because Android 5.0 devices are encrypted on first boot, there should
-be no password set; therefore we decrypt and mount <code>/data</code>.</p>
-
- <li><strong>Mount <code>/data</code></strong>
-
-<p><code>init</code> then mounts <code>/data</code> on a tmpfs RAMDisk using parameters it picks up from <code>ro.crypto.tmpfs_options</code>, which is set in <code>init.rc</code>.</p>
-
- <li><strong>Start framework</strong>
-
-<p>Set <code>vold</code> to <code>trigger_restart_framework</code>, which continues the usual boot process.</p>
-</ol>
-
-<h3 id=encrypt_an_existing_device>Encrypt an existing device</h3>
-
-<p>This is what happens when you encrypt an unencrypted Android K or earlier
-device that has been migrated to L. Note that this is the same flow as used in
-K.</p>
-
-<p>This process is user-initiated and is referred to as “inplace encryption” in
-the code. When a user selects to encrypt a device, the UI makes sure the
-battery is fully charged and the AC adapter is plugged in so there is enough
-power to finish the encryption process.</p>
-
-<p class="warning"><strong>Warning:</strong> If the device runs out of power and shuts down before it has finished
-encrypting, file data is left in a partially encrypted state. The device must
-be factory reset and all data is lost.</p>
-
-<p>To enable inplace encryption, <code>vold</code> starts a loop to read each sector of the real block device and then write it
-to the crypto block device. <code>vold</code> checks to see if a sector is in use before reading and writing it, which makes
-encryption much faster on a new device that has little to no data. </p>
-
-<p><strong>State of device</strong>: Set <code>ro.crypto.state = "unencrypted"</code> and execute the <code>on nonencrypted</code> <code>init</code> trigger to continue booting.</p>
-
-<ol>
- <li><strong>Check password</strong>
-
-<p>The UI calls <code>vold</code> with the command <code>cryptfs enablecrypto inplace</code> where <code>passwd</code> is the user's lock screen password.</p>
-
- <li><strong>Take down the framework</strong>
-
-<p><code>vold</code> checks for errors, returns -1 if it can't encrypt, and prints a reason in the
-log. If it can encrypt, it sets the property <code>vold.decrypt</code> to <code>trigger_shutdown_framework</code>. This causes <code>init.rc</code> to stop services in the classes <code>late_start</code> and <code>main</code>. </p>
-
- <li><strong>Unmount <code>/data</code></strong>
-
-<p><code>vold</code> unmounts <code>/mnt/sdcard</code> and then <code>/data</code>.</p>
-
- <li><strong>Start encrypting <code>/data</code></strong>
-
-<p><code>vold</code> then sets up the crypto mapping, which creates a virtual crypto block device
-that maps onto the real block device but encrypts each sector as it is written,
-and decrypts each sector as it is read. <code>vold</code> then creates and writes out the crypto metadata.</p>
-
- <li><strong>While it’s encrypting, mount tmpfs</strong>
-
-<p><code>vold</code> mounts a tmpfs <code>/data</code> (using the tmpfs options from <code>ro.crypto.tmpfs_options</code>) and sets the property <code>vold.encrypt_progress</code> to 0. <code>vold</code> prepares the tmpfs <code>/data</code> for booting an encrypted system and sets the property <code>vold.decrypt</code> to: <code>trigger_restart_min_framework</code> </p>
-
- <li><strong>Bring up framework to show progress</strong>
-
-<p><code>trigger_restart_min_framework </code>causes <code>init.rc</code> to start the <code>main</code> class of services. When the framework sees that <code>vold.encrypt_progress</code> is set to 0, it brings up the progress bar UI, which queries that property
-every five seconds and updates a progress bar. The encryption loop updates <code>vold.encrypt_progress</code> every time it encrypts another percent of the partition. </p>
-
- <li><strong>When<code> /data</code> is encrypted, reboot</strong>
-
-<p>When <code>/data</code> is successfully encrypted, <code>vold</code> clears the flag <code>ENCRYPTION_IN_PROGRESS</code> in the metadata and reboots the system. </p>
-
-<p> If the reboot fails for some reason, <code>vold</code> sets the property <code>vold.encrypt_progress</code> to <code>error_reboot_failed</code> and the UI should display a message asking the user to press a button to
-reboot. This is not expected to ever occur.</p>
-</ol>
-
-<h3 id=starting_an_encrypted_device_with_default_encryption>Starting an encrypted device with default encryption</h3>
-
-<p>This is what happens when you boot up an encrypted device with no password.
-Because Android 5.0 devices are encrypted on first boot, there should be no set
-password and therefore this is the <em>default encryption</em> state.</p>
-
-<ol>
- <li><strong>Detect encrypted <code>/data</code> with no password</strong>
-
-<p>Detect that the Android device is encrypted because <code>/data</code>
-cannot be mounted and one of the flags <code>encryptable</code> or
-<code>forceencrypt</code> is set.</p>
-
-<p><code>vold</code> sets <code>vold.decrypt</code> to <code>trigger_default_encryption</code>, which starts the <code>defaultcrypto</code> service. <code>trigger_default_encryption</code> checks the encryption type to see if <code>/data</code> is encrypted with or without a password. </p>
-
- <li><strong>Decrypt /data</strong>
-
-<p>Creates the <code>dm-crypt</code> device over the block device so the device is ready for use.</p>
-
- <li><strong>Mount /data</strong>
-
-<p><code>vold</code> then mounts the decrypted real <code>/data </code>partition and then prepares the new partition. It sets the property <code>vold.post_fs_data_done</code> to 0 and then sets <code>vold.decrypt</code> to <code>trigger_post_fs_data</code>. This causes <code>init.rc</code> to run its <code>post-fs-data</code> commands. They will create any necessary directories or links and then set <code>vold.post_fs_data_done</code> to 1.</p>
-
-<p>Once <code>vold</code> sees the 1 in that property, it sets the property <code>vold.decrypt</code> to: <code>trigger_restart_framework.</code> This causes <code>init.rc</code> to start services in class <code>main</code> again and also start services in class <code>late_start</code> for the first time since boot.</p>
-
- <li><strong>Start framework</strong>
-
-<p>Now the framework boots all its services using the decrypted <code>/data</code>, and the system is ready for use.</p>
-</ol>
-
-<h3 id=starting_an_encrypted_device_without_default_encryption>Starting an encrypted device without default encryption</h3>
-
-<p>This is what happens when you boot up an encrypted device that has a set
-password. The device’s password can be a pin, pattern, or password. </p>
-
-<ol>
- <li><strong>Detect encrypted device with a password</strong>
-
-<p>Detect that the Android device is encrypted because the flag <code>ro.crypto.state = "encrypted"</code></p>
-
-<p><code>vold</code> sets <code>vold.decrypt</code> to <code>trigger_restart_min_framework</code> because <code>/data</code> is encrypted with a password.</p>
-
- <li><strong>Mount tmpfs</strong>
-
-<p><code>init</code> sets five properties to save the initial mount options given for <code>/data</code> with parameters passed from <code>init.rc</code>. <code>vold</code> uses these properties to set up the crypto mapping:</p>
-
-<ol>
- <li><code>ro.crypto.fs_type</code>
- <li><code>ro.crypto.fs_real_blkdev</code>
- <li><code>ro.crypto.fs_mnt_point</code>
- <li><code>ro.crypto.fs_options</code>
- <li><code>ro.crypto.fs_flags </code>(ASCII 8-digit hex number preceded by 0x)
- </ol>
-
- <li><strong>Start framework to prompt for password</strong>
-
-<p>The framework starts up and sees that <code>vold.decrypt</code> is set to <code>trigger_restart_min_framework</code>. This tells the framework that it is booting on a tmpfs <code>/data</code> disk and it needs to get the user password.</p>
-
-<p>First, however, it needs to make sure that the disk was properly encrypted. It
-sends the command <code>cryptfs cryptocomplete</code> to <code>vold</code>. <code>vold</code> returns 0 if encryption was completed successfully, -1 on internal error, or
--2 if encryption was not completed successfully. <code>vold</code> determines this by looking in the crypto metadata for the <code>CRYPTO_ENCRYPTION_IN_PROGRESS</code> flag. If it's set, the encryption process was interrupted, and there is no
-usable data on the device. If <code>vold</code> returns an error, the UI should display a message to the user to reboot and
-factory reset the device, and give the user a button to press to do so.</p>
-
- <li><strong>Decrypt data with password</strong>
-
-<p>Once <code>cryptfs cryptocomplete</code> is successful, the framework displays a UI asking for the disk password. The
-UI checks the password by sending the command <code>cryptfs checkpw</code> to <code>vold</code>. If the password is correct (which is determined by successfully mounting the
-decrypted <code>/data</code> at a temporary location, then unmounting it), <code>vold</code> saves the name of the decrypted block device in the property <code>ro.crypto.fs_crypto_blkdev</code> and returns status 0 to the UI. If the password is incorrect, it returns -1 to
-the UI.</p>
-
- <li><strong>Stop framework</strong>
-
-<p>The UI puts up a crypto boot graphic and then calls <code>vold</code> with the command <code>cryptfs restart</code>. <code>vold</code> sets the property <code>vold.decrypt</code> to <code>trigger_reset_main</code>, which causes <code>init.rc</code> to do <code>class_reset main</code>. This stops all services in the main class, which allows the tmpfs <code>/data</code> to be unmounted. </p>
-
- <li><strong>Mount <code>/data</code></strong>
-
-<p><code>vold</code> then mounts the decrypted real <code>/data </code>partition and prepares the new partition (which may never have been prepared if
-it was encrypted with the wipe option, which is not supported on first
-release). It sets the property <code>vold.post_fs_data_done</code> to 0 and then sets <code>vold.decrypt</code> to <code>trigger_post_fs_data</code>. This causes <code>init.rc</code> to run its <code>post-fs-data</code> commands. They will create any necessary directories or links and then set <code>vold.post_fs_data_done</code> to 1. Once <code>vold</code> sees the 1 in that property, it sets the property <code>vold.decrypt</code> to <code>trigger_restart_framework</code>. This causes <code>init.rc</code> to start services in class <code>main</code> again and also start services in class <code>late_start</code> for the first time since boot.</p>
-
- <li><strong>Start full framework</strong>
-
-<p>Now the framework boots all its services using the decrypted <code>/data</code> filesystem, and the system is ready for use.</p>
-</ol>
-
-<h3 id=failure>Failure</h3>
-
-<p>A device that fails to decrypt might be awry for a few reasons. The device
-starts with the normal series of steps to boot:</p>
-
-<ol>
- <li>Detect encrypted device with a password
- <li>Mount tmpfs
- <li>Start framework to prompt for password
-</ol>
-
-<p>But after the framework opens, the device can encounter some errors:</p>
-
-<ul>
- <li>Password matches but cannot decrypt data
- <li>User enters wrong password 30 times
-</ul>
-
-<p>If these errors are not resolved, <strong>prompt user to factory wipe</strong>:</p>
-
-<p>If <code>vold</code> detects an error during the encryption process, and if no data has been
-destroyed yet and the framework is up, <code>vold</code> sets the property <code>vold.encrypt_progress </code>to <code>error_not_encrypted</code>. The UI prompts the user to reboot and alerts them the encryption process
-never started. If the error occurs after the framework has been torn down, but
-before the progress bar UI is up, <code>vold</code> will reboot the system. If the reboot fails, it sets <code>vold.encrypt_progress</code> to <code>error_shutting_down</code> and returns -1; but there will not be anything to catch the error. This is not
-expected to happen.</p>
-
-<p>If <code>vold</code> detects an error during the encryption process, it sets <code>vold.encrypt_progress</code> to <code>error_partially_encrypted</code> and returns -1. The UI should then display a message saying the encryption
-failed and provide a button for the user to factory reset the device. </p>
-
-<h2 id=storing_the_encrypted_key>Storing the encrypted key</h2>
-
-<p>The encrypted key is stored in the crypto metadata. Hardware backing is implemented by using Trusted Execution Environment’s (TEE) signing capability.
-Previously, we encrypted the master key with a key generated by applying scrypt to the user's password and the stored salt. In order to make the key resilient
-against off-box attacks, we extend this algorithm by signing the resultant key with a stored TEE key. The resultant signature is then turned into an appropriate length key by one more application of scrypt. This key is then used to encrypt and decrypt the master key. To store this key:</p>
-
-<ol>
- <li>Generate random 16-byte disk encryption key (DEK) and 16-byte salt.
- <li>Apply scrypt to the user password and the salt to produce 32-byte intermediate
-key 1 (IK1).
- <li>Pad IK1 with zero bytes to the size of the hardware-bound private key (HBK).
-Specifically, we pad as: 00 || IK1 || 00..00; one zero byte, 32 IK1 bytes, 223
-zero bytes.
- <li>Sign padded IK1 with HBK to produce 256-byte IK2.
- <li>Apply scrypt to IK2 and salt (same salt as step 2) to produce 32-byte IK3.
- <li>Use the first 16 bytes of IK3 as KEK and the last 16 bytes as IV.
- <li>Encrypt DEK with AES_CBC, with key KEK, and initialization vector IV.
-</ol>
-
-<h2 id=changing_the_password>Changing the password</h2>
-
-<p>When a user elects to change or remove their password in settings, the UI sends
-the command <code>cryptfs changepw</code> to <code>vold</code>, and <code>vold</code> re-encrypts the disk master key with the new password.</p>
-
-<h2 id=encryption_properties>Encryption properties</h2>
-
-<p><code>vold</code> and <code>init</code> communicate with each other by setting properties. Here is a list of available
-properties for encryption.</p>
-
-<h3 id=vold_properties>Vold properties </h3>
-
-<table>
- <tr>
- <th>Property</th>
- <th>Description</th>
- </tr>
- <tr>
- <td><code>vold.decrypt trigger_encryption</code></td>
- <td>Encrypt the drive with no
- password.</td>
- </tr>
- <tr>
- <td><code>vold.decrypt trigger_default_encryption</code></td>
- <td>Check the drive to see if it is encrypted with no password.
-If it is, decrypt and mount it,
-else set <code>vold.decrypt</code> to trigger_restart_min_framework.</td>
- </tr>
- <tr>
- <td><code>vold.decrypt trigger_reset_main</code></td>
- <td>Set by vold to shutdown the UI asking for the disk password.</td>
- </tr>
- <tr>
- <td><code>vold.decrypt trigger_post_fs_data</code></td>
- <td> Set by vold to prep /data with necessary directories, et al.</td>
- </tr>
- <tr>
- <td><code>vold.decrypt trigger_restart_framework</code></td>
- <td>Set by vold to start the real framework and all services.</td>
- </tr>
- <tr>
- <td><code>vold.decrypt trigger_shutdown_framework</code></td>
- <td>Set by vold to shutdown the full framework to start encryption.</td>
- </tr>
- <tr>
- <td><code>vold.decrypt trigger_restart_min_framework</code></td>
- <td>Set by vold to start the
-progress bar UI for encryption or
-prompt for password, depending on
-the value of <code>ro.crypto.state</code>.</td>
- </tr>
- <tr>
- <td><code>vold.encrypt_progress</code></td>
- <td>When the framework starts up,
-if this property is set, enter
-the progress bar UI mode.</td>
- </tr>
- <tr>
- <td><code>vold.encrypt_progress 0 to 100</code></td>
- <td>The progress bar UI should
-display the percentage value set.</td>
- </tr>
- <tr>
- <td><code>vold.encrypt_progress error_partially_encrypted</code></td>
- <td>The progress bar UI should display a message that the encryption failed, and
-give the user an option to
-factory reset the device.</td>
- </tr>
- <tr>
- <td><code>vold.encrypt_progress error_reboot_failed</code></td>
- <td>The progress bar UI should
-display a message saying encryption completed, and give the user a button to reboot the device. This error is not expected to happen.</td>
- </tr>
- <tr>
- <td><code>vold.encrypt_progress error_not_encrypted</code></td>
- <td>The progress bar UI should
-display a message saying an error
-occurred, no data was encrypted or
-lost, and give the user a button to reboot the system.</td>
- </tr>
- <tr>
- <td><code>vold.encrypt_progress error_shutting_down</code></td>
- <td>The progress bar UI is not running, so it is unclear who will respond to this error. And it should never happen anyway.</td>
- </tr>
- <tr>
- <td><code>vold.post_fs_data_done 0</code></td>
- <td>Set by <code>vold</code> just before setting <code>vold.decrypt</code> to <code>trigger_post_fs_data</code>.</td>
- </tr>
- <tr>
- <td><code>vold.post_fs_data_done 1</code></td>
- <td>Set by <code>init.rc</code> or
- <code>init.rc</code> just after finishing the task <code>post-fs-data</code>.</td>
- </tr>
-</table>
-<h3 id=init_properties>init properties</h3>
-
-<table>
- <tr>
- <th>Property</th>
- <th>Description</th>
- </tr>
- <tr>
- <td><code>ro.crypto.fs_crypto_blkdev</code></td>
- <td>Set by the <code>vold</code> command <code>checkpw</code> for later use by the <code>vold</code> command <code>restart</code>.</td>
- </tr>
- <tr>
- <td><code>ro.crypto.state unencrypted</code></td>
- <td>Set by <code>init</code> to say this system is running with an unencrypted
- <code>/data ro.crypto.state encrypted</code>. Set by <code>init</code> to say this system is running with an encrypted <code>/data</code>.</td>
- </tr>
- <tr>
- <td><p><code>ro.crypto.fs_type<br>
- ro.crypto.fs_real_blkdev <br>
- ro.crypto.fs_mnt_point<br>
- ro.crypto.fs_options<br>
- ro.crypto.fs_flags <br>
- </code></p></td>
- <td> These five properties are set by
- <code>init</code> when it tries to mount <code>/data</code> with parameters passed in from
- <code>init.rc</code>. <code>vold</code> uses these to setup the crypto mapping.</td>
- </tr>
- <tr>
- <td><code>ro.crypto.tmpfs_options</code></td>
- <td>Set by <code>init.rc</code> with the options init should use when mounting the tmpfs /data filesystem.</td>
- </tr>
-</table>
-<h2 id=init_actions>Init actions</h2>
-
-<pre>
-on post-fs-data
-on nonencrypted
-on property:vold.decrypt=trigger_reset_main
-on property:vold.decrypt=trigger_post_fs_data
-on property:vold.decrypt=trigger_restart_min_framework
-on property:vold.decrypt=trigger_restart_framework
-on property:vold.decrypt=trigger_shutdown_framework
-on property:vold.decrypt=trigger_encryption
-on property:vold.decrypt=trigger_default_encryption
-</pre>
+<p>
+Android has two methods for device encryption: full-disk encryption and
+file-based encryption.
+</p>
+<h2 id=full-disk>Full-disk encryption</h2>
+<p>
+Android 5.0 and above supports <a href="full-disk.html">full-disk encryption</a>.
+Full-disk encryption uses a single key—protected with the user’s device password—to
+protect the whole of a device’s userdata partition. Upon boot, the user must
+provide their credentials before any part of the disk is accessible.
+</p>
+<p>
+While this is great for security, it means that most of the core functionality
+of the phone is not immediately available when users reboot their device.
+Because access to their data is protected behind their single user credential,
+features like alarms could not operate, accessibility services were unavailable,
+and phones could not receive calls.
+</p>
+<h2 id=file-based>File-based encryption</h2>
+<p>
+Android 7.0 and above supports <a href="file-based.html">file-based encryption</a>.
+File-based encryption
+allows different files to be encrypted with different keys that can be unlocked
+independently. Devices that support file-based encryption can also support a new
+feature called <a
+href="https://developer.android.com/preview/features/direct-boot.html">Direct
+Boot</a> that allows encrypted devices to boot straight to the lock screen, thus
+enabling quick access to important device features like accessibility services
+and alarms.
+</p>
+<p>
+With the introduction of file-based encryption and new APIs to make
+applications aware of encryption, it is possible for apps to operate
+within a limited context before users have provided their credentials,
+while still protecting private user information.
+</p>
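To make the "different keys unlocked independently" idea concrete, here is a toy Python model (purely illustrative, not the real vold/keystore implementation; the paths are invented, and the DE/CE labels follow Direct Boot's device-encrypted vs. credential-encrypted terminology):

```python
# Toy model of file-based encryption: files belong to key classes that
# unlock independently. Device-encrypted (DE) files are readable from
# boot; credential-encrypted (CE) files only after the user unlocks.
class FbeStore:
    def __init__(self):
        self.unlocked = {"DE"}   # DE keys are available at boot
        self.files = {}          # path -> (key_class, data)

    def write(self, path, data, key_class="CE"):
        self.files[path] = (key_class, data)

    def read(self, path):
        key_class, data = self.files[path]
        if key_class not in self.unlocked:
            raise PermissionError(f"{key_class} keys not unlocked yet")
        return data

    def unlock_credential(self):
        # User entered their PIN, pattern, or password.
        self.unlocked.add("CE")

store = FbeStore()
store.write("/data/user_de/0/alarms", "07:00", key_class="DE")
store.write("/data/user/0/mail", "inbox", key_class="CE")

print(store.read("/data/user_de/0/alarms"))  # readable before unlock
store.unlock_credential()
print(store.read("/data/user/0/mail"))       # readable after unlock
```

This is why an alarm clock can fire before the lock screen is dismissed: its data lives under a key class that does not depend on the user's credential.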
diff --git a/src/security/enhancements/enhancements70.jd b/src/security/enhancements/enhancements70.jd
new file mode 100644
index 0000000..88d4763
--- /dev/null
+++ b/src/security/enhancements/enhancements70.jd
@@ -0,0 +1,53 @@
+page.title=Security Enhancements in Android 7.0
+@jd:body
+<!--
+ Copyright 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<p>Every Android release includes dozens of security enhancements to protect
+users. Here are some of the major security enhancements available in Android
+7.0:</p>
+
+<ul>
+ <li><strong>File-based encryption</strong>. Encrypting at the file level,
+ instead of encrypting the entire storage area as a single unit, better
+ isolates and protects individual users and profiles (such as personal and
+ work) on a device.</li>
+  <li><strong>Direct Boot</strong>. Enabled by file-based encryption, Direct
+  Boot allows certain apps, such as alarm clocks and accessibility features, to
+  run when the device is powered on but not unlocked.</li>
+ <li><strong>Verified Boot</strong>. Verified Boot is now strictly enforced to
+ prevent compromised devices from booting; it supports error correction to
+ improve reliability against non-malicious data corruption.</li>
+ <li><strong>SELinux</strong>. Updated SELinux configuration and increased
+ seccomp coverage further locks down the application sandbox and reduces attack
+ surface.</li>
+ <li><strong>Library load-order randomization and improved ASLR</strong>.
+ Increased randomness makes some code-reuse attacks less reliable.</li>
+ <li><strong>Kernel hardening</strong>. Added additional memory protection for
+ newer kernels by marking portions of kernel memory as read-only, restricting
+ kernel access to userspace addresses and further reducing the existing attack
+ surface.</li>
+ <li><strong>APK signature scheme v2</strong>. Introduced a whole-file signature
+ scheme that improves verification speed and strengthens integrity guarantees.</li>
+ <li><strong>Trusted CA store</strong>. To make it easier for apps to control
+ access to their secure network traffic, user-installed certificate authorities
+ and those installed through Device Admin APIs are no longer trusted by default
+ for apps targeting API Level 24+. Additionally, all new Android devices must
+ ship with the same trusted CA store.</li>
+ <li><strong>Network Security Config</strong>. Configure network security and TLS
+ through a declarative configuration file.</li>
+</ul>
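The Network Security Config item above refers to a declarative XML file shipped in the app's resources and referenced from the manifest via <code>android:networkSecurityConfig</code>. A minimal illustrative sketch (the domain name is a placeholder):

```xml
<!-- res/xml/network_security_config.xml; example.com is a placeholder -->
<network-security-config>
    <domain-config cleartextTrafficPermitted="false">
        <domain includeSubdomains="true">example.com</domain>
        <trust-anchors>
            <!-- trust only the pre-installed system CAs for this domain -->
            <certificates src="system"/>
        </trust-anchors>
    </domain-config>
</network-security-config>
```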
+
diff --git a/src/security/images/apk-before-after-signing.png b/src/security/images/apk-before-after-signing.png
new file mode 100644
index 0000000..2202f62
--- /dev/null
+++ b/src/security/images/apk-before-after-signing.png
Binary files differ
diff --git a/src/security/images/apk-integrity-protection.png b/src/security/images/apk-integrity-protection.png
new file mode 100644
index 0000000..1b6e317
--- /dev/null
+++ b/src/security/images/apk-integrity-protection.png
Binary files differ
diff --git a/src/security/images/apk-sections.png b/src/security/images/apk-sections.png
new file mode 100644
index 0000000..27b8af5
--- /dev/null
+++ b/src/security/images/apk-sections.png
Binary files differ
diff --git a/src/security/images/apk-validation-process.png b/src/security/images/apk-validation-process.png
new file mode 100644
index 0000000..c0b2b84
--- /dev/null
+++ b/src/security/images/apk-validation-process.png
Binary files differ
diff --git a/src/security/images/boot_orange.png b/src/security/images/boot_orange.png
index 1a239d3..2f82427 100644
--- a/src/security/images/boot_orange.png
+++ b/src/security/images/boot_orange.png
Binary files differ
diff --git a/src/security/images/boot_red.png b/src/security/images/boot_red.png
deleted file mode 100644
index 44deda3..0000000
--- a/src/security/images/boot_red.png
+++ /dev/null
Binary files differ
diff --git a/src/security/images/boot_red1.png b/src/security/images/boot_red1.png
new file mode 100644
index 0000000..52a5700
--- /dev/null
+++ b/src/security/images/boot_red1.png
Binary files differ
diff --git a/src/security/images/boot_red2.png b/src/security/images/boot_red2.png
new file mode 100644
index 0000000..b472338
--- /dev/null
+++ b/src/security/images/boot_red2.png
Binary files differ
diff --git a/src/security/images/boot_yellow1.png b/src/security/images/boot_yellow1.png
index b68572d..31b87c8 100644
--- a/src/security/images/boot_yellow1.png
+++ b/src/security/images/boot_yellow1.png
Binary files differ
diff --git a/src/security/images/boot_yellow2.png b/src/security/images/boot_yellow2.png
index 57732f5..1dd0c36 100644
--- a/src/security/images/boot_yellow2.png
+++ b/src/security/images/boot_yellow2.png
Binary files differ
diff --git a/src/security/images/verified_boot.png b/src/security/images/verified_boot.png
index b1c5cb6..4bad7bc 100644
--- a/src/security/images/verified_boot.png
+++ b/src/security/images/verified_boot.png
Binary files differ
diff --git a/src/security/overview/app-security.jd b/src/security/overview/app-security.jd
index 033c020..0091182 100644
--- a/src/security/overview/app-security.jd
+++ b/src/security/overview/app-security.jd
@@ -280,8 +280,35 @@
time, the installer will prompt the user asking if the application can access
the information. If the user does not grant access, the application will not be
installed.</p>
+<h2 id="certificate-authorities">Certificate authorities</h2>
+<p>
+Android includes a set of installed system Certificate Authorities, which are
+trusted system-wide. Prior to Android 7.0, device manufacturers could modify the
+set of CAs shipped on their devices. However, devices running 7.0 and above will
+have a uniform set of system CAs as modification by device manufacturers is no
+longer permitted.
+</p>
+<p>
+To be added as a new public CA to the Android stock set, the CA must complete
+the <a href="https://wiki.mozilla.org/CA:How_to_apply">Mozilla CA Inclusion
+Process</a> and then file a feature request against Android (<a
+href="https://code.google.com/p/android/issues/entry">https://code.google.com/p/android/issues/entry</a>)
+to have the CA added to the stock Android CA set in the <a
+href="https://android.googlesource.com/">Android Open Source Project</a>
+(AOSP).
+</p>
+<p>
+There are still CAs that are device-specific and should not be included in the
+core set of AOSP CAs, like carriers’ private CAs that may be needed to securely
+access components of the carrier’s infrastructure, such as SMS/MMS gateways.
+Device manufacturers are encouraged to include the private CAs only in the
+components/apps that need to trust these CAs. See <a
+href="https://developer.android.com/preview/features/security-config.html">Network
+Security Configuration</a> for more details.
+</p>
<h2 id="application-signing">Application Signing</h2>
-<p>Code signing allows developers to identify the author of the application and to
+<p><a href="{@docRoot}security/apksigning/index.html">Code signing</a>
+ allows developers to identify the author of the application and to
update their application without creating complicated interfaces and
permissions. Every application that is run on the Android platform must be
signed by the developer. Applications that attempt to install without being
diff --git a/src/security/security_toc.cs b/src/security/security_toc.cs
index d665262..21cca0b 100644
--- a/src/security/security_toc.cs
+++ b/src/security/security_toc.cs
@@ -33,6 +33,7 @@
</a>
</div>
<ul>
+ <li><a href="<?cs var:toroot ?>security/enhancements/enhancements70.html">Android 7.0</a></li>
<li><a href="<?cs var:toroot ?>security/enhancements/enhancements60.html">Android 6.0</a></li>
<li><a href="<?cs var:toroot ?>security/enhancements/enhancements50.html">Android 5.0</a></li>
<li><a href="<?cs var:toroot ?>security/enhancements/enhancements44.html">Android 4.4</a></li>
@@ -87,6 +88,16 @@
</li>
<li class="nav-section">
<div class="nav-section-header">
+ <a href="<?cs var:toroot ?>security/apksigning/index.html">
+ <span class="en">Application Signing</span>
+ </a>
+ </div>
+ <ul>
+ <li><a href="<?cs var:toroot ?>security/apksigning/v2.html">APK Signature Scheme v2</a></li>
+ </ul>
+ </li>
+ <li class="nav-section">
+ <div class="nav-section-header">
<a href="<?cs var:toroot ?>security/authentication/index.html">
<span class="en">Authentication</span>
</a>
@@ -120,9 +131,13 @@
<li class="nav-section">
<div class="nav-section-header">
<a href="<?cs var:toroot ?>security/encryption/index.html">
- <span class="en">Full Disk Encryption</span>
+ <span class="en">Encryption</span>
</a>
</div>
+ <ul>
+ <li><a href="<?cs var:toroot ?>security/encryption/file-based.html">File-Based Encryption</a></li>
+ <li><a href="<?cs var:toroot ?>security/encryption/full-disk.html">Full-Disk Encryption</a></li>
+ </ul>
</li>
<li class="nav-section">
<div class="nav-section-header">
diff --git a/src/security/selinux/index.jd b/src/security/selinux/index.jd
index 8745f8e..f331a35 100644
--- a/src/security/selinux/index.jd
+++ b/src/security/selinux/index.jd
@@ -33,20 +33,27 @@
Security-Enhanced Linux (SELinux) is used to further define the boundaries of
the Android application sandbox.</p>
-<p>As part of the Android <a href="{@docRoot}devices/tech/security/index.html">security model</a>, Android uses SELinux to enforce mandatory access control (MAC) over all
-processes, even processes running with root/superuser privileges (a.k.a. Linux
-capabilities). SELinux enhances Android security by confining privileged
-processes and automating security policy creation.</p>
+<p>As part of the Android <a href="{@docRoot}security/index.html">
+security model</a>, Android uses SELinux to enforce mandatory access control
+(MAC) over all processes, even processes running with root/superuser privileges
+(a.k.a. Linux capabilities). SELinux enhances Android security by confining
+privileged processes and automating security policy creation.</p>
-<p>Contributions to it have been made by a number of companies and organizations;
-all Android code and contributors are publicly available for review on <a href="https://android.googlesource.com/">android.googlesource.com</a>. With SELinux, Android can better protect and confine system services, control
+<p>Contributions to it have been made by a number
+of companies and organizations; all Android code
+and contributors are publicly available for review on <a
+href="https://android.googlesource.com/">android.googlesource.com</a>. With
+SELinux, Android can better protect and confine system services, control
access to application data and system logs, reduce the effects of malicious
software, and protect users from potential flaws in code on mobile devices.</p>
-<p>Android includes SELinux in enforcing mode and a corresponding security policy
-that works by default across the <a href="https://android.googlesource.com/">Android Open Source Project</a>. In enforcing mode, illegitimate actions are prevented and all attempted
-violations are logged by the kernel to <code>dmesg</code> and <code>logcat</code>. Android device manufacturers should gather information about errors so they
-may refine their software and SELinux policies before enforcing them.</p>
+<p>Android includes SELinux in enforcing mode and a
+corresponding security policy that works by default across the <a
+href="https://android.googlesource.com/">Android Open Source Project</a>. In
+enforcing mode, illegitimate actions are prevented and all attempted violations
+are logged by the kernel to <code>dmesg</code> and <code>logcat</code>. Android
+device manufacturers should gather information about errors so they may
+refine their software and SELinux policies before enforcing them.</p>
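Manufacturers gathering this information typically scan <code>dmesg</code> and <code>logcat</code> output for "avc: denied" lines. A minimal illustrative parser for such lines (the sample denial below is hypothetical, and real denials carry more fields):

```python
import re

# Matches the core fields of an SELinux "avc: denied" audit line.
DENIAL = re.compile(
    r"avc:\s+denied\s+\{ (?P<perms>[^}]+)\}.*?"
    r"scontext=(?P<scontext>\S+)\s+tcontext=(?P<tcontext>\S+)\s+tclass=(?P<tclass>\S+)")

def parse_denial(line: str):
    """Return the permissions and contexts of a denial line, or None."""
    m = DENIAL.search(line)
    if not m:
        return None
    d = m.groupdict()
    d["perms"] = d["perms"].split()
    return d

line = ('avc: denied { read } for pid=1003 comm="mediaserver" '
        'scontext=u:r:mediaserver:s0 tcontext=u:object_r:system_data_file:s0 '
        'tclass=dir')
info = parse_denial(line)
```

Aggregating parsed denials by source context and target class makes it easier to decide which policy rules or new domains a device needs.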
<h2 id=background>Background</h2>
@@ -63,38 +70,56 @@
Per-domain permissive mode also enables policy development for new services
while keeping the rest of the system enforcing.</p>
-<p>In the Android 5.0 (L) release, Android moves to full enforcement of SELinux. This builds
-upon the permissive release of 4.3 and the partial enforcement of 4.4. In
-short, Android is shifting from enforcement on a limited set of crucial domains
-(<code>installd</code>, <code>netd</code>, <code>vold</code> and <code>zygote</code>) to everything (more than 60 domains). This means manufacturers will have to
-better understand and scale their SELinux implementations to provide compatible
-devices. Understand that:</p>
+<p>In the Android 5.0 (L) release, Android moves to full enforcement of
+SELinux. This builds upon the permissive release of 4.3 and the partial
+enforcement of 4.4. In short, Android is shifting from enforcement on a
+limited set of crucial domains (<code>installd</code>, <code>netd</code>,
+<code>vold</code> and <code>zygote</code>) to everything (more than 60
+domains). This means manufacturers will have to better understand and scale
+their SELinux implementations to provide compatible devices. Understand
+that:</p>
+
<ul>
- <li> Everything is in enforcing mode in the 5.0 release
- <li> No processes other than <code>init</code> should run in the <code>init</code> domain
- <li> Any generic denial (for a block_device, socket_device, default_service, etc.)
-indicates that device needs a special domain
+<li>Everything is in enforcing mode in the 5.0 release</li>
+<li> No processes other than <code>init</code> should run in the
+<code>init</code> domain</li>
+<li> Any generic denial (for a block_device, socket_device, default_service,
+etc.) indicates that device needs a special domain</li>
</ul>
<h2 id=supporting_documentation>Supporting documentation</h2>
<p>See the documentation below for details on constructing useful policies:</p>
-<p><a href="http://seandroid.bitbucket.org/PapersandPresentations.html">http://seandroid.bitbucket.org/PapersandPresentations.html</a></p>
+<p><a href="http://seandroid.bitbucket.org/PapersandPresentations.html">
+http://seandroid.bitbucket.org/PapersandPresentations.html</a></p>
-<p><a href="https://www.codeproject.com/Articles/806904/Android-Security-Customization-with-SEAndroid">https://www.codeproject.com/Articles/806904/Android-Security-Customization-with-SEAndroid</a></p>
+<p><a href="https://www.codeproject.com/Articles/806904/Android-Security-Customization-with-SEAndroid">
+https://www.codeproject.com/Articles/806904/
+Android-Security-Customization-with-SEAndroid</a></p>
-<p><a href="https://events.linuxfoundation.org/sites/events/files/slides/abs2014_seforandroid_smalley.pdf">https://events.linuxfoundation.org/sites/events/files/slides/abs2014_seforandroid_smalley.pdf</a></p>
+<p><a href="https://events.linuxfoundation.org/sites/events/files/slides/abs2014_seforandroid_smalley.pdf">
+https://events.linuxfoundation.org/sites/events/files/slides/
+abs2014_seforandroid_smalley.pdf</a></p>
-<p><a href="https://www.internetsociety.org/sites/default/files/02_4.pdf">https://www.internetsociety.org/sites/default/files/02_4.pdf</a></p>
+<p><a href="https://www.internetsociety.org/sites/default/files/02_4.pdf">
+https://www.internetsociety.org/sites/default/files/02_4.pdf</a></p>
-<p><a href="http://freecomputerbooks.com/books/The_SELinux_Notebook-4th_Edition.pdf">http://freecomputerbooks.com/books/The_SELinux_Notebook-4th_Edition.pdf</a></p>
+<p><a href="http://freecomputerbooks.com/books/The_SELinux_Notebook-4th_Edition.pdf">
+http://freecomputerbooks.com/books/The_SELinux_Notebook-4th_Edition.pdf</a></p>
-<p><a href="http://selinuxproject.org/page/ObjectClassesPerms">http://selinuxproject.org/page/ObjectClassesPerms</a></p>
+<p><a href="http://selinuxproject.org/page/ObjectClassesPerms">
+http://selinuxproject.org/page/ObjectClassesPerms</a></p>
-<p><a href="https://www.nsa.gov/research/_files/publications/implementing_selinux.pdf">https://www.nsa.gov/research/_files/publications/implementing_selinux.pdf</a></p>
+<p><a href="https://www.nsa.gov/resources/everyone/digital-media-center/publications/research-papers/assets/files/implementing-selinux-as-linux-security-module-report.pdf">
+https://www.nsa.gov/resources/everyone/digital-media-center/publications/
+research-papers/assets/files/
+implementing-selinux-as-linux-security-module-report.pdf</a></p>
-<p><a href="https://www.nsa.gov/research/_files/publications/selinux_configuring_policy.pdf">https://www.nsa.gov/research/_files/publications/selinux_configuring_policy.pdf</a></p>
+<p><a href="https://www.nsa.gov/resources/everyone/digital-media-center/publications/research-papers/assets/files/configuring-selinux-policy-report.pdf">
+https://www.nsa.gov/resources/everyone/digital-media-center/publications/
+research-papers/assets/files/configuring-selinux-policy-report.pdf</a></p>
-<p><a href="https://www.gnu.org/software/m4/manual/index.html">https://www.gnu.org/software/m4/manual/index.html</a></p>
+<p><a href="https://www.gnu.org/software/m4/manual/index.html">
+https://www.gnu.org/software/m4/manual/index.html</a></p>
diff --git a/src/security/verifiedboot/index.jd b/src/security/verifiedboot/index.jd
index 05c034f..9bdd94d 100644
--- a/src/security/verifiedboot/index.jd
+++ b/src/security/verifiedboot/index.jd
@@ -24,12 +24,10 @@
</div>
</div>
-<h2 id="introduction">Introduction</h2>
-
<p>Android 4.4 and later supports verified boot through the optional
device-mapper-verity (dm-verity) kernel feature, which provides transparent
integrity checking of block devices. dm-verity helps prevent persistent rootkits
-that can hold onto root privileges and compromise devices. This experimental
+that can hold onto root privileges and compromise devices. This
feature helps Android users be sure that, when booting a device, it is in the
same state as when it was last used.</p>
@@ -43,7 +41,7 @@
configuration. It does this using a cryptographic hash tree. For every block
(typically 4k), there is a SHA256 hash.</p>
-<p>Since the hash values are stored in a tree of pages, only the top-level
+<p>Because the hash values are stored in a tree of pages, only the top-level
"root" hash must be trusted to verify the rest of the tree. The ability to
modify any of the blocks would be equivalent to breaking the cryptographic hash.
See the following diagram for a depiction of this structure.</p>
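The hash-tree idea can be sketched in a few lines. This is a simplified Merkle tree, not real dm-verity (which packs hashes into 4k pages and signs the root through verity metadata), but it shows why trusting only the root hash suffices:

```python
import hashlib

BLOCK_SIZE = 4096  # dm-verity typically hashes 4k blocks

def block_hashes(data: bytes) -> list:
    """SHA256 of each fixed-size block of a partition image."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(data), BLOCK_SIZE)]

def root_hash(hashes: list) -> bytes:
    """Collapse each level of hashes pairwise until one root remains."""
    while len(hashes) > 1:
        if len(hashes) % 2:  # pad odd levels by duplicating the last hash
            hashes.append(hashes[-1])
        hashes = [hashlib.sha256(hashes[i] + hashes[i + 1]).digest()
                  for i in range(0, len(hashes), 2)]
    return hashes[0]

image = b"\x00" * (BLOCK_SIZE * 4)  # a toy four-block "partition"
trusted_root = root_hash(block_hashes(image))

# Flipping a single byte in any block changes the root hash, so verifying
# blocks against a trusted root detects any modification.
tampered = b"\x01" + image[1:]
```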
@@ -61,7 +59,7 @@
<h3 id="verified-boot">Establishing a verified boot flow</h3>
<p>To greatly reduce the risk of compromise, verify the kernel using a key
-burned into the device. For details, see <a href="verified-boot.html">Verified
+burned into the device. For details, see <a href="verified-boot.html">Verifying
boot</a>.</p>
<h3 id="block-otas">Switching to block-oriented OTAs</h3>
diff --git a/src/security/verifiedboot/verified-boot.jd b/src/security/verifiedboot/verified-boot.jd
index e05e729..c13a3db 100644
--- a/src/security/verifiedboot/verified-boot.jd
+++ b/src/security/verifiedboot/verified-boot.jd
@@ -24,7 +24,6 @@
</div>
</div>
-<h2 id=objective>Objective</h2>
<p>Verified boot guarantees the integrity of the device software starting from a
hardware root of trust up to the system partition. During boot, each stage
verifies the integrity and authenticity of the next stage before executing it.</p>
@@ -35,106 +34,77 @@
encryption and Trusted Execution Environment (TEE) root of trust binding, adds
another layer of protection for user data against malicious system software.</p>
-<p>Note that if verification fails at any stage, the user must be visibly
-notified and always be given an option to continue using the device at
-their own discretion.</p>
+<p>If verification fails at any stage, the user is visibly
+notified.</p>
<h2 id=glossary>Glossary</h2>
-<p class="table-caption" id="table1">
- <strong>Table 1.</strong> Glossary of terms related to verified boot</p>
-
<table>
+ <col width="15%">
+ <col width="85%">
<tr>
- <td>
-<p><strong>Term</strong></p>
-</td>
- <td>
-<p><strong>Definition</strong></p>
-</td>
+ <th>Term</th>
+ <th>Definition</th>
</tr>
<tr>
- <td>
-<p>Boot state</p>
-</td>
- <td>
-<p>The boot state of the device describes the level of protection provided to the
-end user if the device boots. Boot states are GREEN, YELLOW, ORANGE, and RED.</p>
-</td>
+ <td>Boot state</td>
+ <td>The boot state of the device describes the level of protection provided
+ to the end user if the device boots. Boot states are GREEN, YELLOW,
+ ORANGE, and RED.</td>
</tr>
<tr>
- <td>
-<p>Device state</p>
-</td>
- <td>
-<p>The device state indicates how freely software can be flashed to the device.
-Device states are LOCKED and UNLOCKED.</p>
-</td>
+ <td>Device state</td>
+ <td>The device state indicates how freely software can be flashed to the device.
+ Device states are LOCKED and UNLOCKED.</td>
</tr>
<tr>
- <td>
-<p>dm-verity</p>
-</td>
- <td>
-<p>Linux kernel driver for verifying the integrity of a partition at runtime using
-a hash tree and signed metadata.</p>
-</td>
+ <td>dm-verity</td>
+ <td>Linux kernel driver for verifying the integrity of a partition at runtime using
+ a hash tree and signed metadata.</td>
</tr>
<tr>
- <td>
-<p>Keystore</p>
-</td>
- <td>
-<p>A keystore is a signed collection of public keys.</p>
-</td>
- </tr>
- <tr>
- <td>
-<p>OEM key</p>
-</td>
- <td>
-<p>The OEM key is a fixed, tamper-protected key available to the bootloader that
-must be used to verify the boot image.</p>
-</td>
+ <td>OEM key</td>
+ <td>The OEM key is a fixed, tamper-protected key available to the bootloader that
+ must be used to verify the boot image.</td>
</tr>
</table>
<h2 id=overview>Overview</h2>
-<p>In addition to device state - which already exists in devices and controls
-whether the bootloader allows new software to be flashed - we introduce the
-concept of boot state that indicates the state of device integrity.</p>
+<p>In addition to device state—which already exists in devices and controls
+whether the bootloader allows new software to be flashed—verified boot introduces
+the concept of boot state that indicates the state of device integrity.</p>
<h3 id=classes>Classes</h3>
-<p>We define two implementation classes for verified boot depending on how
-fully the device implements this specification, as follows:</p>
+<p>Two implementation classes exist for verified boot. Depending on how
+fully the device implements this specification, they are defined as follows:</p>
<p><strong>Class A</strong> implements verified boot with full chain of trust
-up to verified partitions. This implementation must support the LOCKED device
-state, and GREEN and RED boot states.</p>
+up to verified partitions. In other words, the implementation supports the
+LOCKED device state, and GREEN and RED boot states.</p>
-<p><strong>Class B</strong> implements Class A and additionally supports the
+<p><strong>Class B</strong> implements Class A, and additionally supports the
UNLOCKED device state and the ORANGE boot state.</p>
<h3 id=verification_keys>Verification keys</h3>
-<p>Bootloader integrity must be verified using a hardware root of trust. For
-verifying boot and recovery partitions, the bootloader must have a fixed OEM key
-available to it. It must always attempt to verify the boot partition using the OEM
+<p>Bootloader integrity is always verified using a hardware root of trust. For
+verifying boot and recovery partitions, the bootloader has a fixed OEM key
+available to it. It always attempts to verify the boot partition using the OEM
key first and tries other possible keys only if this verification fails.</p>
-<p>In Class B implementations, it must be possible for the user to flash
+<p>In Class B implementations, it is possible for the user to flash
software signed with other keys when the device is UNLOCKED. If the device is
-then LOCKED and verification using the OEM key fails, the bootloader must try
+then LOCKED and verification using the OEM key fails, the bootloader tries
verification using the certificate embedded in the partition signature.
-However, using a partition signed with anything other than the OEM key must
-result in a notification or a warning, as described below.</p>
+However, using a partition signed with anything other than the OEM key
+results in a notification or a warning, as described below.</p>
<h3 id=boot_state>Boot state</h3>
-<p>A verified device will ultimately boot into one of four states during each boot
-attempt:</p>
+<p>A verified device will ultimately boot into one of the four states during
+each boot attempt:</p>
<ul>
<li>GREEN, indicating a full chain of trust extending from the bootloader to
@@ -142,29 +112,30 @@
partitions.
<li>YELLOW, indicating the boot partition has been verified using the
-embedded certificate, and the signature is valid. The bootloader is required to
-display a notification and the fingerprint of the public key during boot.
+embedded certificate, and the signature is valid. The bootloader
+displays a warning and the fingerprint of the public key before allowing
+the boot process to continue.
<li>ORANGE, indicating a device may be freely modified. Device integrity is
-left to the user to verify out-of-band. The bootloader must display a warning
+left to the user to verify out-of-band. The bootloader displays a warning
to the user before allowing the boot process to continue.
- <li>RED, indicating the device has failed verification. The bootloader must
-display a warning to the user before allowing the boot process to continue.
+ <li>RED, indicating the device has failed verification. The bootloader
+displays a warning and stops the boot process.
</ul>
-<p>The recovery partition must also be verified in the exact same way.</p>
+<p>The recovery partition is verified in exactly the same way.</p>
<h3 id=device_state>Device state</h3>
-<p>The device is required to be in one of two states at all times:</p>
-
+<p>The possible device states and their relationship with the four verified
+boot states are:</p>
<ol>
- <li>LOCKED, indicating the device cannot be flashed. A LOCKED device must
-boot into the GREEN, YELLOW, or RED states during any attempted boot.
+ <li>LOCKED, indicating the device cannot be flashed. A LOCKED device
+boots into the GREEN, YELLOW, or RED states during any attempted boot.
<li>UNLOCKED, indicating the device may be flashed freely and is not intended
-to be verified. An UNLOCKED device must always boot to the ORANGE boot state.
+to be verified. An UNLOCKED device always boots to the ORANGE boot state.
</ol>
<img src="../images/verified_boot.png" alt="Verified boot flow" id="figure1" />
@@ -174,7 +145,7 @@
<p>Achieving full chain of trust requires support from both the bootloader and the
software on the boot partition, which is responsible for mounting further
-partitions. Verification metadata must also be appended to the system partition
+partitions. Verification metadata is also appended to the system partition
and any additional partitions whose integrity should be verified.</p>
<h3 id=bootloader_requirements>Bootloader requirements</h3>
@@ -182,7 +153,7 @@
<p>The bootloader is the guardian of the device state and is responsible for
initializing the TEE and binding its root of trust.</p>
-<p>Most importantly, the bootloader must verify the integrity of the boot and/or
+<p>Most importantly, the bootloader verifies the integrity of the boot and/or
recovery partition before moving execution to the kernel and displays the
warnings specified in the section <a href="#boot_state">Boot state</a>.</p>
@@ -190,78 +161,67 @@
<p>State changes are performed using the <code>fastboot flashing [unlock |
lock]</code> command. And to protect user data, <strong>all</strong>
-state transitions require a data wipe. Note the user must be asked for
+state transitions wipe the data partitions and ask the user for
confirmation before data is deleted.</p>
<ol>
<li>The UNLOCKED to LOCKED transition is anticipated when a user buys a used
development device. As a result of locking the device, the user should have
-confidence that it is in a state produced by the OEM.
+confidence that it is in a state produced by the device manufacturer, as long
+as there is no warning.
<li>The LOCKED to UNLOCKED transition is expected in the case where a developer
wishes to disable verification on the device.
</ol>
-<p>Requirements for <code>fastboot</code> commands that alter device state are listed in the table below:</p>
-<p class="table-caption" id="table2">
- <strong>Table 2.</strong> <code>fastboot</code> commands</p>
+<p><code>fastboot</code> commands that alter device state are listed in the table below:</p>
<table>
+ <col width="25%">
+ <col width="75%">
<tr>
- <td>
-<p><strong><code>fastboot</code> command</strong></p>
-</td>
- <td>
-<p><strong>Requirements</strong></p>
-</td>
+ <th><code>fastboot</code> command</th>
+ <th>Description</th>
</tr>
<tr>
+ <td><code>flashing lock</code></td>
<td>
-<code>
-flashing lock</code></td>
- <td>
-<ul>
- <li>Wipe data after asking the user for confirmation
- <li>Clear a write-protected bit indicating the device is unlocked
-</ul>
-</td>
+ <ul>
+ <li>Wipe data after asking the user for confirmation
+ <li>Clear a write-protected bit, readable by the bootloader, indicating
+ the device is unlocked
+ </ul>
+ </td>
</tr>
<tr>
+ <td><code>flashing unlock</code></td>
<td>
-<code>
-flashing unlock</code></td>
- <td>
-<ul>
- <li>Wipe data after asking the user for confirmation
- <li>Set a write-protected bit indicating the device is unlocked
-</ul>
-</td>
+ <ul>
+ <li>If the unlock device setting has not been enabled by the user,
+ abort unlocking
+ <li>Wipe data after asking the user for confirmation
+ <li>Set a write-protected bit, readable by the bootloader, indicating
+ the device is unlocked
+ </ul>
+ </td>
</tr>
</table>
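The lock/unlock rules in the table above can be modeled as a small state machine. The class and method names here are hypothetical illustrations, not real fastboot or bootloader code:

```python
# Illustrative model of the "flashing lock" / "flashing unlock" rules above.
class Device:
    def __init__(self):
        self.state = "LOCKED"
        self.userdata = {"photos": ["vacation.jpg"]}
        self.unlock_setting_enabled = False  # the user-facing unlock setting

    def flashing(self, cmd, user_confirms):
        if cmd == "unlock" and not self.unlock_setting_enabled:
            return "aborted"        # abort unlocking if the setting is off
        if not user_confirms:
            return "aborted"        # always ask the user before wiping
        self.userdata = {}          # every state transition wipes user data
        self.state = "UNLOCKED" if cmd == "unlock" else "LOCKED"
        return self.state
```

The model captures the two key invariants: no transition happens without confirmation, and user data never survives a transition.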
-<p>When altering partition contents, the bootloader must check the bits set by
+<p>When altering partition contents, the bootloader checks the bits set by
the above commands as described in the following table:</p>
-<p class="table-caption" id="table3">
- <strong>Table 3.</strong> <code>fastboot</code> command requirements</p>
-
<table>
+ <col width="25%">
+ <col width="75%">
<tr>
- <td>
-<p><strong><code>fastboot</code> command</strong></p>
-</td>
- <td>
-<p><strong>Requirements</strong></p>
-</td>
+ <th><code>fastboot</code> command</th>
+ <th>Description</th>
</tr>
<tr>
- <td>
-<code>
-flash <partition></code></td>
- <td>
- <p>If the bit set by <code>flashing unlock</code> is set, flash the
- partition. Otherwise, do not allow flashing.<p>
+ <td><code>flash <partition></code></td>
+ <td>If the bit set by <code>flashing unlock</code> is set, flash the
+ partition. Otherwise, do not allow flashing.
</td>
</tr>
</table>
@@ -269,14 +229,14 @@
<p>The same checks should be performed for any <code>fastboot</code> command
that can be used to change the contents of partitions.</p>
-<p class="note"><strong>Note</strong>: Class B implementations must support
+<p class="note"><strong>Note</strong>: Class B implementations support
changing device state.</p>
<h4 id=binding_tee_root_of_trust>Binding TEE root of trust</h4>
-<p>If TEE is available, the bootloader should pass the following information to
-the TEE to bind the Keymaster root of trust, after partition verification and
-TEE initialization:</p>
+<p>If TEE is available, the bootloader passes the following information to
+the TEE after boot/recovery partition verification and TEE initialization
+to bind the Keymaster root of trust:</p>
<ol>
<li>the public key that was used to sign the boot partition
@@ -290,6 +250,16 @@
device state changes, encrypted user data will no longer be accessible as the
TEE will attempt to use a different key to decrypt the data.</p>
+<h4 id="initializing-attestation">Initializing attestation</h4>
+<p>
+Similar to root of trust binding, if TEE is available, the bootloader passes it
+the following information to initialize attestation:
+</p>
+<ol>
+<li>the current boot state (GREEN, YELLOW, ORANGE)
+<li>the operating system version
+<li>the operating system security patch level
+</ol>
<h4 id=booting_into_recovery>Booting into recovery</h4>
<p>The recovery partition should be verified in exactly the same manner as the
@@ -298,14 +268,11 @@
<h4 id=comm_boot_state>Communicating boot state</h4>
<p>System software needs to be able to determine the verification status of
-previous stages. The bootloader must specify the current boot state as a
+previous stages. The bootloader specifies the current boot state as a
parameter on the kernel command line (or through the device tree under
<code>firmware/android/verifiedbootstate</code>) as described in the table
below:</p>
-<p class="table-caption" id="table4">
- <strong>Table 4.</strong> Kernel command line parameters</p>
-
<table>
<tr>
<th>Kernel command line parameter</th>
@@ -327,86 +294,140 @@
<td>Device has booted into ORANGE boot state.<br>
The device is unlocked and no verification has been performed.</td>
</tr>
- <tr>
- <td><code>androidboot.verifiedbootstate=red</code></td>
- <td>Device has booted into RED boot state.<br>
- The device has failed verification.</td>
- </tr>
</table>
+<p class="note"><strong>Note</strong>: The device cannot boot into the kernel
+when in the RED boot state, so the kernel command line never includes the
+parameter <code>androidboot.verifiedbootstate=red</code>.</p>
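System software can extract this parameter from the kernel command line. On a real device the command line is read from <code>/proc/cmdline</code>; this sketch takes it as a string for illustration:

```python
def verified_boot_state(cmdline: str):
    """Extract the androidboot.verifiedbootstate value, if present."""
    for token in cmdline.split():
        key, _, value = token.partition("=")
        if key == "androidboot.verifiedbootstate":
            return value
    # A RED device never reaches the kernel, so the value can only be
    # green, yellow, or orange -- or the parameter may be absent entirely.
    return None

state = verified_boot_state(
    "console=ttyS0 androidboot.verifiedbootstate=green quiet")
```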
<h3 id=boot_partition>Boot partition</h3>
<p>Once execution has moved to the boot partition, the software there is responsible
for setting up verification of further partitions. Due to its large size, the
-system partition typically cannot be verified similarly to previous parts but must be
+system partition typically cannot be verified similarly to previous parts but is
verified as it’s being accessed instead, using the dm-verity kernel driver or a
similar solution.</p>
<p>If dm-verity is used to verify large partitions, the signature of the verity
-metadata appended to each verified partition must be verified before the
+metadata appended to each verified partition is verified before the
partition is mounted and dm-verity is set up for it.</p>
<h4 id=managing_dm-verity>Managing dm-verity</h4>
-<p>By default, dm-verity operates in enforcing mode and verifies each block read
-from the partition against a hash tree passed to it during setup. If it
-comes across a block that fails to verify, it returns an I/O error and makes
-the block with unexpected contents inaccessible to user space. Depending on
-which block is corrupted, this may cause some of the programs that reside on
-the partition to malfunction.</p>
+<p>Implemented as a device mapper target in the kernel, dm-verity adds a layer
+on top of a partition and verifies each read block against a hash tree passed to
+it during setup. If it comes across a block that fails to verify, it makes the
+block inaccessible to user space.</p>
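The per-block check is conceptually a hash comparison; the sketch below illustrates the idea in userspace with `sha256sum` (dm-verity itself performs this in the kernel against a hash tree, not per file):

```shell
# Illustration only: compare a block's hash against the expected value,
# the way dm-verity conceptually validates each block it reads.
printf 'block data' > /tmp/block.bin
expected=$(sha256sum /tmp/block.bin | awk '{print $1}')   # in reality, from the hash tree
actual=$(sha256sum /tmp/block.bin | awk '{print $1}')     # recomputed on read
if [ "$actual" = "$expected" ]; then result=verified; else result=corrupt; fi
echo "$result"   # prints: verified
```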
-<p>If dm-verity is always enforcing against correctly signed metadata, nothing
-more needs be done. However, using an optional verity table parameter, dm-verity
-can be configured to function in a logging mode where it detects and logs
-errors but allows I/O to be completed despite them. If dm-verity is not started
-in enforcing mode for any reason, or verity metadata cannot be verified, a
-warning must be displayed to the user if the device is allowed to boot, similar
-to the one shown before booting into the RED state.</p>
-
-<img src="../images/dm-verity_mgmt.png" alt="dm-verity management" id="figure2" />
-<p class="img-caption"><strong>Figure 2.</strong> dm-verity management</p>
+<p>When mounting partitions during boot, fs_mgr sets up dm-verity for a
+partition if the <code>verify</code> fs_mgr flag is specified for it in the
+device’s fstab. Verity metadata signature is verified against the public key
+in <code>/verity_key</code>.</p>
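As an illustration, a device’s fstab might enable dm-verity for the system partition with the `verify` flag (the paths and other flags here are hypothetical; exact entries are device specific):

```
/dev/block/platform/soc.0/by-name/system  /system  ext4  ro,barrier=1  wait,verify=/dev/block/platform/soc.0/by-name/metadata
```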
<h4 id=recovering_from_dm-verity_errors>Recovering from dm-verity errors</h4>
-<p>Since the system partition is by far larger than the boot partition, the
+<p>Because the system partition is far larger than the boot partition, the
probability of verification errors is also higher. Specifically, there is a
larger probability of unintentional disk corruption, which will cause a
verification failure and can potentially make an otherwise functional device
-unusable if a critical block in the partition can no longer be accessed.</p>
+unusable if a critical block in the partition can no longer be accessed.
+Forward error correction can be used with dm-verity to mitigate this risk.
+Providing this alternative recovery path is recommended, though it comes at the
+expense of increasing metadata size.</p>
-<p>If dm-verity is always in enforcing mode, nothing further needs to be done.
-If logging mode is implemented and dm-verity detects an error while in
-enforcing mode, the device must be rebooted and dm-verity must be started in
-logging mode during all subsequent restarts until any of the verified
-partitions is reflashed or changed by an OTA update. This means dm-verity state
-should be stored in a persistent flag. When a verified partition has been
-changed, the flag must be cleared and dm-verity must again be started in
-enforcing mode. Anytime dm-verity is not started in enforcing mode, a warning
-must be shown to the user before any of the verified partitions are
-mounted. No unverified data must be allowed to leak to user space without the
-user being warned.</p>
+<p>
+By default, dm-verity is configured to function in a “restart” mode where it
+immediately restarts the device when a corrupted block is detected. This makes
+it possible to safely warn the user when the device is corrupted, or to fall
+back to device-specific recovery, if available.
+</p>
+
+<p>
+To make it possible for users to still access their data, dm-verity switches
+to I/O Error (EIO) mode if the device boots with known corruption. When in EIO mode,
+dm-verity returns I/O errors for any reads that access corrupted blocks but
+allows the device to keep running. Keeping track of the current mode requires
+persistently storing dm-verity state. The state can be managed either by fs_mgr
+or the bootloader:
+</p>
+
+<ol>
+ <li>To manage dm-verity state in fs_mgr, an additional argument is specified to
+ the <code>verify</code> flag to inform fs_mgr where to store dm-verity state.
+ For example, to store the state on the metadata partition, specify
+ <code>verify=/path/to/metadata</code>.
+ <p class="note"><strong>Note:</strong> fs_mgr switches dm-verity to EIO
+ mode after the first corruption has been detected and resets the mode
+ back to “restart” after the metadata signature of any verified partition
+ has changed.</p>
+ </li>
+ <li>Alternatively, to manage dm-verity state in the bootloader, pass the current
+ mode to the kernel in the <code>androidboot.veritymode</code> command line
+ parameter as follows:
+
+ <table>
+ <tr>
+ <th>Kernel command line parameter</th>
+ <th>Description</th>
+ </tr>
+ <tr>
+ <td><code>androidboot.veritymode=enforcing</code></td>
+ <td>Set up dm-verity in the default “restart” mode.</td>
+ </tr>
+ <tr>
+ <td><code>androidboot.veritymode=eio</code></td>
+ <td>Set up dm-verity in EIO mode.</td>
+ </tr>
+ </table>
+
+ <p class="note">
+ <strong>Note:</strong> Managing state in the bootloader also requires the kernel
+ to set the restart reason correctly when the device restarts due to dm-verity.
+ After corruption has been detected, the bootloader should switch back to
+ “restart” mode when any of the verified partitions have changed.</p>
+ </li>
+</ol>
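The bootloader-managed mode selection above can be sketched as follows (parameter values are those from the table; mapping `enforcing` to the “restart” mode follows the table’s description):

```shell
# Sketch: map androidboot.veritymode to the dm-verity mode to set up.
veritymode=eio   # sample value, as if parsed from the kernel command line
case "$veritymode" in
  enforcing) mode=restart ;;   # default "restart" mode
  eio)       mode=eio ;;       # EIO mode after known corruption
  *)         mode=restart ;;   # unknown values: fall back to the safe default
esac
echo "$mode"   # prints: eio
```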
+
+<p>
+If dm-verity is not started in the “restart” mode for any reason, or the verity
+metadata cannot be verified, a warning is displayed to the user if the device is
+allowed to boot, similar to the one shown before booting into the RED boot
+state. The user must consent for the device to continue booting in EIO mode. If
+user consent is not received within 30 seconds, the device powers off.
+</p>
+
+<p class="note">
+<strong>Note:</strong> dm-verity never starts in logging mode to prevent
+unverified data from leaking into userspace.
+</p>
+
+
<h3 id=verified_partition>Verified partition</h3>
-<p>In a verified device, the system partition must always be verified. But any
+<p>In a verified device, the system partition is always verified. But any
other read-only partition should be set to be verified as well. Any
-read-only partition that contains executable code must be verified on a
+read-only partition that contains executable code is verified on a
verified device. This includes vendor and OEM partitions, if they exist, for example.</p>
-<p>In order for a partition to be verified, signed verity metadata must be
+<p>To verify a partition, signed verity metadata is
appended to it. The metadata consists of a hash tree of the partition contents
and a verity table containing signed parameters and the root of the hash tree.
If this information is missing or invalid when dm-verity is set up for the
-partition, the user must be warned.</p>
+partition, the device doesn't boot.</p>
<h2 id=implementation_details>Implementation details</h2>
<h3 id=key_types_and_sizes>Key types and sizes</h3>
-<p>The OEM key is recommended to be an RSA key with a modulus of 2048 bits or
-higher and a public exponent of 65537 (F4). The OEM key is required to be of
+<p>The OEM key used in AOSP is an RSA key with a modulus of 2048 bits or
+higher and a public exponent of 65537 (F4), meeting the CDD requirement of
equivalent or greater strength than such a key.</p>
+<p>Note that the OEM key typically cannot be rotated if it's compromised, so
+protecting it is important, preferably using a Hardware Security Module (HSM)
+or a similar solution. It's also recommended to use a different key for each
+type of device.</p>
+
<h3 id=signature_format>Signature format</h3>
<p>The signature on an Android verifiable boot image is an ASN.1 DER-encoded
@@ -418,7 +439,7 @@
AndroidVerifiedBootSignature DEFINITIONS ::=
BEGIN
FormatVersion ::= INTEGER
- Certificate ::= Certificate OPTIONAL
+ Certificate ::= Certificate
AlgorithmIdentifier ::= SEQUENCE {
algorithm OBJECT IDENTIFIER,
parameters ANY DEFINED BY algorithm OPTIONAL
@@ -435,7 +456,7 @@
<p>The <code>Certificate</code> field is the full X.509 certificate containing
the public key used for signing, as defined by <a
href="http://tools.ietf.org/html/rfc5280#section-4.1.1.2">RFC5280</a> section
-4.1. When LOCKED, the bootloader must always use the OEM key for verification
+4.1. When LOCKED, the bootloader uses the OEM key for verification
first, and boots to YELLOW or RED states only if the embedded certificate is
used for verification instead.</p>
@@ -448,7 +469,7 @@
<h3 id=signing_and_verifying_an_image>Signing and verifying an image</h3>
-<p>To produce a signed image:</p>
+<p><strong>To produce a signed image:</strong></p>
<ol>
<li>Generate the unsigned image.
<li>0-pad the image to the next page size boundary (omit this step if already
@@ -459,9 +480,9 @@
<li>Sign the image.
</ol>
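Step 2 above (0-padding to the next page-size boundary) can be sketched as follows; the 4096-byte page size and file name are assumptions for illustration:

```shell
# Sketch: 0-pad an image file to the next page-size boundary.
page=4096                      # assumed page size
printf 'example image payload' > /tmp/boot.img
size=$(wc -c < /tmp/boot.img)
padded=$(( (size + page - 1) / page * page ))   # round up to a page multiple
truncate -s "$padded" /tmp/boot.img             # extends the file with zero bytes
echo "$padded"   # prints: 4096
```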
-<p>To verify the image:</p>
+<p><strong>To verify the image:</strong></p>
<ol>
- <li>Determine the size of the image to be loaded including padding (eg, by reading
+ <li>Determine the size of the image to be loaded including padding (e.g. by reading
a header).
<li>Read the signature located at the offset above.
<li>Validate the contents of the <code>AuthenticatedAttributes</code> field.
@@ -472,55 +493,46 @@
<h3 id=user_experience>User experience</h3>
<p>A user in the GREEN boot state should see no additional user interaction besides that
-required by normal device boot. In other boot states, the user must see a
+required by normal device boot. In ORANGE and YELLOW boot states, the user sees a
warning for at least five seconds. Should the user interact with the device during
-this time, the warning must remain visible at least 30 seconds longer, or until
-the user dismisses the warning.</p>
+this time, the warning remains visible at least 30 seconds longer, or until
+the user dismisses the warning. In the RED boot state, the warning is shown for
+at least 30 seconds, after which the device powers off.</p>
<p>Sample user interaction screens for other states are shown in the following table:</p>
-<p class="table-caption" id="table5">
- <strong>Table 5.</strong> Sample user interaction screens</p>
-
<table>
<tr>
- <td>
-<p><strong>Device state</strong></p>
-</td>
- <td>
-<p><strong>Sample UX</strong></p>
-</td>
+ <th>Device state</th>
+ <th>Sample UX</th>
+ <th> </th>
</tr>
<tr>
- <td>
-<p>YELLOW (before and after user interaction)</p>
-</td>
- <td>
-<img src="../images/boot_yellow1.png" alt="Yellow device state 1" id="figure4" />
-<p class="img-caption"><strong>Figure 3.</strong> Yellow state example 1 UI</p>
-</td>
- <td>
-<img src="../images/boot_yellow2.png" alt="Yellow device state 2" id="figure5" />
-<p class="img-caption"><strong>Figure 4.</strong> Yellow state example 2 UI</p>
-</td>
-
+ <td>YELLOW</td>
+ <td><img src="../images/boot_yellow1.png" alt="Yellow device state 1" id="figure2" />
+ <p class="img-caption"><strong>Figure 2.</strong> Before user interaction</p>
+ </td>
+ <td><img src="../images/boot_yellow2.png" alt="Yellow device state 2" id="figure3" />
+ <p class="img-caption"><strong>Figure 3.</strong> After user interaction</p>
+ </td>
</tr>
<tr>
- <td>
-<p>ORANGE</p>
-</td>
- <td>
-<img src="../images/boot_orange.png" alt="Orange device state" id="figure6" />
-<p class="img-caption"><strong>Figure 5.</strong> Orange state example UI</p>
-</td>
+ <td>ORANGE</td>
+ <td><img src="../images/boot_orange.png" alt="Orange device state" id="figure4" />
+ <p class="img-caption"><strong>Figure 4.</strong> Warning that device is
+ unlocked and can’t be verified.</p>
+ </td>
+ <td> </td>
</tr>
<tr>
- <td>
-<p>RED</p>
-</td>
- <td>
-<img src="../images/boot_red.png" alt="Red device state" id="figure7" />
-<p class="img-caption"><strong>Figure 6.</strong> Red state example UI</p>
-</td>
+ <td>RED</td>
+ <td><img src="../images/boot_red1.png" alt="Red device state" id="figure5" />
+ <p class="img-caption"><strong>Figure 5.</strong> Verified boot failure
+ warning</p>
+ </td>
+ <td><img src="../images/boot_red2.png" alt="Red device state" id="figure6" />
+ <p class="img-caption"><strong>Figure 6.</strong> Booting into EIO mode
+ warning</p>
+ </td>
</tr>
</table>
diff --git a/src/source/jack.jd b/src/source/jack.jd
index b5d61e1..538939a 100644
--- a/src/source/jack.jd
+++ b/src/source/jack.jd
@@ -44,6 +44,11 @@
Using a separate package such as ProGuard is no longer necessary.
</ul>
+<p class="note">Note that beginning in Android 7.0 (N), Jack supports code coverage with JaCoCo.
+See <a href="https://android.googlesource.com/platform/prebuilts/sdk/+/master/tools/README-jack-code-coverage.md">
+Code Coverage with JaCoCo</a> and <a href="https://developer.android.com/preview/j8-jack.html">
+Java 8 Language Features</a> for details.</p>
+
<img src="{@docRoot}images/jack-overview.png" height="75%" width="75%" alt="Jack overview" />
<p class="img-caption"><strong>Figure 1. </strong>Jack overview</p>
@@ -66,9 +71,13 @@
<h2 id=using_jack_in_your_android_build>Using Jack in your Android build</h2>
-<p>You don’t have to do anything differently to use Jack — just use your standard
-makefile commands to compile the tree or your project. Jack is the default
-Android build toolchain for M.</p>
+<div class="note">For instructions on using Jack in Android 7.0 (N) and later, see the <a
+href="https://android.googlesource.com/platform/prebuilts/sdk/+/master/tools/README-jack-server.md">Jack
+server documentation</a>. For Android 6.0 (M), use the instructions in this section.</div>
+
+<p>You don’t have to do anything differently to use Jack — just use your
+standard makefile commands to compile the tree or your project. Jack is the
+default Android build toolchain for M.</p>
<p>The first time Jack is used, it launches a local Jack compilation server on
your computer:</p>
diff --git a/src/source/read-bug-reports.jd b/src/source/read-bug-reports.jd
index 0f82f3d..595d3bb 100644
--- a/src/source/read-bug-reports.jd
+++ b/src/source/read-bug-reports.jd
@@ -96,6 +96,9 @@
...</pre></p>
</div>
</div>
+<p> </p>
+<p>For other useful event log tags, refer to
+<a href="https://android.googlesource.com/platform/frameworks/base/+/master/services/core/java/com/android/server/EventLogTags.logtags">/services/core/java/com/android/server/EventLogTags.logtags</a>.</p>
<h2 id="anrs-deadlocks">ANRs and deadlocks</h2>
<p>Bugreports can help you identify what's causing
@@ -121,7 +124,7 @@
</div>
</div>
<p></p>
-<p>You can also grep for <code>ANR in</code> in the <code>logcat</code>log,
+<p>You can also grep for <code>ANR in</code> in the <code>logcat</code> log,
which contains more information about what was using CPU at the time of the ANR.
</p>
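For example (the log line below is fabricated for illustration; in practice you would grep the real logcat section of a bugreport):

```shell
# Sketch: grep for ANR entries in a logcat capture.
printf '10-01 18:12:44.364  1234  1234 E ActivityManager: ANR in com.example.app\n' > /tmp/logcat.txt
grep 'ANR in' /tmp/logcat.txt
matches=$(grep -c 'ANR in' /tmp/logcat.txt)
echo "$matches"   # prints: 1
```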
@@ -800,7 +803,7 @@
</div>
</div>
-<h2 id="monitor contention">Monitor Contention</h2>
+<h2 id="monitor contention">Monitor contention</h2>
<p>Monitor contention logging can sometimes indicate actual monitor contention,
but most often indicates the system is so loaded that everything has slowed down.
You might see long monitor events logged by ART in system or event log.</p>
@@ -811,7 +814,7 @@
<p>In the event log:</p>
<p><pre>10-01 18:12:44.364 29761 29914 I dvm_lock_sample: [com.google.android.youtube,0,pool-3-thread-9,3914,ScheduledTaskMaster.java,138,SQLiteClosable.java,52,100]</pre></p>
-<h2 id="background-compilation">Background Compilation</h2>
+<h2 id="background-compilation">Background compilation</h2>
<p>Compilation can be expensive and load the device.</p>
<div class="toggle-content closed">
@@ -937,6 +940,24 @@
10-18 15:36:37.660 3283 3283 I screen_toggled: 2</pre></p>
</div>
</div>
+<p></p>
+<p>Bug reports also contain statistics about wake locks, a mechanism used by
+application developers to indicate their application needs to have the device
+stay on. (For details on wake locks, refer to
+<a href="https://developer.android.com/reference/android/os/PowerManager.WakeLock.html">PowerManager.WakeLock</a>
+and <a href="https://developer.android.com/training/scheduling/wakelock.html#cpu">Keep
+the CPU on</a>.)</p>
+
+<p>The aggregated wake lock duration statistics track <strong>only</strong> the
+time a wake lock is actually responsible for keeping the device awake and
+<strong>do not</strong> include time with the screen on. In addition, if
+multiple wake locks are held simultaneously, the wake lock duration time is
+distributed across those wake locks.</p>
+
+<p>For more help visualizing power status, use
+<a href="https://github.com/google/battery-historian">Battery Historian</a>, a
+Google open source tool to analyze battery consumers using Android bugreport
+files.</p>
<h2 id="packages">Packages</h2>
<p>The DUMP OF SERVICE package contains application versions (and other useful
@@ -1132,5 +1153,29 @@
Proc #21: cch+6 B/ /CE trm: 0 995:com.google.android.partnersetup/u0a18 (cch-empty)></pre></p>
</div>
</div>
-</body>
-</html>
\ No newline at end of file
+
+<h2 id=scans>Scans</h2>
+<p>Use the following steps to identify applications performing excessive
+Bluetooth Low Energy (BLE) scans:</p>
+<ul>
+<li>Find log messages for <code>BluetoothLeScanner</code>:
+<pre>
+$ grep 'BluetoothLeScanner' ~/downloads/bugreport.txt
+07-28 15:55:19.090 24840 24851 D BluetoothLeScanner: onClientRegistered() - status=0 clientIf=5
+</pre></li>
+<li>Locate the PID in the log messages. In this example, "24840" and
+"24851" are the PID (process ID) and TID (thread ID), respectively.</li>
+<li>Locate the application associated with the PID:
+<pre>
+PID #24840: ProcessRecord{4fe996a 24840:com.badapp/u0a105}
+</pre>
+<p>In this example, the package name is <code>com.badapp</code>.</p></li>
+<li>Look up the package name on Google Play to identify the responsible
+application:
+<strong>https://play.google.com/store/apps/details?id=com.badapp</strong>.</li>
+</ul>
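The PID extraction in the steps above can also be scripted; the sample line is the one from the example (field positions assume the standard logcat threadtime format):

```shell
# Sketch: pull the PID out of a BluetoothLeScanner log line.
line='07-28 15:55:19.090 24840 24851 D BluetoothLeScanner: onClientRegistered() - status=0 clientIf=5'
pid=$(printf '%s\n' "$line" | awk '{print $3}')   # third field is the PID
echo "$pid"   # prints: 24840
```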
+<p class=note><strong>Note</strong>: For devices running Android 7.0, the
+system collects data for BLE scans and associates these activities
+with the initiating application. For details, see
+<a href="{@docRoot}devices/tech/power/values.html#le-bt-scans">Low Energy (LE)
+and Bluetooth scans</a>.</p>
diff --git a/src/source/running.jd b/src/source/running.jd
index c6cd5b9..dcde469 100644
--- a/src/source/running.jd
+++ b/src/source/running.jd
@@ -177,6 +177,79 @@
<p class="note"><strong>Note</strong>: Re-locking the bootloader on a Motorola Xoom
erases user data (including the shared USB data).</p>
+<h2 id="flash-unlock">Using Flash Unlock</h2>
+
+<p>
+Android 7.0 introduces a new system API, <code>getFlashLockState()</code>, to
+transmit bootloader state.
+</p>
+
+<p>
+This API returns the bootloader’s lock
+status on compliant devices:
+</p>
+
+<pre>
+PersistentDataBlockManager.getFlashLockState()
+</pre>
+
+<table>
+ <tr>
+ <th>Return value</th>
+ <th>Conditions</th>
+ </tr>
+ <tr>
+ <td><code>FLASH_LOCK_UNKNOWN</code>
+ </td>
+ <td>Returned only by devices upgrading to Android 7.0 that support
+flashing lock/unlock but whose bootloaders lack the changes required to
+report the flash lock status.
+<p>
+New Android 7.0 devices must be in either <code>FLASH_LOCK_LOCKED</code> or
+<code>FLASH_LOCK_UNLOCKED</code> state. If a device upgrading to Android 7.0
+does not support flashing lock/unlock capability, it should simply return the
+<code>FLASH_LOCK_LOCKED</code> state.</p>
+ </td>
+ </tr>
+ <tr>
+ <td><code>FLASH_LOCK_LOCKED</code>
+ </td>
+ <td>Should be returned by any device that does not support flashing
+lock/unlock (i.e. the device is always locked), or any device that does support
+flashing lock/unlock and is in the locked state.
+ </td>
+ </tr>
+ <tr>
+ <td><code>FLASH_LOCK_UNLOCKED</code>
+ </td>
+ <td>Returned by any device that supports flashing lock/unlock and is
+currently in the unlocked state.
+ </td>
+ </tr>
+</table>
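The mapping from the boot property to these return values can be sketched as follows (the property semantics are an assumption based on the AOSP reference implementation):

```shell
# Sketch: derive the expected getFlashLockState() value from the
# ro.boot.flash.locked boot property (value semantics assumed).
flash_locked=1   # sample value; on a device: adb shell getprop ro.boot.flash.locked
case "$flash_locked" in
  1) lock_state=FLASH_LOCK_LOCKED ;;
  0) lock_state=FLASH_LOCK_UNLOCKED ;;
  *) lock_state=FLASH_LOCK_UNKNOWN ;;
esac
echo "$lock_state"   # prints: FLASH_LOCK_LOCKED
```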
+
+<h3 id="examples-and-source">Examples and source</h3>
+
+<p>
+In the Android 7.0 release, the Android Open Source Project (AOSP) contains a reference
+implementation that returns a value based on the
+<code>ro.boot.flash.locked</code> boot property.
+</p>
+
+<p>
+The code lives in:
+</p>
+
+<pre>
+frameworks/base/services/core/java/com/android/server/PersistentDataBlockService.java
+frameworks/base/core/java/android/service/persistentdata/PersistentDataBlockManager.java
+</pre>
+
+<h3 id="validation">Validation</h3>
+<p>
+Manufacturers should test the values returned by devices with locked and
+unlocked bootloaders.
+</p>
+
<h2 id="selecting-device-build">Selecting a device build</h2>
<p>The recommended builds for devices are available from the lunch menu,
diff --git a/src/source/submit-patches.jd b/src/source/submit-patches.jd
index 0298ea9..0cd18a6 100644
--- a/src/source/submit-patches.jd
+++ b/src/source/submit-patches.jd
@@ -24,31 +24,38 @@
</ol>
</div>
</div>
-<p>This page describes the full process of submitting a patch to the AOSP, including
-reviewing and tracking changes with <a href="https://android-review.googlesource.com/">Gerrit</a>.</p>
+<p>This page describes the full process of submitting a patch to the AOSP,
+including
+reviewing and tracking changes with <a
+href="https://android-review.googlesource.com/">Gerrit</a>.</p>
<h2 id="prerequisites">Prerequisites</h2>
<ul>
<li>
-<p>Before you follow the instructions on this page, you need to <a href="{@docRoot}source/initializing.html">
+<p>Before you follow the instructions on this page, you need to <a
+href="{@docRoot}source/initializing.html">
initialize your build environment</a>, <a
href="{@docRoot}source/downloading.html">download the source</a>, <a
href="https://android.googlesource.com/new-password">create a
password</a>, and follow the instructions on the password generator page.</p>
</li>
<li>
-<p>For details about Repo and Git, see the <a href="{@docRoot}source/developing.html">Developing</a> section.</p>
+<p>For details about Repo and Git, see the <a
+href="{@docRoot}source/developing.html">Developing</a> section.</p>
</li>
<li>
<p>For information about the different roles you can play within the Android
-Open Source community, see <a href="{@docRoot}source/roles.html">Project roles</a>.</p>
+Open Source community, see <a href="{@docRoot}source/roles.html">Project
+roles</a>.</p>
</li>
<li>
<p>If you plan to contribute code to the Android platform, be sure to read
-the <a href="{@docRoot}source/licenses.html">AOSP's licensing information</a>.</p>
+the <a href="{@docRoot}source/licenses.html">AOSP's licensing
+information</a>.</p>
</li>
<li>
<p>Note that changes to some of the upstream projects used by Android should be
-made directly to that project, as described in <a href="#upstream-projects">Upstream Projects</a>.</p>
+made directly to that project, as described in <a
+href="#upstream-projects">Upstream Projects</a>.</p>
</li>
</ul>
<h1 id="for-contributors">For contributors</h1>
@@ -60,7 +67,8 @@
href="{@docRoot}source/downloading.html#using-authentication">Using
Authentication</a> for additional details.</p>
<h2 id="start-a-repo-branch">Start a repo branch</h2>
-<p>For each change you intend to make, start a new branch within the relevant git repository:</p>
+<p>For each change you intend to make, start a new branch within the relevant
+git repository:</p>
<pre><code>$ repo start NAME .
</code></pre>
<p>You can start several independent branches at the same time in the same
@@ -76,56 +84,84 @@
description will be pushed to the public AOSP repository, so please follow our
guidelines for writing changelist descriptions: </p>
<ul>
+
<li>
-<p>Start with a one-line summary (60 characters max), followed by a blank line.
-This format is used by git and gerrit for various displays. </p>
+<p>Start with a one-line summary (50 characters maximum), followed by a
+blank line.
+This format is used by git and gerrit for various displays.</p>
+</li>
+
+<li>
+<p>Starting on the third line, enter a longer description, which must
+hard-wrap at 72 characters maximum. This description should focus on what
+issue the change solves, and how it solves it. The second part is somewhat
+optional when implementing new features, though desirable.</p>
+</li>
+<li>
+<p>Include a brief note of any assumptions or background information that
+may be important when another contributor works on this feature next year.</p>
+</li>
+</ul>
+
+<p>Here is an example commit message:</p>
<pre><code>short description on first line
more detailed description of your patch,
which is likely to take up multiple lines.
</code></pre>
-</li>
-<li>
-<p>The description should focus on what issue it solves, and how it solves it. The second part is somewhat optional when implementing new features, though desirable.</p>
-</li>
-<li>
-<p>Include a brief note of any assumptions or background information that may be important when another contributor works on this feature next year. </p>
-</li>
-</ul>
-<p>A unique change ID and your name and email as provided during <code>repo init</code> will be automatically added to your commit message. </p>
+
+<p>A unique change ID and your name and email as provided during <code>repo
+init</code> will be automatically added to your commit message. </p>
<h2 id="upload-to-gerrit">Upload to gerrit</h2>
-<p>Once you have committed your change to your personal history, upload it to gerrit with</p>
+<p>Once you have committed your change to your personal history, upload it
+to gerrit with</p>
<pre><code>$ repo upload
</code></pre>
-<p>If you have started multiple branches in the same repository, you will be prompted to select which one(s) to upload.</p>
+<p>If you have started multiple branches in the same repository, you will
+be prompted to select which one(s) to upload.</p>
<p>After a successful upload, repo will provide you the URL of a new page on
-<a href="https://android-review.googlesource.com/">Gerrit</a>. Visit this link to view
+<a href="https://android-review.googlesource.com/">Gerrit</a>. Visit this
+link to view
your patch on the review server, add comments, or request specific reviewers
for your patch.</p>
<h2 id="uploading-a-replacement-patch">Uploading a replacement patch</h2>
-<p>Suppose a reviewer has looked at your patch and requested a small modification. You can amend your commit within git, which will result in a new patch on gerrit with the same change ID as the original.</p>
-<p><em>Note that if you have made other commits since uploading this patch, you will need to manually move your git HEAD.</em></p>
+<p>Suppose a reviewer has looked at your patch and requested a small
+modification. You can amend your commit within git, which will result in a
+new patch on gerrit with the same change ID as the original.</p>
+<p><em>Note that if you have made other commits since uploading this patch,
+you will need to manually move your git HEAD.</em></p>
<pre><code>$ git add -A
$ git commit --amend
</code></pre>
-<p>When you upload the amended patch, it will replace the original on gerrit and in your local git history.</p>
+<p>When you upload the amended patch, it will replace the original on gerrit
+and in your local git history.</p>
<h2 id="resolving-sync-conflicts">Resolving sync conflicts</h2>
-<p>If other patches are submitted to the source tree that conflict with yours, you will need to rebase your patch on top of the new HEAD of the source repository. The easy way to do this is to run</p>
+<p>If other patches are submitted to the source tree that conflict with
+yours, you will need to rebase your patch on top of the new HEAD of the
+source repository. The easy way to do this is to run</p>
<pre><code>$ repo sync
</code></pre>
-<p>This command first fetches the updates from the source server, then attempts to automatically rebase your HEAD onto the new remote HEAD.</p>
-<p>If the automatic rebase is unsuccessful, you will have to perform a manual rebase.</p>
+<p>This command first fetches the updates from the source server, then
+attempts to automatically rebase your HEAD onto the new remote HEAD.</p>
+<p>If the automatic rebase is unsuccessful, you will have to perform a
+manual rebase.</p>
<pre><code>$ repo rebase
</code></pre>
-<p>Using <code>git mergetool</code> may help you deal with the rebase conflict. Once you have successfully merged the conflicting files,</p>
+<p>Using <code>git mergetool</code> may help you deal with the rebase
+conflict. Once you have successfully merged the conflicting files,</p>
<pre><code>$ git rebase --continue
</code></pre>
-<p>After either automatic or manual rebase is complete, run <code>repo upload</code> to submit your rebased patch.</p>
+<p>After either automatic or manual rebase is complete, run <code>repo
+upload</code> to submit your rebased patch.</p>
<h2 id="after-a-submission-is-approved">After a submission is approved</h2>
-<p>After a submission makes it through the review and verification process, Gerrit automatically merges the change into the public repository. Other users will be able to run <code>repo sync</code> to pull the update into their local client.</p>
+<p>After a submission makes it through the review and verification process,
+Gerrit automatically merges the change into the public repository. Other
+users will be able to run <code>repo sync</code> to pull the update into
+their local client.</p>
<h1 id="for-reviewers-and-verifiers">For reviewers and verifiers</h1>
<h2 id="reviewing-a-change">Reviewing a change</h2>
-<p>If you are assigned to be the Approver for a change, you need to determine the following:</p>
+<p>If you are assigned to be the Approver for a change, you need to determine
+the following:</p>
<ul>
<li>
<p>Does this change fit within this project's stated purpose?</p>
@@ -134,10 +170,12 @@
<p>Is this change valid within the project's existing architecture?</p>
</li>
<li>
-<p>Does this change introduce design flaws that will cause problems in the future?</p>
+<p>Does this change introduce design flaws that will cause problems in
+the future?</p>
</li>
<li>
-<p>Does this change follow the best practices that have been established for this project?</p>
+<p>Does this change follow the best practices that have been established
+for this project?</p>
</li>
<li>
<p>Is this change a good way to perform the described function?</p>
@@ -146,71 +184,122 @@
<p>Does this change introduce any security or instability risks?</p>
</li>
</ul>
-<p>If you approve of the change, mark it with LGTM ("Looks Good to Me") within Gerrit.</p>
+<p>If you approve of the change, mark it with LGTM ("Looks Good to Me")
+within Gerrit.</p>
<h2 id="verifying-a-change">Verifying a change</h2>
-<p>If you are assigned to be the Verifier for a change, you need to do the following:</p>
+<p>If you are assigned to be the Verifier for a change, you need to do the
+following:</p>
<ul>
<li>
-<p>Patch the change into your local client using one of the Download commands.</p>
+<p>Patch the change into your local client using one of the Download
+commands.</p>
</li>
<li>
<p>Build and test the change.</p>
</li>
<li>
-<p>Within Gerrit use Publish Comments to mark the commit as "Verified" or "Fails," and add a message explaining what problems were identified.</p>
+<p>Within Gerrit use Publish Comments to mark the commit as "Verified" or
+"Fails," and add a message explaining what problems were identified.</p>
</li>
</ul>
<h2 id="downloading-changes-from-gerrit">Downloading changes from Gerrit</h2>
-<p>A submission that has been verified and merged will be downloaded with the next <code>repo sync</code>. If you wish to download a specific change that has not yet been approved, run</p>
+<p>A submission that has been verified and merged will be downloaded with
+the next <code>repo sync</code>. If you wish to download a specific change
+that has not yet been approved, run</p>
<pre><code>$ repo download TARGET CHANGE
</code></pre>
-<p>where TARGET is the local directory into which the change should be downloaded and CHANGE is the
-change number as listed in <a href="https://android-review.googlesource.com/">Gerrit</a>. For more information,
+<p>where TARGET is the local directory into which the change should be
+downloaded and CHANGE is the change number as listed in
+<a href="https://android-review.googlesource.com/">Gerrit</a>. For more
+information,
see the <a href="{@docRoot}source/using-repo.html">Repo reference</a>.</p>
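+<p>For example, to fetch change 1241 into the frameworks/base project
+directory (the project path and change number here are illustrative; use
+the change number shown for your change in the Gerrit web UI):</p>
+<pre><code>$ repo download platform/frameworks/base 1241
+</code></pre>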
-<h2 id="how-do-i-become-a-verifier-or-approver">How do I become a Verifier or Approver?</h2>
-<p>In short, contribute high-quality code to one or more of the Android projects.
+<h2 id="how-do-i-become-a-verifier-or-approver">How do I become a Verifier
+or Approver?</h2>
+<p>In short, contribute high-quality code to one or more of the Android
+projects.
For details about the different roles in the Android Open Source community and
-who plays them, see <a href="{@docRoot}source/roles.html">Project Roles</a>.</p>
+who plays them, see <a href="{@docRoot}source/roles.html">Project
+Roles</a>.</p>
<h2 id="diffs-and-comments">Diffs and comments</h2>
-<p>To open the details of the change within Gerrit, click on the "Id number" or "Subject" of a change. To compare the established code with the updated code, click the file name under "Side-by-side diffs."</p>
+<p>To open the details of the change within Gerrit, click on the "Id number"
+or "Subject" of a change. To compare the established code with the updated
+code, click the file name under "Side-by-side diffs."</p>
<h2 id="adding-comments">Adding comments</h2>
-<p>Anyone in the community can use Gerrit to add inline comments to code submissions. A good comment will be relevant to the line or section of code to which it is attached in Gerrit. It might be a short and constructive suggestion about how a line of code could be improved, or it might be an explanation from the author about why the code makes sense the way it is.</p>
-<p>To add an inline comment, double-click the relevant line of the code and write your comment in the text box that opens. When you click Save, only you can see your comment.</p>
-<p>To publish your comments so that others using Gerrit will be able to see them, click the Publish Comments button. Your comments will be emailed to all relevant parties for this change, including the change owner, the patch set uploader (if different from the owner), and all current reviewers.</p>
+<p>Anyone in the community can use Gerrit to add inline comments to code
+submissions. A good comment will be relevant to the line or section of code
+to which it is attached in Gerrit. It might be a short and constructive
+suggestion about how a line of code could be improved, or it might be an
+explanation from the author about why the code makes sense the way it is.</p>
+<p>To add an inline comment, double-click the relevant line of the code
+and write your comment in the text box that opens. When you click Save,
+only you can see your comment.</p>
+<p>To publish your comments so that others using Gerrit will be able to see
+them, click the Publish Comments button. Your comments will be emailed to
+all relevant parties for this change, including the change owner, the patch
+set uploader (if different from the owner), and all current reviewers.</p>
<p><a name="upstream-projects"></a></p>
<h1 id="upstream-projects">Upstream Projects</h1>
-<p>Android makes use of a number of other open source projects, such as the Linux kernel and WebKit, as described in
-<a href="{@docRoot}source/code-lines.html">Codelines, Branches, and Releases</a>. For most projects under <code>external/</code>, changes should be made upstream and then the Android maintainers informed of the new upstream release containing these changes. It may also be useful to upload patches that move us to track a new upstream release, though these can be difficult changes to make if the project is widely used within Android like most of the larger ones mentioned below, where we tend to upgrade with every release.</p>
-<p>One interesting special case is bionic. Much of the code there is from BSD, so unless the change is to code that's new to bionic, we'd much rather see an upstream fix and then pull a whole new file from the appropriate BSD. (Sadly we have quite a mix of different BSDs at the moment, but we hope to address that in future, and get into a position where we track upstream much more closely.)</p>
+<p>Android makes use of a number of other open source projects, such as the
+Linux kernel and WebKit, as described in
+<a href="{@docRoot}source/code-lines.html">Codelines, Branches, and
+Releases</a>. For most projects under <code>external/</code>, changes should
+be made upstream, and then the Android maintainers informed of the new
+upstream release containing those changes. It may also be useful to upload
+patches that move us to track a new upstream release, though such changes
+can be difficult to make when the project is widely used within Android, as
+most of the larger projects mentioned below are; for those we tend to
+upgrade with every release.</p>
+<p>One interesting special case is bionic. Much of the code there is from BSD,
+so unless the change is to code that's new to bionic, we'd much rather see an
+upstream fix and then pull a whole new file from the appropriate BSD. (Sadly
+we have quite a mix of different BSDs at the moment, but we hope to address
+that in the future and get into a position where we track upstream much
+more closely.)</p>
<h2 id="icu4c">ICU4C</h2>
-<p>All changes to the ICU4C project at <code>external/icu4c</code> should be made upstream at
+<p>All changes to the ICU4C project at <code>external/icu4c</code> should
+be made upstream at
<a href="http://site.icu-project.org/">icu-project.org/</a>.
-See <a href="http://site.icu-project.org/bugs">Submitting ICU Bugs and Feature Requests</a> for more.</p>
+See <a href="http://site.icu-project.org/bugs">Submitting ICU Bugs and
+Feature Requests</a> for more.</p>
<h2 id="llvmclangcompiler-rt">LLVM/Clang/Compiler-rt</h2>
-<p>All changes to LLVM-related projects (<code>external/clang</code>, <code>external/compiler-rt</code>,
+<p>All changes to LLVM-related projects (<code>external/clang</code>,
+<code>external/compiler-rt</code>,
<code>external/llvm</code>) should be made upstream at
<a href="http://llvm.org/">llvm.org/</a>.</p>
<h2 id="mksh">mksh</h2>
-<p>All changes to the MirBSD Korn Shell project at <code>external/mksh</code> should be made upstream
-either by sending an email to miros-mksh on the mirbsd.org domain (no subscription
-required to submit there) or (optionally) at <a href="https://launchpad.net/mksh">Launchpad</a>.
+<p>All changes to the MirBSD Korn Shell project at <code>external/mksh</code>
+should be made upstream, either by sending an email to miros-mksh on the
+mirbsd.org domain (no subscription required to submit there) or (optionally)
+at <a href="https://launchpad.net/mksh">Launchpad</a>.
</p>
<h2 id="openssl">OpenSSL</h2>
-<p>All changes to the OpenSSL project at <code>external/openssl</code> should be made upstream at
+<p>All changes to the OpenSSL project at <code>external/openssl</code>
+should be made upstream at
<a href="http://www.openssl.org">openssl.org</a>.</p>
<h2 id="v8">V8</h2>
-<p>All changes to the V8 project at <code>external/v8</code> should be submitted upstream at
-<a href="https://code.google.com/p/v8">code.google.com/p/v8</a>. See <a href="https://code.google.com/p/v8/wiki/Contributing">Contributing to V8</a>
+<p>All changes to the V8 project at <code>external/v8</code> should be
+submitted upstream at
+<a href="https://code.google.com/p/v8">code.google.com/p/v8</a>. See
+<a href="https://code.google.com/p/v8/wiki/Contributing">Contributing to V8</a>
for details.</p>
<h2 id="webkit">WebKit</h2>
-<p>All changes to the WebKit project at <code>external/webkit</code> should be made
-upstream at <a href="http://www.webkit.org">webkit.org</a>. The process begins by filing a WebKit bug.
-This bug should use <code>Android</code> for the <code>Platform</code> and <code>OS</code>
-fields only if the bug is specific to Android. Bugs are far more likely to receive the reviewers'
+<p>All changes to the WebKit project at <code>external/webkit</code> should
+be made upstream at <a href="http://www.webkit.org">webkit.org</a>. The
+process begins by filing a WebKit bug. This bug should use
+<code>Android</code> for the <code>Platform</code> and <code>OS</code>
+fields only if the bug is specific to Android. Bugs are far more likely
+to receive the reviewers'
attention once a proposed fix is added and tests are included. See
-<a href="http://webkit.org/coding/contributing.html">Contributing Code to WebKit</a> for details.</p>
+<a href="http://webkit.org/coding/contributing.html">Contributing Code to
+WebKit</a> for details.</p>
<h2 id="zlib">zlib</h2>
-<p>All changes to the zlib project at <code>external/zlib</code> should be made upstream at
+<p>All changes to the zlib project at <code>external/zlib</code> should be
+made upstream at
<a href="http://zlib.net">zlib.net</a>.</p>
+