Docs: Vulkan updates for N, arch reorg

Bug: 27621285
Change-Id: Ief97c9247735aaf7bdcedd5e83dd95bd96e2361d
diff --git a/src/devices/devices_toc.cs b/src/devices/devices_toc.cs
index 86b4315..94e4031 100644
--- a/src/devices/devices_toc.cs
+++ b/src/devices/devices_toc.cs
@@ -144,8 +144,25 @@
           </a>
         </div>
         <ul>
-          <li><a href="<?cs var:toroot ?>devices/graphics/architecture.html">Architecture</a></li>
-          <li class="nav-section">
+         <li class="nav-section">
+            <div class="nav-section-header">
+              <a href="<?cs var:toroot ?>devices/graphics/architecture.html">
+                <span class="en">Architecture</span>
+              </a>
+            </div>
+            <ul>
+              <li><a href="<?cs var:toroot ?>devices/graphics/arch-bq-gralloc.html">BufferQueue</a></li>
+              <li><a href="<?cs var:toroot ?>devices/graphics/arch-sf-hwc.html">SurfaceFlinger and HWC</a></li>
+              <li><a href="<?cs var:toroot ?>devices/graphics/arch-sh.html">Surface and SurfaceHolder</a></li>
+              <li><a href="<?cs var:toroot ?>devices/graphics/arch-egl-opengl.html">OpenGL ES</a></li>
+              <li><a href="<?cs var:toroot ?>devices/graphics/arch-vulkan.html">Vulkan</a></li>
+              <li><a href="<?cs var:toroot ?>devices/graphics/arch-sv-glsv.html">SurfaceView</a></li>
+              <li><a href="<?cs var:toroot ?>devices/graphics/arch-st.html">SurfaceTexture</a></li>
+              <li><a href="<?cs var:toroot ?>devices/graphics/arch-tv.html">TextureView</a></li>
+              <li><a href="<?cs var:toroot ?>devices/graphics/arch-gameloops.html">Game Loops</a></li>
+            </ul>
+         </li>
+         <li class="nav-section">
             <div class="nav-section-header">
               <a href="<?cs var:toroot ?>devices/graphics/implement.html">
                 <span class="en">Implementing</span>
diff --git a/src/devices/graphics/arch-bq-gralloc.jd b/src/devices/graphics/arch-bq-gralloc.jd
new file mode 100644
index 0000000..1bf6019
--- /dev/null
+++ b/src/devices/graphics/arch-bq-gralloc.jd
@@ -0,0 +1,141 @@
+page.title=BufferQueue and gralloc
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>Understanding the Android graphics system starts behind the scenes with
+BufferQueue and the gralloc HAL.</p>
+
+<p>The BufferQueue class is at the heart of everything graphical in Android. Its
+role is simple: Connect something that generates buffers of graphical data (the
+<em>producer</em>) to something that accepts the data for display or further
+processing (the <em>consumer</em>). Nearly everything that moves buffers of
+graphical data through the system relies on BufferQueue.</p>
+
+<p>The gralloc memory allocator performs buffer allocations and is
+implemented through a vendor-specific HAL interface (see
+<code>hardware/libhardware/include/hardware/gralloc.h</code>). The
+<code>alloc()</code> function takes expected arguments (width, height, pixel
+format) as well as a set of usage flags (detailed below).</p>
+
+<h2 id="BufferQueue">BufferQueue producers and consumers</h2>
+
+<p>Basic usage is straightforward: The producer requests a free buffer
+(<code>dequeueBuffer()</code>), specifying a set of characteristics including
+width, height, pixel format, and usage flags. The producer populates the buffer
+and returns it to the queue (<code>queueBuffer()</code>). Later, the consumer
+acquires the buffer (<code>acquireBuffer()</code>) and makes use of the buffer
+contents. When the consumer is done, it returns the buffer to the queue
+(<code>releaseBuffer()</code>).</p>
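+
+<p>As a rough illustration (not the platform implementation), app code can
+exercise both ends of a BufferQueue through classes that wrap it. An ImageReader
+owns the consumer side and hands out the producer side as a Surface; the
+lock/unlock and acquire/close calls below map onto the
+dequeue/queue/acquire/release cycle described above:</p>
+
+<pre>
+// Consumer: ImageReader creates the BufferQueue and consumes buffers from it.
+ImageReader reader = ImageReader.newInstance(640, 480, PixelFormat.RGBA_8888, 3);
+reader.setOnImageAvailableListener(r -&gt; {
+    Image image = r.acquireNextImage();   // consumer acquires a queued buffer
+    // ... examine image.getPlanes() ...
+    image.close();                        // consumer releases it back to the queue
+}, null);                                 // null: deliver callbacks on this thread's Looper
+
+// Producer: the reader's Surface is the producer interface of the same queue.
+Surface producer = reader.getSurface();
+Canvas canvas = producer.lockCanvas(null);  // dequeue a free buffer
+canvas.drawColor(Color.BLUE);
+producer.unlockCanvasAndPost(canvas);       // queue it for the consumer
+</pre>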
+
+<p>Recent Android devices support the <em>sync framework</em>, which enables the
+system to do nifty things when combined with hardware components that can
+manipulate graphics data asynchronously. For example, a producer can submit a
+series of OpenGL ES drawing commands and then enqueue the output buffer before
+rendering completes. The buffer is accompanied by a fence that signals when the
+contents are ready. A second fence accompanies the buffer when it is returned
+to the free list, so the consumer can release the buffer while the contents are
+still in use. This approach improves latency and throughput as the buffers
+move through the system.</p>
+
+<p>Some characteristics of the queue, such as the maximum number of buffers it
+can hold, are determined jointly by the producer and the consumer. However, the
+BufferQueue is responsible for allocating buffers as it needs them. Buffers are
+retained unless the characteristics change; for example, if the producer
+requests buffers with a different size, old buffers are freed and new buffers
+are allocated on demand.</p>
+
+<p>Producers and consumers can live in different processes. Currently, the
+consumer always creates and owns the data structure. In older versions of
+Android, only the producer side was binderized (i.e. producer could be in a
+remote process but consumer had to live in the process where the queue was
+created). Android 4.4 and later releases moved toward a more general
+implementation.</p>
+
+<p>Buffer contents are never copied by BufferQueue (moving that much data around
+would be very inefficient). Instead, buffers are always passed by handle.</p>
+
+<h2 id="gralloc_HAL">gralloc HAL usage flags</h2>
+
+<p>The gralloc allocator is not just another way to allocate memory on the
+native heap; in some situations, the allocated memory may not be cache-coherent
+or could be totally inaccessible from user space. The nature of the allocation
+is determined by the usage flags, which include attributes such as:</p>
+
+<ul>
+<li>How often the memory will be accessed from software (CPU)</li>
+<li>How often the memory will be accessed from hardware (GPU)</li>
+<li>Whether the memory will be used as an OpenGL ES (GLES) texture</li>
+<li>Whether the memory will be used by a video encoder</li>
+</ul>
+
+<p>For example, if your format specifies RGBA 8888 pixels, and you indicate the
+buffer will be accessed from software (meaning your application will touch
+pixels directly), then the allocator must create a buffer with 4 bytes per pixel
+in R-G-B-A order. If, instead, you say the buffer will only be accessed from
+hardware and as a GLES texture, the allocator can do anything the GLES driver
+wants&mdash;BGRA ordering, non-linear swizzled layouts, alternative color
+formats, etc. Allowing the hardware to use its preferred format can improve
+performance.</p>
+
+<p>Some values cannot be combined on certain platforms. For example, the video
+encoder flag may require YUV pixels, so adding software access and specifying
+RGBA 8888 would fail.</p>
+
+<p>The handle returned by the gralloc allocator can be passed between processes
+through Binder.</p>
+
+<h2 id=tracking>Tracking BufferQueue with systrace</h2>
+
+<p>To really understand how graphics buffers move around, use systrace. The
+system-level graphics code is well instrumented, as is much of the relevant app
+framework code.</p>
+
+<p>A full description of how to use systrace effectively would fill a rather
+long document. Start by enabling the <code>gfx</code>, <code>view</code>, and
+<code>sched</code> tags. You'll also see BufferQueues in the trace. If you've
+used systrace before, you've probably seen them but maybe weren't sure what they
+were. As an example, if you grab a trace while
+<a href="https://github.com/google/grafika">Grafika's</a> "Play video
+(SurfaceView)" is running, the row labeled <em>SurfaceView</em> tells you how
+many buffers were queued up at any given time.</p>
+
+<p>The value increments while the app is active&mdash;triggering the rendering
+of frames by the MediaCodec decoder&mdash;and decrements while SurfaceFlinger is
+doing work, consuming buffers. When showing video at 30fps, the queue's value
+varies from 0 to 1 because the ~60fps display can easily keep up with the
+source. (Notice also that SurfaceFlinger only wakes when there's work to
+be done, not 60 times per second. The system tries very hard to avoid work and
+will disable VSYNC entirely if nothing is updating the screen.)</p>
+
+<p>If you switch to Grafika's "Play video (TextureView)" and grab a new trace,
+you'll see a row labeled
+com.android.grafika/com.android.grafika.PlayMovieActivity. This is the main UI
+layer, which is just another BufferQueue. Because TextureView renders into the
+UI layer (rather than a separate layer), you'll see all of the video-driven
+updates here.</p>
+
+<p>For more information about the systrace tool, refer to <a
+href="http://developer.android.com/tools/help/systrace.html">Systrace
+documentation</a>.</p>
diff --git a/src/devices/graphics/arch-egl-opengl.jd b/src/devices/graphics/arch-egl-opengl.jd
new file mode 100644
index 0000000..97ca18e
--- /dev/null
+++ b/src/devices/graphics/arch-egl-opengl.jd
@@ -0,0 +1,88 @@
+page.title=EGLSurfaces and OpenGL ES
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>OpenGL ES defines an API for rendering graphics.  It does not define a windowing
+system.  To allow GLES to work on a variety of platforms, it is designed to be
+combined with a library that knows how to create and access windows through the
+operating system.  The library used for Android is called EGL.  If you want to
+draw textured polygons, you use GLES calls; if you want to put your rendering on
+the screen, you use EGL calls.</p>
+
+<p>Before you can do anything with GLES, you need to create a GL context.  In EGL,
+this means creating an EGLContext and an EGLSurface.  GLES operations apply to
+the current context, which is accessed through thread-local storage rather than
+passed around as an argument.  This means you have to be careful about which
+thread your rendering code executes on, and which context is current on that
+thread.</p>
+
+<h2 id=egl_surface>EGLSurfaces</h2>
+
+<p>The EGLSurface can be an off-screen buffer allocated by EGL (called a "pbuffer")
+or a window allocated by the operating system.  EGL window surfaces are created
+with the <code>eglCreateWindowSurface()</code> call.  It takes a "window object" as an
+argument, which on Android can be a SurfaceView, a SurfaceTexture, a
+SurfaceHolder, or a Surface -- all of which have a BufferQueue underneath.  When
+you make this call, EGL creates a new EGLSurface object, and connects it to the
+producer interface of the window object's BufferQueue.  From that point onward,
+rendering to that EGLSurface results in a buffer being dequeued, rendered into,
+and queued for use by the consumer.  (The term "window" is indicative of the
+expected use, but bear in mind the output might not be destined to appear
+on the display.)</p>
+
+<p>EGL does not provide lock/unlock calls.  Instead, you issue drawing commands and
+then call <code>eglSwapBuffers()</code> to submit the current frame.  The
+method name comes from the traditional swap of front and back buffers, but the actual
+implementation may be very different.</p>
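+
+<p>A minimal sketch of the sequence from the Java side, using the
+<code>EGL14</code> bindings (error checking omitted; <code>surface</code> is
+assumed to be your Surface, SurfaceView, SurfaceHolder, or SurfaceTexture):</p>
+
+<pre>
+EGLDisplay dpy = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
+int[] version = new int[2];
+EGL14.eglInitialize(dpy, version, 0, version, 1);
+
+int[] configAttribs = { EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT, EGL14.EGL_NONE };
+EGLConfig[] configs = new EGLConfig[1];
+int[] numConfigs = new int[1];
+EGL14.eglChooseConfig(dpy, configAttribs, 0, configs, 0, 1, numConfigs, 0);
+
+int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
+EGLContext ctx = EGL14.eglCreateContext(dpy, configs[0], EGL14.EGL_NO_CONTEXT, ctxAttribs, 0);
+
+// Connects to the producer side of the window object's BufferQueue.
+EGLSurface eglSurface = EGL14.eglCreateWindowSurface(dpy, configs[0], surface,
+        new int[] { EGL14.EGL_NONE }, 0);
+EGL14.eglMakeCurrent(dpy, eglSurface, eglSurface, ctx);
+
+// ... GLES drawing calls ...
+EGL14.eglSwapBuffers(dpy, eglSurface);  // queues the rendered buffer to the consumer
+</pre>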
+
+<p>Only one EGLSurface can be associated with a Surface at a time -- you can have
+only one producer connected to a BufferQueue -- but if you destroy the
+EGLSurface it will disconnect from the BufferQueue and allow something else to
+connect.</p>
+
+<p>A given thread can switch between multiple EGLSurfaces by changing what's
+"current."  An EGLSurface must be current on only one thread at a time.</p>
+
+<p>The most common mistake when thinking about EGLSurface is assuming that it is
+just another aspect of Surface (like SurfaceHolder).  It's a related but
+independent concept.  You can draw on an EGLSurface that isn't backed by a
+Surface, and you can use a Surface without EGL.  EGLSurface just gives GLES a
+place to draw.</p>
+
+<h2 id="anativewindow">ANativeWindow</h2>
+
+<p>The public Surface class is implemented in the Java programming language.  The
+equivalent in C/C++ is the ANativeWindow class, semi-exposed by the <a
+href="https://developer.android.com/tools/sdk/ndk/index.html">Android NDK</a>.  You
+can get the ANativeWindow from a Surface with the <code>ANativeWindow_fromSurface()</code>
+call.  Just like its Java-language cousin, you can lock it, render in software,
+and unlock-and-post.</p>
+
+<p>To create an EGL window surface from native code, you pass an instance of
+EGLNativeWindowType to <code>eglCreateWindowSurface()</code>.  EGLNativeWindowType is just
+a synonym for ANativeWindow, so you can freely cast one to the other.</p>
+
+<p>The fact that the basic "native window" type just wraps the producer side of a
+BufferQueue should not come as a surprise.</p>
diff --git a/src/devices/graphics/arch-gameloops.jd b/src/devices/graphics/arch-gameloops.jd
new file mode 100644
index 0000000..bca4acd
--- /dev/null
+++ b/src/devices/graphics/arch-gameloops.jd
@@ -0,0 +1,155 @@
+page.title=Game Loops
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>A very popular way to implement a game loop looks like this:</p>
+
+<pre>
+while (playing) {
+    advance state by one frame
+    render the new frame
+    sleep until it’s time to do the next frame
+}
+</pre>
+
+<p>There are a few problems with this, the most fundamental being the idea that the
+game can define what a "frame" is.  Different displays will refresh at different
+rates, and that rate may vary over time.  If you generate frames faster than the
+display can show them, you will have to drop one occasionally.  If you generate
+them too slowly, SurfaceFlinger will periodically fail to find a new buffer to
+acquire and will re-show the previous frame.  Both of these situations can
+cause visible glitches.</p>
+
+<p>What you need to do is match the display's frame rate, and advance game state
+according to how much time has elapsed since the previous frame.  There are two
+ways to go about this: (1) stuff the BufferQueue full and rely on the "swap
+buffers" back-pressure; (2) use Choreographer (API 16+).</p>
+
+<h2 id=stuffing>Queue stuffing</h2>
+
+<p>This is very easy to implement: just swap buffers as fast as you can.  In early
+versions of Android this could actually result in a penalty where
+<code>SurfaceView#lockCanvas()</code> would put you to sleep for 100ms.  Now
+it's paced by the BufferQueue, and the BufferQueue is emptied as quickly as
+SurfaceFlinger is able.</p>
+
+<p>One example of this approach can be seen in <a
+href="https://code.google.com/p/android-breakout/">Android Breakout</a>.  It
+uses GLSurfaceView, which runs in a loop that calls the application's
+onDrawFrame() callback and then swaps the buffer.  If the BufferQueue is full,
+the <code>eglSwapBuffers()</code> call will wait until a buffer is available.
+Buffers become available when SurfaceFlinger releases them, which it does after
+acquiring a new one for display.  Because this happens on VSYNC, your draw loop
+timing will match the refresh rate.  Mostly.</p>
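+
+<p>A simplified sketch of that pattern with GLSurfaceView (the
+<code>updateGameState()</code> and <code>drawScene()</code> calls are
+placeholders for your own code):</p>
+
+<pre>
+public class GameRenderer implements GLSurfaceView.Renderer {
+    private long mPrevTimeNanos;
+
+    @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) {
+        mPrevTimeNanos = System.nanoTime();
+    }
+
+    @Override public void onSurfaceChanged(GL10 gl, int width, int height) {
+        GLES20.glViewport(0, 0, width, height);
+    }
+
+    @Override public void onDrawFrame(GL10 gl) {
+        long now = System.nanoTime();
+        double deltaSec = (now - mPrevTimeNanos) / 1e9;  // time since the last swap
+        mPrevTimeNanos = now;
+        updateGameState(deltaSec);  // advance by elapsed time, not a fixed step
+        drawScene();
+        // GLSurfaceView calls eglSwapBuffers() after this returns; when the
+        // BufferQueue is full, that call blocks until SurfaceFlinger releases a buffer.
+    }
+}
+</pre>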
+
+<p>There are a couple of problems with this approach.  First, the app is tied to
+SurfaceFlinger activity, which is going to take different amounts of time
+depending on how much work there is to do and whether it's fighting for CPU time
+with other processes.  Since your game state advances according to the time
+between buffer swaps, your animation won't update at a consistent rate.  When
+running at 60fps with the inconsistencies averaged out over time, though, you
+probably won't notice the bumps.</p>
+
+<p>Second, the first couple of buffer swaps are going to happen very quickly
+because the BufferQueue isn't full yet.  The computed time between frames will
+be near zero, so the game will generate a few frames in which nothing happens.
+In a game like Breakout, which updates the screen on every refresh, the queue is
+always full except when a game is first starting (or un-paused), so the effect
+isn't noticeable.  A game that pauses animation occasionally and then returns to
+as-fast-as-possible mode might see odd hiccups.</p>
+
+<h2 id=choreographer>Choreographer</h2>
+
+<p>Choreographer allows you to set a callback that fires on the next VSYNC.  The
+actual VSYNC time is passed in as an argument.  So even if your app doesn't wake
+up right away, you still have an accurate picture of when the display refresh
+period began.  Using this value, rather than the current time, yields a
+consistent time source for your game state update logic.</p>
+
+<p>Unfortunately, the fact that you get a callback after every VSYNC does not
+guarantee that your callback will be executed in a timely fashion or that you
+will be able to act upon it sufficiently swiftly.  Your app will need to detect
+situations where it's falling behind and drop frames manually.</p>
+
+<p>The "Record GL app" activity in Grafika provides an example of this.  On some
+devices (e.g. Nexus 4 and Nexus 5), the activity will start dropping frames if
+you just sit and watch.  The GL rendering is trivial, but occasionally the View
+elements get redrawn, and the measure/layout pass can take a very long time if
+the device has dropped into a reduced-power mode.  (According to systrace, it
+takes 28ms instead of 6ms after the clocks slow on Android 4.4.  If you drag
+your finger around the screen, it thinks you're interacting with the activity,
+so the clock speeds stay high and you'll never drop a frame.)</p>
+
+<p>The simple fix was to drop a frame in the Choreographer callback if the current
+time is more than N milliseconds after the VSYNC time.  Ideally the value of N
+is determined based on previously observed VSYNC intervals.  For example, if the
+refresh period is 16.7ms (60fps), you might drop a frame if you're running more
+than 15ms late.</p>
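+
+<p>A sketch of that logic (the 15ms threshold and the
+<code>updateGameState()</code>/<code>requestRender()</code> calls are
+illustrative, not a fixed recipe):</p>
+
+<pre>
+private final Choreographer.FrameCallback mFrameCallback = new Choreographer.FrameCallback() {
+    private long mPrevFrameTimeNanos;
+
+    @Override public void doFrame(long frameTimeNanos) {
+        Choreographer.getInstance().postFrameCallback(this);  // re-arm for the next VSYNC
+
+        if (System.nanoTime() - frameTimeNanos &gt; 15_000_000L) {
+            mPrevFrameTimeNanos = frameTimeNanos;  // we woke up too late; drop this frame
+            return;
+        }
+        double deltaSec = (mPrevFrameTimeNanos == 0)
+                ? 0 : (frameTimeNanos - mPrevFrameTimeNanos) / 1e9;
+        mPrevFrameTimeNanos = frameTimeNanos;
+        updateGameState(deltaSec);  // advance using the VSYNC timestamp, not "now"
+        requestRender();
+    }
+};
+
+// Start the loop, e.g. in onResume():
+// Choreographer.getInstance().postFrameCallback(mFrameCallback);
+</pre>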
+
+<p>If you watch "Record GL app" run, you will see the dropped-frame counter
+increase, and even see a flash of red in the border when frames drop.  Unless
+your eyes are very good, though, you won't see the animation stutter.  At 60fps,
+the app can drop the occasional frame without anyone noticing so long as the
+animation continues to advance at a constant rate.  How much you can get away
+with depends to some extent on what you're drawing, the characteristics of the
+display, and how good the person using the app is at detecting jank.</p>
+
+<h2 id=thread>Thread management</h2>
+
+<p>Generally speaking, if you're rendering onto a SurfaceView, GLSurfaceView, or
+TextureView, you want to do that rendering in a dedicated thread.  Never do any
+"heavy lifting" or anything that takes an indeterminate amount of time on the
+UI thread.</p>
+
+<p>Breakout and "Record GL app" use dedicated renderer threads, and they also
+update animation state on that thread.  This is a reasonable approach so long as
+game state can be updated quickly.</p>
+
+<p>Other games separate the game logic and rendering completely.  If you had a
+simple game that did nothing but move a block every 100ms, you could have a
+dedicated thread that just did this:</p>
+
+<pre>
+    run() {
+        Thread.sleep(100);
+        synchronized (mLock) {
+            moveBlock();
+        }
+    }
+</pre>
+
+<p>(You may want to base the sleep time off of a fixed clock to prevent drift --
+sleep() isn't perfectly consistent, and moveBlock() takes a nonzero amount of
+time -- but you get the idea.)</p>
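+
+<p>For example, a drift-free variant of the loop above could schedule each tick
+from a fixed clock (same simplified style as before):</p>
+
+<pre>
+    run() {
+        long nextTickMillis = System.currentTimeMillis();
+        while (running) {
+            nextTickMillis += 100;              // schedule from the clock, not "sleep(100)"
+            long sleepMillis = nextTickMillis - System.currentTimeMillis();
+            if (sleepMillis &gt; 0) {
+                Thread.sleep(sleepMillis);
+            }
+            synchronized (mLock) {
+                moveBlock();
+            }
+        }
+    }
+</pre>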
+
+<p>When the draw code wakes up, it just grabs the lock, gets the current position
+of the block, releases the lock, and draws.  Instead of doing fractional
+movement based on inter-frame delta times, you just have one thread that moves
+things along and another thread that draws things wherever they happen to be
+when the drawing starts.</p>
+
+<p>For a scene with any complexity you'd want to create a list of upcoming events
+sorted by wake time, and sleep until the next event is due, but it's the same
+idea.</p>
diff --git a/src/devices/graphics/arch-sf-hwc.jd b/src/devices/graphics/arch-sf-hwc.jd
new file mode 100644
index 0000000..d6749c7
--- /dev/null
+++ b/src/devices/graphics/arch-sf-hwc.jd
@@ -0,0 +1,203 @@
+page.title=SurfaceFlinger and Hardware Composer
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>Having buffers of graphical data is wonderful, but life is even better when
+you get to see them on your device's screen. That's where SurfaceFlinger and the
+Hardware Composer HAL come in.</p>
+
+
+<h2 id=surfaceflinger>SurfaceFlinger</h2>
+
+<p>SurfaceFlinger's role is to accept buffers of data from multiple sources,
+composite them, and send them to the display. Once upon a time this was done
+with software blitting to a hardware framebuffer (e.g.
+<code>/dev/graphics/fb0</code>), but those days are long gone.</p>
+
+<p>When an app comes to the foreground, the WindowManager service asks
+SurfaceFlinger for a drawing surface. SurfaceFlinger creates a layer (the
+primary component of which is a BufferQueue) for which SurfaceFlinger acts as
+the consumer. A Binder object for the producer side is passed through the
+WindowManager to the app, which can then start sending frames directly to
+SurfaceFlinger.</p>
+
+<p class="note"><strong>Note:</strong> While this section uses SurfaceFlinger
+terminology, WindowManager uses the term <em>window</em> instead of
+<em>layer</em>&hellip;and uses layer to mean something else. (It can be argued
+that SurfaceFlinger should really be called LayerFlinger.)</p>
+
+<p>Most applications have three layers on screen at any time: the status bar at
+the top of the screen, the navigation bar at the bottom or side, and the
+application UI. Some apps have more and some have fewer (e.g. the default home app
+has a separate layer for the wallpaper, while a full-screen game might hide the
+status bar). Each layer can be updated independently. The status and navigation bars
+are rendered by a system process, while the app layers are rendered by the app,
+with no coordination between the two.</p>
+
+<p>Device displays refresh at a certain rate, typically 60 frames per second on
+phones and tablets. If the display contents are updated mid-refresh, tearing
+will be visible; so it's important to update the contents only between cycles.
+The system receives a signal from the display when it's safe to update the
+contents. For historical reasons we'll call this the VSYNC signal.</p>
+
+<p>The refresh rate may vary over time, e.g. some mobile devices will range from 58
+to 62fps depending on current conditions. For an HDMI-attached television, this
+could theoretically dip to 24 or 48Hz to match a video. Because we can update
+the screen only once per refresh cycle, submitting buffers for display at 200fps
+would be a waste of effort as most of the frames would never be seen. Instead of
+taking action whenever an app submits a buffer, SurfaceFlinger wakes up when the
+display is ready for something new.</p>
+
+<p>When the VSYNC signal arrives, SurfaceFlinger walks through its list of
+layers looking for new buffers. If it finds a new one, it acquires it; if not,
+it continues to use the previously-acquired buffer. SurfaceFlinger always wants
+to have something to display, so it will hang on to one buffer. If no buffers
+have ever been submitted on a layer, the layer is ignored.</p>
+
+<p>After SurfaceFlinger has collected all buffers for visible layers, it asks
+the Hardware Composer how composition should be performed.</p>
+
+<h2 id=hwc>Hardware Composer</h2>
+
+<p>The Hardware Composer HAL (HWC) was introduced in Android 3.0 and has evolved
+steadily over the years. Its primary purpose is to determine the most efficient
+way to composite buffers with the available hardware. As a HAL, its
+implementation is device-specific and usually done by the display hardware OEM.</p>
+
+<p>The value of this approach is easy to recognize when you consider <em>overlay
+planes</em>, the purpose of which is to composite multiple buffers together in
+the display hardware rather than the GPU. For example, consider a typical
+Android phone in portrait orientation, with the status bar on top, navigation
+bar at the bottom, and app content everywhere else. The contents for each layer
+are in separate buffers. You could handle composition using either of the
+following methods:</p>
+
+<ul>
+<li>Rendering the app content into a scratch buffer, then rendering the status
+bar over it, the navigation bar on top of that, and finally passing the scratch
+buffer to the display hardware.</li>
+<li>Passing all three buffers to the display hardware and telling it to read data
+from different buffers for different parts of the screen.</li>
+</ul>
+
+<p>The latter approach can be significantly more efficient.</p>
+
+<p>Display processor capabilities vary significantly. The number of overlays,
+whether layers can be rotated or blended, and restrictions on positioning and
+overlap can be difficult to express through an API. The HWC attempts to
+accommodate such diversity through a series of decisions:</p>
+
+<ol>
+<li>SurfaceFlinger provides HWC with a full list of layers and asks, "How do
+you want to handle this?"</li>
+<li>HWC responds by marking each layer as overlay or GLES composition.</li>
+<li>SurfaceFlinger takes care of any GLES composition, passing the output buffer
+to HWC, and lets HWC handle the rest.</li>
+</ol>
+
+<p>Since hardware vendors can custom tailor decision-making code, it's possible
+to get the best performance out of every device.</p>
+
+<p>Overlay planes may be less efficient than GL composition when nothing on the
+screen is changing. This is particularly true when overlay contents have
+transparent pixels and overlapping layers are blended together. In such cases,
+the HWC can choose to request GLES composition for some or all layers and retain
+the composited buffer. If SurfaceFlinger comes back asking to composite the same
+set of buffers, the HWC can continue to show the previously-composited scratch
+buffer. This can improve the battery life of an idle device.</p>
+
+<p>Devices running Android 4.4 and later typically support four overlay planes.
+Attempting to composite more layers than overlays causes the system to use GLES
+composition for some of them, meaning the number of layers used by an app can
+have a measurable impact on power consumption and performance.</p>
+
+<h2 id=virtual-displays>Virtual displays</h2>
+
+<p>SurfaceFlinger supports a primary display (i.e. what's built into your phone
+or tablet), an external display (such as a television connected through HDMI),
+and one or more virtual displays that make composited output available within
+the system. Virtual displays can be used to record the screen or send it over a
+network.</p>
+
+<p>A virtual display may share the same set of layers as the main display
+(the layer stack) or have its own set. There is no VSYNC for a virtual display,
+so the VSYNC for the primary display is used to trigger composition for all
+displays.</p>
+
+<p>In older versions of Android, virtual displays were always composited with
+GLES and the Hardware Composer managed composition for the primary display only.
+In Android 4.4, the Hardware Composer gained the ability to participate in
+virtual display composition.</p>
+
+<p>As you might expect, frames generated for a virtual display are written to a
+BufferQueue.</p>
+
+<h2 id=screenrecord>Case Study: screenrecord</h2>
+
+<p>The <a href="https://android.googlesource.com/platform/frameworks/av/+/marshmallow-release/cmds/screenrecord/">screenrecord
+command</a> allows you to record everything that appears on the screen as an
+.mp4 file on disk. To implement it, we have to receive composited frames from
+SurfaceFlinger, write them to the video encoder, and then write the encoded
+video data to a file. The video codecs are managed by a separate process
+(mediaserver) so we have to move large graphics buffers around the system. To
+make it more challenging, we're trying to record 60fps video at full resolution.
+The key to making this work efficiently is BufferQueue.</p>
+
+<p>The MediaCodec class allows an app to provide data as raw bytes in buffers,
+or through a <a href="{@docRoot}devices/graphics/arch-sh.html">Surface</a>. When
+screenrecord requests access to a video encoder, mediaserver creates a
+BufferQueue, connects itself to the consumer side, then passes the producer
+side back to screenrecord as a Surface.</p>
+
+<p>The screenrecord command then asks SurfaceFlinger to create a virtual display
+that mirrors the main display (i.e. it has all of the same layers), and directs
+it to send output to the Surface that came from mediaserver. In this case,
+SurfaceFlinger is the producer of buffers rather than the consumer.</p>
+
+<p>After the configuration is complete, screenrecord waits for encoded data to
+appear. As apps draw, their buffers travel to SurfaceFlinger, which composites
+them into a single buffer that gets sent directly to the video encoder in
+mediaserver. The full frames are never even seen by the screenrecord process.
+Internally, mediaserver has its own way of moving buffers around that also
+passes data by handle, minimizing overhead.</p>
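+
+<p>Applications can't talk to SurfaceFlinger the way screenrecord does, but a
+similar buffer path can be sketched with public APIs: a MediaCodec encoder
+provides the input Surface, and a virtual display (obtained here through a
+hypothetical <code>mediaProjection</code> instance from MediaProjectionManager)
+queues composited frames straight to it:</p>
+
+<pre>
+MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
+format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
+        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
+format.setInteger(MediaFormat.KEY_BIT_RATE, 6_000_000);
+format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
+format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
+
+MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
+encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
+Surface encoderInput = encoder.createInputSurface();  // producer side of the encoder's queue
+encoder.start();
+
+// Composited frames for this display are queued directly to the encoder's Surface.
+mediaProjection.createVirtualDisplay("recording", 1280, 720, 320,
+        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR, encoderInput, null, null);
+</pre>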
+
+<h2 id=simulate-secondary>Case Study: Simulate secondary displays</h2>
+
+<p>The WindowManager can ask SurfaceFlinger to create a visible layer for which
+SurfaceFlinger acts as the BufferQueue consumer. It's also possible to ask
+SurfaceFlinger to create a virtual display, for which SurfaceFlinger acts as
+the BufferQueue producer. What happens if you connect them, configuring a
+virtual display that renders to a visible layer?</p>
+
+<p>You create a closed loop, where the composited screen appears in a window.
+That window is now part of the composited output, so on the next refresh
+the composited image inside the window will show the window contents as well
+(and then it's
+<a href="https://en.wikipedia.org/wiki/Turtles_all_the_way_down">turtles all the
+way down</a>). To see this in action, enable
+<a href="http://developer.android.com/tools/index.html">Developer options</a> in
+settings, select <strong>Simulate secondary displays</strong>, and enable a
+window. For bonus points, use screenrecord to capture the act of enabling the
+display then play it back frame-by-frame.</p>
diff --git a/src/devices/graphics/arch-sh.jd b/src/devices/graphics/arch-sh.jd
new file mode 100644
index 0000000..2ef6c3c
--- /dev/null
+++ b/src/devices/graphics/arch-sh.jd
@@ -0,0 +1,105 @@
+page.title=Surface and SurfaceHolder
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>The
+<a href="http://developer.android.com/reference/android/view/Surface.html">Surface</a>
+class has been part of the public API since 1.0.  Its description simply says,
+"Handle onto a raw buffer that is being managed by the screen compositor."  The
+statement was accurate when initially written but falls well short of the mark
+on a modern system.</p>
+
+<p>The Surface represents the producer side of a buffer queue that is often (but
+not always!) consumed by SurfaceFlinger.  When you render onto a Surface, the
+result ends up in a buffer that gets shipped to the consumer.  A Surface is not
+simply a raw chunk of memory you can scribble on.</p>
+
+<p>The BufferQueue for a display Surface is typically configured for
+triple-buffering; but buffers are allocated on demand.  So if the producer
+generates buffers slowly enough -- maybe it's animating at 30fps on a 60fps
+display -- there might only be two allocated buffers in the queue.  This helps
+minimize memory consumption.  You can see a summary of the buffers associated
+with every layer in the <code>dumpsys SurfaceFlinger</code> output.</p>
+
+<h2 id="canvas">Canvas Rendering</h2>
+
+<p>Once upon a time, all rendering was done in software, and you can still do this
+today.  The low-level implementation is provided by the Skia graphics library.
+If you want to draw a rectangle, you make a library call, and it sets bytes in a
+buffer appropriately.  To ensure that a buffer isn't updated by two clients at
+once, or written to while being displayed, you have to lock the buffer to access
+it.  <code>lockCanvas()</code> locks the buffer and returns a Canvas to use for drawing,
+and <code>unlockCanvasAndPost()</code> unlocks the buffer and sends it to the compositor.</p>
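+
+<p>A minimal sketch of Canvas rendering on a SurfaceView's Surface
+(<code>surfaceView</code> and <code>paint</code> are assumed to exist; in real
+code, check for a null Canvas if the Surface isn't ready yet):</p>
+
+<pre>
+SurfaceHolder holder = surfaceView.getHolder();
+Canvas canvas = holder.lockCanvas();        // dequeues and locks a buffer
+try {
+    canvas.drawColor(Color.BLACK);
+    canvas.drawRect(10, 10, 200, 200, paint);
+} finally {
+    holder.unlockCanvasAndPost(canvas);     // queues the buffer to the compositor
+}
+</pre>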
+
+<p>As time went on, and devices with general-purpose 3D engines appeared, Android
+reoriented itself around OpenGL ES.  However, it was important to keep the old
+API working, for apps as well as app framework code, so an effort was made to
+hardware-accelerate the Canvas API.  As you can see from the charts on the
+<a href="http://developer.android.com/guide/topics/graphics/hardware-accel.html">Hardware
+Acceleration</a>
+page, this was a bit of a bumpy ride.  Note in particular that while the Canvas
+provided to a View's <code>onDraw()</code> method may be hardware-accelerated, the Canvas
+obtained when an app locks a Surface directly with <code>lockCanvas()</code> never is.</p>
+
+<p>When you lock a Surface for Canvas access, the "CPU renderer" connects to the
+producer side of the BufferQueue and does not disconnect until the Surface is
+destroyed.  Most other producers (like GLES) can be disconnected and reconnected
+to a Surface, but the Canvas-based "CPU renderer" cannot.  This means you can't
+draw on a surface with GLES or send it frames from a video decoder if you've
+ever locked it for a Canvas.</p>
+
+<p>The first time the producer requests a buffer from a BufferQueue, it is
+allocated and initialized to zeroes.  Initialization is necessary to avoid
+inadvertently sharing data between processes.  When you re-use a buffer,
+however, the previous contents will still be present.  If you repeatedly call
+<code>lockCanvas()</code> and <code>unlockCanvasAndPost()</code> without
+drawing anything, you'll cycle between previously-rendered frames.</p>
+
+<p>The Surface lock/unlock code keeps a reference to the previously-rendered
+buffer.  If you specify a dirty region when locking the Surface, it will copy
+the non-dirty pixels from the previous buffer.  There's a fair chance the buffer
+will be handled by SurfaceFlinger or HWC; but since we need to only read from
+it, there's no need to wait for exclusive access.</p>
+
+<p>The main non-Canvas way for an application to draw directly on a Surface is
+through OpenGL ES.  That's described in <a
+href="{@docRoot}devices/graphics/arch-egl-opengl.html">EGLSurfaces and OpenGL ES</a>.</p>
+
+<h2 id="surfaceholder">SurfaceHolder</h2>
+
+<p>Some things that work with Surfaces want a SurfaceHolder, notably SurfaceView.
+The original idea was that Surface represented the raw compositor-managed
+buffer, while SurfaceHolder was managed by the app and kept track of
+higher-level information like the dimensions and format.  The Java-language
+definition mirrors the underlying native implementation.  It's arguably no
+longer useful to split it this way, but it has long been part of the public API.</p>
+
+<p>Generally speaking, anything having to do with a View will involve a
+SurfaceHolder.  Some other APIs, such as MediaCodec, will operate on the Surface
+itself.  You can easily get the Surface from the SurfaceHolder, so hang on to
+the latter when you have it.</p>
+
+<p>APIs to get and set Surface parameters, such as the size and format, are
+implemented through SurfaceHolder.</p>
diff --git a/src/devices/graphics/arch-st.jd b/src/devices/graphics/arch-st.jd
new file mode 100644
index 0000000..573ec66
--- /dev/null
+++ b/src/devices/graphics/arch-st.jd
@@ -0,0 +1,206 @@
+page.title=SurfaceTexture
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+
+<p>The SurfaceTexture class was introduced in Android 3.0. Just as SurfaceView
+is the combination of a Surface and a View, SurfaceTexture is a rough
+combination of a Surface and a GLES texture (with a few caveats).</p>
+
+<p>When you create a SurfaceTexture, you are creating a BufferQueue for which
+your app is the consumer. When a new buffer is queued by the producer, your app
+is notified via callback (<code>onFrameAvailable()</code>). Your app calls
+<code>updateTexImage()</code>, which releases the previously-held buffer,
+acquires the new buffer from the queue, and makes some EGL calls to make the
+buffer available to GLES as an external texture.</p>
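+
+<p>A rough sketch of the consumer-side setup (the GLES calls must run on the
+thread that owns your EGL context; <code>requestRender()</code> is a
+placeholder for however you kick your render loop):</p>
+
+<pre>
+int[] tex = new int[1];
+GLES20.glGenTextures(1, tex, 0);
+GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
+
+SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);   // app owns the consumer side
+surfaceTexture.setOnFrameAvailableListener(st -&gt; requestRender());
+
+// Later, on the GL thread, once per new frame:
+surfaceTexture.updateTexImage();               // release the old buffer, acquire the new one
+float[] transform = new float[16];
+surfaceTexture.getTransformMatrix(transform);  // feed this to your shader's texture matrix
+long timestampNanos = surfaceTexture.getTimestamp();
+</pre>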
+
+
+<h2 id=ext_texture>External textures</h2>
+<p>External textures (<code>GL_TEXTURE_EXTERNAL_OES</code>) are not quite the
+same as textures created by GLES (<code>GL_TEXTURE_2D</code>): You have to
+configure your renderer a bit differently, and there are things you can't do
+with them. The key point is that you can render textured polygons directly
+from the data received by your BufferQueue. gralloc supports a wide variety of
+formats, so we need to guarantee the format of the data in the buffer is
+something GLES can recognize. To do so, when SurfaceTexture creates the
+BufferQueue, it sets the consumer usage flags to
+<code>GRALLOC_USAGE_HW_TEXTURE</code>, ensuring that any buffer created by
+gralloc would be usable by GLES.</p>
+
+<p>Because SurfaceTexture interacts with an EGL context, you must be careful to
+call its methods from the correct thread (as detailed in the class
+documentation).</p>
+
+<h2 id=time_transforms>Timestamps and transformations</h2>
+<p>If you look deeper into the class documentation, you will see a couple of odd
+calls. One call retrieves a timestamp, the other a transformation matrix, the
+value of each having been set by the previous call to
+<code>updateTexImage()</code>. It turns out that BufferQueue passes more than
+just a buffer handle to the consumer. Each buffer is accompanied by a timestamp
+and transformation parameters.</p>
+
+<p>The transformation is provided for efficiency. In some cases, the source data
+might be in the incorrect orientation for the consumer; but instead of rotating
+the data before sending it, we can send the data in its current orientation with
+a transform that corrects it. The transformation matrix can be merged with other
+transformations at the point the data is used, minimizing overhead.</p>
+
+<p>The timestamp is useful for certain buffer sources. For example, suppose you
+connect the producer interface to the output of the camera (with
+<code>setPreviewTexture()</code>). To create a video, you need to set the
+presentation timestamp for each frame; but you want to base that on the time
+when the frame was captured, not the time when the buffer was received by your
+app. The timestamp provided with the buffer is set by the camera code, resulting
+in a more consistent series of timestamps.</p>
+
+<h2 id=surfacet>SurfaceTexture and Surface</h2>
+
+<p>If you look closely at the API you'll see the only way for an application
+to create a plain Surface is through a constructor that takes a SurfaceTexture
+as the sole argument. (Prior to API 11, there was no public constructor for
+Surface at all.) This might seem a bit backward if you view SurfaceTexture as a
+combination of a Surface and a texture.</p>
+
+<p>Under the hood, SurfaceTexture is called GLConsumer, which more accurately
+reflects its role as the owner and consumer of a BufferQueue. When you create a
+Surface from a SurfaceTexture, what you're doing is creating an object that
+represents the producer side of the SurfaceTexture's BufferQueue.</p>
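+
+<p>For example, to feed decoded video into a SurfaceTexture you might do
+something like this (the <code>mediaPlayer</code> instance is assumed):</p>
+
+<pre>
+Surface producerSurface = new Surface(surfaceTexture);  // producer side of the same queue
+mediaPlayer.setSurface(producerSurface);                 // frames arrive via onFrameAvailable()
+</pre>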
+
+<h2 id=continuous_capture>Case Study: Grafika's continuous capture</h2>
+
+<p>The camera can provide a stream of frames suitable for recording as a movie.
+To display it on screen, you create a SurfaceView, pass the Surface to
+<code>setPreviewDisplay()</code>, and let the producer (camera) and consumer
+(SurfaceFlinger) do all the work. To record the video, you create a Surface with
+MediaCodec's <code>createInputSurface()</code>, pass that to the camera, and
+again sit back and relax. To show and record the video at the same time, you have
+to get more involved.</p>
+
+<p>The <em>continuous capture</em> activity displays video from the camera as
+the video is being recorded. In this case, encoded video is written to a
+circular buffer in memory that can be saved to disk at any time. It's
+straightforward to implement so long as you keep track of where everything is.
+</p>
+
+<p>This flow involves three BufferQueues: one created by the app, one created by
+SurfaceFlinger, and one created by mediaserver:</p>
+<ul>
+<li><strong>Application</strong>. The app uses a SurfaceTexture to receive
+frames from Camera, converting them to an external GLES texture.</li>
+<li><strong>SurfaceFlinger</strong>. The app declares a SurfaceView, which we
+use to display the frames.</li>
+<li><strong>MediaServer</strong>. You configure a MediaCodec encoder with an
+input Surface to create the video.</li>
+</ul>
+
+<img src="images/continuous_capture_activity.png" alt="Grafika continuous
+capture activity" />
+
+<p class="img-caption"><strong>Figure 1.</strong>Grafika's continuous capture
+activity. Arrows indicate data propagation from the camera and BufferQueues are
+in color (producers are teal, consumers are green).</p>
+
+<p>Encoded H.264 video goes to a circular buffer in RAM in the app process, and
+is written to an MP4 file on disk using the MediaMuxer class when the capture
+button is hit.</p>
+
+<p>All three of the BufferQueues are handled with a single EGL context in the
+app, and the GLES operations are performed on the UI thread.  Doing the
+SurfaceView rendering on the UI thread is generally discouraged, but since we're
+doing simple operations that are handled asynchronously by the GLES driver we
+should be fine.  (If the video encoder locks up and we block trying to dequeue a
+buffer, the app will become unresponsive. But at that point, we're probably
+failing anyway.)  The handling of the encoded data -- managing the circular
+buffer and writing it to disk -- is performed on a separate thread.</p>
+
+<p>The bulk of the configuration happens in the SurfaceView's <code>surfaceCreated()</code>
+callback.  The EGLContext is created, and EGLSurfaces are created for the
+display and for the video encoder.  When a new frame arrives, we tell
+SurfaceTexture to acquire it and make it available as a GLES texture, then
+render it with GLES commands on each EGLSurface (forwarding the transform and
+timestamp from SurfaceTexture).  The encoder thread pulls the encoded output
+from MediaCodec and stashes it in memory.</p>
+
+<h2 id=st_vid_play>Secure texture video playback</h2>
+<p>Android N supports GPU post-processing of protected video content. This
+allows using the GPU for complex non-linear video effects (such as warps),
+mapping protected video content onto textures for use in general graphics scenes
+(e.g., using OpenGL ES), and virtual reality (VR).</p>
+
+<img src="images/graphics_secure_texture_playback.png" alt="Secure Texture Video Playback" />
+<p class="img-caption"><strong>Figure 2.</strong>Secure texture video playback</p>
+
+<p>Support is enabled using the following two extensions:</p>
+<ul>
+<li><strong>EGL extension</strong>
+(<code><a href="https://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_protected_content.txt">EGL_EXT_protected_content</code></a>).
+Allows the creation of protected GL contexts and surfaces, which can both
+operate on protected content.</li>
+<li><strong>GLES extension</strong>
+(<code><a href="https://www.khronos.org/registry/gles/extensions/EXT/EXT_protected_textures.txt">GL_EXT_protected_textures</code></a>).
+Allows tagging textures as protected so they can be used as framebuffer texture
+attachments.</li>
+</ul>
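+
+<p>As a sketch only (check for the extension strings at runtime; the attribute
+value comes from the <code>EGL_EXT_protected_content</code> spec and is not a
+platform constant), a protected context and window surface might be created
+like this:</p>
+
+<pre>
+private static final int EGL_PROTECTED_CONTENT_EXT = 0x32C0;  // from the extension spec
+
+int[] ctxAttribs = {
+    EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
+    EGL_PROTECTED_CONTENT_EXT, EGL14.EGL_TRUE,
+    EGL14.EGL_NONE
+};
+EGLContext protectedCtx =
+        EGL14.eglCreateContext(dpy, config, EGL14.EGL_NO_CONTEXT, ctxAttribs, 0);
+
+int[] surfAttribs = { EGL_PROTECTED_CONTENT_EXT, EGL14.EGL_TRUE, EGL14.EGL_NONE };
+EGLSurface protectedSurface =
+        EGL14.eglCreateWindowSurface(dpy, config, surface, surfAttribs, 0);
+</pre>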
+
+<p>Android N also updates SurfaceTexture and ACodec
+(<code>libstagefright.so</code>) to allow protected content to be sent even if
+the window surface does not queue to the window composer (i.e., SurfaceFlinger)
+and provide a protected video surface for use within a protected context. This
+is done by setting the correct protected consumer bits
+(<code>GRALLOC_USAGE_PROTECTED</code>) on surfaces created in a protected
+context (verified by ACodec).</p>
+
+<p>These changes benefit app developers who can create apps that perform
+enhanced video effects or apply video textures using protected content in GL
+(for example, in VR), end users who can view high-value video content (such as
+movies and TV shows) in GL environment (for example, in VR), and OEMs who can
+achieve higher sales due to added device functionality (for example, watching HD
+movies in VR). The new EGL and GLES extensions can be used by system-on-chip
+(SoC) providers and other vendors, and are currently implemented on the
+Qualcomm MSM8994 SoC chipset used in the Nexus 6P.</p>
+
+<p>Secure texture video playback sets the foundation for strong DRM
+implementation in the OpenGL ES environment. Without a strong DRM implementation
+such as Widevine Level 1, many content providers would not allow rendering of
+their high-value content in the OpenGL ES environment, preventing important VR
+use cases such as watching DRM protected content in VR.</p>
+
+<p>AOSP includes framework code for secure texture video playback; driver
+support is up to the vendor. Partners must implement the
+<code>EGL_EXT_protected_content</code> and
+<code>GL_EXT_protected_textures</code> extensions. When using your own codec
+library (to replace libstagefright), note the changes in
+<code>/frameworks/av/media/libstagefright/SurfaceUtils.cpp</code> that allow
+buffers marked with <code>GRALLOC_USAGE_PROTECTED</code> to be sent to
+ANativeWindows (even if the ANativeWindow does not queue directly to the window
+composer) as long as the consumer usage bits contain
+<code>GRALLOC_USAGE_PROTECTED</code>. For detailed documentation on implementing
+the extensions, refer to the Khronos Registry
+(<a href="https://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_protected_content.txt">EGL_EXT_protected_content</a>,
+<a href="https://www.khronos.org/registry/gles/extensions/EXT/EXT_protected_textures.txt">GL_EXT_protected_textures</a>).</p>
+
+<p>Partners may also need to make hardware changes to ensure that protected
+memory mapped onto the GPU remains protected and unreadable by unprotected
+code.</p>
diff --git a/src/devices/graphics/arch-sv-glsv.jd b/src/devices/graphics/arch-sv-glsv.jd
new file mode 100644
index 0000000..e8df719
--- /dev/null
+++ b/src/devices/graphics/arch-sv-glsv.jd
@@ -0,0 +1,229 @@
+page.title=SurfaceView and GLSurfaceView
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>The Android app framework UI is based on a hierarchy of objects that start
+with View. All UI elements go through a complicated measurement and layout
+process that fits them into a rectangular area, and all visible View objects are
+rendered to a SurfaceFlinger-created Surface that was set up by the
+WindowManager when the app was brought to the foreground. The app's UI thread
+performs layout and rendering to a single buffer (regardless of the number of
+Layouts and Views and whether or not the Views are hardware-accelerated).</p>
+
+<p>A SurfaceView takes the same parameters as other views, so you can give it a
+position and size, and fit other elements around it. When it comes time to
+render, however, the contents are completely transparent; the View part of a
+SurfaceView is just a see-through placeholder.</p>
+
+<p>When the SurfaceView's View component is about to become visible, the
+framework asks the WindowManager to ask SurfaceFlinger to create a new Surface.
+(This doesn't happen synchronously, which is why you should provide a callback
+that notifies you when the Surface creation finishes.) By default, the new
+Surface is placed behind the app UI Surface, but the default Z-ordering can be
+overridden to put the Surface on top.</p>
+
+<p>Whatever you render onto this Surface will be composited by SurfaceFlinger,
+not by the app. This is the real power of SurfaceView: The Surface you get can
+be rendered by a separate thread or a separate process, isolated from any
+rendering performed by the app UI, and the buffers go directly to
+SurfaceFlinger. You can't totally ignore the UI thread&mdash;you still have to
+coordinate with the Activity lifecycle and you may need to adjust something if
+the size or position of the View changes&mdash;but you have a whole Surface all
+to yourself. Blending with the app UI and other layers is handled by the
+Hardware Composer.</p>
+
+<p>The new Surface is the producer side of a BufferQueue, whose consumer is a
+SurfaceFlinger layer. You can update the Surface with any mechanism that can
+feed a BufferQueue, such as surface-supplied Canvas functions, attach an
+EGLSurface and draw on it with GLES, or configure a MediaCodec video decoder to
+write to it.</p>
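+
+<p>For example, directing a video decoder at the SurfaceView's Surface is a
+single <code>configure()</code> call (the MediaFormat and extractor setup are
+assumed to have happened already):</p>
+
+<pre>
+MediaCodec decoder = MediaCodec.createDecoderByType(mimeType);
+decoder.configure(format, surfaceView.getHolder().getSurface(), null, 0);
+decoder.start();
+// Decoded frames are queued straight to the SurfaceFlinger layer; the app
+// never touches the pixel data.
+</pre>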
+
+<h2 id=composition>Composition and the Hardware Scaler</h2>
+
+<p>Let's take a closer look at <code>dumpsys SurfaceFlinger</code>. The
+following output was taken while playing a movie in Grafika's "Play video
+(SurfaceView)" activity on a Nexus 5 in portrait orientation; the video is QVGA
+(320x240):</p>
+<pre>
+    type    |          source crop              |           frame           name
+------------+-----------------------------------+--------------------------------
+        HWC | [    0.0,    0.0,  320.0,  240.0] | [   48,  411, 1032, 1149] SurfaceView
+        HWC | [    0.0,   75.0, 1080.0, 1776.0] | [    0,   75, 1080, 1776] com.android.grafika/com.android.grafika.PlayMovieSurfaceActivity
+        HWC | [    0.0,    0.0, 1080.0,   75.0] | [    0,    0, 1080,   75] StatusBar
+        HWC | [    0.0,    0.0, 1080.0,  144.0] | [    0, 1776, 1080, 1920] NavigationBar
+  FB TARGET | [    0.0,    0.0, 1080.0, 1920.0] | [    0,    0, 1080, 1920] HWC_FRAMEBUFFER_TARGET
+</pre>
+
+<ul>
+<li>The <strong>list order</strong> is back to front: the SurfaceView's Surface
+is in the back, the app UI layer sits on top of that, followed by the status and
+navigation bars that are above everything else.</li>
+<li>The <strong>source crop</strong> values indicate the portion of the
+Surface's buffer that SurfaceFlinger will display. The app UI was given a
+Surface equal to the full size of the display (1080x1920), but as there is no
+point rendering and compositing pixels that will be obscured by the status and
+navigation bars, the source is cropped to a rectangle that starts 75 pixels from
+the top and ends 144 pixels from the bottom. The status and navigation bars have
+smaller Surfaces, and the source crop describes a rectangle that begins at the
+top left (0,0) and spans their content.</li>
+<li>The <strong>frame</strong> values specify the rectangle where pixels
+appear on the display. For the app UI layer, the frame matches the source crop
+because we are copying (or overlaying) a portion of a display-sized layer to the
+same location in another display-sized layer. For the status and navigation
+bars, the size of the frame rectangle is the same, but the position is adjusted
+so the navigation bar appears at the bottom of the screen.</li>
+<li>The <strong>SurfaceView layer</strong> holds our video content. The source crop
+matches the video size, which SurfaceFlinger knows because the MediaCodec
+decoder (the buffer producer) is dequeuing buffers that size. The frame
+rectangle has a completely different size&mdash;984x738.</li>
+</ul>
+
+<p>SurfaceFlinger handles size differences by scaling the buffer contents to
+fill the frame rectangle, upscaling or downscaling as needed. This particular
+size was chosen because it has the same aspect ratio as the video (4:3), and is
+as wide as possible given the constraints of the View layout (which includes
+some padding at the edges of the screen for aesthetic reasons).</p>
+
+<p>If you started playing a different video on the same Surface, the underlying
+BufferQueue would reallocate buffers to the new size automatically, and
+SurfaceFlinger would adjust the source crop. If the aspect ratio of the new
+video is different, the app would need to force a re-layout of the View to match
+it, which causes the WindowManager to tell SurfaceFlinger to update the frame
+rectangle.</p>
+
+<p>If you're rendering on the Surface through some other means (such as GLES),
+you can set the Surface size using the <code>SurfaceHolder#setFixedSize()</code>
+call. For example, you could configure a game to always render at 1280x720,
+which would significantly reduce the number of pixels that must be touched to
+fill the screen on a 2560x1440 tablet or 4K television. The display processor
+handles the scaling. If you don't want to letter- or pillar-box your game, you
+could adjust the game's aspect ratio by setting the size so that the narrow
+dimension is 720 pixels but the long dimension is set to maintain the aspect
+ratio of the physical display (e.g. 1152x720 to match a 2560x1600 display).
+For an example of this approach, see Grafika's "Hardware scaler exerciser"
+activity.</p>
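+
+<p>As a sketch of the fixed-size approach (inside an Activity, with a SurfaceView
+field named <code>mSurfaceView</code>; the sizes match the examples above):</p>
+
+<pre>
+// Render at 720p and let the display processor scale it to the panel.
+mSurfaceView.getHolder().setFixedSize(1280, 720);
+
+// Or keep the panel's aspect ratio with a 720-pixel narrow dimension.
+Point panel = new Point();
+getWindowManager().getDefaultDisplay().getRealSize(panel);
+float aspect = (float) Math.max(panel.x, panel.y) / Math.min(panel.x, panel.y);
+int longDim = Math.round(720 * aspect);       // 1152 for a 2560x1600 panel
+mSurfaceView.getHolder().setFixedSize(longDim, 720);
+</pre>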
+
+<h2 id=glsurfaceview>GLSurfaceView</h2>
+
+<p>The GLSurfaceView class provides helper classes for managing EGL contexts,
+inter-thread communication, and interaction with the Activity lifecycle. That's
+it. You do not need to use a GLSurfaceView to use GLES.</p>
+
+<p>For example, GLSurfaceView creates a thread for rendering and configures an
+EGL context there. The state is cleaned up automatically when the activity
+pauses. Most apps won't need to know anything about EGL to use GLES with
+GLSurfaceView.</p>
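+
+<p>A minimal sketch of that usage (inside an Activity's <code>onCreate()</code>;
+the clear color is arbitrary) could be:</p>
+
+<pre>
+GLSurfaceView glView = new GLSurfaceView(this);
+glView.setEGLContextClientVersion(2);             // request a GLES 2.0 context
+glView.setRenderer(new GLSurfaceView.Renderer() {
+    @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { }
+    @Override public void onSurfaceChanged(GL10 gl, int width, int height) {
+        GLES20.glViewport(0, 0, width, height);
+    }
+    @Override public void onDrawFrame(GL10 gl) {
+        // Runs on GLSurfaceView's render thread with its EGL context current.
+        GLES20.glClearColor(0f, 0f, 0.4f, 1f);
+        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
+    }
+});
+setContentView(glView);
+// Forward the Activity lifecycle with glView.onPause() / glView.onResume().
+</pre>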
+
+<p>In most cases, GLSurfaceView is very helpful and can make working with GLES
+easier. In some situations, it can get in the way. Use it if it helps, don't
+if it doesn't.</p>
+
+<h2 id=activity>SurfaceView and the Activity Lifecycle</h2>
+
+<p>When using a SurfaceView, it's considered good practice to render the Surface
+from a thread other than the main UI thread. This raises some questions about
+the interaction between that thread and the Activity lifecycle.</p>
+
+<p>For an Activity with a SurfaceView, there are two separate but interdependent
+state machines:</p>
+
+<ol>
+<li>Application onCreate/onResume/onPause</li>
+<li>Surface created/changed/destroyed</li>
+</ol>
+
+<p>When the Activity starts, you get callbacks in this order:</p>
+
+<ul>
+<li>onCreate</li>
+<li>onResume</li>
+<li>surfaceCreated</li>
+<li>surfaceChanged</li>
+</ul>
+
+<p>If you hit the Back button, you get:</p>
+
+<ul>
+<li>onPause</li>
+<li>surfaceDestroyed (called just before the Surface goes away)</li>
+</ul>
+
+<p>If you rotate the screen, the Activity is torn down and recreated and you
+get the full cycle. You can tell it's a quick restart by checking
+<code>isFinishing()</code>. It might be possible to start/stop an Activity so
+quickly that <code>surfaceCreated()</code> actually happens after
+<code>onPause()</code>.</p>
+
+<p>If you tap the power button to blank the screen, you get only
+<code>onPause()</code>&mdash;no <code>surfaceDestroyed()</code>. The Surface
+remains alive, and rendering can continue. You can even keep getting
+Choreographer events if you continue to request them. If you have a lock
+screen that forces a different orientation, your Activity may be restarted when
+the device is unblanked; but if not, you can come out of screen-blank with the
+same Surface you had before.</p>
+
+<p>This raises a fundamental question when using a separate renderer thread with
+SurfaceView: Should the lifespan of the thread be tied to that of the Surface or
+the Activity? The answer depends on what you want to happen when the screen
+goes blank: (1) start/stop the thread on Activity start/stop or (2) start/stop
+the thread on Surface create/destroy.</p>
+
+<p>Option 1 interacts well with the app lifecycle. We start the renderer thread
+in <code>onResume()</code> and stop it in <code>onPause()</code>. It gets a bit
+awkward when creating and configuring the thread because sometimes the Surface
+will already exist and sometimes it won't (e.g. it's still alive after toggling
+the screen with the power button). We have to wait for the surface to be
+created before we do some initialization in the thread, but we can't simply do
+it in the <code>surfaceCreated()</code> callback because that won't fire again
+if the Surface didn't get recreated. So we need to query or cache the Surface
+state, and forward it to the renderer thread.</p>
+
+<p class="note"><strong>Note:</strong> Be careful when passing objects
+between threads. It is best to pass the Surface or SurfaceHolder through a
+Handler message (rather than just stuffing it into the thread) to avoid issues
+on multi-core systems. For details, refer to
+<a href="http://developer.android.com/training/articles/smp.html">Android
+SMP Primer</a>.</p>
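+
+<p>Putting Option 1 together, a rough sketch might cache the SurfaceHolder in the
+Activity and forward it through a Handler message; <code>RenderThread</code>, its
+message constant, and its helper methods are hypothetical names used only for
+illustration:</p>
+
+<pre>
+// UI thread: cache the Surface state and forward it to the renderer thread.
+@Override
+public void surfaceCreated(SurfaceHolder holder) {
+    mSurfaceHolder = holder;                        // remembered for later onResume() calls
+    if (mRenderThread != null) {
+        mRenderThread.getHandler()
+                .obtainMessage(RenderThread.MSG_SURFACE_AVAILABLE, holder)
+                .sendToTarget();
+    }
+}
+
+@Override
+protected void onResume() {
+    super.onResume();
+    mRenderThread = new RenderThread();             // hypothetical renderer thread
+    mRenderThread.start();
+    mRenderThread.waitUntilReady();                 // wait for its Looper and Handler
+    if (mSurfaceHolder != null) {                   // Surface survived a screen blank
+        mRenderThread.getHandler()
+                .obtainMessage(RenderThread.MSG_SURFACE_AVAILABLE, mSurfaceHolder)
+                .sendToTarget();
+    }
+}
+
+@Override
+protected void onPause() {
+    super.onPause();
+    mRenderThread.shutdown();                       // joins the thread before returning
+    mRenderThread = null;
+}
+</pre>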
+
+<p>Option 2 is appealing because the Surface and the renderer are logically
+intertwined. We start the thread after the Surface has been created, which
+avoids some inter-thread communication concerns, and Surface created/changed
+messages are simply forwarded. We need to ensure rendering stops when the
+screen goes blank and resumes when it un-blanks; this could be a simple matter
+of telling Choreographer to stop invoking the frame draw callback. Our
+<code>onResume()</code> will need to resume the callbacks if and only if the
+renderer thread is running. It may not be so trivial though&mdash;if we animate
+based on elapsed time between frames, we could have a very large gap when the
+next event arrives; an explicit pause/resume message may be desirable.</p>
+
+<p class="note"><strong>Note:</strong> For an example of Option 2, see Grafika's
+"Hardware scaler exerciser."</p>
+
+<p>Both options are primarily concerned with how the renderer thread is
+configured and whether it's executing. A related concern is extracting state
+from the thread when the Activity is killed (in <code>onPause()</code> or
+<code>onSaveInstanceState()</code>); in such cases, Option 1 works best because
+after the renderer thread has been joined its state can be accessed without
+synchronization primitives.</p>
diff --git a/src/devices/graphics/arch-tv.jd b/src/devices/graphics/arch-tv.jd
new file mode 100644
index 0000000..19eb8cc
--- /dev/null
+++ b/src/devices/graphics/arch-tv.jd
@@ -0,0 +1,146 @@
+page.title=TextureView
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+
+<p>The TextureView class, introduced in Android 4.0, is the most complex of
+the View objects discussed here, combining a View with a SurfaceTexture.</p>
+
+<h2 id=render_gles>Rendering with GLES</h2>
+<p>Recall that the SurfaceTexture is a "GL consumer", consuming buffers of graphics
+data and making them available as textures.  TextureView wraps a SurfaceTexture,
+taking over the responsibility of responding to the callbacks and acquiring new
+buffers.  The arrival of new buffers causes TextureView to issue a View
+invalidate request.  When asked to draw, the TextureView uses the contents of
+the most recently received buffer as its data source, rendering wherever and
+however the View state indicates it should.</p>
+
+<p>You can render on a TextureView with GLES just as you would on a SurfaceView:
+simply pass the SurfaceTexture to the EGL window creation call.  However, doing so
+exposes a potential problem.</p>
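+
+<p>For example, with the EGL14 bindings (a sketch assuming <code>eglDisplay</code>,
+<code>eglConfig</code>, and <code>eglContext</code> have already been set up, and
+<code>surfaceTexture</code> was delivered by the TextureView's
+SurfaceTextureListener):</p>
+
+<pre>
+// A SurfaceTexture is acceptable as the "window object" for an EGL window surface.
+int[] surfaceAttribs = { EGL14.EGL_NONE };
+EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
+        eglDisplay, eglConfig, surfaceTexture, surfaceAttribs, 0);
+EGL14.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);
+// ... issue GLES drawing calls ...
+EGL14.eglSwapBuffers(eglDisplay, eglSurface);   // queues a buffer to the TextureView's BufferQueue
+</pre>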
+
+<p>In most of what we've looked at, the BufferQueues have passed buffers between
+different processes.  When rendering to a TextureView with GLES, both producer
+and consumer are in the same process, and they might even be handled on a single
+thread.  Suppose we submit several buffers in quick succession from the UI
+thread.  The EGL buffer swap call will need to dequeue a buffer from the
+BufferQueue, and it will stall until one is available.  There won't be any
+available until the consumer acquires one for rendering, but that also happens
+on the UI thread… so we're stuck.</p>
+
+<p>The solution is to have BufferQueue ensure there is always a buffer
+available to be dequeued, so the buffer swap never stalls.  One way to guarantee
+this is to have BufferQueue discard the contents of the previously-queued buffer
+when a new buffer is queued, and to place restrictions on minimum buffer counts
+and maximum acquired buffer counts.  (If your queue has three buffers, and all
+three buffers are acquired by the consumer, then there's nothing to dequeue and
+the buffer swap call must hang or fail.  So we need to prevent the consumer from
+acquiring more than two buffers at once.)  Dropping buffers is usually
+undesirable, so it's only enabled in specific situations, such as when the
+producer and consumer are in the same process.</p>
+
+<h2 id=surface_or_texture>SurfaceView or TextureView?</h2>
+<p>SurfaceView and TextureView fill similar roles, but have very different
+implementations.  To decide which is best requires an understanding of the
+trade-offs.</p>
+
+<p>Because TextureView is a proper citizen of the View hierarchy, it behaves like
+any other View, and can overlap or be overlapped by other elements.  You can
+perform arbitrary transformations and retrieve the contents as a bitmap with
+simple API calls.</p>
+
+<p>The main strike against TextureView is the performance of the composition step.
+With SurfaceView, the content is written to a separate layer that SurfaceFlinger
+composites, ideally with an overlay.  With TextureView, the View composition is
+always performed with GLES, and updates to its contents may cause other View
+elements to redraw as well (e.g. if they're positioned on top of the
+TextureView).  After the View rendering completes, the app UI layer must then be
+composited with other layers by SurfaceFlinger, so you're effectively
+compositing every visible pixel twice.  For a full-screen video player, or any
+other application that is effectively just UI elements layered on top of video,
+SurfaceView offers much better performance.</p>
+
+<p>As noted earlier, DRM-protected video can be presented only on an overlay plane.
+ Video players that support protected content must be implemented with
+SurfaceView.</p>
+
+<h2 id=grafika>Case Study: Grafika's Play Video (TextureView)</h2>
+
+<p>Grafika includes a pair of video players, one implemented with TextureView, the
+other with SurfaceView.  The video decoding portion, which just sends frames
+from MediaCodec to a Surface, is the same for both.  The most interesting
+differences between the implementations are the steps required to present the
+correct aspect ratio.</p>
+
+<p>While the SurfaceView version requires a custom implementation of FrameLayout,
+resizing the TextureView's content is a simple matter of configuring a transformation matrix with
+<code>TextureView#setTransform()</code>.  For the former, you're sending new
+window position and size values to SurfaceFlinger through WindowManager; for
+the latter, you're just rendering it differently.</p>
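+
+<p>A sketch of that transform, close to what Grafika does (assuming the video
+dimensions are already known), might be:</p>
+
+<pre>
+// Scale the TextureView's content so the video keeps its aspect ratio inside the view.
+void adjustAspectRatio(TextureView textureView, int videoWidth, int videoHeight) {
+    int viewWidth = textureView.getWidth();
+    int viewHeight = textureView.getHeight();
+    double aspect = (double) videoHeight / videoWidth;
+
+    int newWidth, newHeight;
+    if (viewHeight > (int) (viewWidth * aspect)) {
+        newWidth = viewWidth;                      // limited by narrow width
+        newHeight = (int) (viewWidth * aspect);
+    } else {
+        newWidth = (int) (viewHeight / aspect);    // limited by short height
+        newHeight = viewHeight;
+    }
+    int xoff = (viewWidth - newWidth) / 2;
+    int yoff = (viewHeight - newHeight) / 2;
+
+    Matrix txform = new Matrix();
+    txform.setScale((float) newWidth / viewWidth, (float) newHeight / viewHeight);
+    txform.postTranslate(xoff, yoff);
+    textureView.setTransform(txform);
+}
+</pre>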
+
+<p>Otherwise, both implementations follow the same pattern.  Once the Surface has
+been created, playback is enabled.  When "play" is hit, a video decoding thread
+is started, with the Surface as the output target.  After that, the app code
+doesn't have to do anything&mdash;composition and display are handled by either
+SurfaceFlinger (for the SurfaceView) or TextureView.</p>
+
+<h2 id=decode>Case Study: Grafika's Double Decode</h2>
+
+<p>This activity demonstrates manipulation of the SurfaceTexture inside a
+TextureView.</p>
+
+<p>The basic structure of this activity is a pair of TextureViews that show two
+different videos playing side-by-side.  To simulate the needs of a
+videoconferencing app, we want to keep the MediaCodec decoders alive when the
+activity is paused and resumed for an orientation change.  The trick is that you
+can't change the Surface that a MediaCodec decoder uses without fully
+reconfiguring it, which is a fairly expensive operation; so we want to keep the
+Surface alive.  The Surface is just a handle to the producer interface in the
+SurfaceTexture's BufferQueue, and the SurfaceTexture is managed by the
+TextureView, so we also need to keep the SurfaceTexture alive.  So how do we deal
+with the TextureView getting torn down?</p>
+
+<p>It just so happens TextureView provides a <code>setSurfaceTexture()</code> call
+that does exactly what we want.  We obtain references to the SurfaceTextures
+from the TextureViews and save them in a static field.  When the activity is
+shut down, we return "false" from the <code>onSurfaceTextureDestroyed()</code>
+callback to prevent destruction of the SurfaceTexture.  When the activity is
+restarted, we stuff the old SurfaceTexture into the new TextureView.  The
+TextureView class takes care of creating and destroying the EGL contexts.</p>
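+
+<p>A sketch of that listener logic (the field names and <code>startDecoder()</code>
+are illustrative, not part of the API):</p>
+
+<pre>
+// Held in a static field so it survives the Activity restart.
+private static SurfaceTexture sSavedSurfaceTexture;
+
+@Override
+public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
+    if (sSavedSurfaceTexture == null) {
+        sSavedSurfaceTexture = st;                 // first launch: keep the new SurfaceTexture
+        startDecoder(new Surface(st));             // hypothetical decoder hookup
+    } else {
+        mTextureView.setSurfaceTexture(sSavedSurfaceTexture);   // reattach the old one
+    }
+}
+
+@Override
+public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
+    // Returning false tells TextureView not to release the SurfaceTexture,
+    // so the MediaCodec decoder can keep feeding its Surface across the restart.
+    return false;
+}
+</pre>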
+
+<p>Each video decoder is driven from a separate thread.  At first glance it might
+seem like we need EGL contexts local to each thread; but remember the buffers
+with decoded output are actually being sent from mediaserver to our
+BufferQueue consumers (the SurfaceTextures).  The TextureViews take care of the
+rendering for us, and they execute on the UI thread.</p>
+
+<p>Implementing this activity with SurfaceView would be a bit harder.  We can't
+just create a pair of SurfaceViews and direct the output to them, because the
+Surfaces would be destroyed during an orientation change.  Besides, that would
+add two layers, and limitations on the number of available overlays strongly
+motivate us to keep the number of layers to a minimum.  Instead, we'd want to
+create a pair of SurfaceTextures to receive the output from the video decoders,
+and then perform the rendering in the app, using GLES to render two textured
+quads onto the SurfaceView's Surface.</p>
diff --git a/src/devices/graphics/arch-vulkan.jd b/src/devices/graphics/arch-vulkan.jd
new file mode 100644
index 0000000..417873d
--- /dev/null
+++ b/src/devices/graphics/arch-vulkan.jd
@@ -0,0 +1,108 @@
+page.title=Vulkan
+@jd:body
+
+<!--
+    Copyright 2016 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>Android 7.0 adds support for
+<a href="https://www.khronos.org/vulkan/">Vulkan</a>, a low-overhead,
+cross-platform API for high-performance 3D graphics. Like OpenGL ES, Vulkan
+provides tools for creating high-quality, real-time graphics in applications.
+Vulkan advantages include reductions in CPU overhead and support for the
+<a href="https://www.khronos.org/spir">SPIR-V Binary Intermediate</a> language.
+</p>
+
+<p>System on chip (SoC) vendors such as GPU Independent Hardware Vendors (IHVs)
+can write Vulkan drivers for Android; OEMs simply need to integrate these
+drivers for specific devices. For details on how a Vulkan driver interacts with
+the system, how GPU-specific tools should be installed, and Android-specific
+requirements, see <a href="{@docRoot}devices/graphics/implement-vulkan.html">Implementing
+Vulkan</a>.</p>
+
+<p>Application developers can take advantage of Vulkan to create apps that
+execute commands on the GPU with significantly reduced overhead. Vulkan also
+provides a more direct mapping to the capabilities found in current graphics
+hardware, minimizing opportunities for driver bugs and reducing developer
+testing time (e.g. less time required to troubleshoot OpenGL bugs).</p>
+
+<p>For general information on Vulkan, refer to the
+<a href="http://khr.io/vulkanlaunchoverview">Vulkan Overview</a> or see the list
+of <a href="#resources">Resources</a> below.</p>
+
+<h2 id=vulkan_components>Vulkan components</h2>
+<p>Vulkan support includes the following components:</p>
+<p><img src="{@docRoot}devices/graphics/images/ape_graphics_vulkan.png"></p>
+<p class=caption>Figure 1: Vulkan components</p>
+
+<ul>
+<li><strong>Vulkan Runtime </strong><em>(provided by Android)</em>. A native
+library (<code>libvulkan.so</code>) that provides a new public native API
+called <a href="https://www.khronos.org/vulkan">Vulkan</a>. Most functionality
+is implemented by a driver provided by the GPU vendor; the runtime wraps the
+driver, provides API interception capabilities (for debugging and other
+developer tools), and manages the interaction between the driver and platform
+dependencies such as BufferQueue.</li>
+<li><strong>Vulkan Driver </strong><em>(provided by SoC)</em>. Maps the Vulkan
+API onto hardware-specific GPU commands and interactions with the kernel
+graphics driver. The Vulkan and OpenGL ES API drivers are expected to be
+serviced by the same single kernel driver.</li>
+</ul>
+
+<h2 id=modified_components>Modified components</h2>
+<p>Android 7.0 modifies the following existing graphics components to support
+Vulkan:</p>
+
+<ul>
+<li><strong>BufferQueue</strong>. The Vulkan Runtime interacts with the existing
+BufferQueue component via the existing <code>ANativeWindow</code> interface.
+Includes minor modifications (new enum values and new methods) to
+<code>ANativeWindow</code> and BufferQueue, but no architectural changes.</li>
+<li><strong>Gralloc HAL</strong>. Includes a new, optional interface for
+discovering whether a given format can be used for a particular
+producer/consumer combination without actually allocating a buffer.</li>
+</ul>
+
+<p>For details on these components, see
+<a href="{@docRoot}devices/graphics/arch-bq-gralloc.html">BufferQueue and
+gralloc</a>; for details on <code>ANativeWindow</code>, see
+<a href="{@docRoot}devices/graphics/arch-egl-opengl.html">EGLSurface and OpenGL
+ES</a>.</p>
+
+<h2 id=apis>Vulkan API</h2>
+<p>Vulkan support also includes a new public native (NDK) Vulkan API; details at
+<a href="https://www.khronos.org/vulkan/">https://www.khronos.org/vulkan/</a>.
+No system UI or extensions are required.</p>
+
+<h2 id=resources>Resources</h2>
+<p>Use the following resources to learn more about Vulkan:</p>
+<ul>
+<li>
+<a href="https://partner-android.googlesource.com/platform/frameworks/native/+/nyc-dev/vulkan/">Vulkan
+Loader </a>(libvulkan.so) on Partner repo (platform/frameworks/native/vulkan)</li>
+<li>
+<a href="https://partner-android.googlesource.com/platform/frameworks/native/+/4e89bf211e668a4e47c9a70d10d03c4b9dd5b97d/vulkan/doc">Vulkan Developer's
+Guide</a></li>
+<li><a href="https://developer.android.com/ndk/guides/graphics/index.html">Vulkan
+Graphics API</a></li>
+<li><a href="https://www.khronos.org/#slider_vulkan">Vulkan News</a></li>
+</ul>
diff --git a/src/devices/graphics/architecture.jd b/src/devices/graphics/architecture.jd
index 47cc9cc..2548043 100644
--- a/src/devices/graphics/architecture.jd
+++ b/src/devices/graphics/architecture.jd
@@ -25,1118 +25,99 @@
 </div>
 
 
-<p><em>What every developer should know about Surface, SurfaceHolder, EGLSurface,
-SurfaceView, GLSurfaceView, SurfaceTexture, TextureView, and SurfaceFlinger</em>
-</p>
-<p>This page describes the essential elements of system-level graphics
-architecture in Android N and how it is used by the application framework and
-multimedia system. The focus is on how buffers of graphical data move through
-the system. If you've ever wondered why SurfaceView and TextureView behave the
-way they do, or how Surface and EGLSurface interact, you are in the correct
-place.</p>
+<p><em>What every developer should know about Surface, SurfaceHolder,
+EGLSurface, SurfaceView, GLSurfaceView, SurfaceTexture, TextureView,
+SurfaceFlinger, and Vulkan.</em></p>
+
+<p>This page describes essential elements of the Android system-level graphics
+architecture and how they are used by the application framework and multimedia
+system. The focus is on how buffers of graphical data move through the system.
+If you've ever wondered why SurfaceView and TextureView behave the way they do,
+or how Surface and EGLSurface interact, you are in the correct place.</p>
 
 <p>Some familiarity with Android devices and application development is assumed.
 You don't need detailed knowledge of the app framework and very few API calls
 are mentioned, but the material doesn't overlap with other public
-documentation. The goal here is to provide details on the significant events
+documentation. The goal is to provide details on the significant events
 involved in rendering a frame for output to help you make informed choices
 when designing an application. To achieve this, we work from the bottom up,
 describing how the UI classes work rather than how they can be used.</p>
 
-<p>Early sections contain background material used in later sections, so it's a
-good idea to read straight through rather than skipping to a section that sounds
-interesting. We start with an explanation of Android's graphics buffers,
-describe the composition and display mechanism, and then proceed to the
-higher-level mechanisms that supply the compositor with data.</p>
+<p>This section includes several pages covering everything from background
+material to HAL details to use cases. It starts with an explanation of Android
+graphics buffers, describes the composition and display mechanism, then proceeds
+to the higher-level mechanisms that supply the compositor with data. We
+recommend reading pages in the order listed below rather than skipping to a
+topic that sounds interesting.</p>
 
-<p class="note">This page includes references to AOSP source code and
-<a href="https://github.com/google/grafika">Grafika</a>, a Google open source
-project for testing.</p>
-
-<h2 id="BufferQueue">BufferQueue and gralloc</h2>
-
-<p>To understand how Android's graphics system works, we must start behind the
-scenes. At the heart of everything graphical in Android is a class called
-BufferQueue. Its role is simple: connect something that generates buffers of
-graphical data (the <em>producer</em>) to something that accepts the data for
-display or further processing (the <em>consumer</em>). The producer and consumer
-can live in different processes. Nearly everything that moves buffers of
-graphical data through the system relies on BufferQueue.</p>
-
-<p>Basic usage is straightforward: The producer requests a free buffer
-(<code>dequeueBuffer()</code>), specifying a set of characteristics including
-width, height, pixel format, and usage flags. The producer populates the buffer
-and returns it to the queue (<code>queueBuffer()</code>). Some time later, the
-consumer acquires the buffer (<code>acquireBuffer()</code>) and makes use of the
-buffer contents. When the consumer is done, it returns the buffer to the queue
-(<code>releaseBuffer()</code>).</p>
-
-<p>Recent Android devices support the <em>sync framework</em>, which enables the
-system to do nifty things when combined with hardware components that can
-manipulate graphics data asynchronously. For example, a producer can submit a
-series of OpenGL ES drawing commands and then enqueue the output buffer before
-rendering completes. The buffer is accompanied by a fence that signals when the
-contents are ready. A second fence accompanies the buffer when it is returned
-to the free list, so the consumer can release the buffer while the contents are
-still in use. This approach improves latency and throughput as the buffers
-move through the system.</p>
-
-<p>Some characteristics of the queue, such as the maximum number of buffers it
-can hold, are determined jointly by the producer and the consumer.</p>
-
-<p>The BufferQueue is responsible for allocating buffers as it needs them.
-Buffers are retained unless the characteristics change; for example, if the
-producer requests buffers with a different size, old buffers are freed and new
-buffers are allocated on demand.</p>
-
-<p>Currently, the consumer always creates and owns the data structure. In
-Android 4.3, only the producer side was binderized (i.e. producer could be
-in a remote process but consumer had to live in the process where the queue
-was created). Android 4.4 and later releases moved toward a more general
-implementation.</p>
-
-<p>Buffer contents are never copied by BufferQueue (moving that much data around
-would be very inefficient). Instead, buffers are always passed by handle.</p>
-
-<h3 id="gralloc_HAL">gralloc HAL</h3>
-
-<p>Buffer allocations are performed through the <em>gralloc</em> memory
-allocator, which is implemented through a vendor-specific HAL interface (for
-details, refer to <code>hardware/libhardware/include/hardware/gralloc.h</code>).
-The <code>alloc()</code> function takes expected arguments (width, height, pixel
-format) as well as a set of usage flags that merit closer attention.</p>
-
-<p>The gralloc allocator is not just another way to allocate memory on the
-native heap; in some situations, the allocated memory may not be cache-coherent
-or could be totally inaccessible from user space. The nature of the allocation
-is determined by the usage flags, which include attributes such as:</p>
+<h2 id=low_level>Low-level components</h2>
 
 <ul>
-<li>How often the memory will be accessed from software (CPU)</li>
-<li>How often the memory will be accessed from hardware (GPU)</li>
-<li>Whether the memory will be used as an OpenGL ES (GLES) texture</li>
-<li>Whether the memory will be used by a video encoder</li>
+<li><a href="{@docRoot}devices/graphics/arch-bq-gralloc.html">BufferQueue and
+gralloc</a>. BufferQueue connects something that generates buffers of graphical
+data (the <em>producer</em>) to something that accepts the data for display or
+further processing (the <em>consumer</em>). Buffer allocations are performed
+through the <em>gralloc</em> memory allocator implemented through a
+vendor-specific HAL interface.</li>
+
+<li><a href="{@docRoot}devices/graphics/arch-sf-hwc.html">SurfaceFlinger,
+Hardware Composer, and virtual displays</a>. SurfaceFlinger accepts buffers of
+data from multiple sources, composites them, and sends them to the display. The
+Hardware Composer HAL (HWC) determines the most efficient way to composite
+buffers with the available hardware, and virtual displays make composited output
+available within the system (recording the screen or sending the screen over a
+network).</li>
+
+<li><a href="{@docRoot}devices/graphics/arch-sh.html">Surface, Canvas, and
+SurfaceHolder</a>. A Surface represents the producer side of a buffer queue that
+is often consumed by SurfaceFlinger. When rendering onto a Surface, the result
+ends up in a buffer that gets shipped to the consumer. Canvas APIs provide a
+software implementation
+(with hardware-acceleration support) for drawing directly on a Surface
+(low-level alternative to OpenGL ES). Anything having to do with a View involves
+a SurfaceHolder, whose APIs enable getting and setting Surface parameters such
+as size and format.</li>
+
+<li><a href="{@docRoot}devices/graphics/arch-egl-opengl.html">EGLSurface and
+OpenGL ES</a>. OpenGL ES (GLES) defines a graphics-rendering API designed to be
+combined with EGL, a library that knows how to create and access windows through
+the operating system (to draw textured polygons, use GLES calls; to put
+rendering on the screen, use EGL calls). This page also covers ANativeWindow,
+the C/C++ equivalent of the Java Surface class used to create an EGL window
+surface from native code.</li>
+
+<li><a href="{@docRoot}devices/graphics/arch-vulkan.html">Vulkan</a>. Vulkan is
+a low-overhead, cross-platform API for high-performance 3D graphics. Like OpenGL
+ES, Vulkan provides tools for creating high-quality, real-time graphics in
+applications. Vulkan advantages include reductions in CPU overhead and support
+for the <a href="https://www.khronos.org/spir">SPIR-V Binary Intermediate</a>
+language.</li>
+
 </ul>
 
-<p>For example, if your format specifies RGBA 8888 pixels, and you indicate the
-buffer will be accessed from software (meaning your application will touch
-pixels directly) then the allocator must create a buffer with 4 bytes per pixel
-in R-G-B-A order. If instead you say the buffer will be only accessed from
-hardware and as a GLES texture, the allocator can do anything the GLES driver
-wants&mdash;BGRA ordering, non-linear swizzled layouts, alternative color
-formats, etc. Allowing the hardware to use its preferred format can improve
-performance.</p>
-
-<p>Some values cannot be combined on certain platforms. For example, the video
-encoder flag may require YUV pixels, so adding software access and specifying
-RGBA 8888 would fail.</p>
-
-<p>The handle returned by the gralloc allocator can be passed between processes
-through Binder.</p>
-
-<h2 id="SurfaceFlinger">SurfaceFlinger and Hardware Composer</h2>
-
-<p>Having buffers of graphical data is wonderful, but life is even better when
-you get to see them on your device's screen. That's where SurfaceFlinger and the
-Hardware Composer HAL come in.</p>
-
-<p>SurfaceFlinger's role is to accept buffers of data from multiple sources,
-composite them, and send them to the display. Once upon a time this was done
-with software blitting to a hardware framebuffer (e.g.
-<code>/dev/graphics/fb0</code>), but those days are long gone.</p>
-
-<p>When an app comes to the foreground, the WindowManager service asks
-SurfaceFlinger for a drawing surface. SurfaceFlinger creates a layer (the
-primary component of which is a BufferQueue) for which SurfaceFlinger acts as
-the consumer. A Binder object for the producer side is passed through the
-WindowManager to the app, which can then start sending frames directly to
-SurfaceFlinger.</p>
-
-<p class="note"><strong>Note:</strong> While this section uses SurfaceFlinger
-terminology, WindowManager uses the term <em>window</em> instead of
-<em>layer</em>&hellip;and uses layer to mean something else. (It can be argued
-that SurfaceFlinger should really be called LayerFlinger.)</p>
-
-<p>Most applications have three layers on screen at any time: the status bar at
-the top of the screen, the navigation bar at the bottom or side, and the
-application UI. Some apps have more, some less (e.g. the default home app has a
-separate layer for the wallpaper, while a full-screen game might hide the status
-bar. Each layer can be updated independently. The status and navigation bars
-are rendered by a system process, while the app layers are rendered by the app,
-with no coordination between the two.</p>
-
-<p>Device displays refresh at a certain rate, typically 60 frames per second on
-phones and tablets. If the display contents are updated mid-refresh, tearing
-will be visible; so it's important to update the contents only between cycles.
-The system receives a signal from the display when it's safe to update the
-contents. For historical reasons we'll call this the VSYNC signal.</p>
-
-<p>The refresh rate may vary over time, e.g. some mobile devices will range from 58
-to 62fps depending on current conditions. For an HDMI-attached television, this
-could theoretically dip to 24 or 48Hz to match a video. Because we can update
-the screen only once per refresh cycle, submitting buffers for display at 200fps
-would be a waste of effort as most of the frames would never be seen. Instead of
-taking action whenever an app submits a buffer, SurfaceFlinger wakes up when the
-display is ready for something new.</p>
-
-<p>When the VSYNC signal arrives, SurfaceFlinger walks through its list of
-layers looking for new buffers. If it finds a new one, it acquires it; if not,
-it continues to use the previously-acquired buffer. SurfaceFlinger always wants
-to have something to display, so it will hang on to one buffer. If no buffers
-have ever been submitted on a layer, the layer is ignored.</p>
-
-<p>After SurfaceFlinger has collected all buffers for visible layers, it asks
-the Hardware Composer how composition should be performed.</p>
-
-<h3 id="hwcomposer">Hardware Composer</h3>
-
-<p>The Hardware Composer HAL (HWC) was introduced in Android 3.0 and has evolved
-steadily over the years. Its primary purpose is to determine the most efficient
-way to composite buffers with the available hardware. As a HAL, its
-implementation is device-specific and usually done by the display hardware OEM.</p>
-
-<p>The value of this approach is easy to recognize when you consider <em>overlay
-planes</em>, the purpose of which is to composite multiple buffers together in
-the display hardware rather than the GPU. For example, consider a typical
-Android phone in portrait orientation, with the status bar on top, navigation
-bar at the bottom, and app content everywhere else. The contents for each layer
-are in separate buffers. You could handle composition using either of the
-following methods:</p>
+<h2 id=high_level>High-level components</h2>
 
 <ul>
-<li>Rendering the app content into a scratch buffer, then rendering the status
-bar over it, the navigation bar on top of that, and finally passing the scratch
-buffer to the display hardware.</li>
-<li>Passing all three buffers to the display hardware and tell it to read data
-from different buffers for different parts of the screen.</li>
+<li><a href="{@docRoot}devices/graphics/arch-sv.html">SurfaceView and
+GLSurfaceView</a>. SurfaceView combines a Surface and a View. SurfaceView's View
+components are composited by SurfaceFlinger (and not the app), enabling
+rendering from a separate thread/process and isolation from app UI rendering.
+GLSurfaceView provides helper classes to manage EGL contexts, inter-thread
+communication, and interaction with the Activity lifecycle (but is not required
+to use GLES).</li>
+
+<li><a href="{@docRoot}devices/graphics/arch-st.html">SurfaceTexture</a>.
+SurfaceTexture combines a Surface and GLES texture to create a BufferQueue for
+which your app is the consumer. When a producer queues a new buffer, it notifies
+your app, which in turn releases the previously-held buffer, acquires the new
+buffer from the queue, and makes EGL calls to make the buffer available to GLES
+as an external texture. Android 7.0 adds support for secure texture video
+playback, enabling GPU post-processing of protected video content.</li>
+
+<li><a href="{@docRoot}devices/graphics/arch-tv.html">TextureView</a>.
+TextureView combines a View with a SurfaceTexture. TextureView wraps a
+SurfaceTexture and takes responsibility for responding to callbacks and
+acquiring new buffers. When drawing, TextureView uses the contents of the most
+recently received buffer as its data source, rendering wherever and however the
+View state indicates it should. View composition is always performed with GLES,
+meaning updates to contents may cause other View elements to redraw as well.</li>
 </ul>
-
-<p>The latter approach can be significantly more efficient.</p>
-
-<p>Display processor capabilities vary significantly. The number of overlays,
-whether layers can be rotated or blended, and restrictions on positioning and
-overlap can be difficult to express through an API. The HWC attempts to
-accommodate such diversity through a series of decisions:</p>
-
-<ol>
-<li>SurfaceFlinger provides HWC with a full list of layers and asks, "How do
-you want to handle this?"</li>
-<li>HWC responds by marking each layer as overlay or GLES composition.</li>
-<li>SurfaceFlinger takes care of any GLES composition, passing the output buffer
-to HWC, and lets HWC handle the rest.</li>
-</ol>
-
-<p>Since hardware vendors can custom tailor decision-making code, it's possible
-to get the best performance out of every device.</p>
-
-<p>Overlay planes may be less efficient than GL composition when nothing on the
-screen is changing. This is particularly true when overlay contents have
-transparent pixels and overlapping layers are blended together. In such cases,
-the HWC can choose to request GLES composition for some or all layers and retain
-the composited buffer. If SurfaceFlinger comes back asking to composite the same
-set of buffers, the HWC can continue to show the previously-composited scratch
-buffer. This can improve the battery life of an idle device.</p>
-
-<p>Devices running Android 4.4 and later typically support four overlay planes.
-Attempting to composite more layers than overlays causes the system to use GLES
-composition for some of them, meaning the number of layers used by an app can
-have a measurable impact on power consumption and performance.</p>
-
-<h3 id="virtual-displays">Virtual Displays</h3>
-
-<p>SurfaceFlinger supports a primary display, i.e. what's built into your phone
-or tablet, and an external display, such as a television connected through
-HDMI. It also supports a number of virtual displays that can make composited
-output available within the system. Virtual displays can be used to record the
-screen or send it over a network.</p>
-
-<p>Virtual displays may share the same set of layers as the main display
-(the layer stack) or have its own set. There is no VSYNC for a virtual
-display, so the VSYNC for the primary display is used to trigger composition for
-all displays.</p>
-
-<p>In the past, virtual displays were always composited with GLES; the Hardware
-Composer managed composition for only the primary display. In Android 4.4, the
-Hardware Composer gained the ability to participate in virtual display
-composition.</p>
-
-<p>As you might expect, the frames generated for a virtual display are written
-to a BufferQueue.</p>
-
-<h3 id="screenrecord">Case study: screenrecord</h3>
-
-<p>Now that we've established some background on BufferQueue and SurfaceFlinger,
-it's useful to examine a practical use case.</p>
-
-<p>The <a href="https://android.googlesource.com/platform/frameworks/av/+/kitkat-release/cmds/screenrecord/">screenrecord
-command</a>,
-introduced in Android 4.4, allows you to record everything that appears on the
-screen as an .mp4 file on disk.  To implement this, we have to receive composited
-frames from SurfaceFlinger, write them to the video encoder, and then write the
-encoded video data to a file.  The video codecs are managed by a separate
-process - called "mediaserver" - so we have to move large graphics buffers around
-the system.  To make it more challenging, we're trying to record 60fps video at
-full resolution.  The key to making this work efficiently is BufferQueue.</p>
-
-<p>The MediaCodec class allows an app to provide data as raw bytes in buffers, or
-through a Surface.  We'll discuss Surface in more detail later, but for now just
-think of it as a wrapper around the producer end of a BufferQueue.  When
-screenrecord requests access to a video encoder, mediaserver creates a
-BufferQueue and connects itself to the consumer side, and then passes the
-producer side back to screenrecord as a Surface.</p>
-
-<p>The screenrecord command then asks SurfaceFlinger to create a virtual display
-that mirrors the main display (i.e. it has all of the same layers), and directs
-it to send output to the Surface that came from mediaserver.  Note that, in this
-case, SurfaceFlinger is the producer of buffers rather than the consumer.</p>
-
-<p>Once the configuration is complete, screenrecord can just sit and wait for
-encoded data to appear.  As apps draw, their buffers travel to SurfaceFlinger,
-which composites them into a single buffer that gets sent directly to the video
-encoder in mediaserver.  The full frames are never even seen by the screenrecord
-process.  Internally, mediaserver has its own way of moving buffers around that
-also passes data by handle, minimizing overhead.</p>
-
-<h3 id="simulate-secondary">Case study: Simulate Secondary Displays</h3>
-
-<p>The WindowManager can ask SurfaceFlinger to create a visible layer for which
-SurfaceFlinger will act as the BufferQueue consumer.  It's also possible to ask
-SurfaceFlinger to create a virtual display, for which SurfaceFlinger will act as
-the BufferQueue producer.  What happens if you connect them, configuring a
-virtual display that renders to a visible layer?</p>
-
-<p>You create a closed loop, where the composited screen appears in a window.  Of
-course, that window is now part of the composited output, so on the next refresh
-the composited image inside the window will show the window contents as well.
-It's turtles all the way down.  You can see this in action by enabling
-"<a href="http://developer.android.com/tools/index.html">Developer options</a>" in
-settings, selecting "Simulate secondary displays", and enabling a window.  For
-bonus points, use screenrecord to capture the act of enabling the display, then
-play it back frame-by-frame.</p>
-
-<h2 id="surface">Surface and SurfaceHolder</h2>
-
-<p>The <a
-href="http://developer.android.com/reference/android/view/Surface.html">Surface</a>
-class has been part of the public API since 1.0.  Its description simply says,
-"Handle onto a raw buffer that is being managed by the screen compositor."  The
-statement was accurate when initially written but falls well short of the mark
-on a modern system.</p>
-
-<p>The Surface represents the producer side of a buffer queue that is often (but
-not always!) consumed by SurfaceFlinger.  When you render onto a Surface, the
-result ends up in a buffer that gets shipped to the consumer.  A Surface is not
-simply a raw chunk of memory you can scribble on.</p>
-
-<p>The BufferQueue for a display Surface is typically configured for
-triple-buffering; but buffers are allocated on demand.  So if the producer
-generates buffers slowly enough -- maybe it's animating at 30fps on a 60fps
-display -- there might only be two allocated buffers in the queue.  This helps
-minimize memory consumption.  You can see a summary of the buffers associated
-with every layer in the <code>dumpsys SurfaceFlinger</code> output.</p>
-
-<h3 id="canvas">Canvas Rendering</h3>
-
-<p>Once upon a time, all rendering was done in software, and you can still do this
-today.  The low-level implementation is provided by the Skia graphics library.
-If you want to draw a rectangle, you make a library call, and it sets bytes in a
-buffer appropriately.  To ensure that a buffer isn't updated by two clients at
-once, or written to while being displayed, you have to lock the buffer to access
-it.  <code>lockCanvas()</code> locks the buffer and returns a Canvas to use for drawing,
-and <code>unlockCanvasAndPost()</code> unlocks the buffer and sends it to the compositor.</p>
-
-<p>As time went on, and devices with general-purpose 3D engines appeared, Android
-reoriented itself around OpenGL ES.  However, it was important to keep the old
-API working, for apps as well as app framework code, so an effort was made to
-hardware-accelerate the Canvas API.  As you can see from the charts on the
-<a href="http://developer.android.com/guide/topics/graphics/hardware-accel.html">Hardware
-Acceleration</a>
-page, this was a bit of a bumpy ride.  Note in particular that while the Canvas
-provided to a View's <code>onDraw()</code> method may be hardware-accelerated, the Canvas
-obtained when an app locks a Surface directly with <code>lockCanvas()</code> never is.</p>
-
-<p>When you lock a Surface for Canvas access, the "CPU renderer" connects to the
-producer side of the BufferQueue and does not disconnect until the Surface is
-destroyed.  Most other producers (like GLES) can be disconnected and reconnected
-to a Surface, but the Canvas-based "CPU renderer" cannot.  This means you can't
-draw on a surface with GLES or send it frames from a video decoder if you've
-ever locked it for a Canvas.</p>
-
-<p>The first time the producer requests a buffer from a BufferQueue, it is
-allocated and initialized to zeroes.  Initialization is necessary to avoid
-inadvertently sharing data between processes.  When you re-use a buffer,
-however, the previous contents will still be present.  If you repeatedly call
-<code>lockCanvas()</code> and <code>unlockCanvasAndPost()</code> without
-drawing anything, you'll cycle between previously-rendered frames.</p>
-
-<p>The Surface lock/unlock code keeps a reference to the previously-rendered
-buffer.  If you specify a dirty region when locking the Surface, it will copy
-the non-dirty pixels from the previous buffer.  There's a fair chance the buffer
-will be handled by SurfaceFlinger or HWC; but since we need to only read from
-it, there's no need to wait for exclusive access.</p>
-
-<p>The main non-Canvas way for an application to draw directly on a Surface is
-through OpenGL ES.  That's described in the <a href="#eglsurface">EGLSurface and
-OpenGL ES</a> section.</p>
-
-<h3 id="surfaceholder">SurfaceHolder</h3>
-
-<p>Some things that work with Surfaces want a SurfaceHolder, notably SurfaceView.
-The original idea was that Surface represented the raw compositor-managed
-buffer, while SurfaceHolder was managed by the app and kept track of
-higher-level information like the dimensions and format.  The Java-language
-definition mirrors the underlying native implementation.  It's arguably no
-longer useful to split it this way, but it has long been part of the public API.</p>
-
-<p>Generally speaking, anything having to do with a View will involve a
-SurfaceHolder.  Some other APIs, such as MediaCodec, will operate on the Surface
-itself.  You can easily get the Surface from the SurfaceHolder, so hang on to
-the latter when you have it.</p>
-
-<p>APIs to get and set Surface parameters, such as the size and format, are
-implemented through SurfaceHolder.</p>
-
-<h2 id="eglsurface">EGLSurface and OpenGL ES</h2>
-
-<p>OpenGL ES defines an API for rendering graphics.  It does not define a windowing
-system.  To allow GLES to work on a variety of platforms, it is designed to be
-combined with a library that knows how to create and access windows through the
-operating system.  The library used for Android is called EGL.  If you want to
-draw textured polygons, you use GLES calls; if you want to put your rendering on
-the screen, you use EGL calls.</p>
-
-<p>Before you can do anything with GLES, you need to create a GL context.  In EGL,
-this means creating an EGLContext and an EGLSurface.  GLES operations apply to
-the current context, which is accessed through thread-local storage rather than
-passed around as an argument.  This means you have to be careful about which
-thread your rendering code executes on, and which context is current on that
-thread.</p>
-
-<p>The EGLSurface can be an off-screen buffer allocated by EGL (called a "pbuffer")
-or a window allocated by the operating system.  EGL window surfaces are created
-with the <code>eglCreateWindowSurface()</code> call.  It takes a "window object" as an
-argument, which on Android can be a SurfaceView, a SurfaceTexture, a
-SurfaceHolder, or a Surface -- all of which have a BufferQueue underneath.  When
-you make this call, EGL creates a new EGLSurface object, and connects it to the
-producer interface of the window object's BufferQueue.  From that point onward,
-rendering to that EGLSurface results in a buffer being dequeued, rendered into,
-and queued for use by the consumer.  (The term "window" is indicative of the
-expected use, but bear in mind the output might not be destined to appear
-on the display.)</p>
-
-<p>EGL does not provide lock/unlock calls.  Instead, you issue drawing commands and
-then call <code>eglSwapBuffers()</code> to submit the current frame.  The
-method name comes from the traditional swap of front and back buffers, but the actual
-implementation may be very different.</p>
-
-<p>Only one EGLSurface can be associated with a Surface at a time -- you can have
-only one producer connected to a BufferQueue -- but if you destroy the
-EGLSurface it will disconnect from the BufferQueue and allow something else to
-connect.</p>
-
-<p>A given thread can switch between multiple EGLSurfaces by changing what's
-"current."  An EGLSurface must be current on only one thread at a time.</p>
-
-<p>The most common mistake when thinking about EGLSurface is assuming that it is
-just another aspect of Surface (like SurfaceHolder).  It's a related but
-independent concept.  You can draw on an EGLSurface that isn't backed by a
-Surface, and you can use a Surface without EGL.  EGLSurface just gives GLES a
-place to draw.</p>
-
-<h3 id="anativewindow">ANativeWindow</h3>
-
-<p>The public Surface class is implemented in the Java programming language.  The
-equivalent in C/C++ is the ANativeWindow class, semi-exposed by the <a
-href="https://developer.android.com/tools/sdk/ndk/index.html">Android NDK</a>.  You
-can get the ANativeWindow from a Surface with the <code>ANativeWindow_fromSurface()</code>
-call.  Just like its Java-language cousin, you can lock it, render in software,
-and unlock-and-post.</p>
-
-<p>To create an EGL window surface from native code, you pass an instance of
-EGLNativeWindowType to <code>eglCreateWindowSurface()</code>.  EGLNativeWindowType is just
-a synonym for ANativeWindow, so you can freely cast one to the other.</p>
-
-<p>The fact that the basic "native window" type just wraps the producer side of a
-BufferQueue should not come as a surprise.</p>
-
-<h2 id="surfaceview">SurfaceView and GLSurfaceView</h2>
-
-<p>Now that we've explored the lower-level components, it's time to see how they
-fit into the higher-level components that apps are built from.</p>
-
-<p>The Android app framework UI is based on a hierarchy of objects that start with
-View.  Most of the details don't matter for this discussion, but it's helpful to
-understand that UI elements go through a complicated measurement and layout
-process that fits them into a rectangular area.  All visible View objects are
-rendered to a SurfaceFlinger-created Surface that was set up by the
-WindowManager when the app was brought to the foreground.  The layout and
-rendering is performed on the app's UI thread.</p>
-
-<p>Regardless of how many Layouts and Views you have, everything gets rendered into
-a single buffer.  This is true whether or not the Views are hardware-accelerated.</p>
-
-<p>A SurfaceView takes the same sorts of parameters as other views, so you can give
-it a position and size, and fit other elements around it.  When it comes time to
-render, however, the contents are completely transparent.  The View part of a
-SurfaceView is just a see-through placeholder.</p>
-
-<p>When the SurfaceView's View component is about to become visible, the framework
-asks the WindowManager to ask SurfaceFlinger to create a new Surface.  (This
-doesn't happen synchronously, which is why you should provide a callback that
-notifies you when the Surface creation finishes.)  By default, the new Surface
-is placed behind the app UI Surface, but the default "Z-ordering" can be
-overridden to put the Surface on top.</p>
-
-<p>Whatever you render onto this Surface will be composited by SurfaceFlinger, not
-by the app.  This is the real power of SurfaceView: the Surface you get can be
-rendered by a separate thread or a separate process, isolated from any rendering
-performed by the app UI, and the buffers go directly to SurfaceFlinger.  You
-can't totally ignore the UI thread -- you still have to coordinate with the
-Activity lifecycle, and you may need to adjust something if the size or position
-of the View changes -- but you have a whole Surface all to yourself, and
-blending with the app UI and other layers is handled by the Hardware Composer.</p>
-
-<p>It's worth taking a moment to note that this new Surface is the producer side of
-a BufferQueue whose consumer is a SurfaceFlinger layer.  You can update the
-Surface with any mechanism that can feed a BufferQueue.  You can: use the
-Surface-supplied Canvas functions, attach an EGLSurface and draw on it
-with GLES, and configure a MediaCodec video decoder to write to it.</p>
-
-<h3 id="composition">Composition and the Hardware Scaler</h3>
-
-<p>Now that we have a bit more context, it's useful to go back and look at a couple
-of fields from <code>dumpsys SurfaceFlinger</code> that we skipped over earlier
-on.  Back in the <a href="#hwcomposer">Hardware Composer</a> discussion, we
-looked at some output like this:</p>
-
-<pre>
-    type    |          source crop              |           frame           name
-------------+-----------------------------------+--------------------------------
-        HWC | [    0.0,    0.0,  320.0,  240.0] | [   48,  411, 1032, 1149] SurfaceView
-        HWC | [    0.0,   75.0, 1080.0, 1776.0] | [    0,   75, 1080, 1776] com.android.grafika/com.android.grafika.PlayMovieSurfaceActivity
-        HWC | [    0.0,    0.0, 1080.0,   75.0] | [    0,    0, 1080,   75] StatusBar
-        HWC | [    0.0,    0.0, 1080.0,  144.0] | [    0, 1776, 1080, 1920] NavigationBar
-  FB TARGET | [    0.0,    0.0, 1080.0, 1920.0] | [    0,    0, 1080, 1920] HWC_FRAMEBUFFER_TARGET
-</pre>
-
-<p>This was taken while playing a movie in Grafika's "Play video (SurfaceView)"
-activity, on a Nexus 5 in portrait orientation.  Note that the list is ordered
-from back to front: the SurfaceView's Surface is in the back, the app UI layer
-sits on top of that, followed by the status and navigation bars that are above
-everything else.  The video is QVGA (320x240).</p>
-
-<p>The "source crop" indicates the portion of the Surface's buffer that
-SurfaceFlinger is going to display.  The app UI was given a Surface equal to the
-full size of the display (1080x1920), but there's no point rendering and
-compositing pixels that will be obscured by the status and navigation bars, so
-the source is cropped to a rectangle that starts 75 pixels from the top, and
-ends 144 pixels from the bottom.  The status and navigation bars have smaller
-Surfaces, and the source crop describes a rectangle that begins at the the top
-left (0,0) and spans their content.</p>
-
-<p>The "frame" is the rectangle where the pixels end up on the display.  For the
-app UI layer, the frame matches the source crop, because we're copying (or
-overlaying) a portion of a display-sized layer to the same location in another
-display-sized layer.  For the status and navigation bars, the size of the frame
-rectangle is the same, but the position is adjusted so that the navigation bar
-appears at the bottom of the screen.</p>
-
-<p>Now consider the layer labeled "SurfaceView", which holds our video content.
-The source crop matches the video size, which SurfaceFlinger knows because the
-MediaCodec decoder (the buffer producer) is dequeuing buffers that size.  The
-frame rectangle has a completely different size -- 984x738.</p>
-
-<p>SurfaceFlinger handles size differences by scaling the buffer contents to fill
-the frame rectangle, upscaling or downscaling as needed.  This particular size
-was chosen because it has the same aspect ratio as the video (4:3), and is as
-wide as possible given the constraints of the View layout (which includes some
-padding at the edges of the screen for aesthetic reasons).</p>
-
-<p>If you started playing a different video on the same Surface, the underlying
-BufferQueue would reallocate buffers to the new size automatically, and
-SurfaceFlinger would adjust the source crop.  If the aspect ratio of the new
-video is different, the app would need to force a re-layout of the View to match
-it, which causes the WindowManager to tell SurfaceFlinger to update the frame
-rectangle.</p>
-
-<p>If you're rendering on the Surface through some other means, perhaps GLES, you
-can set the Surface size using the <code>SurfaceHolder#setFixedSize()</code>
-call.  You could, for example, configure a game to always render at 1280x720,
-which would significantly reduce the number of pixels that must be touched to
-fill the screen on a 2560x1440 tablet or 4K television.  The display processor
-handles the scaling.  If you don't want to letter- or pillar-box your game, you
-could adjust the game's aspect ratio by setting the size so that the narrow
-dimension is 720 pixels, but the long dimension is set to maintain the aspect
-ratio of the physical display (e.g. 1152x720 to match a 2560x1600 display).
-You can see an example of this approach in Grafika's "Hardware scaler
-exerciser" activity.</p>
-
-<h3 id="glsurfaceview">GLSurfaceView</h3>
-
-<p>The GLSurfaceView class provides some helper classes that help manage EGL
-contexts, inter-thread communication, and interaction with the Activity
-lifecycle.  That's it.  You do not need to use a GLSurfaceView to use GLES.</p>
-
-<p>For example, GLSurfaceView creates a thread for rendering and configures an EGL
-context there.  The state is cleaned up automatically when the activity pauses.
-Most apps won't need to know anything about EGL to use GLES with GLSurfaceView.</p>
-
-<p>In most cases, GLSurfaceView is very helpful and can make working with GLES
-easier.  In some situations, it can get in the way.  Use it if it helps, don't
-if it doesn't.</p>
-
-<h2 id="surfacetexture">SurfaceTexture</h2>
-
-<p>The SurfaceTexture class was introduced in Android 3.0. Just as SurfaceView
-is the combination of a Surface and a View, SurfaceTexture is a rough
-combination of a Surface and a GLES texture (with a few caveats).</p>
-
-<p>When you create a SurfaceTexture, you are creating a BufferQueue for which
-your app is the consumer. When a new buffer is queued by the producer, your app
-is notified via callback (<code>onFrameAvailable()</code>). Your app calls
-<code>updateTexImage()</code>, which releases the previously-held buffer,
-acquires the new buffer from the queue, and makes some EGL calls to make the
-buffer available to GLES as an external texture.</p>
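-
-<p>A rough sketch of the consumer side, assuming <code>textureId</code> is an
-external (<code>GL_TEXTURE_EXTERNAL_OES</code>) texture name created on the
-GLES thread, and <code>requestRender()</code> stands in for however the app
-wakes that thread:</p>
-
-<pre>
-SurfaceTexture surfaceTexture = new SurfaceTexture(textureId);
-surfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
-    @Override
-    public void onFrameAvailable(SurfaceTexture st) {
-        // Called when the producer queues a new buffer; may arrive on any
-        // thread, so just poke the GLES thread rather than touching GL here.
-        requestRender();
-    }
-});
-
-// Later, on the GLES thread:
-surfaceTexture.updateTexImage();   // release the old buffer, latch the newest one
-</pre>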
-
-<p>External textures (<code>GL_TEXTURE_EXTERNAL_OES</code>) are not quite the
-same as textures created by GLES (<code>GL_TEXTURE_2D</code>): You have to
-configure your renderer a bit differently, and there are things you can't do
-with them. The key point is that you can render textured polygons directly
-from the data received by your BufferQueue. gralloc supports a wide variety of
-formats, so we need to guarantee the format of the data in the buffer is
-something GLES can recognize. To do so, when SurfaceTexture creates the
-BufferQueue, it sets the consumer usage flags to
-<code>GRALLOC_USAGE_HW_TEXTURE</code>, ensuring that any buffer created by
-gralloc would be usable by GLES.</p>
-
-<p>Because SurfaceTexture interacts with an EGL context, you must be careful to
-call its methods from the correct thread (this is detailed in the class
-documentation).</p>
-
-<p>If you look deeper into the class documentation, you will see a couple of odd
-calls. One retrieves a timestamp, the other a transformation matrix, the value
-of each having been set by the previous call to <code>updateTexImage()</code>.
-It turns out that BufferQueue passes more than just a buffer handle to the consumer.
-Each buffer is accompanied by a timestamp and transformation parameters.</p>
-
-<p>The transformation is provided for efficiency.  In some cases, the source data
-might be in the "wrong" orientation for the consumer; but instead of rotating
-the data before sending it, we can send the data in its current orientation with
-a transform that corrects it.  The transformation matrix can be merged with
-other transformations at the point the data is used, minimizing overhead.</p>
-
-<p>The timestamp is useful for certain buffer sources.  For example, suppose you
-connect the producer interface to the output of the camera (with
-<code>setPreviewTexture()</code>).  If you want to create a video, you need to
-set the presentation time stamp for each frame; but you want to base that on the time
-when the frame was captured, not the time when the buffer was received by your
-app.  The timestamp provided with the buffer is set by the camera code,
-resulting in a more consistent series of timestamps.</p>
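-
-<p>Continuing the sketch above, both values can be read back right after
-<code>updateTexImage()</code> (the variable names are illustrative):</p>
-
-<pre>
-float[] texMatrix = new float[16];
-
-surfaceTexture.updateTexImage();
-surfaceTexture.getTransformMatrix(texMatrix);    // feed to the shader as a uniform
-long timestampNs = surfaceTexture.getTimestamp(); // e.g. the camera capture time
-
-// Use texMatrix when sampling the external texture, and timestampNs when
-// setting the presentation time of an encoded frame.
-</pre>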
-
-<h3 id="surfacet">SurfaceTexture and Surface</h3>
-
-<p>If you look closely at the API you'll see the only way for an application
-to create a plain Surface is through a constructor that takes a SurfaceTexture
-as the sole argument.  (Prior to API 11, there was no public constructor for
-Surface at all.)  This might seem a bit backward if you view SurfaceTexture as a
-combination of a Surface and a texture.</p>
-
-<p>Under the hood, SurfaceTexture is called GLConsumer, which more accurately
-reflects its role as the owner and consumer of a BufferQueue.  When you create a
-Surface from a SurfaceTexture, what you're doing is creating an object that
-represents the producer side of the SurfaceTexture's BufferQueue.</p>
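-
-<p>In code, that producer handle is just the Surface constructor; this sketch
-(with a hypothetical <code>mediaPlayer</code>) hands it to a producer:</p>
-
-<pre>
-// The app owns the consumer side (the SurfaceTexture); the Surface created
-// here is a handle to the producer side of the same BufferQueue.
-Surface producerSurface = new Surface(surfaceTexture);
-
-// Hand the producer end to whatever generates frames, e.g. a MediaPlayer:
-mediaPlayer.setSurface(producerSurface);
-</pre>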
-
-<h3 id="continuous-capture">Case Study: Grafika's "Continuous Capture" Activity</h3>
-
-<p>The camera can provide a stream of frames suitable for recording as a movie.  If
-you want to display it on screen, you create a SurfaceView, pass the Surface to
-<code>setPreviewDisplay()</code>, and let the producer (camera) and consumer
-(SurfaceFlinger) do all the work.  If you want to record the video, you create a
-Surface with MediaCodec's <code>createInputSurface()</code>, pass that to the
-camera, and again you sit back and relax.  If you want to show the video and
-record it at the same time, you have to get more involved.</p>
-
-<p>The "Continuous capture" activity displays video from the camera as it's being
-recorded.  In this case, encoded video is written to a circular buffer in memory
-that can be saved to disk at any time.  It's straightforward to implement so
-long as you keep track of where everything is.</p>
-
-<p>There are three BufferQueues involved.  The app uses a SurfaceTexture to receive
-frames from Camera, converting them to an external GLES texture.  The app
-declares a SurfaceView, which we use to display the frames, and we configure a
-MediaCodec encoder with an input Surface to create the video.  So one
-BufferQueue is created by the app, one by SurfaceFlinger, and one by
-mediaserver.</p>
-
-<img src="images/continuous_capture_activity.png" alt="Grafika continuous
-capture activity" />
-
-<p class="img-caption">
-  <strong>Figure 2.</strong> Grafika's continuous capture activity
-</p>
-
-<p>In the diagram above, the arrows show the propagation of the data from the
-camera.  BufferQueues are in color (purple producer, cyan consumer).  Note
-“Camera” actually lives in the mediaserver process.</p>
-
-<p>Encoded H.264 video goes to a circular buffer in RAM in the app process, and is
-written to an MP4 file on disk using the MediaMuxer class when the “capture”
-button is hit.</p>
-
-<p>All three of the BufferQueues are handled with a single EGL context in the
-app, and the GLES operations are performed on the UI thread.  Doing the
-SurfaceView rendering on the UI thread is generally discouraged, but since we're
-doing simple operations that are handled asynchronously by the GLES driver we
-should be fine.  (If the video encoder locks up and we block trying to dequeue a
-buffer, the app will become unresponsive. But at that point, we're probably
-failing anyway.)  The handling of the encoded data -- managing the circular
-buffer and writing it to disk -- is performed on a separate thread.</p>
-
-<p>The bulk of the configuration happens in the SurfaceView's <code>surfaceCreated()</code>
-callback.  The EGLContext is created, and EGLSurfaces are created for the
-display and for the video encoder.  When a new frame arrives, we tell
-SurfaceTexture to acquire it and make it available as a GLES texture, then
-render it with GLES commands on each EGLSurface (forwarding the transform and
-timestamp from SurfaceTexture).  The encoder thread pulls the encoded output
-from MediaCodec and stashes it in memory.</p>
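-
-<p>A rough sketch of that per-frame work (the EGL objects and the
-<code>drawFrame()</code> helper are placeholders, not the actual Grafika
-code):</p>
-
-<pre>
-// On frame arrival, on the UI thread that owns the EGL context:
-cameraTexture.updateTexImage();
-cameraTexture.getTransformMatrix(texMatrix);
-long timestampNs = cameraTexture.getTimestamp();
-
-// Draw once to the display EGLSurface...
-drawFrame(displayEglSurface, texMatrix);
-EGL14.eglSwapBuffers(eglDisplay, displayEglSurface);
-
-// ...and once to the video encoder's input EGLSurface, forwarding the timestamp.
-drawFrame(encoderEglSurface, texMatrix);
-EGLExt.eglPresentationTimeANDROID(eglDisplay, encoderEglSurface, timestampNs);
-EGL14.eglSwapBuffers(eglDisplay, encoderEglSurface);
-</pre>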
-
-
-<h3 id="secure-texture-video-playback">Secure Texture Video Playback</h3>
-<p>Android N supports GPU post-processing of protected video content. This
-allows using the GPU for complex non-linear video effects (such as warps),
-mapping protected video content onto textures for use in general graphics scenes
-(e.g., using OpenGL ES), and virtual reality (VR).</p>
-
-<img src="images/graphics_secure_texture_playback.png" alt="Secure Texture Video Playback" />
-<p class="img-caption"><strong>Figure 3.</strong>Secure texture video playback</p>
-
-<p>Support is enabled using the following two extensions:</p>
-<ul>
-<li><strong>EGL extension</strong>
-(<code><a href="https://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_protected_content.txt">EGL_EXT_protected_content</code></a>).
-Allows the creation of protected GL contexts and surfaces, which can both
-operate on protected content.</li>
-<li><strong>GLES extension</strong>
-(<code><a href="https://www.khronos.org/registry/gles/extensions/EXT/EXT_protected_textures.txt">GL_EXT_protected_textures</code></a>).
-Allows tagging textures as protected so they can be used as framebuffer texture
-attachments.</li>
-</ul>
-
-<p>Android N also updates SurfaceTexture and ACodec
-(<code>libstagefright.so</code>) to allow protected content to be sent even if
-the windows surface does not queue to the window composer (i.e., SurfaceFlinger)
-and provide a protected video surface for use within a protected context. This
-is done by setting the correct protected consumer bits
-(<code>GRALLOC_USAGE_PROTECTED</code>) on surfaces created in a protected
-context (verified by ACodec).</p>
-
-<p>These changes benefit app developers who can create apps that perform
-enhanced video effects or apply video textures using protected content in GL
-(for example, in VR), end users who can view high-value video content (such as
-movies and TV shows) in GL environment (for example, in VR), and OEMs who can
-achieve higher sales due to added device functionality (for example, watching HD
-movies in VR). The new EGL and GLES extensions can be used by system on chip
-(SoCs) providers and other vendors, and are currently implemented on the
-Qualcomm MSM8994 SoC chipset used in the Nexus 6P.</p>
-
-<p>Secure texture video playback sets the foundation for strong DRM
-implementation in the OpenGL ES environment. Without a strong DRM implementation
-such as Widevine Level 1, many content providers would not allow rendering of
-their high-value content in the OpenGL ES environment, preventing important VR
-use cases such as watching DRM protected content in VR.</p>
-
-<p>AOSP includes framework code for secure texture video playback; driver
-support is up to the vendor. Partners must implement the
-<code>EGL_EXT_protected_content</code> and
-<code>GL_EXT_protected_textures</code> extensions. When using your own codec
-library (to replace libstagefright), note the changes in
-<code>/frameworks/av/media/libstagefright/SurfaceUtils.cpp</code> that allow
-buffers marked with <code>GRALLOC_USAGE_PROTECTED</code> to be sent to
-ANativeWindows (even if the ANativeWindow does not queue directly to the window
-composer) as long as the consumer usage bits contain
-<code>GRALLOC_USAGE_PROTECTED</code>. For detailed documentation on implementing
-the extensions, refer to the Khronos Registry
-(<a href="https://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_protected_content.txt">EGL_EXT_protected_content</a>,
-<a href="https://www.khronos.org/registry/gles/extensions/EXT/EXT_protected_textures.txt">GL_EXT_protected_textures</a>).</p>
-
-<p>Partners may also need to make hardware changes to ensure that protected
-memory mapped onto the GPU remains protected and unreadable by unprotected
-code.</p>
-
-<h2 id="texture">TextureView</h2>
-
-<p>The TextureView class, introduced in Android 4.0, is the most complex of
-the View objects discussed here, combining a View with a SurfaceTexture.</p>
-
-<p>Recall that the SurfaceTexture is a "GL consumer", consuming buffers of graphics
-data and making them available as textures.  TextureView wraps a SurfaceTexture,
-taking over the responsibility of responding to the callbacks and acquiring new
-buffers.  The arrival of new buffers causes TextureView to issue a View
-invalidate request.  When asked to draw, the TextureView uses the contents of
-the most recently received buffer as its data source, rendering wherever and
-however the View state indicates it should.</p>
-
-<p>You can render on a TextureView with GLES just as you would SurfaceView.  Just
-pass the SurfaceTexture to the EGL window creation call.  However, doing so
-exposes a potential problem.</p>
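-
-<p>The call itself looks something like this (assuming an EGL display and
-config have already been initialized); the potential problem is described
-next:</p>
-
-<pre>
-// A SurfaceTexture is an acceptable "native window" argument for EGL14.
-SurfaceTexture st = textureView.getSurfaceTexture();
-EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
-        eglDisplay, eglConfig, st, new int[] { EGL14.EGL_NONE }, 0);
-</pre>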
-
-<p>In most of what we've looked at, the BufferQueues have passed buffers between
-different processes.  When rendering to a TextureView with GLES, both producer
-and consumer are in the same process, and they might even be handled on a single
-thread.  Suppose we submit several buffers in quick succession from the UI
-thread.  The EGL buffer swap call will need to dequeue a buffer from the
-BufferQueue, and it will stall until one is available.  There won't be any
-available until the consumer acquires one for rendering, but that also happens
-on the UI thread… so we're stuck.</p>
-
-<p>The solution is to have BufferQueue ensure there is always a buffer
-available to be dequeued, so the buffer swap never stalls.  One way to guarantee
-this is to have BufferQueue discard the contents of the previously-queued buffer
-when a new buffer is queued, and to place restrictions on minimum buffer counts
-and maximum acquired buffer counts.  (If your queue has three buffers, and all
-three buffers are acquired by the consumer, then there's nothing to dequeue and
-the buffer swap call must hang or fail.  So we need to prevent the consumer from
-acquiring more than two buffers at once.)  Dropping buffers is usually
-undesirable, so it's only enabled in specific situations, such as when the
-producer and consumer are in the same process.</p>
-
-<h3 id="surface-or-texture">SurfaceView or TextureView?</h3>
-<p>SurfaceView and TextureView fill similar roles, but have very different
-implementations.  To decide which is best requires an understanding of the
-trade-offs.</p>
-
-<p>Because TextureView is a proper citizen of the View hierarchy, it behaves like
-any other View, and can overlap or be overlapped by other elements.  You can
-perform arbitrary transformations and retrieve the contents as a bitmap with
-simple API calls.</p>
-
-<p>The main strike against TextureView is the performance of the composition step.
-With SurfaceView, the content is written to a separate layer that SurfaceFlinger
-composites, ideally with an overlay.  With TextureView, the View composition is
-always performed with GLES, and updates to its contents may cause other View
-elements to redraw as well (e.g. if they're positioned on top of the
-TextureView).  After the View rendering completes, the app UI layer must then be
-composited with other layers by SurfaceFlinger, so you're effectively
-compositing every visible pixel twice.  For a full-screen video player, or any
-other application that is effectively just UI elements layered on top of video,
-SurfaceView offers much better performance.</p>
-
-<p>As noted earlier, DRM-protected video can be presented only on an overlay plane.
- Video players that support protected content must be implemented with
-SurfaceView.</p>
-
-<h3 id="grafika">Case Study: Grafika's Play Video (TextureView)</h3>
-
-<p>Grafika includes a pair of video players, one implemented with TextureView, the
-other with SurfaceView.  The video decoding portion, which just sends frames
-from MediaCodec to a Surface, is the same for both.  The most interesting
-differences between the implementations are the steps required to present the
-correct aspect ratio.</p>
-
-<p>While SurfaceView requires a custom implementation of FrameLayout, resizing
-SurfaceTexture is a simple matter of configuring a transformation matrix with
-<code>TextureView#setTransform()</code>.  For the former, you're sending new
-window position and size values to SurfaceFlinger through WindowManager; for
-the latter, you're just rendering it differently.</p>
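-
-<p>A sketch of the TextureView side (assuming a 4:3 video and an already
-laid-out view; not the actual Grafika code):</p>
-
-<pre>
-int viewWidth = textureView.getWidth();
-int viewHeight = textureView.getHeight();
-float videoAspect = 4f / 3f;
-float viewAspect = (float) viewWidth / viewHeight;
-
-Matrix matrix = new Matrix();
-if (videoAspect > viewAspect) {
-    // Video is relatively wider than the view: shrink vertically (letterbox).
-    matrix.setScale(1f, viewAspect / videoAspect, viewWidth / 2f, viewHeight / 2f);
-} else {
-    // Video is relatively narrower: shrink horizontally (pillarbox).
-    matrix.setScale(videoAspect / viewAspect, 1f, viewWidth / 2f, viewHeight / 2f);
-}
-textureView.setTransform(matrix);
-</pre>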
-
-<p>Otherwise, both implementations follow the same pattern.  Once the Surface has
-been created, playback is enabled.  When "play" is hit, a video decoding thread
-is started, with the Surface as the output target.  After that, the app code
-doesn't have to do anything -- composition and display will either be handled by
-SurfaceFlinger (for the SurfaceView) or by TextureView.</p>
-
-<h3 id="decode">Case Study: Grafika's Double Decode</h3>
-
-<p>This activity demonstrates manipulation of the SurfaceTexture inside a
-TextureView.</p>
-
-<p>The basic structure of this activity is a pair of TextureViews that show two
-different videos playing side-by-side.  To simulate the needs of a
-videoconferencing app, we want to keep the MediaCodec decoders alive when the
-activity is paused and resumed for an orientation change.  The trick is that you
-can't change the Surface that a MediaCodec decoder uses without fully
-reconfiguring it, which is a fairly expensive operation; so we want to keep the
-Surface alive.  The Surface is just a handle to the producer interface in the
-SurfaceTexture's BufferQueue, and the SurfaceTexture is managed by the
-TextureView, so we also need to keep the SurfaceTexture alive.  So how do we deal
-with the TextureView getting torn down?</p>
-
-<p>It just so happens TextureView provides a <code>setSurfaceTexture()</code> call
-that does exactly what we want.  We obtain references to the SurfaceTextures
-from the TextureViews and save them in a static field.  When the activity is
-shut down, we return "false" from the <code>onSurfaceTextureDestroyed()</code>
-callback to prevent destruction of the SurfaceTexture.  When the activity is
-restarted, we stuff the old SurfaceTexture into the new TextureView.  The
-TextureView class takes care of creating and destroying the EGL contexts.</p>
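-
-<p>A condensed sketch of that pattern (field and constant names are
-illustrative; the real activity manages one SurfaceTexture per TextureView):</p>
-
-<pre>
-// In the Activity:
-private static SurfaceTexture sCachedSurfaceTexture;   // survives activity restarts
-
-// In onCreate():
-textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
-    @Override
-    public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
-        if (sCachedSurfaceTexture == null) {
-            sCachedSurfaceTexture = st;      // first start: keep the new one
-        } else {
-            textureView.setSurfaceTexture(sCachedSurfaceTexture);   // reuse it
-        }
-    }
-    @Override
-    public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
-        return false;   // false = don't release; we keep using it after restart
-    }
-    @Override
-    public void onSurfaceTextureSizeChanged(SurfaceTexture st, int width, int height) {}
-    @Override
-    public void onSurfaceTextureUpdated(SurfaceTexture st) {}
-});
-</pre>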
-
-<p>Each video decoder is driven from a separate thread.  At first glance it might
-seem like we need EGL contexts local to each thread; but remember the buffers
-with decoded output are actually being sent from mediaserver to our
-BufferQueue consumers (the SurfaceTextures).  The TextureViews take care of the
-rendering for us, and they execute on the UI thread.</p>
-
-<p>Implementing this activity with SurfaceView would be a bit harder.  We can't
-just create a pair of SurfaceViews and direct the output to them, because the
-Surfaces would be destroyed during an orientation change.  Besides, that would
-add two layers, and limitations on the number of available overlays strongly
-motivate us to keep the number of layers to a minimum.  Instead, we'd want to
-create a pair of SurfaceTextures to receive the output from the video decoders,
-and then perform the rendering in the app, using GLES to render two textured
-quads onto the SurfaceView's Surface.</p>
-
-<h2 id="notes">Conclusion</h2>
-
-<p>We hope this page has provided useful insights into the way Android handles
-graphics at the system level.</p>
-
-<p>Some information and advice on related topics can be found in the appendices
-that follow.</p>
-
-<h2 id="loops">Appendix A: Game Loops</h2>
-
-<p>A very popular way to implement a game loop looks like this:</p>
-
-<pre>
-while (playing) {
-    advance state by one frame
-    render the new frame
-    sleep until it’s time to do the next frame
-}
-</pre>
-
-<p>There are a few problems with this, the most fundamental being the idea that the
-game can define what a "frame" is.  Different displays will refresh at different
-rates, and that rate may vary over time.  If you generate frames faster than the
-display can show them, you will have to drop one occasionally.  If you generate
-them too slowly, SurfaceFlinger will periodically fail to find a new buffer to
-acquire and will re-show the previous frame.  Both of these situations can
-cause visible glitches.</p>
-
-<p>What you need to do is match the display's frame rate, and advance game state
-according to how much time has elapsed since the previous frame.  There are two
-ways to go about this: (1) stuff the BufferQueue full and rely on the "swap
-buffers" back-pressure; (2) use Choreographer (API 16+).</p>
-
-<h3 id="stuffing">Queue Stuffing</h3>
-
-<p>This is very easy to implement: just swap buffers as fast as you can.  In early
-versions of Android this could actually result in a penalty where
-<code>SurfaceView#lockCanvas()</code> would put you to sleep for 100ms.  Now
-it's paced by the BufferQueue, and the BufferQueue is emptied as quickly as
-SurfaceFlinger is able.</p>
-
-<p>One example of this approach can be seen in <a
-href="https://code.google.com/p/android-breakout/">Android Breakout</a>.  It
-uses GLSurfaceView, which runs in a loop that calls the application's
-onDrawFrame() callback and then swaps the buffer.  If the BufferQueue is full,
-the <code>eglSwapBuffers()</code> call will wait until a buffer is available.
-Buffers become available when SurfaceFlinger releases them, which it does after
-acquiring a new one for display.  Because this happens on VSYNC, your draw loop
-timing will match the refresh rate.  Mostly.</p>
-
-<p>There are a couple of problems with this approach.  First, the app is tied to
-SurfaceFlinger activity, which is going to take different amounts of time
-depending on how much work there is to do and whether it's fighting for CPU time
-with other processes.  Since your game state advances according to the time
-between buffer swaps, your animation won't update at a consistent rate.  When
-running at 60fps with the inconsistencies averaged out over time, though, you
-probably won't notice the bumps.</p>
-
-<p>Second, the first couple of buffer swaps are going to happen very quickly
-because the BufferQueue isn't full yet.  The computed time between frames will
-be near zero, so the game will generate a few frames in which nothing happens.
-In a game like Breakout, which updates the screen on every refresh, the queue is
-always full except when a game is first starting (or un-paused), so the effect
-isn't noticeable.  A game that pauses animation occasionally and then returns to
-as-fast-as-possible mode might see odd hiccups.</p>
-
-<h3 id="choreographer">Choreographer</h3>
-
-<p>Choreographer allows you to set a callback that fires on the next VSYNC.  The
-actual VSYNC time is passed in as an argument.  So even if your app doesn't wake
-up right away, you still have an accurate picture of when the display refresh
-period began.  Using this value, rather than the current time, yields a
-consistent time source for your game state update logic.</p>
-
-<p>Unfortunately, the fact that you get a callback after every VSYNC does not
-guarantee that your callback will be executed in a timely fashion or that you
-will be able to act upon it sufficiently swiftly.  Your app will need to detect
-situations where it's falling behind and drop frames manually.</p>
-
-<p>The "Record GL app" activity in Grafika provides an example of this.  On some
-devices (e.g. Nexus 4 and Nexus 5), the activity will start dropping frames if
-you just sit and watch.  The GL rendering is trivial, but occasionally the View
-elements get redrawn, and the measure/layout pass can take a very long time if
-the device has dropped into a reduced-power mode.  (According to systrace, it
-takes 28ms instead of 6ms after the clocks slow on Android 4.4.  If you drag
-your finger around the screen, it thinks you're interacting with the activity,
-so the clock speeds stay high and you'll never drop a frame.)</p>
-
-<p>The simple fix was to drop a frame in the Choreographer callback if the current
-time is more than N milliseconds after the VSYNC time.  Ideally the value of N
-is determined based on previously observed VSYNC intervals.  For example, if the
-refresh period is 16.7ms (60fps), you might drop a frame if you're running more
-than 15ms late.</p>
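-
-<p>A sketch of that callback (the 15ms threshold and the
-<code>updateGameState()</code> / <code>drawFrame()</code> helpers are
-illustrative):</p>
-
-<pre>
-private final Choreographer.FrameCallback mFrameCallback = new Choreographer.FrameCallback() {
-    @Override
-    public void doFrame(long frameTimeNanos) {
-        Choreographer.getInstance().postFrameCallback(this);   // keep the loop going
-
-        long lagNanos = System.nanoTime() - frameTimeNanos;
-        if (lagNanos > 15000000L) {
-            return;                          // badly late; drop this frame
-        }
-        updateGameState(frameTimeNanos);     // advance based on the VSYNC timestamp
-        drawFrame();
-    }
-};
-
-// Kick things off, e.g. in onResume():
-Choreographer.getInstance().postFrameCallback(mFrameCallback);
-</pre>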
-
-<p>If you watch "Record GL app" run, you will see the dropped-frame counter
-increase, and even see a flash of red in the border when frames drop.  Unless
-your eyes are very good, though, you won't see the animation stutter.  At 60fps,
-the app can drop the occasional frame without anyone noticing so long as the
-animation continues to advance at a constant rate.  How much you can get away
-with depends to some extent on what you're drawing, the characteristics of the
-display, and how good the person using the app is at detecting jank.</p>
-
-<h3 id="thread">Thread Management</h3>
-
-<p>Generally speaking, if you're rendering onto a SurfaceView, GLSurfaceView, or
-TextureView, you want to do that rendering in a dedicated thread.  Never do any
-"heavy lifting" or anything that takes an indeterminate amount of time on the
-UI thread.</p>
-
-<p>Breakout and "Record GL app" use dedicated renderer threads, and they also
-update animation state on that thread.  This is a reasonable approach so long as
-game state can be updated quickly.</p>
-
-<p>Other games separate the game logic and rendering completely.  If you had a
-simple game that did nothing but move a block every 100ms, you could have a
-dedicated thread that just did this:</p>
-
-<pre>
-    @Override public void run() {
-        while (true) {
-            try {
-                Thread.sleep(100);
-            } catch (InterruptedException ie) {
-                return;     // interrupted when the game shuts down
-            }
-            synchronized (mLock) {
-                moveBlock();
-            }
-        }
-    }
-</pre>
-
-<p>(You may want to base the sleep time off of a fixed clock to prevent drift --
-sleep() isn't perfectly consistent, and moveBlock() takes a nonzero amount of
-time -- but you get the idea.)</p>
-
-<p>When the draw code wakes up, it just grabs the lock, gets the current position
-of the block, releases the lock, and draws.  Instead of doing fractional
-movement based on inter-frame delta times, you just have one thread that moves
-things along and another thread that draws things wherever they happen to be
-when the drawing starts.</p>
-
-<p>For a scene with any complexity you'd want to create a list of upcoming events
-sorted by wake time, and sleep until the next event is due, but it's the same
-idea.</p>
-
-<h2 id="activity">Appendix B: SurfaceView and the Activity Lifecycle</h2>
-
-<p>When using a SurfaceView, it's considered good practice to render the Surface
-from a thread other than the main UI thread.  This raises some questions about
-the interaction between that thread and the Activity lifecycle.</p>
-
-<p>First, a little background.  For an Activity with a SurfaceView, there are two
-separate but interdependent state machines:</p>
-
-<ol>
-<li>Application onCreate / onResume / onPause</li>
-<li>Surface created / changed / destroyed</li>
-</ol>
-
-<p>When the Activity starts, you get callbacks in this order:</p>
-
-<ul>
-<li>onCreate</li>
-<li>onResume</li>
-<li>surfaceCreated</li>
-<li>surfaceChanged</li>
-</ul>
-
-<p>If you hit "back" you get:</p>
-
-<ul>
-<li>onPause</li>
-<li>surfaceDestroyed (called just before the Surface goes away)</li>
-</ul>
-
-<p>If you rotate the screen, the Activity is torn down and recreated, so you
-get the full cycle.  If it matters, you can tell that it's a "quick" restart by
-checking <code>isFinishing()</code>.  (It might be possible to start / stop an
-Activity so quickly that surfaceCreated() might actually happen after onPause().)</p>
-
-<p>If you tap the power button to blank the screen, you only get
-<code>onPause()</code> -- no <code>surfaceDestroyed()</code>.  The Surface
-remains alive, and rendering can continue.  You can even keep getting
-Choreographer events if you continue to request them.  If you have a lock
-screen that forces a different orientation, your Activity may be restarted when
-the device is unblanked; but if not, you can come out of screen-blank with the
-same Surface you had before.</p>
-
-<p>This raises a fundamental question when using a separate renderer thread with
-SurfaceView: Should the lifespan of the thread be tied to that of the Surface or
-the Activity?  The answer depends on what you want to have happen when the
-screen goes blank. There are two basic approaches: (1) start/stop the thread on
-Activity start/stop; (2) start/stop the thread on Surface create/destroy.</p>
-
-<p>#1 interacts well with the app lifecycle. We start the renderer thread in
-<code>onResume()</code> and stop it in <code>onPause()</code>. It gets a bit
-awkward when creating and configuring the thread because sometimes the Surface
-will already exist and sometimes it won't (e.g. it's still alive after toggling
-the screen with the power button).  We have to wait for the surface to be
-created before we do some initialization in the thread, but we can't simply do
-it in the <code>surfaceCreated()</code> callback because that won't fire again
-if the Surface didn't get recreated.  So we need to query or cache the Surface
-state, and forward it to the renderer thread. Note we have to be a little
-careful here passing objects between threads -- it is best to pass the Surface or
-SurfaceHolder through a Handler message, rather than just stuffing it into the
-thread, to avoid issues on multi-core systems (cf. the <a
-href="http://developer.android.com/training/articles/smp.html">Android SMP
-Primer</a>).</p>
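-
-<p>A sketch of that hand-off (assuming the renderer thread exposes its own
-Handler, and using a hypothetical <code>MSG_SURFACE_CREATED</code> message):</p>
-
-<pre>
-// SurfaceHolder.Callback, invoked on the UI thread:
-@Override
-public void surfaceCreated(SurfaceHolder holder) {
-    // Don't poke the render thread's fields directly; send the Surface in a
-    // message so the hand-off is properly ordered between threads.
-    Handler handler = mRenderThread.getHandler();
-    handler.sendMessage(handler.obtainMessage(MSG_SURFACE_CREATED, holder));
-}
-</pre>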
-
-<p>#2 has a certain appeal because the Surface and the renderer are logically
-intertwined. We start the thread after the Surface has been created, which
-avoids some inter-thread communication concerns.  Surface created / changed
-messages are simply forwarded.  We need to make sure rendering stops when the
-screen goes blank, and resumes when it un-blanks; this could be a simple matter
-of telling Choreographer to stop invoking the frame draw callback.  Our
-<code>onResume()</code> will need to resume the callbacks if and only if the
-renderer thread is running.  It may not be so trivial though -- if we animate
-based on elapsed time between frames, we could have a very large gap when the
-next event arrives; so an explicit pause/resume message may be desirable.</p>
-
-<p>The above is primarily concerned with how the renderer thread is configured and
-whether it's executing. A related concern is extracting state from the thread
-when the Activity is killed (in <code>onPause()</code> or <code>onSaveInstanceState()</code>).
-Approach #1 will work best for that, because once the renderer thread has been
-joined its state can be accessed without synchronization primitives.</p>
-
-<p>You can see an example of approach #2 in Grafika's "Hardware scaler exerciser."</p>
-
-<h2 id="tracking">Appendix C: Tracking BufferQueue with systrace</h2>
-
-<p>If you really want to understand how graphics buffers move around, you need to
-use systrace.  The system-level graphics code is well instrumented, as is much
-of the relevant app framework code.  Enable the "gfx" and "view" tags, and
-generally "sched" as well.</p>
-
-<p>A full description of how to use systrace effectively would fill a rather long
-document.  One noteworthy item is the presence of BufferQueues in the trace.  If
-you've used systrace before, you've probably seen them, but maybe weren't sure
-what they were.  As an example, if you grab a trace while Grafika's "Play video
-(SurfaceView)" is running, you will see a row labeled: "SurfaceView"  This row
-tells you how many buffers were queued up at any given time.</p>
-
-<p>You'll notice the value increments while the app is active -- triggering
-the rendering of frames by the MediaCodec decoder -- and decrements while
-SurfaceFlinger is doing work, consuming buffers.  If you're showing video at
-30fps, the queue's value will vary from 0 to 1, because the ~60fps display can
-easily keep up with the source.  (You'll also notice that SurfaceFlinger is only
-waking up when there's work to be done, not 60 times per second.  The system tries
-very hard to avoid work and will disable VSYNC entirely if nothing is updating
-the screen.)</p>
-
-<p>If you switch to "Play video (TextureView)" and grab a new trace, you'll see a
-row with a much longer name
-("com.android.grafika/com.android.grafika.PlayMovieActivity").  This is the
-main UI layer, which is of course just another BufferQueue.  Because TextureView
-renders into the UI layer, rather than a separate layer, you'll see all of the
-video-driven updates here.</p>
-
-<p>For more information about systrace, see the <a
-href="http://developer.android.com/tools/help/systrace.html">Android
-documentation</a> for the tool.</p>
diff --git a/src/devices/graphics/images/ape_graphics_vulkan.png b/src/devices/graphics/images/ape_graphics_vulkan.png
new file mode 100644
index 0000000..d25ac5d
--- /dev/null
+++ b/src/devices/graphics/images/ape_graphics_vulkan.png
Binary files differ
diff --git a/src/devices/graphics/implement-vulkan.jd b/src/devices/graphics/implement-vulkan.jd
index d69ec4b..7a3dee3 100644
--- a/src/devices/graphics/implement-vulkan.jd
+++ b/src/devices/graphics/implement-vulkan.jd
@@ -26,9 +26,282 @@
 </div>
 
 
-<p>Vulkan is a low-overhead, cross-platform API for high-performance 3D graphics.
-Like OpenGL ES, Vulkan provides tools for creating high-quality, real-time
-graphics in applications. Vulkan advantages include reductions in CPU overhead
-and support for the SPIR-V Binary Intermediate language.</p>
+<p>Vulkan is a low-overhead, cross-platform API for high-performance 3D
+graphics. Like OpenGL ES, Vulkan provides tools for creating high-quality,
+real-time graphics in applications. Vulkan advantages include reductions in CPU
+overhead and support for the <a href="https://www.khronos.org/spir">SPIR-V
+Binary Intermediate</a> language.</p>
 
-<p>Details coming soon!</p>
+<p class="note"><strong>Note:</strong> This section describes Vulkan
+implementation; for details on Vulkan architecture, advantages, API, and other
+resources, see <a href="{@docRoot}devices/graphics/arch-vulkan.html">Vulkan
+Architecture</a>.</p>
+
+<p>To implement Vulkan, a device:</p>
+<ul>
+<li>Must include the Vulkan loader (provided by Android) in the build.</li>
+<li>May enumerate a Vulkan driver (provided by SoCs such as GPU IHVs)
+that implements the
+<a href="https://www.khronos.org/registry/vulkan/specs/1.0-wsi_extensions/xhtml/vkspec.html">Vulkan
+API</a>. A driver is required for Vulkan functionality on devices with capable
+GPU hardware. Consult your SoC vendor to request driver support.</li>
+</ul>
+<p>If a Vulkan driver is enumerated, the device must have the
+<code>FEATURE_VULKAN_HARDWARE_LEVEL</code> and
+<code>FEATURE_VULKAN_HARDWARE_VERSION</code> system features, with versions that
+accurately reflect the capabilities of the device.</p>
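+
+<p>As a rough app-side illustration of how these features are consumed (the
+<code>context</code> variable and the version values are examples, not
+requirements):</p>
+
+<p><pre>
+PackageManager pm = context.getPackageManager();
+
+// True if the device advertises any Vulkan driver at all.
+boolean hasLevel = pm.hasSystemFeature(PackageManager.FEATURE_VULKAN_HARDWARE_LEVEL, 0);
+
+// True if the driver reports Vulkan API version 1.0.3 (0x400003) or higher.
+boolean hasVersion = pm.hasSystemFeature(PackageManager.FEATURE_VULKAN_HARDWARE_VERSION, 0x400003);
+</pre></p>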
+
+<h2 id=vulkan_loader>Vulkan Loader</h2>
+<p>The primary interface between Vulkan applications and a device's Vulkan
+driver is the loader, which is part of AOSP
+(<code>platform/frameworks/native/vulkan</code>) and installed at
+<code>/system/lib[64]/libvulkan.so</code>. The loader provides the core Vulkan
+API entry points, as well as entry points of a few extensions that are required
+on Android and always present. In particular, Window System Integration (WSI)
+extensions are exported by the loader and primarily implemented in it rather
+than the driver. The loader also supports enumerating and loading layers that
+can expose additional extensions and/or intercept core API calls on their way to
+the driver.</p>
+
+<p>The NDK includes a stub <code>libvulkan.so</code> exporting the same symbols
+as the loader. Calling the Vulkan functions exported from
+<code>libvulkan.so</code> enters trampoline functions in the loader, which then
+dispatch to the appropriate layer or driver based on their first argument. The
+<code>vkGet*ProcAddr</code> calls return the function pointers to which the
+trampolines would dispatch, so calling through these function pointers (rather
+than the exported symbols) is slightly more efficient as it skips the trampoline
+and dispatch.</p>
+
+<h2 id=driver_enum>Driver enumeration and loading</h2>
+<p>Android expects the GPUs available to the system to be known when the system
+image is built. The loader uses the existing HAL mechanism (see
+<code><a href="https://android.googlesource.com/platform/hardware/libhardware/+/marshmallow-release/include/hardware/hardware.h">hardware.h</code></a>) for
+discovering and loading the driver. Preferred paths for 32-bit and 64-bit Vulkan
+drivers are:</p>
+
+<p>
+<pre>/vendor/lib/hw/vulkan.&lt;ro.product.platform&gt;.so
+/vendor/lib64/hw/vulkan.&lt;ro.product.platform&gt;.so
+</pre>
+</p>
+
+<p>Where &lt;<code>ro.product.platform</code>&gt; is replaced by the value of
+the system property of that name. For details and supported alternative
+locations, refer to
+<code><a href="https://android.googlesource.com/platform/hardware/libhardware/+/marshmallow-release/hardware.c">libhardware/hardware.c</code></a>.</p>
+
+<p>In Android 7.0, the Vulkan <code>hw_module_t</code> derivative is trivial;
+only one driver is supported and the constant string
+<code>HWVULKAN_DEVICE_0</code> is passed to open. If support for multiple
+drivers is added in future versions of Android, the HAL module will export a
+list of strings that can be passed to the <code>module open</code> call.</p>
+
+<p>The Vulkan <code>hw_device_t</code> derivative corresponds to a single
+driver, though that driver can support multiple physical devices. The
+<code>hw_device_t</code> structure can be extended to export
+<code>vkGetGlobalExtensionProperties</code>, <code>vkCreateInstance</code>, and
+<code>vkGetInstanceProcAddr</code> functions. The loader can find all other
+<code>VkInstance</code>, <code>VkPhysicalDevice</code>, and
+<code>vkGetDeviceProcAddr</code> functions by calling
+<code>vkGetInstanceProcAddr</code>.</p>
+
+<h2 id=layer_discover>Layer discovery and loading</h2>
+<p>The Vulkan loader supports enumerating and loading layers that can expose
+additional extensions and/or intercept core API calls on their way to the
+driver. Android 7.0 does not include layers on the system image; however,
+applications may include layers in their APK and SoC developer tools (ARM DS-5,
+Adreno SDK, PowerVR Tools, etc.) may also include layers.</p>
+<p>When using layers, keep in mind that Android's security model and policies
+differ significantly from other platforms. In particular, Android does not allow
+loading external code into a non-debuggable process on production (non-rooted)
+devices, nor does it allow external code to inspect or control the process's
+memory, state, etc. This includes a prohibition on saving core dumps, API
+traces, etc. to disk for later inspection. Only layers delivered as part of the
+application are enabled on production devices, and drivers must not provide
+functionality that violates these policies.</p>
+
+<p>Use cases for layers include:</p>
+<ul>
+<li><strong>Development-time layers</strong>. These layers (validation layers,
+shims for tracing/profiling/debugging tools, etc.) should not be installed on
+the system image of production devices as they waste space for users and should
+be updateable without requiring a system update. Developers who want to use one
+of these layers during development can modify the application package (e.g.
+adding a file to their native libraries directory). IHV and OEM engineers who
+want to diagnose failures in shipping, unmodifiable apps are assumed to have
+access to non-production (rooted) builds of the system image.</li>
+<li><strong>Utility layers</strong>. These layers almost always expose
+extensions, such as a layer that implements a memory manager for device memory.
+Developers choose layers (and versions of those layers) to use in their
+application; different applications using the same layer may still use
+different versions. Developers choose which of these layers to ship in their
+application package.</li>
+<li><strong>Injected layers</strong>. Includes layers such as framerate, social
+network, or game launcher overlays provided by the user or some other
+application without the application's knowledge or consent. These violate
+Android's security policies and will not be supported.</li>
+</ul>
+
+<p>In the normal state, the loader searches for layers only in the application's
+native library directory and attempts to load any library with a name matching a
+particular pattern (e.g. <code>libVKLayer_foo.so</code>). It does not need a
+separate manifest file: the developer deliberately included these layers in the
+application, so the usual reasons to avoid loading libraries before they are
+enabled do not apply.</p>
+
+<p>Layers can be ported between Android and other platforms with only
+build-environment changes. The interface between layers and the loader must
+match the interface used by the
+<a href="http://lunarg.com/vulkan-sdk/">LunarG</a> loader used on Windows and
+Linux. Versions of the LunarG validation layers that have been verified to build
+and work on Android are hosted in the android_layers branch of the
+<a href="https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers/tree/android_layers">KhronosGroup/Vulkan-LoaderAndValidationLayers</a>
+project on GitHub.</p>
+
+<h2 id=wsi>Window System Integration (WSI)</h2>
+<p>The Window System Integration (WSI) extensions <code>VK_KHR_surface</code>,
+<code>VK_KHR_android_surface</code>, and <code>VK_KHR_swapchain</code> are
+implemented by the platform and live in <code>libvulkan.so</code>. The
+<code>VkSwapchain</code> object and all interaction with
+<code>ANativeWindow</code> are handled by the platform and not exposed to
+drivers. The WSI implementation relies on the
+<code>VK_ANDROID_native_buffer</code> extension (described below) which must be
+supported by the driver; this extension is only used by the WSI implementation
+and will not be exposed to applications.</p>
+
+<h3 id=gralloc_usage_flags>Gralloc usage flags</h3>
+<p>Implementations may need swapchain buffers to be allocated with
+implementation-defined private gralloc usage flags. When creating a swapchain,
+the platform asks the driver to translate the requested format and image usage
+flags into gralloc usage flags by calling:</p>
+
+<p>
+<pre>
+VkResult VKAPI vkGetSwapchainGrallocUsageANDROID(
+    VkDevice            device,
+    VkFormat            format,
+    VkImageUsageFlags   imageUsage,
+    int*                grallocUsage
+);
+</pre>
+</p>
+
+<p>The format and <code>imageUsage</code> parameters are taken from the
+<code>VkSwapchainCreateInfoKHR</code> structure. The driver should fill
+<code>*grallocUsage</code> with the gralloc usage flags required for the format
+and usage (which are combined with the usage flags requested by the swapchain
+consumer when allocating buffers).</p>
+
+<h3 id=gralloc_backed_images>Gralloc-backed images</h3>
+
+<p><code>VkNativeBufferANDROID</code> is a <code>vkCreateImage</code> extension
+structure for creating an image backed by a gralloc buffer. This structure is
+provided to <code>vkCreateImage</code> in the <code>VkImageCreateInfo</code>
+structure chain. Calls to <code>vkCreateImage</code> with this structure happen
+during the first call to <code>vkGetSwapChainInfoWSI(..
+VK_SWAP_CHAIN_INFO_TYPE_IMAGES_WSI ..)</code>. The WSI implementation allocates
+the number of native buffers requested for the swapchain, then creates a
+<code>VkImage</code> for each one:</p>
+
+<p><pre>
+typedef struct {
+    VkStructureType             sType; // must be VK_STRUCTURE_TYPE_NATIVE_BUFFER_ANDROID
+    const void*                 pNext;
+
+    // Buffer handle and stride returned from gralloc alloc()
+    buffer_handle_t             handle;
+    int                         stride;
+
+    // Gralloc format and usage requested when the buffer was allocated.
+    int                         format;
+    int                         usage;
+} VkNativeBufferANDROID;
+</pre></p>
+
+<p>When creating a gralloc-backed image, the <code>VkImageCreateInfo</code> has
+the following data:</p>
+
+<p><pre>
+  .imageType           = VK_IMAGE_TYPE_2D
+  .format              = a VkFormat matching the format requested for the gralloc buffer
+  .extent              = the 2D dimensions requested for the gralloc buffer
+  .mipLevels           = 1
+  .arraySize           = 1
+  .samples             = 1
+  .tiling              = VK_IMAGE_TILING_OPTIMAL
+  .usage               = VkSwapChainCreateInfoWSI::imageUsageFlags
+  .flags               = 0
+  .sharingMode         = VkSwapChainCreateInfoWSI::sharingMode
+  .queueFamilyCount    = VkSwapChainCreateInfoWSI::queueFamilyCount
+  .pQueueFamilyIndices = VkSwapChainCreateInfoWSI::pQueueFamilyIndices
+</pre></p>
+
+
+<h3 id=acquire_image>Acquiring images</h3>
+<p><code>vkAcquireImageANDROID</code> acquires ownership of a swapchain image
+and imports an externally-signalled native fence into both an existing
+<code>VkSemaphore</code> object and an existing <code>VkFence</code> object:</p>
+
+<p><pre>
+VkResult VKAPI vkAcquireImageANDROID(
+    VkDevice            device,
+    VkImage             image,
+    int                 nativeFenceFd,
+    VkSemaphore         semaphore,
+    VkFence             fence
+);
+</pre></p>
+
+<p>This function is called during <code>vkAcquireNextImageWSI</code> to import a
+native fence into the <code>VkSemaphore</code> and <code>VkFence</code> objects
+provided by the application (however, both semaphore and fence objects are
+optional in this call). The driver may also use this opportunity to recognize
+and handle any external changes to the gralloc buffer state; many drivers won't
+need to do anything here. This call puts the <code>VkSemaphore</code> and
+<code>VkFence</code> into the same pending state as
+<code>vkQueueSignalSemaphore</code> and <code>vkQueueSubmit</code> respectively,
+so queues can wait on the semaphore and the application can wait on the fence.</p>
+
+<p>Both objects become signalled when the underlying native fence signals; if
+the native fence has already signalled, then the semaphore is in the signalled
+state when this function returns. The driver takes ownership of the fence fd and
+is responsible for closing it when no longer needed. It must do so even if
+neither a semaphore nor a fence object is provided, or even if
+<code>vkAcquireImageANDROID</code> fails and returns an error. If
+<code>nativeFenceFd</code> is -1,
+it is as if the native fence was already signalled.</p>
+
+<h3 id=release_image>Releasing images</h3>
+<p><code>vkQueueSignalReleaseImageANDROID</code> prepares a swapchain image for
+external use, creates a native fence, and schedules it to be signalled when
+prior work on the queue has completed:</p>
+
+<p><pre>
+VkResult VKAPI vkQueueSignalReleaseImageANDROID(
+    VkQueue             queue,
+    VkImage             image,
+    int*                pNativeFenceFd
+);
+</pre></p>
+
+<p>This API is called during <code>vkQueuePresentWSI</code> on the provided
+queue. Effects are similar to <code>vkQueueSignalSemaphore</code>, except with a
+native fence instead of a semaphore. Unlike <code>vkQueueSignalSemaphore</code>,
+however, this call creates and returns the synchronization object that will be
+signalled rather than having it provided as input. If the queue is already idle
+when this function is called, it is allowed (but not required) to set
+<code>*pNativeFenceFd</code> to -1. The file descriptor returned in
+<code>*pNativeFenceFd</code> is owned and will be closed by the caller.</p>
+
+
+
+<p>Many drivers can ignore the image parameter, but some may need to prepare
+CPU-side data structures associated with a gralloc buffer for use by external
+image consumers. Preparing buffer contents for use by external consumers should
+have been done asynchronously as part of transitioning the image to
+<code>VK_IMAGE_LAYOUT_PRESENT_SRC_KHR</code>.</p>
+
+<h2 id=validation>Validation</h2>
+<p>OEMs can test their Vulkan implementation using CTS, which includes
+<a href="{@docRoot}devices/graphics/cts-integration.html">drawElements
+Quality Program (dEQP)</a> tests that exercise the Vulkan Runtime.</p>
diff --git a/src/devices/graphics/implement.jd b/src/devices/graphics/implement.jd
index 178f4b8..54b4620 100644
--- a/src/devices/graphics/implement.jd
+++ b/src/devices/graphics/implement.jd
@@ -38,6 +38,7 @@
     <li>OpenGL ES 1.x driver</li>
     <li>OpenGL ES 2.0 driver</li>
     <li>OpenGL ES 3.x driver (optional)</li>
+    <li>Vulkan (optional)</li>
     <li>Gralloc HAL implementation</li>
     <li>Hardware Composer HAL implementation</li>
 </ul>
@@ -127,6 +128,16 @@
 <a href="{@docRoot}devices/graphics/implement-vsync.html">Implementing
 VSYNC</a>.</p>
 
+<h3 id=vulkan>Vulkan</h3>
+
+<p>Vulkan is a low-overhead, cross-platform API for high-performance 3D graphics.
+Like OpenGL ES, Vulkan provides tools for creating high-quality, real-time
+graphics in applications. Vulkan advantages include reductions in CPU overhead
+and support for the <a href="https://www.khronos.org/spir">SPIR-V Binary
+Intermediate</a> language. For details on Vulkan, see
+<a href="{@docRoot}devices/graphics/implement-vulkan.html">Implementing
+Vulkan</a>.</p>
+
 <h3 id=virtual_displays>Virtual displays</h3>
 
 <p>Android added platform support for virtual displays in Hardware Composer v1.3.