Docs: Basic edits, 80 char column, consistent phrasing

Adding feedback

Bug: 18947865

Change-Id: Ie0538c3fc9e2f672bc5b4ce5560cdee774dc65d2
diff --git a/src/devices/audio/terminology.jd b/src/devices/audio/terminology.jd
index 8a4ce60..cec0bcd 100644
--- a/src/devices/audio/terminology.jd
+++ b/src/devices/audio/terminology.jd
@@ -2,7 +2,7 @@
 @jd:body
 
 <!--
-    Copyright 2014 The Android Open Source Project
+    Copyright 2015 The Android Open Source Project
 
     Licensed under the Apache License, Version 2.0 (the "License");
     you may not use this file except in compliance with the License.
@@ -25,42 +25,44 @@
 </div>
 
 <p>
-This document provides a glossary of audio-related terminology, including
-a list of widely used, generic terms and a list of terms that are specific
-to Android.
+This glossary of audio-related terminology includes widely used generic terms
+and Android-specific terms.
 </p>
 
 <h2 id="genericTerm">Generic Terms</h2>
 
 <p>
-These are audio terms that are widely used, with their conventional meanings.
+The following generic audio terms are used with their conventional meanings.
 </p>
 
 <h3 id="digitalAudioTerms">Digital Audio</h3>
+<p>
+Digital audio terms relate to handling sound using audio signals encoded
+in digital form. For details, refer to
+<a href="http://en.wikipedia.org/wiki/Digital_audio">Digital Audio</a>.
+</p>
 
 <dl>
 
 <dt>acoustics</dt>
 <dd>
-The study of the mechanical properties of sound, for example how the
-physical placement of transducers such as speakers and microphones on
-a device affects perceived audio quality.
+Study of the mechanical properties of sound, such as how the physical
+placement of transducers (speakers, microphones, etc.) on a device affects
+perceived audio quality.
 </dd>
 
 <dt>attenuation</dt>
 <dd>
-A multiplicative factor less than or equal to 1.0,
-applied to an audio signal to decrease the signal level.
-Compare to "gain."
+Multiplicative factor less than or equal to 1.0, applied to an audio signal
+to decrease the signal level. Compare to <em>gain</em>.
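+<p>
+Illustration only (not platform code): a minimal Java sketch applying a
+multiplicative factor to 16-bit PCM samples; a factor at or below 1.0
+attenuates, while a factor above 1.0 applies gain:
+</p>
+<pre>
+final class Level {
+    // Scale 16-bit PCM samples in place by the given factor.
+    static void applyFactor(short[] pcm, float factor) {
+        for (int i = 0; i &lt; pcm.length; i++) {
+            int scaled = Math.round(pcm[i] * factor);
+            // Clamp to 16-bit range; factors above 1.0 can overflow.
+            pcm[i] = (short) Math.max(-32768, Math.min(32767, scaled));
+        }
+    }
+}
+</pre>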
 </dd>
 
 <dt>audiophile</dt>
 <dd>
-An <a href="http://en.wikipedia.org/wiki/Audiophile">audiophile</a>
-is an individual who is concerned with a superior music
-reproduction experience, especially someone willing to make tradeoffs
-(of expense, component size, room design, etc.) beyond what an ordinary
-person might choose.
+Person concerned with a superior music reproduction experience, especially
+willing to make substantial tradeoffs (expense, component size, room design,
+etc.) for sound quality. For details, refer to
+<a href="http://en.wikipedia.org/wiki/Audiophile">audiophile</a>.
 </dd>
 
 <dt>bits per sample or bit depth</dt>
@@ -70,90 +72,84 @@
 
 <dt>channel</dt>
 <dd>
-A single stream of audio information, usually corresponding to one
-location of recording or playback.
+Single stream of audio information, usually corresponding to one location of
+recording or playback.
 </dd>
 
 <dt>downmixing</dt>
 <dd>
-To decrease the number of channels, e.g. from stereo to mono, or from 5.1 to stereo.
-This can be accomplished by dropping some channels, mixing channels, or more advanced signal processing.
-Simple mixing without attenuation or limiting has the potential for overflow and clipping.
-Compare to "upmixing."
+Decrease the number of channels, such as from stereo to mono or from 5.1 to
+stereo. Accomplished by dropping channels, mixing channels, or more advanced
+signal processing. Simple mixing without attenuation or limiting has the
+potential for overflow and clipping. Compare to <em>upmixing</em>.
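+<p>
+An illustrative sketch of the simplest case, stereo to mono by averaging
+interleaved sample pairs; averaging rather than summing avoids the overflow
+and clipping noted above:
+</p>
+<pre>
+final class Downmix {
+    // Average interleaved left/right pairs into a single mono channel.
+    static short[] stereoToMono(short[] stereo) {
+        short[] mono = new short[stereo.length / 2];
+        for (int i = 0; i &lt; mono.length; i++) {
+            mono[i] = (short) ((stereo[2 * i] + stereo[2 * i + 1]) / 2);
+        }
+        return mono;
+    }
+}
+</pre>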
 </dd>
 
 <dt>DSD</dt>
 <dd>
-Direct Stream Digital, a proprietary audio encoding based on
-<a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">pulse-density modulation</a>.
-Whereas PCM encodes a waveform as a sequence of individual audio samples of multiple bits,
-DSD encodes a waveform as a sequence of bits at a very high sample rate.
-For DSD, there is no concept of "samples" in the conventional PCM sense.
-Both PCM and DSD represent multiple channels by independent sequences.
-DSD is better suited to content distribution than as an internal representation for processing,
-as it can be difficult to apply traditional DSP algorithms to DSD.
-DSD is used in
-<a href="http://en.wikipedia.org/wiki/Super_Audio_CD">Super Audio CD</a>
-(SACD) and in DSD over PCM (DoP) for USB.
-See the Wikipedia article
-<a href="http://en.wikipedia.org/wiki/Direct_Stream_Digital">Digital Stream Digital</a>
-for more information.
+Direct Stream Digital. Proprietary audio encoding based on
+<a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">pulse-density
+modulation</a>. While Pulse Code Modulation (PCM) encodes a waveform as a
+sequence of individual audio samples of multiple bits, DSD encodes a waveform as
+a sequence of bits at a very high sample rate (without the concept of samples).
+Both PCM and DSD represent multiple channels by independent sequences. DSD is
+better suited to content distribution than as an internal representation for
+processing as it can be difficult to apply traditional digital signal processing
+(DSP) algorithms to DSD. DSD is used in
+<a href="http://en.wikipedia.org/wiki/Super_Audio_CD">Super Audio CD</a>
+(SACD) and in DSD over PCM (DoP) for USB. For details, refer to
+<a href="http://en.wikipedia.org/wiki/Direct_Stream_Digital">Direct Stream
+Digital</a>.
 </dd>
 
 <dt>duck</dt>
 <dd>
-To temporarily reduce the volume of one stream, when another stream
-becomes active.  For example, if music is playing and a notification arrives,
-then the music stream could be ducked while the notification plays.
-Compare to "mute."
+Temporarily reduce the volume of a stream when another stream becomes active.
+For example, if music is playing when a notification arrives, the music ducks
+while the notification plays. Compare to <em>mute</em>.
 </dd>
 
 <dt>FIFO</dt>
 <dd>
-A hardware module or software data structure that implements
+First In, First Out. Hardware module or software data structure that implements
 <a href="http://en.wikipedia.org/wiki/FIFO">First In, First Out</a>
-queueing of data.  In the context of audio, the data stored in the queue
-are typically audio frames.  A FIFO can be implemented by a
+queueing of data. In an audio context, the data stored in the queue are
+typically audio frames. A FIFO can be implemented by a
 <a href="http://en.wikipedia.org/wiki/Circular_buffer">circular buffer</a>.
 </dd>
 
 <dt>frame</dt>
 <dd>
-A set of samples, one per channel, at a point in time.
+Set of samples, one per channel, at a point in time.
 </dd>
 
 <dt>frames per buffer</dt>
 <dd>
-The number of frames handed from one module to the next at once;
-for example the audio HAL interface uses this concept.
+Number of frames handed from one module to the next at one time. The audio HAL
+interface uses the concept of frames per buffer.
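+<p>
+The arithmetic implied by these definitions, as an illustrative sketch: a
+frame holds one sample per channel, so a buffer occupies the product of
+frames, channel count, and bytes per sample:
+</p>
+<pre>
+final class BufferMath {
+    // Size in bytes of a buffer holding the given number of frames.
+    static int bufferSizeBytes(int frames, int channels, int bytesPerSample) {
+        return frames * channels * bytesPerSample;
+    }
+    // Example: 256 frames of stereo 16-bit PCM is 256 * 2 * 2 = 1024 bytes.
+}
+</pre>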
 </dd>
 
 <dt>gain</dt>
 <dd>
-A multiplicative factor greater than or equal to 1.0,
-applied to an audio signal to increase the signal level.
-Compare to "attenuation."
+Multiplicative factor greater than or equal to 1.0, applied to an audio signal
+to increase the signal level. Compare to <em>attenuation</em>.
 </dd>
 
 <dt>HD audio</dt>
 <dd>
-High-Definition audio, a synonym for "high-resolution audio."
-Not to be confused with Intel High Definition Audio.
+High-Definition audio. Synonym for high-resolution audio (not to be confused
+with Intel High Definition Audio).
 </dd>
 
 <dt>Hz</dt>
 <dd>
-The units for sample rate or frame rate.
+Units for sample rate or frame rate.
 </dd>
 
 <dt>high-resolution audio</dt>
 <dd>
-There is no standard definition, but high-resolution usually means any representation
-with greater bit-depth and sample rate than CDs (which are stereo 16-bit PCM at 44.1 kHz),
-and with no lossy data compression applied.
-Equivalent to "HD audio."  See the Wikipedia article
-<a href="http://en.wikipedia.org/wiki/High-resolution_audio">high-resolution audio</a>
-for more information.
+Representation with greater bit-depth and sample rate than CDs (stereo 16-bit
+PCM at 44.1 kHz) and without lossy data compression. Equivalent to HD audio.
+For details, refer to
+<a href="http://en.wikipedia.org/wiki/High-resolution_audio">high-resolution
+audio</a>.
 </dd>
 
 <dt>latency</dt>
@@ -163,28 +159,28 @@
 
 <dt>lossless</dt>
 <dd>
-A <a href="http://en.wikipedia.org/wiki/Lossless_compression">lossless data compression</a>
-algorithm preserves bit accuracy across encoding and decoding.
-The result of decoding any previously encoded data is equivalent to the original data.
-Examples of lossless audio content distribution formats include
-<a href="http://en.wikipedia.org/wiki/Compact_disc">CDs</a>, PCM within
+A <a href="http://en.wikipedia.org/wiki/Lossless_compression">lossless data
+compression algorithm</a> that preserves bit accuracy across encoding and
+decoding, where the result of decoding previously encoded data is equivalent
+to the original data. Examples of lossless audio content distribution formats
+include <a href="http://en.wikipedia.org/wiki/Compact_disc">CDs</a>, PCM within
 <a href="http://en.wikipedia.org/wiki/WAV">WAV</a>, and
 <a href="http://en.wikipedia.org/wiki/FLAC">FLAC</a>.
-Note that the authoring process may reduce the bit depth or sample rate from that of the
-<a href="http://en.wikipedia.org/wiki/Audio_mastering">masters</a>.
-Distribution formats that preserve the resolution and bit accuracy of masters
-are the subject of "high-resolution audio."
+The authoring process may reduce the bit depth or sample rate from that of the
+<a href="http://en.wikipedia.org/wiki/Audio_mastering">masters</a>; distribution
+formats that preserve the resolution and bit accuracy of masters are the subject
+of high-resolution audio.
 </dd>
 
 <dt>lossy</dt>
 <dd>
-A <a href="http://en.wikipedia.org/wiki/Lossy_compression">lossy data compression</a>
-algorithm attempts to preserve the most important features of media across
-encoding and decoding.  The result of decoding any previously encoded
-data is perceptually similar to the original data, but it is not identical.
-Examples of lossy audio compression algorithms include MP3 and AAC.
-As analog values are from a continuous domain, whereas digital values are discrete,
-ADC and DAC are lossy conversions with respect to amplitude.  See also "transparency."
+A <a href="http://en.wikipedia.org/wiki/Lossy_compression">lossy data
+compression algorithm</a> that attempts to preserve the most important features
+of media across encoding and decoding where the result of decoding previously
+encoded data is perceptually similar to the original data but not identical.
+Examples of lossy audio compression algorithms include MP3 and AAC. As analog
+values are from a continuous domain and digital values are discrete, ADC and DAC
+are lossy conversions with respect to amplitude. See also <em>transparency</em>.
 </dd>
 
 <dt>mono</dt>
@@ -194,61 +190,60 @@
 
 <dt>multichannel</dt>
 <dd>
-See "surround sound."
-Strictly, since stereo is more than one channel, it is also "multi" channel.
-But that usage would be confusing.
+See <em>surround sound</em>. In strict terms, <em>stereo</em> is more than one
+channel and could be considered multichannel; however, such usage is confusing
+and thus avoided.
 </dd>
 
 <dt>mute</dt>
 <dd>
-To (temporarily) force volume to be zero, independently from the usual volume controls.
+Temporarily force volume to be zero, independent of the usual volume controls.
 </dd>
 
 <dt>overrun</dt>
 <dd>
-An audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by failure
-to accept supplied data in sufficient time.
-See Wikipedia article <a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>
-[sic; the article for "buffer overrun" describes an unrelated failure].
-Compare to "underrun."
+Audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by
+failure to accept supplied data in sufficient time. For details, refer to
+<a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
+Compare to <em>underrun</em>.
 </dd>
 
 <dt>panning</dt>
 <dd>
-To direct a signal to a desired position within a stereo or multi-channel field.
+Direct a signal to a desired position within a stereo or multichannel field.
 </dd>
 
 <dt>PCM</dt>
 <dd>
-Pulse Code Modulation, the most common low-level encoding of digital audio.
-The audio signal is sampled at a regular interval, called the sample rate,
-and then quantized to discrete values within a particular range depending on the bit depth.
-For example, for 16-bit PCM, the sample values are integers between -32768 and +32767.
+Pulse Code Modulation. Most common low-level encoding of digital audio. The
+audio signal is sampled at a regular interval, called the sample rate, then
+quantized to discrete values within a particular range depending on the bit
+depth. For example, for 16-bit PCM the sample values are integers between
+-32768 and +32767.
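+<p>
+To make sampling and quantization concrete, an illustrative sketch (not
+platform code) that generates one second of a 440 Hz sine tone as 16-bit
+mono PCM; the rate and frequency are arbitrary:
+</p>
+<pre>
+final class PcmTone {
+    // Sample a sine wave at a regular interval and quantize to 16 bits.
+    static short[] sineTone(double freqHz, int sampleRate) {
+        short[] pcm = new short[sampleRate]; // one second of mono audio
+        for (int i = 0; i &lt; pcm.length; i++) {
+            double t = (double) i / sampleRate;            // time in seconds
+            double v = Math.sin(2 * Math.PI * freqHz * t); // in [-1.0, 1.0]
+            pcm[i] = (short) Math.round(32767 * v);        // quantize
+        }
+        return pcm;
+    }
+}
+</pre>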
 </dd>
 
 <dt>ramp</dt>
 <dd>
-To gradually increase or decrease the level of a particular audio parameter,
-for example volume or the strength of an effect.
-A volume ramp is commonly applied when pausing and resuming music, to avoid a hard audible transition.
+Gradually increase or decrease the level of a particular audio parameter, such
+as the volume or the strength of an effect. A volume ramp is commonly applied
+when pausing and resuming music to avoid a hard audible transition.
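+<p>
+An illustrative sketch of a linear volume ramp across one buffer, as might be
+applied when resuming playback:
+</p>
+<pre>
+final class Ramp {
+    // Scale samples by a factor moving linearly from "from" toward "to".
+    static void linearRamp(short[] pcm, float from, float to) {
+        for (int i = 0; i &lt; pcm.length; i++) {
+            float factor = from + (to - from) * i / pcm.length;
+            pcm[i] = (short) Math.round(pcm[i] * factor);
+        }
+    }
+}
+</pre>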
 </dd>
 
 <dt>sample</dt>
 <dd>
-A number representing the audio value for a single channel at a point in time.
+Number representing the audio value for a single channel at a point in time.
 </dd>
 
 <dt>sample rate or frame rate</dt>
 <dd>
-Number of frames per second;
-note that "frame rate" is thus more accurate,
-but "sample rate" is conventionally used to mean "frame rate."
+Number of frames per second. While <em>frame rate</em> is more accurate,
+<em>sample rate</em> is conventionally used to mean frame rate.
 </dd>
 
 <dt>sonification</dt>
 <dd>
-The use of sound to express feedback or information,
-for example touch sounds and keyboard sounds.
+Use of sound to express feedback or information, such as touch sounds and
+keyboard sounds.
 </dd>
 
 <dt>stereo</dt>
@@ -258,43 +253,45 @@
 
 <dt>stereo widening</dt>
 <dd>
-An effect applied to a stereo signal, to make another stereo signal which sounds fuller and richer.
-The effect can also be applied to a mono signal, in which case it is a type of upmixing.
+Effect applied to a stereo signal to make another stereo signal that sounds
+fuller and richer. The effect can also be applied to a mono signal, where it is
+a type of upmixing.
 </dd>
 
 <dt>surround sound</dt>
 <dd>
-Various techniques for increasing the ability of a listener to perceive
-sound position beyond stereo left and right.
+Techniques for increasing the ability of a listener to perceive sound position
+beyond stereo left and right.
 </dd>
 
 <dt>transparency</dt>
 <dd>
-The ideal result of lossy data compression, as stated in the
-<a href="http://en.wikipedia.org/wiki/Transparency_%28data_compression%29">Transparency</a> Wikipedia article.
-A lossy data conversion is said to be transparent if it is perceptually indistinguishable from the
-original by a human subject.
+Ideal result of lossy data compression. Lossy data conversion is transparent if
+it is perceptually indistinguishable from the original by a human subject. For
+details, refer to
+<a href="http://en.wikipedia.org/wiki/Transparency_%28data_compression%29">Transparency</a>.
 </dd>
 
 <dt>underrun</dt>
 <dd>
-An audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by failure
-to supply needed data in sufficient time.
-See Wikipedia article <a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
-Compare to "overrun."
+Audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by
+failure to supply needed data in sufficient time. For details, refer to
+<a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
+Compare to <em>overrun</em>.
 </dd>
 
 <dt>upmixing</dt>
 <dd>
-To increase the number of channels, e.g. from mono to stereo, or from stereo to surround sound.
-This can be accomplished by duplication, panning, or more advanced signal processing.
-Compare to "downmixing."
+Increase the number of channels, such as from mono to stereo or from stereo to
+surround sound. Accomplished by duplication, panning, or more advanced signal
+processing. Compare to <em>downmixing</em>.
 </dd>
 
 <dt>virtualizer</dt>
 <dd>
-An effect that attempts to spatialize audio channels, such as trying to
-simulate more speakers, or give the illusion that various sound sources have position.
+Effect that attempts to spatialize audio channels, such as trying to simulate
+more speakers or give the illusion that sound sources have position.
 </dd>
 
 <dt>volume</dt>
@@ -304,113 +301,90 @@
 
 </dl>
 
-<h3 id="hardwareTerms">Hardware and Accessories</h3>
+<h3 id="interDeviceTerms">Inter-device interconnect</h3>
 
 <p>
-These terms are related to audio hardware and accessories.
-</p>
-
-<h4 id="interDeviceTerms">Inter-device interconnect</h4>
-
-<p>
-These technologies connect audio and video components between devices,
-and are readily visible at the external connectors.  The HAL implementor
-may need to be aware of these, as well as the end user.
+Inter-device interconnection technologies connect audio and video components
+between devices and are readily visible at the external connectors. The HAL
+implementer and end user should be aware of these terms.
 </p>
 
 <dl>
 
 <dt>Bluetooth</dt>
 <dd>
-A short range wireless technology.
-The major audio-related
+Short-range wireless technology. For details on the audio-related
 <a href="http://en.wikipedia.org/wiki/Bluetooth_profile">Bluetooth profiles</a>
 and
-<a href="http://en.wikipedia.org/wiki/Bluetooth_protocols">Bluetooth protocols</a>
-are described at these Wikipedia articles:
-
-<ul>
-
-<li><a href="http://en.wikipedia.org/wiki/Bluetooth_profile#Advanced_Audio_Distribution_Profile_.28A2DP.29">A2DP</a>
-for music
-</li>
-
-<li><a href="http://en.wikipedia.org/wiki/Bluetooth_protocols#Synchronous_connection-oriented_.28SCO.29_link">SCO</a>
-for telephony
-</li>
-
-<li><a href="http://en.wikipedia.org/wiki/List_of_Bluetooth_profiles#Audio.2FVideo_Remote_Control_Profile_.28AVRCP.29">Audio/Video Remote Control Profile (AVRCP)</a>
-</li>
-
-</ul>
-
+<a href="http://en.wikipedia.org/wiki/Bluetooth_protocols">Bluetooth protocols</a>,
+refer to <a href="http://en.wikipedia.org/wiki/Bluetooth_profile#Advanced_Audio_Distribution_Profile_.28A2DP.29">A2DP</a> for
+music, <a href="http://en.wikipedia.org/wiki/Bluetooth_protocols#Synchronous_connection-oriented_.28SCO.29_link">SCO</a> for telephony, and <a href="http://en.wikipedia.org/wiki/List_of_Bluetooth_profiles#Audio.2FVideo_Remote_Control_Profile_.28AVRCP.29">Audio/Video Remote Control Profile (AVRCP)</a>.
 </dd>
 
 <dt>DisplayPort</dt>
 <dd>
-Digital display interface by VESA.
+Digital display interface by the Video Electronics Standards Association (VESA).
 </dd>
 
 <dt>HDMI</dt>
 <dd>
-High-Definition Multimedia Interface, an interface for transferring
-audio and video data.  For mobile devices, either a micro-HDMI (type D) or MHL connector is used.
+High-Definition Multimedia Interface. Interface for transferring audio and
+video data. For mobile devices, a micro-HDMI (type D) or MHL connector is used.
 </dd>
 
 <dt>Intel HDA</dt>
 <dd>
-<a href="http://en.wikipedia.org/wiki/Intel_High_Definition_Audio">Intel High Definition Audio</a>
-(commonly shortened to HDA) is a specification for, among other things, a front-panel connector.
-Not to be confused with generic "high-definition audio" or "high-resolution audio."
+Intel High Definition Audio (do not confuse with generic <em>high-definition
+audio</em> or <em>high-resolution audio</em>). Specification that covers,
+among other things, a front-panel connector. For details, refer to
+<a href="http://en.wikipedia.org/wiki/Intel_High_Definition_Audio">Intel High
+Definition Audio</a>.
 </dd>
 
 <dt>MHL</dt>
 <dd>
-Mobile High-Definition Link is a mobile audio/video interface, often
-over micro-USB connector.
+Mobile High-Definition Link. Mobile audio/video interface, often over a
+micro-USB connector.
 </dd>
 
 <dt>phone connector</dt>
 <dd>
-A mini or sub-mini phone connector
-connects a device to wired headphones, headset, or line-level amplifier.
+Mini or sub-mini connector that connects a device to wired headphones, a
+headset, or a line-level amplifier.
 </dd>
 
 <dt>SlimPort</dt>
 <dd>
-An adapter from micro-USB to HDMI.
+Adapter from micro-USB to HDMI.
 </dd>
 
 <dt>S/PDIF</dt>
 <dd>
-Sony/Philips Digital Interface Format is an interconnect for uncompressed PCM.
-See Wikipedia article <a href="http://en.wikipedia.org/wiki/S/PDIF">S/PDIF</a>.
+Sony/Philips Digital Interface Format. Interconnect for uncompressed PCM. For
+details, refer to <a href="http://en.wikipedia.org/wiki/S/PDIF">S/PDIF</a>.
 </dd>
 
 <dt>Thunderbolt</dt>
 <dd>
-<a href="http://en.wikipedia.org/wiki/Thunderbolt_%28interface%29">Thunderbolt</a>
-is a multimedia interface that competes with USB and HDMI for connecting to high-end peripherals.
+Multimedia interface that competes with USB and HDMI for connecting to high-end
+peripherals. For details, refer to
+<a href="http://en.wikipedia.org/wiki/Thunderbolt_%28interface%29">Thunderbolt</a>.
 </dd>
 
 <dt>USB</dt>
 <dd>
-Universal Serial Bus.
-See Wikipedia article <a href="http://en.wikipedia.org/wiki/USB">USB</a>.
+Universal Serial Bus. For details, refer to
+<a href="http://en.wikipedia.org/wiki/USB">USB</a>.
 </dd>
 
 </dl>
 
-<h4 id="intraDeviceTerms">Intra-device interconnect</h4>
+<h3 id="intraDeviceTerms">Intra-device interconnect</h3>
 
 <p>
-These technologies connect internal audio components within a given
-device, and are not visible without disassembling the device.  The HAL
-implementor may need to be aware of these, but not the end user.
-</p>
-
-<p>
-See these Wikipedia articles:
+Intra-device interconnection technologies connect internal audio components
+within a given device and are not visible without disassembling the device. The
+HAL implementer may need to be aware of these, but not the end user. For details
+on intra-device interconnections, refer to the following articles:
 </p>
 <ul>
 <li><a href="http://en.wikipedia.org/wiki/General-purpose_input/output">GPIO</a></li>
@@ -424,304 +398,289 @@
 <h3 id="signalTerms">Audio Signal Path</h3>
 
 <p>
-These terms are related to the signal path that audio data follows from
-an application to the transducer, or vice-versa.
+Audio signal path terms relate to the signal path that audio data follows from
+an application to the transducer or vice versa.
 </p>
 
 <dl>
 
 <dt>ADC</dt>
 <dd>
-Analog to digital converter, a module that converts an analog signal
-(continuous in both time and amplitude) to a digital signal (discrete in
-both time and amplitude).  Conceptually, an ADC consists of a periodic
-sample-and-hold followed by a quantizer, although it does not have to
-be implemented that way.  An ADC is usually preceded by a low-pass filter
-to remove any high frequency components that are not representable using
-the desired sample rate.  See Wikipedia article
-<a href="http://en.wikipedia.org/wiki/Analog-to-digital_converter">Analog-to-digital converter</a>.
+Analog-to-digital converter. Module that converts an analog signal (continuous
+in time and amplitude) to a digital signal (discrete in time and amplitude).
+Conceptually, an ADC consists of a periodic sample-and-hold followed by a
+quantizer, although it does not have to be implemented that way. An ADC is
+usually preceded by a low-pass filter to remove any high frequency components
+that are not representable using the desired sample rate. For details, refer to
+<a href="http://en.wikipedia.org/wiki/Analog-to-digital_converter">Analog-to-digital
+converter</a>.
 </dd>
 
 <dt>AP</dt>
 <dd>
-Application processor, the main general-purpose computer on a mobile device.
+Application processor. Main general-purpose computer on a mobile device.
 </dd>
 
 <dt>codec</dt>
 <dd>
-Coder-decoder, a module that encodes and/or decodes an audio signal
-from one representation to another.  Typically this is analog to PCM, or PCM to analog.
-Strictly, the term "codec" is reserved for modules that both encode and decode,
-however it can also more loosely refer to only one of these.
-See Wikipedia article
+Coder-decoder. Module that encodes and/or decodes an audio signal from one
+representation to another (typically analog to PCM or PCM to analog). In strict
+terms, <em>codec</em> is reserved for modules that both encode and decode but
+can be used loosely to refer to only one of these. For details, refer to
 <a href="http://en.wikipedia.org/wiki/Audio_codec">Audio codec</a>.
 </dd>
 
 <dt>DAC</dt>
 <dd>
-Digital to analog converter, a module that converts a digital signal
-(discrete in both time and amplitude) to an analog signal
-(continuous in both time and amplitude).  A DAC is usually followed by
-a low-pass filter to remove any high frequency components introduced
-by digital quantization.
-See Wikipedia article
-<a href="http://en.wikipedia.org/wiki/Digital-to-analog_converter">Digital-to-analog converter</a>.
+Digital-to-analog converter. Module that converts a digital signal (discrete in
+time and amplitude) to an analog signal (continuous in time and amplitude).
+Often followed by a low-pass filter to remove high-frequency components
+introduced by digital quantization. For details, refer to
+<a href="http://en.wikipedia.org/wiki/Digital-to-analog_converter">Digital-to-analog
+converter</a>.
 </dd>
 
 <dt>DSP</dt>
 <dd>
-Digital Signal Processor, an optional component which is typically located
-after the application processor (for output), or before the application processor (for input).
-The primary purpose of a DSP is to off-load the application processor,
-and provide signal processing features at a lower power cost.
+Digital Signal Processor. Optional component typically located after the
+application processor (for output) or before the application processor (for
+input). Primary purpose is to off-load the application processor and provide
+signal processing features at a lower power cost.
 </dd>
 
 <dt>PDM</dt>
 <dd>
-Pulse-density modulation
-is a form of modulation used to represent an analog signal by a digital signal,
-where the relative density of 1s versus 0s indicates the signal level.
-It is commonly used by digital to analog converters.
-See Wikipedia article
-<a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">Pulse-density modulation</a>.
+Pulse-density modulation. Form of modulation used to represent an analog signal
+by a digital signal, where the relative density of 1s versus 0s indicates the
+signal level. Commonly used by digital-to-analog converters. For details, refer
+to <a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">Pulse-density
+modulation</a>.
 </dd>
 
 <dt>PWM</dt>
 <dd>
-Pulse-width modulation
-is a form of modulation used to represent an analog signal by a digital signal,
-where the relative width of a digital pulse indicates the signal level.
-It is commonly used by analog to digital converters.
-See Wikipedia article
-<a href="http://en.wikipedia.org/wiki/Pulse-width_modulation">Pulse-width modulation</a>.
+Pulse-width modulation. Form of modulation used to represent an analog signal by
+a digital signal, where the relative width of a digital pulse indicates the
+signal level. Commonly used by analog-to-digital converters. For details, refer
+to <a href="http://en.wikipedia.org/wiki/Pulse-width_modulation">Pulse-width
+modulation</a>.
 </dd>
 
 <dt>transducer</dt>
 <dd>
-A transducer converts variations in physical "real-world" quantities to electrical signals.
-In audio, the physical quantity is sound pressure,
-and the transducers are the loudspeaker and microphone.
-See Wikipedia article
+Converts variations in physical real-world quantities to electrical signals. In
+audio, the physical quantity is sound pressure, and the transducers are the
+loudspeaker and microphone. For details, refer to
 <a href="http://en.wikipedia.org/wiki/Transducer">Transducer</a>.
 </dd>
 
 </dl>
 
 <h3 id="srcTerms">Sample Rate Conversion</h3>
+<p>
+Sample rate conversion terms relate to the process of converting from one
+sample rate to another.
+</p>
 
 <dl>
 
 <dt>downsample</dt>
-<dd>To resample, where sink sample rate &lt; source sample rate.</dd>
+<dd>Resample, where sink sample rate &lt; source sample rate.</dd>
 
 <dt>Nyquist frequency</dt>
 <dd>
-The Nyquist frequency, equal to 1/2 of a given sample rate, is the
-maximum frequency component that can be represented by a discretized
-signal at that sample rate.  For example, the human hearing range is
-typically assumed to extend up to approximately 20 kHz, and so a digital
-audio signal must have a sample rate of at least 40 kHz to represent that
-range.  In practice, sample rates of 44.1 kHz and 48 kHz are commonly
-used, with Nyquist frequencies of 22.05 kHz and 24 kHz respectively.
-See
+One-half of a given sample rate, which is the maximum frequency component
+that can be represented by a signal discretized at that rate. For example,
+the human hearing range extends to
+approximately 20 kHz, so a digital audio signal must have a sample rate of at
+least 40 kHz to represent that range. In practice, sample rates of 44.1 kHz and
+48 kHz are commonly used, with Nyquist frequencies of 22.05 kHz and 24 kHz
+respectively. For details, refer to
 <a href="http://en.wikipedia.org/wiki/Nyquist_frequency">Nyquist frequency</a>
 and
-<a href="http://en.wikipedia.org/wiki/Hearing_range">Hearing range</a>
-for more information.
+<a href="http://en.wikipedia.org/wiki/Hearing_range">Hearing range</a>.
 </dd>
 
 <dt>resampler</dt>
 <dd>Synonym for sample rate converter.</dd>
 
 <dt>resampling</dt>
-<dd>The process of converting sample rate.</dd>
+<dd>Process of converting sample rate.</dd>
 
 <dt>sample rate converter</dt>
-<dd>A module that resamples.</dd>
+<dd>Module that resamples.</dd>
 
 <dt>sink</dt>
-<dd>The output of a resampler.</dd>
+<dd>Output of a resampler.</dd>
 
 <dt>source</dt>
-<dd>The input to a resampler.</dd>
+<dd>Input to a resampler.</dd>
 
 <dt>upsample</dt>
-<dd>To resample, where sink sample rate &gt; source sample rate.</dd>
+<dd>Resample, where sink sample rate &gt; source sample rate.</dd>
 
 </dl>
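+<p>
+To tie the terms above together (source, sink, resampling), an illustrative
+linear-interpolation resampler; production resamplers also apply low-pass
+filtering to avoid aliasing, which this sketch omits:
+</p>
+<pre>
+final class Resampler {
+    // Convert mono 16-bit "source" at srcRate to a sink at sinkRate by
+    // linear interpolation between neighboring source samples.
+    static short[] resample(short[] source, int srcRate, int sinkRate) {
+        int sinkLen = (int) ((long) source.length * sinkRate / srcRate);
+        short[] sink = new short[sinkLen];
+        for (int i = 0; i &lt; sinkLen; i++) {
+            double pos = (double) i * srcRate / sinkRate;
+            int j = (int) pos;             // preceding source index
+            double frac = pos - j;         // fraction toward the next sample
+            int k = Math.min(j + 1, source.length - 1);
+            double v = source[j] * (1 - frac) + source[k] * frac;
+            sink[i] = (short) Math.round(v);
+        }
+        return sink;
+    }
+}
+</pre>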
 
 <h2 id="androidSpecificTerms">Android-Specific Terms</h2>
 
 <p>
-These are terms specific to the Android audio framework, or that
-may have a special meaning within Android beyond their general meaning.
+Android-specific terms include terms used only in the Android audio framework
+and generic terms that have special meaning within Android.
 </p>
 
 <dl>
 
 <dt>ALSA</dt>
 <dd>
-Advanced Linux Sound Architecture.  As the name suggests, it is an audio
-framework primarily for Linux, but it has influenced other systems.
-See Wikipedia article
-<a href="http://en.wikipedia.org/wiki/Advanced_Linux_Sound_Architecture">ALSA</a>
-for the general definition. As used within Android, it refers primarily
-to the kernel audio framework and drivers, not to the user-mode API. See
-tinyalsa.
+Advanced Linux Sound Architecture. An audio framework for Linux that has also
+influenced other systems. For a generic definition, refer to
+<a href="http://en.wikipedia.org/wiki/Advanced_Linux_Sound_Architecture">ALSA</a>.
+In Android, ALSA refers to the kernel audio framework and drivers and not to the
+user-mode API. See also <em>tinyalsa</em>.
 </dd>
 
 <dt>audio device</dt>
 <dd>
-Any audio I/O end-point that is backed by a HAL implementation.
+Audio I/O endpoint backed by a HAL implementation.
 </dd>
 
 <dt>AudioEffect</dt>
 <dd>
-An API and implementation framework for output (post-processing) effects
-and input (pre-processing) effects.  The API is defined at
+API and implementation framework for output (post-processing) effects and input
+(pre-processing) effects. The API is defined at
 <a href="http://developer.android.com/reference/android/media/audiofx/AudioEffect.html">android.media.audiofx.AudioEffect</a>.
 </dd>
 
 <dt>AudioFlinger</dt>
 <dd>
-The sound server implementation for Android. AudioFlinger
-runs within the mediaserver process. See Wikipedia article
-<a href="http://en.wikipedia.org/wiki/Sound_server">Sound server</a>
-for the generic definition.
+Android sound server implementation. AudioFlinger runs within the mediaserver
+process. For a generic definition, refer to
+<a href="http://en.wikipedia.org/wiki/Sound_server">Sound server</a>.
 </dd>
 
 <dt>audio focus</dt>
 <dd>
-A set of APIs for managing audio interactions across multiple independent apps.
-See <a href="http://developer.android.com/training/managing-audio/audio-focus.html">Managing Audio
-Focus</a> and the focus-related methods and constants of
+Set of APIs for managing audio interactions across multiple independent apps.
+For details, see
+<a href="http://developer.android.com/training/managing-audio/audio-focus.html">Managing
+Audio Focus</a> and the focus-related methods and constants of
 <a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
 </dd>
 
 <dt>AudioMixer</dt>
 <dd>
-The module within AudioFlinger responsible for
-combining multiple tracks and applying attenuation
-(volume) and certain effects. The Wikipedia article
-<a href="http://en.wikipedia.org/wiki/Audio_mixing_(recorded_music)">Audio mixing (recorded music)</a>
-may be useful for understanding the generic
-concept. But that article describes a mixer more as a hardware device
-or a software application, rather than a software module within a system.
+Module in AudioFlinger responsible for combining multiple tracks and applying
+attenuation (volume) and effects. For a generic definition, refer to
+<a href="http://en.wikipedia.org/wiki/Audio_mixing_(recorded_music)">Audio mixing (recorded music)</a> (discusses a mixer as a hardware device or software application, rather
+than a software module within a system).
 </dd>
 
 <dt>audio policy</dt>
 <dd>
-Service responsible for all actions that require a policy decision
-to be made first, such as opening a new I/O stream, re-routing after a
-change, and stream volume management.
+Service responsible for all actions that require a policy decision to be made
+first, such as opening a new I/O stream, re-routing after a change, and stream
+volume management.
 </dd>
 
 <dt>AudioRecord</dt>
 <dd>
-The primary low-level client API for receiving data from an audio
-input device such as microphone.  The data is usually in pulse-code modulation
-(PCM) format.
-The API is defined at
+Primary low-level client API for receiving data from an audio input device such
+as a microphone. The data is usually in PCM format. The API is defined at
 <a href="http://developer.android.com/reference/android/media/AudioRecord.html">android.media.AudioRecord</a>.
 </dd>
 
 <dt>AudioResampler</dt>
 <dd>
-The module within AudioFlinger responsible for
-<a href="src.html">sample rate conversion</a>.
+Module in AudioFlinger responsible for
+<a href="src.html">sample rate conversion</a>.
 </dd>
 
 <dt>audio source</dt>
 <dd>
-An <a href="http://developer.android.com/reference/android/media/MediaRecorder.AudioSource.html">audio source</a>
-is an enumeration of constants that indicates the desired use case for capturing audio input.
-As of API level 21 and above, <a href="attributes.html">audio attributes</a> are preferred.
+Enumeration of constants that indicates the desired use case for capturing
+audio input. For details, see
+<a href="http://developer.android.com/reference/android/media/MediaRecorder.AudioSource.html">audio
+source</a>. As of API level 21, <a href="attributes.html">audio attributes</a>
+are preferred.
 </dd>
 
 <dt>AudioTrack</dt>
 <dd>
-The primary low-level client API for sending data to an audio output
-device such as a speaker.  The data is usually in PCM format.
-The API is defined at
+Primary low-level client API for sending data to an audio output device such as
+a speaker. The data is usually in PCM format. The API is defined at
 <a href="http://developer.android.com/reference/android/media/AudioTrack.html">android.media.AudioTrack</a>.
 </dd>
 
 <dt>audio_utils</dt>
 <dd>
-An audio utility library for features such as PCM format conversion, WAV file I/O, and
-<a href="avoiding_pi.html#nonBlockingAlgorithms">non-blocking FIFO</a>,
-which is largely independent of the Android platform.
+Audio utility library for features such as PCM format conversion, WAV file I/O,
+and
+<a href="avoiding_pi.html#nonBlockingAlgorithms">non-blocking FIFO</a>, which is
+largely independent of the Android platform.
 </dd>
 
 <dt>client</dt>
 <dd>
-Usually same as application or app, but sometimes the "client" of
-AudioFlinger is actually a thread running within the mediaserver system
-process. An example of that is when playing media that is decoded by a
-MediaPlayer object.
+Usually an application or app client. However, an AudioFlinger client can be a
+thread running within the mediaserver system process, such as when playing media
+decoded by a MediaPlayer object.
 </dd>
 
 <dt>HAL</dt>
 <dd>
-Hardware Abstraction Layer. HAL is a generic term in Android. With
-respect to audio, it is a layer between AudioFlinger and the kernel
-device driver with a C API, which replaces the earlier C++ libaudio.
+Hardware Abstraction Layer. HAL is a generic term in Android; in audio, it is a
+layer between AudioFlinger and the kernel device driver with a C API (which
+replaces the C++ libaudio).
 </dd>
 
 <dt>FastCapture</dt>
 <dd>
-A thread within AudioFlinger that sends audio data to lower latency "fast tracks"
+Thread within AudioFlinger that sends audio data to lower latency fast tracks
 and drives the input device when configured for reduced latency.
 </dd>
 
 <dt>FastMixer</dt>
 <dd>
-A thread within AudioFlinger that receives and mixes audio data from lower latency "fast tracks"
-and drives the primary output device when configured for reduced latency.
+Thread within AudioFlinger that receives and mixes audio data from lower latency
+fast tracks and drives the primary output device when configured for reduced
+latency.
 </dd>
 
 <dt>fast track</dt>
 <dd>
-An AudioTrack or AudioRecord client with lower latency but fewer features, on some devices and routes.
+AudioTrack or AudioRecord client with lower latency but fewer features on some
+devices and routes.
 </dd>
 
 <dt>MediaPlayer</dt>
 <dd>
-A higher-level client API than AudioTrack, for playing either encoded
-content, or content which includes multimedia audio and video tracks.
+Higher-level client API than AudioTrack. Plays encoded content or content that
+includes multimedia audio and video tracks.
 </dd>
 
 <dt>media.log</dt>
 <dd>
-An AudioFlinger debugging feature, available in custom builds only,
-for logging audio events to a circular buffer where they can then be
-dumped retroactively when needed.
+AudioFlinger debugging feature available in custom builds only. Used for logging
+audio events to a circular buffer where they can then be retroactively dumped
+when needed.
 </dd>
 
 <dt>mediaserver</dt>
 <dd>
-An Android system process that contains a number of media-related
-services, including AudioFlinger.
+Android system process that contains media-related services, including
+AudioFlinger.
 </dd>
 
 <dt>NBAIO</dt>
 <dd>
-An abstraction for "non-blocking" audio input/output ports used within
-AudioFlinger. The name can be misleading, as some implementations of
-the NBAIO API actually do support blocking. The key implementations of
-NBAIO are for pipes of various kinds.
+Non-blocking audio input/output. Abstraction for AudioFlinger ports. The term
+can be misleading as some implementations of the NBAIO API support blocking. The
+key implementations of NBAIO are for different types of pipes.
 </dd>
 
 <dt>normal mixer</dt>
 <dd>
-A thread within AudioFlinger that services most full-featured
-AudioTrack clients, and either directly drives an output device or feeds
-its sub-mix into FastMixer via a pipe.
+Thread within AudioFlinger that services most full-featured AudioTrack clients.
+Directly drives an output device or feeds its sub-mix into FastMixer via a pipe.
 </dd>
 
 <dt>OpenSL ES</dt>
 <dd>
-An audio API standard by
+Audio API standard by
 <a href="http://www.khronos.org/">The Khronos Group</a>. Android versions since
 API level 9 support a native audio API that is based on a subset of
 <a href="http://www.khronos.org/opensles/">OpenSL ES 1.0.1</a>.
@@ -729,15 +688,14 @@
 
 <dt>silent mode</dt>
 <dd>
-A user-settable feature to mute the phone ringer and notifications,
-without affecting media playback (music, videos, games) or alarms.
+User-settable feature to mute the phone ringer and notifications without
+affecting media playback (music, videos, games) or alarms.
 </dd>
 
 <dt>SoundPool</dt>
 <dd>
-A higher-level client API than AudioTrack, used for playing sampled
-audio clips. It is useful for triggering UI feedback, game sounds, etc.
-The API is defined at
+Higher-level client API than AudioTrack. Plays sampled audio clips. Useful for
+triggering UI feedback, game sounds, etc. The API is defined at
 <a href="http://developer.android.com/reference/android/media/SoundPool.html">android.media.SoundPool</a>.
 </dd>
 
@@ -748,63 +706,61 @@
 
 <dt>StateQueue</dt>
 <dd>
-A module within AudioFlinger responsible for synchronizing state
-among threads. Whereas NBAIO is used to pass data, StateQueue is used
-to pass control information.
+Module within AudioFlinger responsible for synchronizing state among threads.
+Whereas NBAIO is used to pass data, StateQueue is used to pass control
+information.
 </dd>
 
 <dt>strategy</dt>
 <dd>
-A grouping of stream types with similar behavior, used by the audio policy service.
+Group of stream types with similar behavior. Used by the audio policy service.
 </dd>
 
 <dt>stream type</dt>
 <dd>
-An enumeration that expresses a use case for audio output.
-The audio policy implementation uses the stream type, along with other parameters,
-to determine volume and routing decisions.
-Specific stream types are listed at
+Enumeration that expresses a use case for audio output. The audio policy
+implementation uses the stream type, along with other parameters, to determine
+volume and routing decisions. For a list of stream types, see
 <a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
 </dd>
 
 <dt>tee sink</dt>
 <dd>
-See the separate article on tee sink in
-<a href="debugging.html#teeSink">Audio Debugging</a>.
+See <a href="debugging.html#teeSink">Audio Debugging</a>.
 </dd>
 
 <dt>tinyalsa</dt>
 <dd>
-A small user-mode API above ALSA kernel with BSD license, recommended
-for use in HAL implementations.
+Small user-mode API above the ALSA kernel interface, with a BSD license.
+Recommended for use in HAL implementations.
 </dd>
 
 <dt>ToneGenerator</dt>
 <dd>
-A higher-level client API than AudioTrack, used for playing DTMF signals.
-See the Wikipedia article
-<a href="http://en.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling">Dual-tone multi-frequency signaling</a>,
-and the API definition at
+Higher-level client API than AudioTrack. Plays dual-tone multi-frequency (DTMF)
+signals. For details, refer to
+<a href="http://en.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling">Dual-tone
+multi-frequency signaling</a> and the API definition at
 <a href="http://developer.android.com/reference/android/media/ToneGenerator.html">android.media.ToneGenerator</a>.
 </dd>
 
 <dt>track</dt>
 <dd>
-An audio stream, controlled by the AudioTrack or AudioRecord API.
+Audio stream. Controlled by the AudioTrack or AudioRecord API.
 </dd>
 
 <dt>volume attenuation curve</dt>
 <dd>
-A device-specific mapping from a generic volume index to a particular attenuation factor
-for a given output.
+Device-specific mapping from a generic volume index to a specific attenuation
+factor for a given output.
 </dd>
 
 <dt>volume index</dt>
 <dd>
-A unitless integer that expresses the desired relative volume of a stream.
-The volume-related APIs of
+Unitless integer that expresses the desired relative volume of a stream. The
+volume-related APIs of
 <a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>
 operate in volume indices rather than absolute attenuation factors.
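+<p>
+A hedged sketch that operates in volume indices; the wrapper class is
+illustrative:
+</p>
+<pre>
+import android.content.Context;
+import android.media.AudioManager;
+
+final class VolumeExample {
+    // Set the music stream to half of its maximum volume index.
+    static void setHalfVolume(Context context) {
+        AudioManager am =
+                (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
+        int max = am.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
+        am.setStreamVolume(AudioManager.STREAM_MUSIC, max / 2, 0 /* flags */);
+    }
+}
+</pre>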
 </dd>
 
-</dl>
+</dl>
\ No newline at end of file