<html devsite><head>
<title>TV Audio</title>
<meta name="project_path" value="/_project.yaml"/>
<meta name="book_path" value="/_book.yaml"/>
</head>
<body>
<!--
Copyright 2017 The Android Open Source Project
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p>The TV Input Framework (TIF) manager works with the audio routing API to support flexible audio path changes. When a System on Chip (SoC) implements the TV hardware abstraction layer (HAL), each TV input (HDMI IN, tuner, etc.) provides <code>TvInputHardwareInfo</code>, which specifies AudioPort information for the audio type and address.</p>
<ul>
<li><b>Physical</b> audio input/output devices have a corresponding AudioPort.</li>
<li><b>Software</b> audio output/input streams are represented as AudioMixPort (a subclass of AudioPort).</li>
</ul>
<p>The TIF then uses the AudioPort information for the audio routing API.</p>
<p><img src="images/ape_audio_tv_tif.png" alt="Android TV Input Framework (TIF)"/></p>
<p class="img-caption"><strong>Figure 1.</strong> TV Input Framework (TIF)</p>
<h2 id="requirements">Requirements</h2>
<p>The SoC must implement the audio HAL with the following audio routing API support:</p>
<table>
<tbody>
<tr>
<th>Audio ports</th>
<td>
<ul>
<li>The TV audio input has a corresponding audio source port implementation.</li>
<li>The TV audio output has a corresponding audio sink port implementation.</li>
<li>An audio patch can be created between any TV input audio port and any TV output audio port.</li>
</ul>
</td>
</tr>
<tr>
<th>Default input</th>
<td>AudioRecord (created with the DEFAULT input source) must seize the <i>virtual null input source</i> for AUDIO_DEVICE_IN_DEFAULT acquisition on Android TV.</td>
</tr>
<tr>
<th>Device loopback</th>
<td>Requires support for an AUDIO_DEVICE_IN_LOOPBACK input that is a complete mix of all audio output from all the TV outputs (11 kHz, 16-bit mono or 48 kHz, 16-bit mono). Used only for audio capture.
</td>
</tr>
</tbody>
</table>
<h2 id="audioDevices">TV audio devices</h2>
<p>Android supports the following devices for TV audio input/output.</p>
<h4><code>system/media/audio/include/system/audio.h</code></h4>
<p class="note"><strong>Note</strong>: In Android 5.1 and earlier, the path to this file is: <code>system/core/include/system/audio.h</code></p>
<pre class="devsite-click-to-copy">
/* output devices */
AUDIO_DEVICE_OUT_AUX_DIGITAL = 0x400,
AUDIO_DEVICE_OUT_HDMI = AUDIO_DEVICE_OUT_AUX_DIGITAL,
/* HDMI Audio Return Channel */
AUDIO_DEVICE_OUT_HDMI_ARC = 0x40000,
/* S/PDIF out */
AUDIO_DEVICE_OUT_SPDIF = 0x80000,
/* input devices */
AUDIO_DEVICE_IN_AUX_DIGITAL = AUDIO_DEVICE_BIT_IN | 0x20,
AUDIO_DEVICE_IN_HDMI = AUDIO_DEVICE_IN_AUX_DIGITAL,
/* TV tuner input */
AUDIO_DEVICE_IN_TV_TUNER = AUDIO_DEVICE_BIT_IN | 0x4000,
/* S/PDIF in */
AUDIO_DEVICE_IN_SPDIF = AUDIO_DEVICE_BIT_IN | 0x10000,
AUDIO_DEVICE_IN_LOOPBACK = AUDIO_DEVICE_BIT_IN | 0x40000,
</pre>
<h2 id="halExtension">Audio HAL extension</h2>
<p>The audio HAL extension for the audio routing API is defined as follows:</p>
<h4><code>system/media/audio/include/system/audio.h</code></h4>
<p class="note"><strong>Note</strong>: In Android 5.1 and earlier, the path to this file is: <code>system/core/include/system/audio.h</code></p>
<pre class="devsite-click-to-copy">
/* audio port configuration structure used to specify a particular
 * configuration of an audio port */
struct audio_port_config {
    audio_port_handle_t      id;           /* port unique ID */
    audio_port_role_t        role;         /* sink or source */
    audio_port_type_t        type;         /* device, mix ... */
    unsigned int             config_mask;  /* e.g AUDIO_PORT_CONFIG_ALL */
    unsigned int             sample_rate;  /* sampling rate in Hz */
    audio_channel_mask_t     channel_mask; /* channel mask if applicable */
    audio_format_t           format;       /* format if applicable */
    struct audio_gain_config gain;         /* gain to apply if applicable */
    union {
        struct audio_port_config_device_ext  device;  /* device specific info */
        struct audio_port_config_mix_ext     mix;     /* mix specific info */
        struct audio_port_config_session_ext session; /* session specific info */
    } ext;
};

struct audio_port {
    audio_port_handle_t      id;    /* port unique ID */
    audio_port_role_t        role;  /* sink or source */
    audio_port_type_t        type;  /* device, mix ... */
    unsigned int             num_sample_rates;  /* number of sampling rates in following array */
    unsigned int             sample_rates[AUDIO_PORT_MAX_SAMPLING_RATES];
    unsigned int             num_channel_masks; /* number of channel masks in following array */
    audio_channel_mask_t     channel_masks[AUDIO_PORT_MAX_CHANNEL_MASKS];
    unsigned int             num_formats;       /* number of formats in following array */
    audio_format_t           formats[AUDIO_PORT_MAX_FORMATS];
    unsigned int             num_gains;         /* number of gains in following array */
    struct audio_gain        gains[AUDIO_PORT_MAX_GAINS];
    struct audio_port_config active_config;     /* current audio port configuration */
    union {
        struct audio_port_device_ext  device;
        struct audio_port_mix_ext     mix;
        struct audio_port_session_ext session;
    } ext;
};
</pre>
<h4><code>hardware/libhardware/include/hardware/audio.h</code></h4>
<pre class="devsite-click-to-copy">
struct audio_hw_device {
    :
    /**
     * Routing control
     */

    /* Creates an audio patch between several source and sink ports.
     * The handle is allocated by the HAL and should be unique for this
     * audio HAL module. */
    int (*create_audio_patch)(struct audio_hw_device *dev,
                              unsigned int num_sources,
                              const struct audio_port_config *sources,
                              unsigned int num_sinks,
                              const struct audio_port_config *sinks,
                              audio_patch_handle_t *handle);

    /* Release an audio patch */
    int (*release_audio_patch)(struct audio_hw_device *dev,
                               audio_patch_handle_t handle);

    /* Fills the list of supported attributes for a given audio port.
     * As input, "port" contains the information (type, role, address etc...)
     * needed by the HAL to identify the port.
     * As output, "port" contains possible attributes (sampling rates, formats,
     * channel masks, gain controllers...) for this port.
     */
    int (*get_audio_port)(struct audio_hw_device *dev,
                          struct audio_port *port);

    /* Set audio port configuration */
    int (*set_audio_port_config)(struct audio_hw_device *dev,
                                 const struct audio_port_config *config);
</pre>
<h2 id="testing">Testing DEVICE_IN_LOOPBACK</h2>
<p>To test DEVICE_IN_LOOPBACK for TV monitoring, use the following test code. After the test runs, the captured audio is saved to <code>/sdcard/record_loopback.raw</code>, which you can listen to using <code><a href="https://en.wikipedia.org/wiki/FFmpeg">FFmpeg</a></code>.</p>
<pre class="devsite-click-to-copy">
&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_ROUTING" /&gt;
&lt;uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /&gt;
AudioRecord mRecorder;
Handler mHandler = new Handler();
int mMinBufferSize = AudioRecord.getMinBufferSize(RECORD_SAMPLING_RATE,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
static final int RECORD_SAMPLING_RATE = 48000;

public void doCapture() {
    mRecorder = new AudioRecord(MediaRecorder.AudioSource.DEFAULT, RECORD_SAMPLING_RATE,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, mMinBufferSize * 10);

    AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
    ArrayList&lt;AudioPort&gt; audioPorts = new ArrayList&lt;AudioPort&gt;();
    am.listAudioPorts(audioPorts);
    AudioPortConfig srcPortConfig = null;
    AudioPortConfig sinkPortConfig = null;
    for (AudioPort audioPort : audioPorts) {
        if (srcPortConfig == null
                &amp;&amp; audioPort.role() == AudioPort.ROLE_SOURCE
                &amp;&amp; audioPort instanceof AudioDevicePort) {
            AudioDevicePort audioDevicePort = (AudioDevicePort) audioPort;
            if (audioDevicePort.type() == AudioManager.DEVICE_IN_LOOPBACK) {
                srcPortConfig = audioPort.buildConfig(48000, AudioFormat.CHANNEL_IN_DEFAULT,
                        AudioFormat.ENCODING_DEFAULT, null);
                Log.d(LOG_TAG, "Found loopback audio source port : " + audioPort);
            }
        } else if (sinkPortConfig == null
                &amp;&amp; audioPort.role() == AudioPort.ROLE_SINK
                &amp;&amp; audioPort instanceof AudioMixPort) {
            sinkPortConfig = audioPort.buildConfig(48000, AudioFormat.CHANNEL_OUT_DEFAULT,
                    AudioFormat.ENCODING_DEFAULT, null);
            Log.d(LOG_TAG, "Found recorder audio mix port : " + audioPort);
        }
    }
    if (srcPortConfig != null &amp;&amp; sinkPortConfig != null) {
        AudioPatch[] patches = new AudioPatch[] { null };
        int status = am.createAudioPatch(
                patches,
                new AudioPortConfig[] { srcPortConfig },
                new AudioPortConfig[] { sinkPortConfig });
        Log.d(LOG_TAG, "Result of createAudioPatch(): " + status);
    }

    mRecorder.startRecording();
    processAudioData();
    mRecorder.stop();
    mRecorder.release();
}

private void processAudioData() {
    OutputStream rawFileStream = null;
    byte data[] = new byte[mMinBufferSize];
    try {
        rawFileStream = new BufferedOutputStream(
                new FileOutputStream(new File("/sdcard/record_loopback.raw")));
    } catch (FileNotFoundException e) {
        Log.d(LOG_TAG, "Can't open file.", e);
    }
    long startTimeMs = System.currentTimeMillis();
    while (System.currentTimeMillis() - startTimeMs &lt; 5000) {
        int nbytes = mRecorder.read(data, 0, mMinBufferSize);
        if (nbytes &lt;= 0) {
            continue;
        }
        try {
            rawFileStream.write(data);
        } catch (IOException e) {
            Log.e(LOG_TAG, "Error on writing raw file.", e);
        }
    }
    try {
        rawFileStream.close();
    } catch (IOException e) {
    }
    Log.d(LOG_TAG, "Exit audio recording.");
}
</pre>
<p>Pull the captured audio file from <code>/sdcard/record_loopback.raw</code> and use <code>FFmpeg</code> to listen to it:</p>
<pre class="devsite-click-to-copy">
<code class="devsite-terminal">adb pull /sdcard/record_loopback.raw</code>
<code class="devsite-terminal">ffmpeg -f s16le -ar 48k -ac 1 -i record_loopback.raw record_loopback.wav</code>
<code class="devsite-terminal">ffplay record_loopback.wav</code>
</pre>
<h2 id="useCases">Use cases</h2>
<p>This section includes common use cases for TV audio.</p>
<h3 id="tvSpeaker">TV tuner with speaker output</h3>
<p>When a TV tuner becomes active, the audio routing API creates an audio patch between the tuner and the default output (e.g. the speaker). The tuner output does not require decoding, but the final audio output is mixed with the software output_stream.</p>
<img src="images/ape_audio_tv_tuner.png" alt="Android TV Tuner Audio Patch"/>
<p class="img-caption">
<strong>Figure 2.</strong> Audio patch for a TV tuner with speaker output.</p>
<h3 id="hdmiOut">HDMI OUT during live TV</h3>
<p>A user is watching live TV, then switches to the HDMI audio output (Intent.ACTION_HDMI_AUDIO_PLUG). The output device of all output_streams changes to the HDMI_OUT port, and the TIF manager changes the sink port of the existing tuner audio patch to the HDMI_OUT port.</p>
<img src="images/ape_audio_tv_hdmi_tuner.png" alt="Android TV HDMI-OUT Audio Patch"/>
<p class="img-caption">
<strong>Figure 3.</strong> HDMI OUT audio patch from live TV.</p>
</body></html>