<html devsite>

<head>
<title>Audio Terminology</title>
<meta name="project_path" value="/_project.yaml" />
<meta name="book_path" value="/_book.yaml" />
</head>

<body>

<!--
    Copyright 2017 The Android Open Source Project

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->

<p>
This glossary of audio-related terminology includes widely used generic terms
and Android-specific terms.
</p>

<h2 id="genericTerm">Generic Terms</h2>

<p>
Generic audio-related terms have conventional meanings.
</p>

<h3 id="digitalAudioTerms">Digital Audio</h3>

<p>
Digital audio terms relate to handling sound using audio signals encoded
in digital form. For details, refer to
<a href="http://en.wikipedia.org/wiki/Digital_audio">Digital Audio</a>.
</p>

<dl>

<dt>acoustics</dt>
<dd>
Study of the mechanical properties of sound, such as how the physical
placement of transducers (speakers, microphones, etc.) on a device affects
perceived audio quality.
</dd>

<dt>attenuation</dt>
<dd>
Multiplicative factor less than or equal to 1.0, applied to an audio signal
to decrease the signal level. Compare to <em>gain</em>.
</dd>

<dt>audiophile</dt>
<dd>
Person concerned with a superior music reproduction experience, especially
willing to make substantial tradeoffs (expense, component size, room design,
etc.) for sound quality. For details, refer to
<a href="http://en.wikipedia.org/wiki/Audiophile">audiophile</a>.
</dd>

<dt>bits per sample or bit depth</dt>
<dd>
Number of bits of information per sample.
</dd>

<dt>channel</dt>
<dd>
Single stream of audio information, usually corresponding to one location of
recording or playback.
</dd>

<dt>downmixing</dt>
<dd>
Decrease the number of channels, such as from stereo to mono or from 5.1 to
stereo. Accomplished by dropping channels, mixing channels, or more advanced
signal processing. Simple mixing without attenuation or limiting has the
potential for overflow and clipping. Compare to <em>upmixing</em>.
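
<p>
As an illustrative sketch (not AudioFlinger's actual mixer), a stereo-to-mono
downmix can average the two channels; the division by 2 is the attenuation
that prevents overflow of the 16-bit range:
</p>

<pre class="prettyprint">
// Downmix interleaved 16-bit stereo frames [L, R, L, R, ...] to mono.
static short[] downmixToMono(short[] stereo) {
    short[] mono = new short[stereo.length / 2];
    for (int i = 0; i &lt; mono.length; i++) {
        int left = stereo[2 * i];
        int right = stereo[2 * i + 1];
        // Sum in a wider type, then attenuate by 0.5 to avoid clipping.
        mono[i] = (short) ((left + right) / 2);
    }
    return mono;
}
</pre>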
</dd>

<dt>DSD</dt>
<dd>
Direct Stream Digital. Proprietary audio encoding based on
<a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">pulse-density
modulation</a>. While Pulse Code Modulation (PCM) encodes a waveform as a
sequence of individual audio samples of multiple bits, DSD encodes a waveform as
a sequence of bits at a very high sample rate (without the concept of samples).
Both PCM and DSD represent multiple channels by independent sequences. DSD is
better suited to content distribution than as an internal representation for
processing, as it can be difficult to apply traditional digital signal processing
(DSP) algorithms to DSD. DSD is used in
<a href="http://en.wikipedia.org/wiki/Super_Audio_CD">Super Audio CD (SACD)</a>
and in DSD over PCM (DoP) for USB. For details, refer to
<a href="http://en.wikipedia.org/wiki/Direct_Stream_Digital">Direct Stream
Digital</a>.
</dd>

<dt>duck</dt>
<dd>
Temporarily reduce the volume of a stream when another stream becomes active.
For example, if music is playing when a notification arrives, the music ducks
while the notification plays. Compare to <em>mute</em>.
</dd>

<dt>FIFO</dt>
<dd>
First In, First Out. Hardware module or software data structure that implements
<a href="http://en.wikipedia.org/wiki/FIFO">First In, First Out</a>
queueing of data. In an audio context, the data stored in the queue are
typically audio frames. FIFO can be implemented by a
<a href="http://en.wikipedia.org/wiki/Circular_buffer">circular buffer</a>.
</dd>

<dt>frame</dt>
<dd>
Set of samples, one per channel, at a point in time.
</dd>

<dt>frames per buffer</dt>
<dd>
Number of frames handed from one module to the next at one time. The audio HAL
interface uses the concept of frames per buffer.
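
<p>
The byte size of a buffer follows from the frame count; for example, assuming
16-bit (2-byte) stereo PCM:
</p>

<pre class="prettyprint">
// Buffer size in bytes = frames x channels x bytes per sample.
int framesPerBuffer = 192;
int channelCount = 2;     // stereo
int bytesPerSample = 2;   // 16-bit PCM
int bufferSizeBytes = framesPerBuffer * channelCount * bytesPerSample;  // 768
</pre>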
</dd>

<dt>gain</dt>
<dd>
Multiplicative factor greater than or equal to 1.0, applied to an audio signal
to increase the signal level. Compare to <em>attenuation</em>.
</dd>

<dt>HD audio</dt>
<dd>
High-Definition audio. Synonym for high-resolution audio (but different from
Intel High Definition Audio).
</dd>

<dt>Hz</dt>
<dd>
Hertz. Unit for sample rate or frame rate, in samples or frames per second.
</dd>

<dt>high-resolution audio</dt>
<dd>
Representation with greater bit depth and sample rate than CDs (stereo 16-bit
PCM at 44.1 kHz) and without lossy data compression. Equivalent to HD audio.
For details, refer to
<a href="http://en.wikipedia.org/wiki/High-resolution_audio">high-resolution
audio</a>.
</dd>

<dt>latency</dt>
<dd>
Time delay as a signal passes through a system.
</dd>

<dt>lossless</dt>
<dd>
A <a href="http://en.wikipedia.org/wiki/Lossless_compression">lossless data
compression algorithm</a> preserves bit accuracy across encoding and
decoding: the result of decoding previously encoded data is equivalent
to the original data. Examples of lossless audio content distribution formats
include <a href="http://en.wikipedia.org/wiki/Compact_disc">CDs</a>, PCM within
<a href="http://en.wikipedia.org/wiki/WAV">WAV</a>, and
<a href="http://en.wikipedia.org/wiki/FLAC">FLAC</a>.
The authoring process may reduce the bit depth or sample rate from that of the
<a href="http://en.wikipedia.org/wiki/Audio_mastering">masters</a>; distribution
formats that preserve the resolution and bit accuracy of masters are the subject
of high-resolution audio.
</dd>

<dt>lossy</dt>
<dd>
A <a href="http://en.wikipedia.org/wiki/Lossy_compression">lossy data
compression algorithm</a> attempts to preserve the most important features
of media across encoding and decoding: the result of decoding previously
encoded data is perceptually similar to the original data but not identical.
Examples of lossy audio compression algorithms include MP3 and AAC. As analog
values are from a continuous domain and digital values are discrete, ADC and DAC
are lossy conversions with respect to amplitude. See also <em>transparency</em>.
</dd>

<dt>mono</dt>
<dd>
One channel.
</dd>

<dt>multichannel</dt>
<dd>
See <em>surround sound</em>. In strict terms, <em>stereo</em> is more than one
channel and could be considered multichannel; however, such usage is confusing
and thus avoided.
</dd>

<dt>mute</dt>
<dd>
Temporarily force volume to be zero, independent from the usual volume controls.
</dd>

<dt>overrun</dt>
<dd>
Audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by
failure to accept supplied data in sufficient time. For details, refer to
<a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
Compare to <em>underrun</em>.
</dd>

<dt>panning</dt>
<dd>
Direct a signal to a desired position within a stereo or multichannel field.
</dd>

<dt>PCM</dt>
<dd>
Pulse Code Modulation. Most common low-level encoding of digital audio. The
audio signal is sampled at a regular interval, called the sample rate, then
quantized to discrete values within a particular range depending on the bit
depth. For example, for 16-bit PCM the sample values are integers between
-32768 and +32767.
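
<p>
A minimal sketch of the quantization step, converting a normalized float
sample in [-1.0, 1.0] to 16-bit PCM:
</p>

<pre class="prettyprint">
// Quantize a float sample to a 16-bit PCM value, clamping to the valid range.
static short floatToPcm16(float sample) {
    int value = Math.round(sample * 32767.0f);
    if (value &gt; 32767) value = 32767;
    if (value &lt; -32768) value = -32768;
    return (short) value;
}
</pre>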
</dd>

<dt>ramp</dt>
<dd>
Gradually increase or decrease the level of a particular audio parameter, such
as the volume or the strength of an effect. A volume ramp is commonly applied
when pausing and resuming music to avoid a hard audible transition.
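
<p>
A sketch of a linear fade-in ramp applied in place to a buffer of 16-bit PCM
samples:
</p>

<pre class="prettyprint">
// Ramp gain linearly from 0.0 to 1.0 across the buffer.
static void fadeIn(short[] samples) {
    for (int i = 0; i &lt; samples.length; i++) {
        float gain = (float) i / samples.length;
        samples[i] = (short) (samples[i] * gain);
    }
}
</pre>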
</dd>

<dt>sample</dt>
<dd>
Number representing the audio value for a single channel at a point in time.
</dd>

<dt>sample rate or frame rate</dt>
<dd>
Number of frames per second. While <em>frame rate</em> is more accurate,
<em>sample rate</em> is conventionally used to mean frame rate.
</dd>

<dt>sonification</dt>
<dd>
Use of sound to express feedback or information, such as touch sounds and
keyboard sounds.
</dd>

<dt>stereo</dt>
<dd>
Two channels.
</dd>

<dt>stereo widening</dt>
<dd>
Effect applied to a stereo signal to make another stereo signal that sounds
fuller and richer. The effect can also be applied to a mono signal, where it is
a type of upmixing.
</dd>

<dt>surround sound</dt>
<dd>
Techniques for increasing the ability of a listener to perceive sound position
beyond stereo left and right.
</dd>

<dt>transparency</dt>
<dd>
Ideal result of lossy data compression. Lossy data conversion is transparent if
it is perceptually indistinguishable from the original by a human subject. For
details, refer to
<a href="http://en.wikipedia.org/wiki/Transparency_%28data_compression%29">Transparency</a>.
</dd>

<dt>underrun</dt>
<dd>
Audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by
failure to supply needed data in sufficient time. For details, refer to
<a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
Compare to <em>overrun</em>.
</dd>

<dt>upmixing</dt>
<dd>
Increase the number of channels, such as from mono to stereo or from stereo to
surround sound. Accomplished by duplication, panning, or more advanced signal
processing. Compare to <em>downmixing</em>.
</dd>

<dt>virtualizer</dt>
<dd>
Effect that attempts to spatialize audio channels, such as trying to simulate
more speakers or give the illusion that sound sources have position.
</dd>

<dt>volume</dt>
<dd>
Loudness, the subjective strength of an audio signal.
</dd>

</dl>
<h3 id="interDeviceTerms">Inter-device interconnect</h3>
|
|
|
|
<p>
|
|
Inter-device interconnection technologies connect audio and video components
|
|
between devices and are readily visible at the external connectors. The HAL
|
|
implementer and end user should be aware of these terms.
|
|
</p>
|
|
|
|
<dl>
|
|
|
|
<dt>Bluetooth</dt>
|
|
<dd>
|
|
Short range wireless technology. For details on the audio-related
|
|
<a href="http://en.wikipedia.org/wiki/Bluetooth_profile">Bluetooth profiles</a>
|
|
and
|
|
<a href="http://en.wikipedia.org/wiki/Bluetooth_protocols">Bluetooth protocols</a>,
|
|
refer to <a href="http://en.wikipedia.org/wiki/Bluetooth_profile#Advanced_Audio_Distribution_Profile_.28A2DP.29">A2DP</a> for
|
|
music, <a href="http://en.wikipedia.org/wiki/Bluetooth_protocols#Synchronous_connection-oriented_.28SCO.29_link">SCO</a> for telephony, and <a href="http://en.wikipedia.org/wiki/List_of_Bluetooth_profiles#Audio.2FVideo_Remote_Control_Profile_.28AVRCP.29">Audio/Video Remote Control Profile (AVRCP)</a>.
|
|
</dd>
|
|
|
|
<dt>DisplayPort</dt>
|
|
<dd>
|
|
Digital display interface by the Video Electronics Standards Association (VESA).
|
|
</dd>
|
|
|
|
<dt>dongle</dt>
|
|
<dd>
|
|
A <a href="https://en.wikipedia.org/wiki/Dongle">dongle</a>
|
|
is a small gadget, especially one that hangs off another device.
|
|
</dd>

<dt>FireWire</dt>
<dd>
See IEEE 1394.
</dd>

<dt>HDMI</dt>
<dd>
High-Definition Multimedia Interface. Interface for transferring audio and
video data. For mobile devices, a micro-HDMI (type D) or MHL connector is used.
</dd>

<dt>IEEE 1394</dt>
<dd>
<a href="https://en.wikipedia.org/wiki/IEEE_1394">IEEE 1394</a>, also called FireWire,
is a serial bus used for real-time low-latency applications such as audio.
</dd>

<dt>Intel HDA</dt>
<dd>
Intel High Definition Audio (do not confuse with generic <em>high-definition
audio</em> or <em>high-resolution audio</em>). Specification for a front-panel
connector. For details, refer to
<a href="http://en.wikipedia.org/wiki/Intel_High_Definition_Audio">Intel High
Definition Audio</a>.
</dd>

<dt>interface</dt>
<dd>
An <a href="https://en.wikipedia.org/wiki/Interface_(computing)">interface</a>
converts a signal from one representation to another. Common interfaces
include a USB audio interface and a MIDI interface.
</dd>

<dt>line level</dt>
<dd>
<a href="http://en.wikipedia.org/wiki/Line_level">Line level</a> is the strength
of an analog audio signal that passes between audio components, not transducers.
</dd>

<dt>MHL</dt>
<dd>
Mobile High-Definition Link. Mobile audio/video interface, often over a micro-USB
connector.
</dd>

<dt>phone connector</dt>
<dd>
Mini or sub-mini component that connects a device to wired headphones, a headset,
or a line-level amplifier.
</dd>

<dt>SlimPort</dt>
<dd>
Adapter from micro-USB to HDMI.
</dd>

<dt>S/PDIF</dt>
<dd>
Sony/Philips Digital Interface Format. Interconnect for uncompressed PCM. For
details, refer to <a href="http://en.wikipedia.org/wiki/S/PDIF">S/PDIF</a>.
S/PDIF is the consumer-grade variant of <a href="https://en.wikipedia.org/wiki/AES3">AES3</a>.
</dd>

<dt>Thunderbolt</dt>
<dd>
Multimedia interface that competes with USB and HDMI for connecting to high-end
peripherals. For details, refer to <a href="http://en.wikipedia.org/wiki/Thunderbolt_%28interface%29">Thunderbolt</a>.
</dd>

<dt>TOSLINK</dt>
<dd>
<a href="https://en.wikipedia.org/wiki/TOSLINK">TOSLINK</a> is an optical audio cable
used with <em>S/PDIF</em>.
</dd>

<dt>USB</dt>
<dd>
Universal Serial Bus. For details, refer to
<a href="http://en.wikipedia.org/wiki/USB">USB</a>.
</dd>

</dl>

<h3 id="intraDeviceTerms">Intra-device interconnect</h3>

<p>
Intra-device interconnection technologies connect internal audio components
within a given device and are not visible without disassembling the device. The
HAL implementer may need to be aware of these, but not the end user. For details
on intra-device interconnections, refer to the following articles:
</p>

<ul>
<li><a href="http://en.wikipedia.org/wiki/General-purpose_input/output">GPIO</a></li>
<li><a href="http://en.wikipedia.org/wiki/I%C2%B2C">I²C</a>, for control channel</li>
<li><a href="http://en.wikipedia.org/wiki/I%C2%B2S">I²S</a>, for audio data, simpler than SLIMbus</li>
<li><a href="http://en.wikipedia.org/wiki/McASP">McASP</a></li>
<li><a href="http://en.wikipedia.org/wiki/SLIMbus">SLIMbus</a></li>
<li><a href="http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus">SPI</a></li>
<li><a href="http://en.wikipedia.org/wiki/AC%2797">AC'97</a></li>
<li><a href="http://en.wikipedia.org/wiki/Intel_High_Definition_Audio">Intel HDA</a></li>
<li><a href="http://mipi.org/specifications/soundwire">SoundWire</a></li>
</ul>

<p>
In
<a href="http://www.alsa-project.org/main/index.php/ASoC">ALSA System on Chip (ASoC)</a>,
these are collectively called
<a href="https://www.kernel.org/doc/Documentation/sound/soc/dai.rst">Digital Audio Interfaces</a>
(DAI).
</p>

<h3 id="signalTerms">Audio Signal Path</h3>

<p>
Audio signal path terms relate to the signal path that audio data follows from
an application to the transducer, or vice versa.
</p>

<dl>

<dt>ADC</dt>
<dd>
Analog-to-digital converter. Module that converts an analog signal (continuous
in time and amplitude) to a digital signal (discrete in time and amplitude).
Conceptually, an ADC consists of a periodic sample-and-hold followed by a
quantizer, although it does not have to be implemented that way. An ADC is
usually preceded by a low-pass filter to remove any high-frequency components
that are not representable using the desired sample rate. For details, refer to
<a href="http://en.wikipedia.org/wiki/Analog-to-digital_converter">Analog-to-digital
converter</a>.
</dd>

<dt>AP</dt>
<dd>
Application processor. Main general-purpose computer on a mobile device.
</dd>

<dt>codec</dt>
<dd>
Coder-decoder. Module that encodes and/or decodes an audio signal from one
representation to another (typically analog to PCM or PCM to analog). In strict
terms, <em>codec</em> is reserved for modules that both encode and decode but
can be used loosely to refer to only one of these. For details, refer to
<a href="http://en.wikipedia.org/wiki/Audio_codec">Audio codec</a>.
</dd>

<dt>DAC</dt>
<dd>
Digital-to-analog converter. Module that converts a digital signal (discrete in
time and amplitude) to an analog signal (continuous in time and amplitude).
Often followed by a low-pass filter to remove high-frequency components
introduced by digital quantization. For details, refer to
<a href="http://en.wikipedia.org/wiki/Digital-to-analog_converter">Digital-to-analog
converter</a>.
</dd>

<dt>DSP</dt>
<dd>
Digital Signal Processor. Optional component typically located after the
application processor (for output) or before the application processor (for
input). Its primary purpose is to off-load the application processor and provide
signal processing features at a lower power cost.
</dd>

<dt>PDM</dt>
<dd>
Pulse-density modulation. Form of modulation used to represent an analog signal
by a digital signal, where the relative density of 1s versus 0s indicates the
signal level. Commonly used by digital-to-analog converters. For details, refer
to <a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">Pulse-density
modulation</a>.
</dd>

<dt>PWM</dt>
<dd>
Pulse-width modulation. Form of modulation used to represent an analog signal by
a digital signal, where the relative width of a digital pulse indicates the
signal level. Commonly used by analog-to-digital converters. For details, refer
to <a href="http://en.wikipedia.org/wiki/Pulse-width_modulation">Pulse-width
modulation</a>.
</dd>

<dt>transducer</dt>
<dd>
Converts variations in physical real-world quantities to electrical signals. In
audio, the physical quantity is sound pressure, and the transducers are the
loudspeaker and microphone. For details, refer to
<a href="http://en.wikipedia.org/wiki/Transducer">Transducer</a>.
</dd>

</dl>

<h3 id="srcTerms">Sample Rate Conversion</h3>

<p>
Sample rate conversion terms relate to the process of converting from one
sampling rate to another.
</p>

<dl>

<dt>downsample</dt>
<dd>Resample, where sink sample rate &lt; source sample rate.</dd>

<dt>Nyquist frequency</dt>
<dd>
Maximum frequency component that can be represented by a discretized signal at
1/2 of a given sample rate. For example, the human hearing range extends to
approximately 20 kHz, so a digital audio signal must have a sample rate of at
least 40 kHz to represent that range. In practice, sample rates of 44.1 kHz and
48 kHz are commonly used, with Nyquist frequencies of 22.05 kHz and 24 kHz
respectively. For details, refer to
<a href="http://en.wikipedia.org/wiki/Nyquist_frequency">Nyquist frequency</a>
and
<a href="http://en.wikipedia.org/wiki/Hearing_range">Hearing range</a>.
</dd>

<dt>resampler</dt>
<dd>Synonym for sample rate converter.</dd>

<dt>resampling</dt>
<dd>Process of converting sample rate.</dd>

<dt>sample rate converter</dt>
<dd>Module that resamples.</dd>

<dt>sink</dt>
<dd>Output of a resampler.</dd>

<dt>source</dt>
<dd>Input to a resampler.</dd>

<dt>upsample</dt>
<dd>Resample, where sink sample rate &gt; source sample rate.</dd>

</dl>
<h2 id="androidSpecificTerms">Android-Specific Terms</h2>
|
|
|
|
<p>
|
|
Android-specific terms include terms used only in the Android audio framework
|
|
and generic terms that have special meaning within Android.
|
|
</p>
|
|
|
|
<dl>
|
|
|
|
<dt>ALSA</dt>
|
|
<dd>
|
|
Advanced Linux Sound Architecture. An audio framework for Linux that has also
|
|
influenced other systems. For a generic definition, refer to
|
|
<a href="http://en.wikipedia.org/wiki/Advanced_Linux_Sound_Architecture">ALSA</a>.
|
|
In Android, ALSA refers to the kernel audio framework and drivers and not to the
|
|
user-mode API. See also <em>tinyalsa</em>.
|
|
</dd>
|
|
|
|
<dt>audio device</dt>
|
|
<dd>
|
|
Audio I/O endpoint backed by a HAL implementation.
|
|
</dd>
|
|
|
|
<dt>AudioEffect</dt>
|
|
<dd>
|
|
API and implementation framework for output (post-processing) effects and input
|
|
(pre-processing) effects. The API is defined at
|
|
<a href="http://developer.android.com/reference/android/media/audiofx/AudioEffect.html">android.media.audiofx.AudioEffect</a>.
|
|
</dd>
|
|
|
|
<dt>AudioFlinger</dt>
|
|
<dd>
|
|
Android sound server implementation. AudioFlinger runs within the mediaserver
|
|
process. For a generic definition, refer to
|
|
<a href="http://en.wikipedia.org/wiki/Sound_server">Sound server</a>.
|
|
</dd>
|
|
|
|

<dt>audio focus</dt>
<dd>
Set of APIs for managing audio interactions across multiple independent apps.
For details, see <a href="http://developer.android.com/training/managing-audio/audio-focus.html">Managing Audio Focus</a> and the focus-related methods and constants of
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
</dd>

<dt>AudioMixer</dt>
<dd>
Module in AudioFlinger responsible for combining multiple tracks and applying
attenuation (volume) and effects. For a generic definition, refer to
<a href="http://en.wikipedia.org/wiki/Audio_mixing_(recorded_music)">Audio mixing (recorded music)</a> (which discusses a mixer as a hardware device or software application, rather
than a software module within a system).
</dd>

<dt>audio policy</dt>
<dd>
Service responsible for all actions that require a policy decision to be made
first, such as opening a new I/O stream, re-routing after a change, and stream
volume management.
</dd>

<dt>AudioRecord</dt>
<dd>
Primary low-level client API for receiving data from an audio input device such
as a microphone. The data is usually in PCM format. The API is defined at
<a href="http://developer.android.com/reference/android/media/AudioRecord.html">android.media.AudioRecord</a>.
</dd>

<dt>AudioResampler</dt>
<dd>
Module in AudioFlinger responsible for <a href="src.html">sample rate conversion</a>.
</dd>

<dt>audio source</dt>
<dd>
An enumeration of constants that indicates the desired use case for capturing
audio input. For details, see <a href="http://developer.android.com/reference/android/media/MediaRecorder.AudioSource.html">audio source</a>. In API level 21 and above,
<a href="attributes.html">audio attributes</a> are preferred.
</dd>

<dt>AudioTrack</dt>
<dd>
Primary low-level client API for sending data to an audio output device such as
a speaker. The data is usually in PCM format. The API is defined at
<a href="http://developer.android.com/reference/android/media/AudioTrack.html">android.media.AudioTrack</a>.
</dd>

<dt>audio_utils</dt>
<dd>
Audio utility library for features such as PCM format conversion, WAV file I/O,
and
<a href="avoiding_pi.html#nonBlockingAlgorithms">non-blocking FIFO</a>, which is
largely independent of the Android platform.
</dd>

<dt>client</dt>
<dd>
Usually an application or app client. However, an AudioFlinger client can be a
thread running within the mediaserver system process, such as when playing media
decoded by a MediaPlayer object.
</dd>

<dt>HAL</dt>
<dd>
Hardware Abstraction Layer. HAL is a generic term in Android; in audio, it is a
layer between AudioFlinger and the kernel device driver with a C API (which
replaces the C++ libaudio).
</dd>

<dt>FastCapture</dt>
<dd>
Thread within AudioFlinger that sends audio data to lower latency fast tracks
and drives the input device when configured for reduced latency.
</dd>

<dt>FastMixer</dt>
<dd>
Thread within AudioFlinger that receives and mixes audio data from lower latency
fast tracks and drives the primary output device when configured for reduced
latency.
</dd>

<dt>fast track</dt>
<dd>
AudioTrack or AudioRecord client with lower latency but fewer features on some
devices and routes.
</dd>

<dt>MediaPlayer</dt>
<dd>
Higher-level client API than AudioTrack. Plays encoded content or content that
includes multimedia audio and video tracks.
</dd>

<dt>media.log</dt>
<dd>
AudioFlinger debugging feature available in custom builds only. Used for logging
audio events to a circular buffer where they can then be retroactively dumped
when needed.
</dd>

<dt>mediaserver</dt>
<dd>
Android system process that contains media-related services, including
AudioFlinger.
</dd>

<dt>NBAIO</dt>
<dd>
Non-blocking audio input/output. Abstraction for AudioFlinger ports. The term
can be misleading, as some implementations of the NBAIO API support blocking. The
key implementations of NBAIO are for different types of pipes.
</dd>

<dt>normal mixer</dt>
<dd>
Thread within AudioFlinger that services most full-featured AudioTrack clients.
It directly drives an output device or feeds its sub-mix into FastMixer via a
pipe.
</dd>

<dt>OpenSL ES</dt>
<dd>
Audio API standard by
<a href="http://www.khronos.org/">The Khronos Group</a>. Android versions since
API level 9 support a native audio API that is based on a subset of
<a href="http://www.khronos.org/opensles/">OpenSL ES 1.0.1</a>.
</dd>

<dt>silent mode</dt>
<dd>
User-settable feature to mute the phone ringer and notifications without
affecting media playback (music, videos, games) or alarms.
</dd>

<dt>SoundPool</dt>
<dd>
Higher-level client API than AudioTrack. Plays sampled audio clips. Useful for
triggering UI feedback, game sounds, etc. The API is defined at
<a href="http://developer.android.com/reference/android/media/SoundPool.html">android.media.SoundPool</a>.
</dd>

<dt>Stagefright</dt>
<dd>
See <a href="/devices/media.html">Media</a>.
</dd>

<dt>StateQueue</dt>
<dd>
Module within AudioFlinger responsible for synchronizing state among threads.
Whereas NBAIO is used to pass data, StateQueue is used to pass control
information.
</dd>

<dt>strategy</dt>
<dd>
Group of stream types with similar behavior. Used by the audio policy service.
</dd>

<dt>stream type</dt>
<dd>
Enumeration that expresses a use case for audio output. The audio policy
implementation uses the stream type, along with other parameters, to determine
volume and routing decisions. For a list of stream types, see
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
</dd>

<dt>tee sink</dt>
<dd>
See <a href="debugging.html#teeSink">Audio Debugging</a>.
</dd>

<dt>tinyalsa</dt>
<dd>
Small user-mode API above the ALSA kernel interface, with a BSD license.
Recommended for HAL implementations.
</dd>

<dt>ToneGenerator</dt>
<dd>
Higher-level client API than AudioTrack. Plays dual-tone multi-frequency (DTMF)
signals. For details, refer to
<a href="http://en.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling">Dual-tone
multi-frequency signaling</a> and the API definition at
<a href="http://developer.android.com/reference/android/media/ToneGenerator.html">android.media.ToneGenerator</a>.
</dd>

<dt>track</dt>
<dd>
Audio stream. Controlled by the AudioTrack or AudioRecord API.
</dd>

<dt>volume attenuation curve</dt>
<dd>
Device-specific mapping from a generic volume index to a specific attenuation
factor for a given output.
</dd>

<dt>volume index</dt>
<dd>
Unitless integer that expresses the desired relative volume of a stream. The
volume-related APIs of
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>
operate in volume indices rather than absolute attenuation factors.
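
<p>
For example, a sketch that sets the music stream to half of the device's
maximum index (<code>context</code> is assumed):
</p>

<pre class="prettyprint">
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
int maxIndex = am.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
am.setStreamVolume(AudioManager.STREAM_MUSIC, maxIndex / 2, /* flags */ 0);
</pre>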
</dd>

</dl>

</body>
</html>