Audio Driver Architecture

= OMAP4 Audio Design =

This page describes the overall OMAP4 Audio architecture. It complies with the ALSA/ASoC model to provide user applications with an industry standard interface to the audio components. The OMAP4 hardware includes the integrated Audio Back End (ABE) processor and the Phoenix (TWL6040) codec to provide complete audio support.

ALSA System on Chip (ASoC)


The ASoC layer was designed to better support audio subsystems in embedded system on chip (SoC) platforms. Embedded systems have unique audio needs such as:
 * Detection of insertion/removal of headset/microphones
 * Dynamic routing of audio
 * Power management features (DAPM).

In order to provide these features, ASoC drivers are generally split into three parts:
 * Codec Driver
 * Platform Driver
 * Machine Driver

For more information on ASoC design, see: ASoC - ALSA Project

Audio Back End (ABE)


The Audio Back-End module is a sub-system in OMAP4 with a dedicated power domain to support low-power audio use cases. The purpose of the ABE is to handle the audio processing for the application: it receives voice or audio samples either from the CPU/DSP or from an external component (Phoenix or other) and, after processing, sends them to the analog part or to memory.

The functional responsibilities of the Audio Back-End drivers are:
 * Managing the various audio/voice UL/DL streams between hosts such as the DSP/MPU/DMA (Front End ports) and physical interfaces such as McBSP, McASP, McPDM, DMIC and SLIMbus (Back End ports).
 * Performing real-time audio processing such as equalization, mixing, sample rate conversion (SRC), etc.

The Audio Back-End subsystem integrates:
 * Peripheral connectivity modules
   * 3 McBSPs (to support modem voice, BT, and audio)
   * 1 McASP module
   * 1 SLIMbus (to support new-generation MIPI-compliant codecs)
   * 1 DMIC (to support 3 stereo digital microphones, hence 6 channels of data)
   * 1 McPDM (to support the interconnect with the Phoenix audio IC)
   * HDMI
 * Audio Engine Sub-system (AESS)
 * ATC (audio traffic controller)
 * Local interconnect
 * Asynchronous interconnect
   * L3 master (T2Async)
   * DSP master (T2Async)
   * MCU master (T2Async)
 * On-chip memory
   * 64 KB of RAM, 32-bit data length (DMEM)
   * 4 KB for coefficients (CMEM)
   * 32 KB for samples (SMEM) (address space with 24 KB of physical memory)
   * 8 KB for program (PMEM)
 * 4 GP timers and 1 watchdog timer
 * Independent power domain and PRCM controls
   * The ABE power domain can remain always on even when the rest of OMAP is in OFF mode

Codec Driver - Phoenix (TWL6040)


The Phoenix codec in OMAP4 is the analog part of the audio architecture. It consists of the following components:

Audio output:


 * Headset
 * Handsfree
 * Vibrator
 * Auxiliary
 * Earphone

Audio input:


 * Mic left input (can be the main mic, HS mic, or AUX/FM left input)
 * Mic right input (can be the sub mic, HS mic, or AUX/FM right input)
 * Line in

The Phoenix codec renders samples at 88.4 kHz and 96 kHz. The handsfree speakers and other paths work only at 96 kHz, but the headphones have two modes of operation:


 * Low-power mode (88.4 kHz and 96 kHz): consumes less power, but audio quality may be affected.
 * High-performance mode (96 kHz only): suited to applications where audio quality matters.

There are eight digital input channels on the Phoenix that can be used for rendering audio to nine different analog outputs: five McPDM channels (DL0 -> DL4), one I2C channel, and two auxiliary inputs that can be used for FM radio (AFM0, AFM1). Audio from these inputs can be routed to the various analog outputs on the Phoenix as per the following table:


 * (1) This path cannot be concurrent with the L/R headset paths
 * (2) These paths can be concurrent with, but not independent of, the L/R handsfree paths
 * (3) The frame line can be used for register writes (e.g. vibrator data registers) in command mode.

Likewise, the Phoenix has the following five analog inputs that can be used for encoding audio. Uplink audio from the Phoenix to the ABE can be transferred over two McPDM channels (UL0 and UL1) or via two auxiliary ports, AFM0 and AFM1, which are often used for FM radio audio. The possible mappings of these inputs to uplink channels into the ABE are as follows:

Platform Driver
The Platform Driver provides the audio interface drivers (McBSP, I2C, McPDM, etc.) for the system.

Currently supported platform drivers are:
 * McPDM - More information HERE
 * McBSP - TBD
 * I2C - TBD
 * Vibra - More information HERE

Machine Driver
In the ASoC model, there is a machine driver that is specific to the system hardware. This is the "glue" that ties the platform and the codec drivers together.

The OMAP4430SDP/Blaze machine driver supports several audio interfaces (Front End):
 * Multimedia – high-fidelity audio.
 * Tones – tone generation support.
 * Voice – voice-grade audio.
 * Digital uplink – audio uplink port.
 * Vibrator – eventual support for the haptic vibrator.

This assumes the following:
 * Support for the OMAP4430SDP or Blaze board with ES2.x silicon.
 * Audio will be rendered to TWL6040 via DAI channels over a McPDM interface.
 * Support for I2C (via McBSP) will be added later.

This defines the audio devices as seen by the ALSA layer:

Multimedia DAI
The Multimedia DAI is used to encode/decode high fidelity audio.

Voice DAI
The Voice DAI is used to render low-fidelity (8 or 16 kHz) audio.

Tones DAI
The Tones DAI is used to render generated tones to one of the outputs. Single or dual tones can be generated.

Vibra DAI
This DAI is used to support haptic feedback through the vibra driver. The Phoenix chipset provides support for using PCM audio data to modulate the vibrators through this DAI. [Feature not implemented yet]

HDMI DAI
The machine driver provides a DAI for access to the HDMI audio port on the OMAP4. Audio written to this DAI is handed off to the HDMI library, which, in turn, is responsible for writing the audio data out the HDMI port. The HDMI library is also responsible for configuring the stream to work with the rendering device.

OMAP4/5 ASoC kernel audio driver controls mapping
This section maps the kernel controls onto the OMAP4 hardware. The mixer controls can be divided into two parts:
 * Controls for path setting
 * Controls for Volume setting

Path setting
Mixer controls associated with a path are checked by the ASoC dynamic PCM layer in order to verify that a valid audio path is set. The next graph shows the mixer controls since the 3.1 audio feature tree kernel. If the mixer controls for a data path are not set correctly, the ASoC driver returns a "no valid path set" error.

For example, for Multimedia playback a minimum set of mixer controls needs to be set:

 amixer cset name='DL1 Media Playback' 1
 amixer cset name='Sidetone Mixer Playback' 1
 amixer cset name='Headset Left Playback' 1
 amixer cset name='Headset Right Playback' 1
 amixer cset name='DL1 PDM Switch' 1

After that you can set the volumes (described in the next section):

 amixer cset name='DL1 Media Playback Volume' 120
 amixer cset name='SDT DL Volume' 120
 amixer cset name='Headset Playback Volume' 12

Then you can play back audio on the headset:

 aplay -D plughw:0,0 file.wav



Volume setting
All the volume settings can be changed at any time; they control the volumes of the platform. All the ABE digital gains have a range from +30 dB to -120 dB (mute) in steps of 1 dB. For ABE gains, the value 120 corresponds to 0 dB. Please note that when some paths are stopped, the digital gains are muted by the driver automatically, so you do not need to do it from user space (except if you want a software mute).



Earlier graph
The next picture shows the mixer controls before the 3.0 kernel.