A Deep Dive into Android Audio: AudioPolicyService

Posted by 快樂安卓 on 2014-09-23

AudioPolicyService is the policy maker: it decides, for example, when to open an audio interface device and which device a given stream type should be routed to. AudioFlinger is the policy executor: it handles the specifics of communicating with audio devices, maintains the audio devices present in the system, and mixes multiple audio streams. AudioPolicyService directs AudioFlinger to load device interfaces according to the user configuration, performing the routing function.

The AudioPolicyService startup process

The AudioPolicyService service runs in the mediaserver process and starts when mediaserver starts.

frameworks\av\media\mediaserver\Main_mediaserver.cpp

int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    ALOGI("ServiceManager: %p", sm.get());
    VolumeManager::instantiate(); // volumemanager have to be started before audioflinger
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}

AudioPolicyService inherits from the template class BinderService, which is used to register native services.

frameworks\native\include\binder\BinderService.h

template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(String16(SERVICE::getServiceName()), new SERVICE(), allowIsolated);
    }
    static void instantiate() { publish(); }
};

BinderService is a template class whose publish() function registers the service with ServiceManager.

static const char *getServiceName() { return "media.audio_policy"; }

AudioPolicyService registers itself under the name media.audio_policy.

AudioPolicyService::AudioPolicyService()
    : BnAudioPolicyService() , mpAudioPolicyDev(NULL) , mpAudioPolicy(NULL)
{
    char value[PROPERTY_VALUE_MAX];
    const struct hw_module_t *module;
    int forced_val;
    int rc;
    Mutex::Autolock _l(mLock);
    // start tone playback thread
    mTonePlaybackThread = new AudioCommandThread(String8("ApmTone"), this);
    // start audio commands thread
    mAudioCommandThread = new AudioCommandThread(String8("ApmAudio"), this);
    // start output activity command thread
    mOutputCommandThread = new AudioCommandThread(String8("ApmOutput"), this);
    /* instantiate the audio policy manager */
	/* load the audio_policy HAL library (audio_policy.default.so) to obtain the audio_policy_module */
    rc = hw_get_module(AUDIO_POLICY_HARDWARE_MODULE_ID, &module);
    if (rc)
        return;
	/* open the audio_policy_device through the audio_policy_module */
    rc = audio_policy_dev_open(module, &mpAudioPolicyDev);
    ALOGE_IF(rc, "couldn't open audio policy device (%s)", strerror(-rc));
    if (rc)
        return;
	// create the audio_policy through the audio_policy_device
    rc = mpAudioPolicyDev->create_audio_policy(mpAudioPolicyDev, &aps_ops, this,
                                               &mpAudioPolicy);
    ALOGE_IF(rc, "couldn't create audio policy (%s)", strerror(-rc));
    if (rc)
        return;
    rc = mpAudioPolicy->init_check(mpAudioPolicy);
    ALOGE_IF(rc, "couldn't init_check the audio policy (%s)", strerror(-rc));
    if (rc)
        return;
    /* SPRD: maybe set this property better, but here just change the default value @{ */
    property_get("ro.camera.sound.forced", value, "1");
    forced_val = strtol(value, NULL, 0);
    ALOGV("setForceUse() !forced_val=%d ",!forced_val);
    mpAudioPolicy->set_can_mute_enforced_audible(mpAudioPolicy, !forced_val);
    ALOGI("Loaded audio policy from %s (%s)", module->name, module->id);
    // read the audio_effects.conf file
    if (access(AUDIO_EFFECT_VENDOR_CONFIG_FILE, R_OK) == 0) {
        loadPreProcessorConfig(AUDIO_EFFECT_VENDOR_CONFIG_FILE);
    } else if (access(AUDIO_EFFECT_DEFAULT_CONFIG_FILE, R_OK) == 0) {
        loadPreProcessorConfig(AUDIO_EFFECT_DEFAULT_CONFIG_FILE);
    }
}
  1. Create the AudioCommandThread instances (ApmTone, ApmAudio, ApmOutput)
  2. Load the legacy_ap_module
  3. Open the legacy_ap_device
  4. Create the legacy_audio_policy
  5. Read audio_effects.conf

Creating the AudioCommandThread threads

During construction of the AudioPolicyService object, three AudioCommandThread threads are created, named ApmTone, ApmAudio and ApmOutput:

1. ApmTone plays tone sounds;

2. ApmAudio executes audio commands;

3. ApmOutput executes output commands;

When an AudioCommandThread object is first strongly referenced, its onFirstRef() function is called back, and the thread is started there:

void AudioPolicyService::AudioCommandThread::onFirstRef()
{
    run(mName.string(), ANDROID_PRIORITY_AUDIO);
}

Audio commands are executed asynchronously: when a command needs to run, it is first posted into the AudioCommandThread's mAudioCommands command vector, and the AudioCommandThread is then woken up via mWaitWorkCV.signal(). After the woken thread finishes executing the command, it goes back to sleep in mWaitWorkCV.waitRelative(mLock, waitTime), waiting for the next command to arrive.

Loading the audio_policy_module

The audio_policy HAL shared library lives under /system/lib/hw/ and is named audio_policy.$(TARGET_BOARD_PLATFORM).so. The audio policy HAL is defined in hardware\libhardware_legacy\audio\audio_policy_hal.cpp; the AUDIO_POLICY_HARDWARE_MODULE_ID hardware module is defined as follows:

hardware\libhardware_legacy\audio\audio_policy_hal.cpp 【audio_policy.scx15.so】

struct legacy_ap_module HAL_MODULE_INFO_SYM = {
    module: {
        common: {
            tag: HARDWARE_MODULE_TAG,
            version_major: 1,
            version_minor: 0,
            id: AUDIO_POLICY_HARDWARE_MODULE_ID,
            name: "LEGACY Audio Policy HAL",
            author: "The Android Open Source Project",
            methods: &legacy_ap_module_methods,
            dso : NULL,
            reserved : {0},
        },
    },
};

legacy_ap_module "inherits" from audio_policy_module: an audio_policy_module is embedded as its first member.

For the process by which hw_get_module() loads a hardware abstraction module, refer to the companion source-code analysis of the Android hardware HAL library loading process.

Opening the audio_policy_device

hardware\libhardware\include\hardware\audio_policy.h

static inline int audio_policy_dev_open(const hw_module_t* module,
                                    struct audio_policy_device** device)
{
    return module->methods->open(module, AUDIO_POLICY_INTERFACE,
                                 (hw_device_t**)device);
}

The open method of the legacy_ap_module is used to open a legacy_ap_device.

hardware\libhardware_legacy\audio\audio_policy_hal.cpp

static int legacy_ap_dev_open(const hw_module_t* module, const char* name,
                                    hw_device_t** device)
{
    struct legacy_ap_device *dev;
    if (strcmp(name, AUDIO_POLICY_INTERFACE) != 0)
        return -EINVAL;
    dev = (struct legacy_ap_device *)calloc(1, sizeof(*dev));
    if (!dev)
        return -ENOMEM;
    dev->device.common.tag = HARDWARE_DEVICE_TAG;
    dev->device.common.version = 0;
    dev->device.common.module = const_cast<hw_module_t*>(module);
    dev->device.common.close = legacy_ap_dev_close;
    dev->device.create_audio_policy = create_legacy_ap;
    dev->device.destroy_audio_policy = destroy_legacy_ap;
    *device = &dev->device.common;
    return 0;
}

Opening the device yields a legacy_ap_device; through this abstract device an audio_policy object can be created.

Creating the audio_policy object

When the legacy_ap_device is opened, its create_audio_policy member is initialized to the create_legacy_ap function pointer, so a legacy_audio_policy object can be created through the legacy_ap_device.

rc = mpAudioPolicyDev->create_audio_policy(mpAudioPolicyDev, &aps_ops, this,
                                               &mpAudioPolicy);

Here the audio policy object is created through the audio_policy_device:

hardware\libhardware_legacy\audio\audio_policy_hal.cpp

static int create_legacy_ap(const struct audio_policy_device *device,
                            struct audio_policy_service_ops *aps_ops,
                            void *service,
                            struct audio_policy **ap)
{
    struct legacy_audio_policy *lap;
    int ret;
    if (!service || !aps_ops)
        return -EINVAL;
    lap = (struct legacy_audio_policy *)calloc(1, sizeof(*lap));
    if (!lap)
        return -ENOMEM;
    lap->policy.set_device_connection_state = ap_set_device_connection_state;
    …
    lap->policy.dump = ap_dump;
    lap->policy.is_offload_supported = ap_is_offload_supported;
    lap->service = service;
    lap->aps_ops = aps_ops;
    lap->service_client = new AudioPolicyCompatClient(aps_ops, service);
    if (!lap->service_client) {
        ret = -ENOMEM;
        goto err_new_compat_client;
    }
    lap->apm = createAudioPolicyManager(lap->service_client);
    if (!lap->apm) {
        ret = -ENOMEM;
        goto err_create_apm;
    }
    *ap = &lap->policy;
    return 0;
err_create_apm:
    delete lap->service_client;
err_new_compat_client:
    free(lap);
    *ap = NULL;
    return ret;
}

audio_policy is implemented in audio_policy_hal.cpp, while audio_policy_service_ops is implemented in AudioPolicyService.cpp. The create_audio_policy() function creates and initializes a legacy_audio_policy object.

The relationships among audio_policy, AudioPolicyService and AudioPolicyCompatClient are as follows:

Creating the AudioPolicyCompatClient

hardware\libhardware_legacy\audio\AudioPolicyCompatClient.h

AudioPolicyCompatClient(struct audio_policy_service_ops *serviceOps, void *service) :
        mServiceOps(serviceOps), mService(service) {}

AudioPolicyCompatClient is a wrapper around audio_policy_service_ops; it exposes the interfaces defined by the audio_policy_service_ops structure.

Creating the AudioPolicyManager

extern "C" AudioPolicyInterface* createAudioPolicyManager(AudioPolicyClientInterface *clientInterface)
{
    ALOGI("SPRD policy manager created.");
    return new AudioPolicyManagerSPRD(clientInterface);
}

An AudioPolicyClientInterface object is used to construct the AudioPolicyManagerSPRD object. AudioPolicyManagerSPRD derives from AudioPolicyManagerBase, which in turn derives from AudioPolicyInterface.

hardware\libhardware_legacy\audio\AudioPolicyManagerBase.cpp

AudioPolicyManagerBase::AudioPolicyManagerBase(AudioPolicyClientInterface *clientInterface)
    :
#ifdef AUDIO_POLICY_TEST
    Thread(false),
#endif //AUDIO_POLICY_TEST
    // member initialization
    mPrimaryOutput((audio_io_handle_t)0),
    mAvailableOutputDevices(AUDIO_DEVICE_NONE),
    mPhoneState(AudioSystem::MODE_NORMAL),
    mLimitRingtoneVolume(false), mLastVoiceVolume(-1.0f),
    mTotalEffectsCpuLoad(0), mTotalEffectsMemory(0),
    mA2dpSuspended(false), mHasA2dp(false), mHasUsb(false), mHasRemoteSubmix(false),
    mSpeakerDrcEnabled(false), mFmOffGoing(false)
{
	// keep a reference to the AudioPolicyCompatClient object so that the AudioPolicyManager can use the interfaces in audio_policy_service_ops
    mpClientInterface = clientInterface;
    for (int i = 0; i < AudioSystem::NUM_FORCE_USE; i++) {
        mForceUse[i] = AudioSystem::FORCE_NONE;
    }
    mA2dpDeviceAddress = String8("");
    mScoDeviceAddress = String8("");
    mUsbCardAndDevice = String8("");
    /**
     * Load the /vendor/etc/audio_policy.conf configuration file first; if it
     * does not exist, load /system/etc/audio_policy.conf; if that file is
     * also missing, set up a default audio interface via
     * defaultAudioPolicyConfig()
     */
    if (loadAudioPolicyConfig(AUDIO_POLICY_VENDOR_CONFIG_FILE) != NO_ERROR) {
        if (loadAudioPolicyConfig(AUDIO_POLICY_CONFIG_FILE) != NO_ERROR) {
            ALOGE("could not load audio policy configuration file, setting defaults");
            defaultAudioPolicyConfig();
        }
    }
    // set up the volume curves for each stream type; must be done after reading the policy
    initializeVolumeCurves();
    // open all output streams needed to access attached devices
    for (size_t i = 0; i < mHwModules.size(); i++) {
        // open the corresponding audio interface HAL library by name
        mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->mName);
        if (mHwModules[i]->mHandle == 0) {
            ALOGW("could not open HW module %s", mHwModules[i]->mName);
            continue;
        }
        // open all output streams needed to access attached devices
        // except for direct output streams that are only opened when they are actually
        // required by an app.
        for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++)
        {
            const IOProfile *outProfile = mHwModules[i]->mOutputProfiles[j];
            // open the outputs corresponding to mAttachedOutputDevices
            if ((outProfile->mSupportedDevices & mAttachedOutputDevices) &&
                    ((outProfile->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) == 0)) {
                // wrap the output IOProfile in an AudioOutputDescriptor object
                AudioOutputDescriptor *outputDesc = new AudioOutputDescriptor(outProfile);
                // set the default output device for this audio interface
                outputDesc->mDevice = (audio_devices_t)(mDefaultOutputDevice & outProfile->mSupportedDevices);
                // open the output: a PlaybackThread is created in AudioFlinger and its thread id is returned
                audio_io_handle_t output = mpClientInterface->openOutput(
                                                outProfile->mModule->mHandle,
                                                &outputDesc->mDevice,
                                                &outputDesc->mSamplingRate,
                                                &outputDesc->mFormat,
                                                &outputDesc->mChannelMask,
                                                &outputDesc->mLatency,
                                                outputDesc->mFlags);
                if (output == 0) {
                    delete outputDesc;
                } else {
                    // mark this profile's attached devices as available output devices
                    mAvailableOutputDevices =(audio_devices_t)(mAvailableOutputDevices | (outProfile->mSupportedDevices & mAttachedOutputDevices));
                    if (mPrimaryOutput == 0 && outProfile->mFlags & AUDIO_OUTPUT_FLAG_PRIMARY) {
                        mPrimaryOutput = output;
                    }
                    // store the AudioOutputDescriptor and the PlaybackThread id as a key/value pair
                    addOutput(output, outputDesc);
                    // set the default output device
                    setOutputDevice(output,(audio_devices_t)(mDefaultOutputDevice & outProfile->mSupportedDevices),true);
                }
            }
        }
    }
    ALOGE_IF((mAttachedOutputDevices & ~mAvailableOutputDevices),
             "Not output found for attached devices %08x",
             (mAttachedOutputDevices & ~mAvailableOutputDevices));
    ALOGE_IF((mPrimaryOutput == 0), "Failed to open primary output");
    updateDevicesAndOutputs();

    //  add for bug158794 start
    char bootvalue[PROPERTY_VALUE_MAX];
    // prop sys.boot_completed will set 1 when system ready (ActivityManagerService.java)...
    property_get("sys.boot_completed", bootvalue, "");
    if (strncmp("1", bootvalue, 1) != 0) {
        startReadingThread();
    }
    // add for bug158794 end

#ifdef AUDIO_POLICY_TEST
    ...
#endif //AUDIO_POLICY_TEST
}

The AudioPolicyManagerBase constructor performs the following main steps:

1. loadAudioPolicyConfig(AUDIO_POLICY_CONFIG_FILE) loads the audio_policy.conf configuration file;

2. initializeVolumeCurves() initializes the volume curves for each stream type;

3. Load the audio policy HAL libraries: mpClientInterface->loadHwModule(mHwModules[i]->mName)

4. Open the attached_output_devices outputs:

mpClientInterface->openOutput();

5. Store the output device descriptor objects: addOutput(output, outputDesc);

Reading the audio_policy.conf file

Android defines a hardware abstraction layer for each audio interface, each compiled into its own .so library.

Each audio interface defines its own inputs and outputs; one interface can have several inputs or outputs, and each input or output can in turn support different audio devices. Reading audio_policy.conf yields the parameters of the audio interfaces supported by the system.

The audio_policy.conf file defines two kinds of audio configuration information:

1. The audio input/output devices supported by the current system, and the default input/output devices;

This information is set through the global_configuration section, which defines three kinds of device information:

attached_output_devices: the attached output devices;

default_output_device: the default output device;

attached_input_devices: the attached input devices;

 

2. The audio interface information supported by the system;

audio_policy.conf defines the parameter information for all the audio interfaces supported by the system, such as primary, a2dp and usb.

Each audio interface contains inputs and outputs, each input or output contains several I/O profiles, and each profile in turn supports several audio devices. AudioPolicyManagerBase first loads /vendor/etc/audio_policy.conf; if that file does not exist, it loads /system/etc/audio_policy.conf.
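The structure described above can be illustrated with a sketch of a typical audio_policy.conf. This is an illustrative fragment only; the exact sampling rates, channel masks, devices and module entries vary per platform:

```
global_configuration {
  attached_output_devices AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_SPEAKER
  default_output_device AUDIO_DEVICE_OUT_SPEAKER
  attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC
}

audio_hw_modules {
  primary {
    outputs {
      primary {
        sampling_rates 44100
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_SPEAKER
        flags AUDIO_OUTPUT_FLAG_PRIMARY
      }
    }
    inputs {
      primary {
        sampling_rates 8000|16000|44100
        channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_BUILTIN_MIC
      }
    }
  }
}
```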

status_t AudioPolicyManagerBase::loadAudioPolicyConfig(const char *path)
{
    cnode *root;
    char *data;
    data = (char *)load_file(path, NULL);
    if (data == NULL) {
        return -ENODEV;
    }
    root = config_node("", "");
    // parse the configuration file
    config_load(root, data);
    // parse the global_configuration section
    loadGlobalConfig(root);
    // parse the audio_hw_modules section
    loadHwModules(root);
    config_free(root);
    free(root);
    free(data);
    ALOGI("loadAudioPolicyConfig() loaded %s\n", path);
    return NO_ERROR;
}

The global configuration information is read by the loadGlobalConfig(root) function:

void AudioPolicyManagerBase::loadGlobalConfig(cnode *root)
{
    cnode *node = config_find(root, GLOBAL_CONFIG_TAG);
    if (node == NULL) {
        return;
    }
    node = node->first_child;
    while (node) {
    	//attached_output_devices AUDIO_DEVICE_OUT_EARPIECE
        if (strcmp(ATTACHED_OUTPUT_DEVICES_TAG, node->name) == 0) {
            mAttachedOutputDevices = parseDeviceNames((char *)node->value);
            ALOGW_IF(mAttachedOutputDevices == AUDIO_DEVICE_NONE,
                    "loadGlobalConfig() no attached output devices");
            ALOGV("loadGlobalConfig()mAttachedOutputDevices%04x", mAttachedOutputDevices);
        //default_output_device AUDIO_DEVICE_OUT_SPEAKER
        } else if (strcmp(DEFAULT_OUTPUT_DEVICE_TAG, node->name) == 0) {
            mDefaultOutputDevice= (audio_devices_t)stringToEnum(sDeviceNameToEnumTable,ARRAY_SIZE(sDeviceNameToEnumTable),(char *)node->value);
            ALOGW_IF(mDefaultOutputDevice == AUDIO_DEVICE_NONE,
                    "loadGlobalConfig() default device not specified");
            ALOGV("loadGlobalConfig() mDefaultOutputDevice %04x", mDefaultOutputDevice);
        //attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC
        } else if (strcmp(ATTACHED_INPUT_DEVICES_TAG, node->name) == 0) {
            mAvailableInputDevices = parseDeviceNames((char *)node->value) & ~AUDIO_DEVICE_BIT_IN;
            ALOGV("loadGlobalConfig() mAvailableInputDevices %04x", mAvailableInputDevices);
        //speaker_drc_enabled 
        } else if (strcmp(SPEAKER_DRC_ENABLED_TAG, node->name) == 0) {
            mSpeakerDrcEnabled = stringToBool((char *)node->value);
            ALOGV("loadGlobalConfig() mSpeakerDrcEnabled = %d", mSpeakerDrcEnabled);
        }
        node = node->next;
    }
}

audio_policy.conf defines multiple audio interfaces at once; each interface contains several outputs and inputs, each output or input supports multiple I/O profiles, and each profile in turn supports several devices.

All the audio interfaces configured in the system are loaded by loadHwModules():

void AudioPolicyManagerBase::loadHwModules(cnode *root)
{
	//audio_hw_modules
    cnode *node = config_find(root, AUDIO_HW_MODULE_TAG);
    if (node == NULL) {
        return;
    }
    node = node->first_child;
    while (node) {
        ALOGV("loadHwModules() loading module %s", node->name);
        // load one audio interface
        loadHwModule(node);
        node = node->next;
    }
}

Since audio_policy.conf can define multiple audio interfaces, this function calls loadHwModule() in a loop to parse each interface's parameters. Android defines the HwModule class to describe each audio interface's parameters, and the IOProfile class to describe the I/O profile configurations.


At this point the audio interface configuration in audio_policy.conf has been parsed into AudioPolicyManagerBase's member variables mHwModules, mAttachedOutputDevices, mDefaultOutputDevice and mAvailableInputDevices.

Initializing the volume curves

Volume curve handling differs completely between Android 4.1 and Android 4.4. In Android 4.1 it was managed by the VolumeManager service and configured through the devicevolume.xml file; Android 4.4 removed the VolumeManager service and moved volume control into AudioPolicyManagerBase. AudioPolicyManagerBase defines an array of stream descriptors for volume adjustment:

StreamDescriptor mStreams[AudioSystem::NUM_STREAM_TYPES];

The initializeVolumeCurves() function initializes the elements of this array:

void AudioPolicyManagerBase::initializeVolumeCurves()
{
    for (int i = 0; i < AUDIO_STREAM_CNT; i++) {
        for (int j = 0; j < DEVICE_CATEGORY_CNT; j++) {
            mStreams[i].mVolumeCurve[j] =
                    sVolumeProfiles[i][j];
        }
    }

    // Check availability of DRC on speaker path: if available, override some of the speaker curves
    if (mSpeakerDrcEnabled) {
        mStreams[AUDIO_STREAM_SYSTEM].mVolumeCurve[DEVICE_CATEGORY_SPEAKER] =
                sDefaultSystemVolumeCurveDrc;
        mStreams[AUDIO_STREAM_RING].mVolumeCurve[DEVICE_CATEGORY_SPEAKER] =
                sSpeakerSonificationVolumeCurveDrc;
        mStreams[AUDIO_STREAM_ALARM].mVolumeCurve[DEVICE_CATEGORY_SPEAKER] =
                sSpeakerSonificationVolumeCurveDrc;
        mStreams[AUDIO_STREAM_NOTIFICATION].mVolumeCurve[DEVICE_CATEGORY_SPEAKER] =
                sSpeakerSonificationVolumeCurveDrc;
    }
}

The sVolumeProfiles array defines the volume curve applied to each stream type on each device category.

Each array element is one volume curve, and every curve consists of 4 curve points.

Loading the audio_module modules

By reading the audio_policy.conf configuration file, AudioPolicyManager knows which audio interfaces the system supports, which input/output devices are attached, and the default output device. The next step is to load the HAL libraries for these audio interfaces.

The three audio interface HALs (primary, a2dp and usb) are defined as follows:

/vendor/sprd/open-source/libs/audio/audio_hw.c 【audio.primary.scx15.so】

struct audio_module HAL_MODULE_INFO_SYM = {
    .common = {
        .tag = HARDWARE_MODULE_TAG,
        .module_api_version = AUDIO_MODULE_API_VERSION_0_1,
        .hal_api_version = HARDWARE_HAL_API_VERSION,
        .id = AUDIO_HARDWARE_MODULE_ID,
        .name = "Spreadtrum Audio HW HAL",
        .author = "The Android Open Source Project",
        .methods = &hal_module_methods,
    },
};


external/bluetooth/bluedroid/audio_a2dp_hw/audio_a2dp_hw.c【audio.a2dp.default.so】

struct audio_module HAL_MODULE_INFO_SYM = {
    .common = {
        .tag = HARDWARE_MODULE_TAG,
        .version_major = 1,
        .version_minor = 0,
        .id = AUDIO_HARDWARE_MODULE_ID,
        .name = "A2DP Audio HW HAL",
        .author = "The Android Open Source Project",
        .methods = &hal_module_methods,
    },
};

hardware/libhardware/modules/usbaudio/audio_hw.c【audio.usb.default.so】

struct audio_module HAL_MODULE_INFO_SYM = {
    .common = {
        .tag = HARDWARE_MODULE_TAG,
        .module_api_version = AUDIO_MODULE_API_VERSION_0_1,
        .hal_api_version = HARDWARE_HAL_API_VERSION,
        .id = AUDIO_HARDWARE_MODULE_ID,
        .name = "USB audio HW HAL",
        .author = "The Android Open Source Project",
        .methods = &hal_module_methods,
    },
};

AudioPolicyClientInterface provides the interface function for loading an audio interface HAL library. As introduced earlier, AudioPolicyCompatClient implements the AudioPolicyClientInterface interface by delegating to audio_policy_service_ops.

hardware\libhardware_legacy\audio\AudioPolicyCompatClient.cpp

audio_module_handle_t AudioPolicyCompatClient::loadHwModule(const char *moduleName)
{
    return mServiceOps->load_hw_module(mService, moduleName);
}

AudioPolicyCompatClient hands the module-loading work to audio_policy_service_ops:

frameworks\av\services\audioflinger\AudioPolicyService.cpp

static audio_module_handle_t aps_load_hw_module(void *service,const char *name)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return 0;
    }
    return af->loadHwModule(name);
}

AudioPolicyService in turn forwards it to AudioFlinger:

frameworks\av\services\audioflinger\AudioFlinger.cpp

audio_module_handle_t AudioFlinger::loadHwModule(const char *name)
{
    if (!settingsAllowed()) {
        return 0;
    }
    Mutex::Autolock _l(mLock);
    return loadHwModule_l(name);
}


audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)
{
    for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
        if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {
            ALOGW("loadHwModule() module %s already loaded", name);
            return mAudioHwDevs.keyAt(i);
        }
    }
    audio_hw_device_t *dev;
    // load the .so library for this audio interface and obtain the corresponding audio_hw_device_t
    int rc = load_audio_interface(name, &dev);
    if (rc) {
        ALOGI("loadHwModule() error %d loading module %s ", rc, name);
        return 0;
    }
    mHardwareStatus = AUDIO_HW_INIT;
    rc = dev->init_check(dev);
    mHardwareStatus = AUDIO_HW_IDLE;
    if (rc) {
        ALOGI("loadHwModule() init check error %d for module %s ", rc, name);
        return 0;
    }
    if ((mMasterVolumeSupportLvl != MVS_NONE) &&
        (NULL != dev->set_master_volume)) {
        AutoMutex lock(mHardwareLock);
        mHardwareStatus = AUDIO_HW_SET_MASTER_VOLUME;
        dev->set_master_volume(dev, mMasterVolume);
        mHardwareStatus = AUDIO_HW_IDLE;
    }
    audio_module_handle_t handle = nextUniqueId();
    mAudioHwDevs.add(handle, new AudioHwDevice(name, dev));
    ALOGI("loadHwModule() Loaded %s audio interface from %s (%s) handle %d",
          name, dev->common.module->name, dev->common.module->id, handle);
    return handle;
}

The function first loads the .so library for the given audio interface and opens that interface's abstract hardware device audio_hw_device_t. It generates a unique ID for each audio interface device, wraps the opened device in an AudioHwDevice object, and stores all the system's audio interface devices in AudioFlinger's member variable mAudioHwDevs.

The load_audio_interface() function opens the abstract audio interface device audio_hw_device_t by interface name:

static int load_audio_interface(const char *if_name, audio_hw_device_t **dev)
{
    const hw_module_t *mod;
    int rc;
    // load the audio_module by name
    rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, if_name, &mod);
    ALOGE_IF(rc, "%s couldn't load audio hw module %s.%s (%s)", __func__,
                 AUDIO_HARDWARE_MODULE_ID, if_name, strerror(-rc));
    if (rc) {
        goto out;
    }
    // open the audio device
    rc = audio_hw_device_open(mod, dev);
    ALOGE_IF(rc, "%s couldn't open audio hw device in %s.%s (%s)", __func__,
                 AUDIO_HARDWARE_MODULE_ID, if_name, strerror(-rc));
    if (rc) {
        goto out;
    }
    if ((*dev)->common.version != AUDIO_DEVICE_API_VERSION_CURRENT) {
        ALOGE("%s wrong audio hw device version %04x", __func__, (*dev)->common.version);
        rc = BAD_VALUE;
        goto out;
    }
    return 0;
out:
    *dev = NULL;
    return rc;
}

hardware\libhardware\include\hardware\audio.h

static inline int audio_hw_device_open(const struct hw_module_t* module,
                                       struct audio_hw_device** device)
{
    return module->methods->open(module, AUDIO_HARDWARE_INTERFACE,
                                 (struct hw_device_t**)device);
}

hardware\libhardware_legacy\audio\audio_hw_hal.cpp

static int legacy_adev_open(const hw_module_t* module, const char* name,
                            hw_device_t** device)
{
    struct legacy_audio_device *ladev;
    int ret;
    if (strcmp(name, AUDIO_HARDWARE_INTERFACE) != 0)
        return -EINVAL;
    ladev = (struct legacy_audio_device *)calloc(1, sizeof(*ladev));
    if (!ladev)
        return -ENOMEM;
    ladev->device.common.tag = HARDWARE_DEVICE_TAG;
    ladev->device.common.version = AUDIO_DEVICE_API_VERSION_1_0;
    ladev->device.common.module = const_cast<hw_module_t*>(module);
    ladev->device.common.close = legacy_adev_close;
    ladev->device.get_supported_devices = adev_get_supported_devices;
    …
    ladev->device.dump = adev_dump;
    ladev->hwif = createAudioHardware();
    if (!ladev->hwif) {
        ret = -EIO;
        goto err_create_audio_hw;
    }
    *device = &ladev->device.common;
    return 0;
err_create_audio_hw:
    free(ladev);
    return ret;
}

Opening an audio interface device is really the process of constructing and initializing a legacy_audio_device object. The legacy_audio_device data structure relationships are as follows:

 

The legacy_adev_open function creates and initializes a legacy_audio_device object:


At this point all the audio interfaces defined in the system have been loaded and the corresponding data objects created.

Opening the audio output

After AudioPolicyService has loaded all the audio interfaces, it knows the parameters of every audio interface the system supports and can make decisions for audio output.

To play audio data, an abstract audio output interface object must be created. The process of opening an audio output is as follows:

audio_io_handle_t AudioPolicyCompatClient::openOutput(audio_module_handle_t module,
                                              audio_devices_t *pDevices,
                                              uint32_t *pSamplingRate,
                                              audio_format_t *pFormat,
                                              audio_channel_mask_t *pChannelMask,  
                                              uint32_t *pLatencyMs,
                                              audio_output_flags_t flags,
                                              const audio_offload_info_t *offloadInfo)
{
    return mServiceOps->open_output_on_module(mService,module, pDevices, pSamplingRate,
                                              pFormat, pChannelMask, pLatencyMs,
                                              flags, offloadInfo);
}

 

static audio_io_handle_t aps_open_output_on_module(void *service,
                                          audio_module_handle_t module,
                                          audio_devices_t *pDevices,
                                          uint32_t *pSamplingRate,
                                          audio_format_t *pFormat,
                                          audio_channel_mask_t *pChannelMask,
                                          uint32_t *pLatencyMs,
                                          audio_output_flags_t flags,
                                          const audio_offload_info_t *offloadInfo)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return 0;
    }
    return af->openOutput(module, pDevices, pSamplingRate, pFormat, pChannelMask,
                          pLatencyMs, flags, offloadInfo);
}



audio_io_handle_t AudioFlinger::openOutput(audio_module_handle_t module,
                                           audio_devices_t *pDevices,
                                           uint32_t *pSamplingRate,
                                           audio_format_t *pFormat,
                                           audio_channel_mask_t *pChannelMask,
                                           uint32_t *pLatencyMs,
                                           audio_output_flags_t flags,
                                           const audio_offload_info_t *offloadInfo)
{
    PlaybackThread *thread = NULL;
    struct audio_config config;
    config.sample_rate = (pSamplingRate != NULL) ? *pSamplingRate : 0;
    config.channel_mask = (pChannelMask != NULL) ? *pChannelMask : 0;
    config.format = (pFormat != NULL) ? *pFormat : AUDIO_FORMAT_DEFAULT;
    if (offloadInfo) {
        config.offload_info = *offloadInfo;
    }
	// the audio output stream object audio_stream_out_t to be created
    audio_stream_out_t *outStream = NULL;
    AudioHwDevice *outHwDev;
    ALOGV("openOutput(), module %d Device %x, SamplingRate %d, Format %#08x, Channels %x, flags %x",
              module,
              (pDevices != NULL) ? *pDevices : 0,
              config.sample_rate,
              config.format,
              config.channel_mask,
              flags);
    ALOGV("openOutput(), offloadInfo %p version 0x%04x",
          offloadInfo, offloadInfo == NULL ? -1 : offloadInfo->version );
    if (pDevices == NULL || *pDevices == 0) {
        return 0;
    }
    Mutex::Autolock _l(mLock);
	// look up the matching audio interface in mAudioHwDevs; if not found, load the audio interface library
    outHwDev = findSuitableHwDev_l(module, *pDevices);
    if (outHwDev == NULL)
        return 0;
	// get the audio_hw_device_t device for this module
    audio_hw_device_t *hwDevHal = outHwDev->hwDevice();
	// generate a unique id for the audio output stream
    audio_io_handle_t id = nextUniqueId();
    mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;
	// open the audio output stream
    status_t status = hwDevHal->open_output_stream(hwDevHal,
                                          id,
                                          *pDevices,
                                          (audio_output_flags_t)flags,
                                          &config,
                                          &outStream);
    mHardwareStatus = AUDIO_HW_IDLE;
    ALOGV("openOutput() openOutputStream returned output %p, SamplingRate %d, Format %#08x, "
            "Channels %x, status %d",
            outStream,
            config.sample_rate,
            config.format,
            config.channel_mask,
            status);
    if (status == NO_ERROR && outStream != NULL) {
		// wrap the audio output stream audio_stream_out_t in an AudioStreamOut
        AudioStreamOut *output = new AudioStreamOut(outHwDev, outStream, flags);
		// create a different type of thread depending on the flags
        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            thread = new OffloadThread(this, output, id, *pDevices);
            ALOGV("openOutput() created offload output: ID %d thread %p", id, thread);
        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT) ||
            (config.format != AUDIO_FORMAT_PCM_16_BIT) ||
            (config.channel_mask != AUDIO_CHANNEL_OUT_STEREO)) {
            thread = new DirectOutputThread(this, output, id, *pDevices);
            ALOGV("openOutput() created direct output: ID %d thread %p", id, thread);
        } else {
            thread = new MixerThread(this, output, id, *pDevices);
            ALOGV("openOutput() created mixer output: ID %d thread %p", id, thread);
        }
		// store the created thread and its id as a key/value pair in mPlaybackThreads
        mPlaybackThreads.add(id, thread);
        if (pSamplingRate != NULL) {
            *pSamplingRate = config.sample_rate;
        }
        if (pFormat != NULL) {
            *pFormat = config.format;
        }
        if (pChannelMask != NULL) {
            *pChannelMask = config.channel_mask;
        }
        if (pLatencyMs != NULL) {
            *pLatencyMs = thread->latency();
        }
        // notify client processes of the new output creation
        thread->audioConfigChanged_l(AudioSystem::OUTPUT_OPENED);
        // the first primary output opened designates the primary hw device
        if ((mPrimaryHardwareDev == NULL) && (flags & AUDIO_OUTPUT_FLAG_PRIMARY)) {
            ALOGI("Using module %d has the primary audio interface", module);
            mPrimaryHardwareDev = outHwDev;
            AutoMutex lock(mHardwareLock);
            mHardwareStatus = AUDIO_HW_SET_MODE;
            hwDevHal->set_mode(hwDevHal, mMode);
            mHardwareStatus = AUDIO_HW_IDLE;
        }
        return id;
    }
    return 0;
}

Opening an audio output stream is, in essence, the process of creating an AudioStreamOut object and a PlaybackThread. First, the output stream object legacy_stream_out is created through the abstract audio interface device audio_hw_device_t.

static int adev_open_output_stream(struct audio_hw_device *dev,
                                   audio_io_handle_t handle,
                                   audio_devices_t devices,
                                   audio_output_flags_t flags,
                                   struct audio_config *config,
                                   struct audio_stream_out **stream_out)
{
    struct legacy_audio_device *ladev = to_ladev(dev);
    status_t status;
    struct legacy_stream_out *out;
    int ret;
    // allocate a legacy_stream_out object
    out = (struct legacy_stream_out *)calloc(1, sizeof(*out));
    if (!out)
        return -ENOMEM;
    devices = convert_audio_device(devices, HAL_API_REV_2_0, HAL_API_REV_1_0);
    // create the AudioStreamOut object
    out->legacy_out = ladev->hwif->openOutputStream(devices, (int *) &config->format,
                                                    &config->channel_mask,
                                                    &config->sample_rate, &status);
    if (!out->legacy_out) {
        ret = status;
        goto err_open;
    }
    // initialize the audio_stream function table members
    out->stream.common.get_sample_rate = out_get_sample_rate;
    …
    *stream_out = &out->stream;
    return 0;
err_open:
    free(out);
    *stream_out = NULL;
    return ret;
}

Since the hwif member of legacy_audio_device has type AudioHardwareInterface, the AudioStreamOut object is created by calling AudioHardwareInterface::openOutputStream().

AudioStreamOut* AudioHardwareStub::openOutputStream(
        uint32_t devices, int *format, uint32_t *channels, uint32_t *sampleRate, status_t *status)
{
    AudioStreamOutStub* out = new AudioStreamOutStub();
    status_t lStatus = out->set(format, channels, sampleRate);
    if (status) {
        *status = lStatus;
    }
    if (lStatus == NO_ERROR)
        return out;
    delete out;
    return 0;
}


After the audio output is opened, its representation inside AudioFlinger and AudioPolicyService is as follows:

Opening audio input

audio_io_handle_t AudioPolicyCompatClient::openInput(audio_module_handle_t module,
                                             audio_devices_t *pDevices,
                                             uint32_t *pSamplingRate,
                                             audio_format_t *pFormat,
                                             audio_channel_mask_t *pChannelMask)
{
    return mServiceOps->open_input_on_module(mService, module, pDevices,
                                             pSamplingRate, pFormat, pChannelMask);
}


 

static audio_io_handle_t aps_open_input_on_module(void *service,
                                       audio_module_handle_t module,
                                       audio_devices_t *pDevices,
                                       uint32_t *pSamplingRate,
                                       audio_format_t *pFormat,
                                       audio_channel_mask_t *pChannelMask)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return 0;
    }
    return af->openInput(module, pDevices, pSamplingRate, pFormat, pChannelMask);
}

audio_io_handle_t AudioFlinger::openInput(audio_module_handle_t module,
                                          audio_devices_t *pDevices,
                                          uint32_t *pSamplingRate,
                                          audio_format_t *pFormat,
                                          audio_channel_mask_t *pChannelMask)
{
    status_t status;
    RecordThread *thread = NULL;
    struct audio_config config;
    config.sample_rate = (pSamplingRate != NULL) ? *pSamplingRate : 0;
    config.channel_mask = (pChannelMask != NULL) ? *pChannelMask : 0;
    config.format = (pFormat != NULL) ? *pFormat : AUDIO_FORMAT_DEFAULT;

    uint32_t reqSamplingRate = config.sample_rate;
    audio_format_t reqFormat = config.format;
    audio_channel_mask_t reqChannels = config.channel_mask;
    audio_stream_in_t *inStream = NULL;
    AudioHwDevice *inHwDev;
    if (pDevices == NULL || *pDevices == 0) {
        return 0;
    }
    Mutex::Autolock _l(mLock);
    inHwDev = findSuitableHwDev_l(module, *pDevices);
    if (inHwDev == NULL)
        return 0;
    audio_hw_device_t *inHwHal = inHwDev->hwDevice();
    audio_io_handle_t id = nextUniqueId();
    status = inHwHal->open_input_stream(inHwHal, id, *pDevices, &config, &inStream);
    ALOGV("openInput() openInputStream returned input %p, SamplingRate %d, Format %d, Channels %x, "
            "status %d",
            inStream,
            config.sample_rate,
            config.format,
            config.channel_mask,
            status);

    // If the input could not be opened with the requested parameters and we can handle the
    // conversion internally, try to open again with the proposed parameters. The AudioFlinger can
    // resample the input and do mono to stereo or stereo to mono conversions on 16 bit PCM inputs.
    if (status == BAD_VALUE &&
        reqFormat == config.format && config.format == AUDIO_FORMAT_PCM_16_BIT &&
        (config.sample_rate <= 2 * reqSamplingRate) &&
        (popcount(config.channel_mask) <= FCC_2) && (popcount(reqChannels) <= FCC_2)) {
        ALOGV("openInput() reopening with proposed sampling rate and channel mask");
        inStream = NULL;
        status = inHwHal->open_input_stream(inHwHal, id, *pDevices, &config, &inStream);
    }

    if (status == NO_ERROR && inStream != NULL) {

#ifdef TEE_SINK
        // Try to re-use most recently used Pipe to archive a copy of input for dumpsys,
        // or (re-)create if current Pipe is idle and does not match the new format
      ...
#endif
        AudioStreamIn *input = new AudioStreamIn(inHwDev, inStream);
        // Start record thread
        // RecordThread requires both input and output device indication to forward to audio
        // pre processing modules
        thread = new RecordThread(this,
                                  input,
                                  reqSamplingRate,
                                  reqChannels,
                                  id,
                                  primaryOutputDevice_l(),
                                  *pDevices
#ifdef TEE_SINK
                                  , teeSink
#endif
                                  );
        mRecordThreads.add(id, thread);
        ALOGV("openInput() created record thread: ID %d thread %p", id, thread);
        if (pSamplingRate != NULL) {
            *pSamplingRate = reqSamplingRate;
        }
        if (pFormat != NULL) {
            *pFormat = config.format;
        }
        if (pChannelMask != NULL) {
            *pChannelMask = reqChannels;
        }
        // notify client processes of the new input creation
        thread->audioConfigChanged_l(AudioSystem::INPUT_OPENED);
        return id;
    }
    return 0;
}

Opening an audio input stream is, in essence, the process of creating an AudioStreamIn object and a RecordThread. First, the input stream object legacy_stream_in is created through the abstract audio interface device audio_hw_device_t.

static int adev_open_input_stream(struct audio_hw_device *dev,
                                  audio_io_handle_t handle,
                                  audio_devices_t devices,
                                  struct audio_config *config,
                                  struct audio_stream_in **stream_in)
{
    struct legacy_audio_device *ladev = to_ladev(dev);
    status_t status;
    struct legacy_stream_in *in;
    int ret;
    in = (struct legacy_stream_in *)calloc(1, sizeof(*in));
    if (!in)
        return -ENOMEM;
    devices = convert_audio_device(devices, HAL_API_REV_2_0, HAL_API_REV_1_0);
    in->legacy_in = ladev->hwif->openInputStream(devices, (int *) &config->format,
                                       &config->channel_mask,
                                       &config->sample_rate,
                                       &status, (AudioSystem::audio_in_acoustics)0);
    if (!in->legacy_in) {
        ret = status;
        goto err_open;
    }
    in->stream.common.get_sample_rate = in_get_sample_rate;
	…
    *stream_in = &in->stream;
    return 0;
err_open:
    free(in);
    *stream_in = NULL;
    return ret;
}

 

AudioStreamIn* AudioHardwareStub::openInputStream(
        uint32_t devices, int *format, uint32_t *channels, uint32_t *sampleRate,
        status_t *status, AudioSystem::audio_in_acoustics acoustics)
{
    // check for valid input source
    if (!AudioSystem::isInputDevice((AudioSystem::audio_devices)devices)) {
        return 0;
    }
    AudioStreamInStub* in = new AudioStreamInStub();
    status_t lStatus = in->set(format, channels, sampleRate, acoustics);
    if (status) {
        *status = lStatus;
    }
    if (lStatus == NO_ERROR)
        return in;
    delete in;
    return 0;
}

Opening the audio input creates the following legacy_stream_in object:

After the audio input is opened, its representation inside AudioFlinger and AudioPolicyService is as follows:

When AudioPolicyManagerBase is constructed, it parses the user-supplied audio_policy.conf to determine which audio interfaces exist in the system (primary, a2dp, and usb), then loads the library for each interface via AudioFlinger::loadHwModule and opens its outputs (openOutput) and inputs (openInput) in turn.

-> Opening an audio output establishes an audio_stream_out channel, creates an AudioStreamOut object, and starts a new PlaybackThread playback thread.

-> Opening an audio input establishes an audio_stream_in channel, creates an AudioStreamIn object, and starts a RecordThread recording thread.
