1. Introduction
This article examines the Camera driver and HAL-layer code architecture on the Qualcomm platform, with the goal of becoming familiar with the Qualcomm Camera control flow.

Platform: Qualcomm (Qcom)
HAL version: HAL1

Topics covered, tracing each path from the HAL layer down to the driver layer:
1. The open flow
2. The preview flow
3. The takePicture flow
2. Camera Software Architecture
As the architecture diagram above shows, the Android Camera framework follows a client/service architecture:
1. There are two processes:
**client process:** this can be regarded as the AP side; it consists mainly of Java code plus some native C/C++ code.
**service process:** this is the server side, written in native C/C++; it is responsible for interacting with the camera driver in the Linux kernel, collecting the data the driver sends up, and handing it to the display system (SurfaceFlinger) for display.
The client process and the service process communicate through the Binder mechanism: the client implements each concrete feature by calling the interfaces exposed by the service side (a minimal sketch of this connection follows the list below).
2. At the bottom is the kernel-layer driver. The camera sensor and related drivers are implemented following the V4L2 framework and expose device nodes such as /dev/video0 to user space; these device node files are how the interfaces for operating the devices are exposed to user space.
3. Above that sits the HAL layer. The Qualcomm code implements the basic operations on /dev/video0 and wires them up to Android's camera-related interfaces.
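As a concrete illustration of the client/service split in item 1, here is a minimal sketch of how a client-side proxy to the camera service is obtained over Binder. It follows the standard libbinder/ICameraService pattern of the HAL1 era; the exact connect() signature varies across Android versions, so treat this as an illustrative sketch rather than the framework's verbatim code.

// Hedged sketch: obtain the camera service proxy over Binder (HAL1-era pattern).
#include <binder/IServiceManager.h>
#include <camera/ICameraService.h>
#include <utils/String16.h>

using namespace android;

sp<ICameraService> getCameraServiceSketch() {
    // Look up the "media.camera" service registered by the service process.
    sp<IBinder> binder = defaultServiceManager()->getService(String16("media.camera"));
    // interface_cast turns the raw IBinder into a typed Binder proxy;
    // every call made on it is marshalled to the service process.
    return interface_cast<ICameraService>(binder);
}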
2.1 Camera open flow
2.1.1 HAL layer
The Camera call flow in Android is basically Java -> JNI -> Service -> HAL -> driver.
frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h
status_t initialize(CameraModule *module) {
···
rc = module->open(mName.string(), (hw_device_t **)&mDevice);
···
}
Here module->open is where the call drops into the HAL layer. Which method does it actually call? Let's keep following the code:
hardware/qcom/camera/QCamera2/HAL/wrapper/QualcommCamera.cpp
static hw_module_methods_t camera_module_methods = {
open: camera_device_open,
};
So camera_device_open is what actually gets called. To make the call flow clearer, I drew a flow chart (drawing tool: ProcessOn):
The open flow chart is already quite clear, so let's focus on a few key functions. In the HAL layer, module->open(mName.string(), (hw_device_t **)&mDevice) goes through several layers of calls and eventually reaches mm_camera_open(cam_obj).

hardware/qcom/camera/QCamera2/HAL/core/src/QCameraHWI.cpp
QCameraHardwareInterface::QCameraHardwareInterface(int cameraId, int mode)
{
···
/* Open camera stack! */
mCameraHandle=camera_open(mCameraId, &mem_hooks);
//Preview
result = createPreview();
//Record
result = createRecord();
//Snapshot
result = createSnapshot();
/* launch jpeg notify thread and raw data proc thread */
mNotifyTh = new QCameraCmdThread();
mDataProcTh = new QCameraCmdThread();
···
}
Analysis: new QCameraHardwareInterface() performs the initialization; it mainly does the following:
- 1. Open the camera
- 2. Create the preview stream, record stream, and snapshot stream
- 3. Launch two threads (the jpeg notify thread and the raw data proc thread)
hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera.c
int32_t mm_camera_open(mm_camera_obj_t *my_obj)
{
···
my_obj->ctrl_fd = open(dev_name, O_RDWR | O_NONBLOCK);
···
}
In the V4L2 framework the camera is treated as a video device and is opened with the open() function; note that here the device is opened in non-blocking mode (O_NONBLOCK).
1. Opening the camera device in non-blocking mode:
cameraFd = open("/dev/video0", O_RDWR | O_NONBLOCK);
2. Opening the camera device in blocking mode instead:
cameraFd = open("/dev/video0", O_RDWR);
PS: on blocking vs. non-blocking mode
An application can open a video device in either blocking or non-blocking mode. In non-blocking mode, calls on the device (for example VIDIOC_DQBUF) return to the application immediately even if no frame has been captured yet, instead of sleeping until data arrives.
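To make the difference concrete, here is a minimal sketch using only standard V4L2 calls; the buffer type and memory mode are illustrative (the msm stack uses the multi-planar variants), not this HAL's actual code.

/* Hedged sketch: with O_NONBLOCK, VIDIOC_DQBUF returns immediately with
 * errno == EAGAIN when no filled buffer is available, instead of sleeping. */
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int dequeue_one_frame_nonblocking(void)
{
    int fd = open("/dev/video0", O_RDWR | O_NONBLOCK);
    if (fd < 0)
        return -1;

    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;   /* illustrative; msm uses *_MPLANE */
    buf.memory = V4L2_MEMORY_MMAP;

    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0 && errno == EAGAIN) {
        /* No frame ready yet: typically wait with poll()/select() and retry. */
    }
    return fd;
}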
From here, the call proceeds into the kernel-layer code.
2.1.2 Kernel layer
kernel/drivers/media/platform/msm/camera_v2/msm.c
static struct v4l2_file_operations msm_fops = {
.owner = THIS_MODULE,
.open = msm_open,
.poll = msm_poll,
.release = msm_close,
.ioctl = video_ioctl2,
#ifdef CONFIG_COMPAT
.compat_ioctl32 = video_ioctl2,
#endif
};
So msm_open is the function that actually gets called. Let's step into it:
static int msm_open(struct file *filep)
{
···
/* !!! only ONE open is allowed !!! */
if (atomic_cmpxchg(&pvdev->opened, 0, 1))
return -EBUSY;
spin_lock_irqsave(&msm_pid_lock, flags);
msm_pid = get_pid(task_pid(current));
spin_unlock_irqrestore(&msm_pid_lock, flags);
/* create event queue */
rc = v4l2_fh_open(filep);
if (rc < 0)
return rc;
spin_lock_irqsave(&msm_eventq_lock, flags);
msm_eventq = filep->private_data;
spin_unlock_irqrestore(&msm_eventq_lock, flags);
/* register msm_v4l2_pm_qos_request */
msm_pm_qos_add_request();
···
}
Analysis: the camera is opened by calling v4l2_fh_open, which creates the event queue and performs some other setup.
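The event queue created here is what user space later uses to receive notifications from the driver. Below is a minimal sketch of the user-space side, using only the standard V4L2 event ioctls; the event type is illustrative (the msm stack defines its own private event IDs).

/* Hedged sketch: subscribe to a V4L2 event class and dequeue one event. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int wait_for_one_event(int fd)
{
    struct v4l2_event_subscription sub;
    memset(&sub, 0, sizeof(sub));
    sub.type = V4L2_EVENT_PRIVATE_START;        /* illustrative private event class */
    if (ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub) < 0)
        return -1;

    struct v4l2_event ev;
    memset(&ev, 0, sizeof(ev));
    /* Blocks until the driver queues an event to this file handle's event
     * queue (with O_NONBLOCK it returns immediately if none is pending). */
    return ioctl(fd, VIDIOC_DQEVENT, &ev);
}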
Next, let's follow the log. Camera open log:
<3>[ 12.526811] msm_camera_power_up type 1
<3>[ 12.526818] msm_camera_power_up:1303 gpio set val 33
<3>[ 12.528873] msm_camera_power_up index 6
<3>[ 12.528885] msm_camera_power_up type 1
<3>[ 12.528893] msm_camera_power_up:1303 gpio set val 33
<3>[ 12.534954] msm_camera_power_up index 7
<3>[ 12.534969] msm_camera_power_up type 1
<3>[ 12.534977] msm_camera_power_up:1303 gpio set val 28
<3>[ 12.540162] msm_camera_power_up index 8
<3>[ 12.540177] msm_camera_power_up type 1
···
<3>[ 12.562753] msm_sensor_match_id: read id: 0x5675 expected id 0x5675:
<3>[ 12.562763] ov5675_back probe succeeded
<3>[ 12.562771] msm_sensor_driver_create_i2c_v4l_subdev camera I2c probe succeeded
<3>[ 12.564930] msm_sensor_driver_create_i2c_v4l_subdev rc 0 session_id 1
<3>[ 12.565495] msm_sensor_driver_create_i2c_v4l_subdev:120
<3>[ 12.565507] msm_camera_power_down:1455
<3>[ 12.565514] msm_camera_power_down index 0
Analysis: in the end, msm_camera_power_up powers the sensor up, msm_sensor_match_id reads and checks the sensor ID, the ov5675_back probe function completes the device/driver matching, and msm_camera_power_down powers the sensor back down.
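For reference, the ID check seen in the log usually follows the pattern below: read the chip-ID register over I2C/CCI and compare it with the expected value from the sensor library. This is an illustrative sketch only, not the actual msm_sensor_match_id source; the client type and read helper follow the msm camera_v2 style but are simplified.

/* Hedged sketch of a sensor ID check in the msm camera_v2 style.
 * Assumes the msm i2c client definitions (msm_camera_i2c.h). */
static int sensor_match_id_sketch(struct msm_camera_i2c_client *client,
                                  uint32_t id_reg_addr, uint16_t expected_id)
{
    uint16_t chip_id = 0;
    int rc = client->i2c_func_tbl->i2c_read(client, id_reg_addr,
                                            &chip_id, MSM_CAMERA_I2C_WORD_DATA);
    if (rc < 0)
        return rc;

    pr_err("%s: read id: 0x%x expected id 0x%x\n", __func__, chip_id, expected_id);
    /* Probe fails (and power-down follows) when the IDs do not match. */
    return (chip_id == expected_id) ? 0 : -ENODEV;
}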
That concludes the open flow.
2.2 Camera preview flow
2.2.1 HAL layer
hardware/qcom/camera/QCamera2/HAL/QCamera2HWI.cpp
int QCamera2HardwareInterface::startPreview()
{
···
int32_t rc = NO_ERROR;
···
rc = startChannel(QCAMERA_CH_TYPE_PREVIEW);
···
}
Here startChannel(QCAMERA_CH_TYPE_PREVIEW) is called to start the preview stream. Next is the flow chart I drew (HAL layer):
Let's focus on a few key functions:

hardware/qcom/camera/QCamera2/HAL/QCameraChannel.cpp
int32_t QCameraChannel::start()
{
···
mStreams[i]->start();//流程1
···
rc = m_camOps->start_channel(m_camHandle, m_handle);//流程2
···
}
Entering QCameraChannel::start() kicks off two flows: mStreams[i]->start() and m_camOps->start_channel(m_camHandle, m_handle).
Flow 1: mStreams[i]->start()
1. A new thread is launched via mProcTh.launch(dataProcRoutine, this).
2. The thread runs the CAMERA_CMD_TYPE_DO_NEXT_JOB branch.
3. It takes data out of the mDataQ queue and feeds it to mDataCB, so the data is returned to the corresponding stream callback.
4. Finally, the buffer is handed back to the kernel to request the next frame (see the sketch below).
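The steps above boil down to a classic queue-plus-semaphore worker loop. The sketch below is a simplified illustration of that pattern, not the actual QCameraStream::dataProcRoutine; the StreamSketch/Frame types and member names are stand-ins defined only for this example.

// Hedged sketch of the data-processing loop described in Flow 1.
#include <pthread.h>
#include <semaphore.h>
#include <queue>

struct Frame { int bufIdx; void *data; };

struct StreamSketch {
    sem_t              cmdSem;                 // posted when a new frame/job is available
    std::queue<Frame*> mDataQ;                 // frames already filled by the kernel
    pthread_mutex_t    qLock;                  // protects mDataQ (init with pthread_mutex_init)
    void (*mDataCB)(Frame *frame, void *user); // the stream's data callback
    void  *mUserData;
    void (*bufDone)(int bufIdx);               // re-queues the buffer to the kernel (QBUF)
};

static void *dataProcRoutineSketch(void *userdata)
{
    StreamSketch *s = static_cast<StreamSketch *>(userdata);
    for (;;) {
        sem_wait(&s->cmdSem);                  // woken for CAMERA_CMD_TYPE_DO_NEXT_JOB
        pthread_mutex_lock(&s->qLock);
        Frame *frame = s->mDataQ.empty() ? nullptr : s->mDataQ.front();
        if (frame) s->mDataQ.pop();
        pthread_mutex_unlock(&s->qLock);
        if (!frame)
            continue;
        s->mDataCB(frame, s->mUserData);       // deliver to the stream callback
        s->bufDone(frame->bufIdx);             // hand the buffer back for the next fill
    }
    return nullptr;
}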
Flow 2: m_camOps->start_channel(m_camHandle, m_handle)
As the flow chart shows, after a series of fairly involved calls this ends up in mm_camera_channel.c, where mm_channel_start(mm_channel_t *my_obj) is called.
Let's see what mm_channel_start does:

hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera_channel.c
int32_t mm_channel_start(mm_channel_t *my_obj)
{
···
/* callbacks need to be delivered, so launch the threads */
/* initialize the superbuf queue */
mm_channel_superbuf_queue_init(&my_obj->bundle.superbuf_queue);
/* launch the cb thread that dispatches superbufs via the cb */
snprintf(my_obj->cb_thread.threadName, THREAD_NAME_SIZE, "CAM_SuperBuf");
mm_camera_cmd_thread_launch(&my_obj->cb_thread,
mm_channel_dispatch_super_buf,
(void*)my_obj);
/* launch the cmd thread; it acts as the callback through which the superbuf receives stream data */
snprintf(my_obj->cmd_thread.threadName, THREAD_NAME_SIZE, "CAM_SuperBufCB");
mm_camera_cmd_thread_launch(&my_obj->cmd_thread,
mm_channel_process_stream_buf,
(void*)my_obj);
/* allocate bufs for each stream */
rc = mm_stream_fsm_fn(s_objs[i],
MM_STREAM_EVT_GET_BUF,
NULL,
NULL);
/* reg buf */
rc = mm_stream_fsm_fn(s_objs[i],
MM_STREAM_EVT_REG_BUF,
NULL,
NULL);
/* start the stream */
rc = mm_stream_fsm_fn(s_objs[i],
MM_STREAM_EVT_START,
NULL,
NULL);
···
}
The process includes:
- 1. Creating the cb thread and the cmd thread
- 2. Allocating bufs for each stream
- 3. Starting the streams

Let's continue with the flow after the stream is started: rc = mm_stream_fsm_fn(s_objs[i], MM_STREAM_EVT_START, NULL, NULL) ends up calling rc = mm_stream_fsm_reg(my_obj, evt, in_val, out_val).

hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera_stream.c
int32_t mm_stream_fsm_reg(···)
{
···
case MM_STREAM_EVT_START:
rc = mm_stream_streamon(my_obj);
···
}
For MM_STREAM_EVT_START, mm_stream_streamon(mm_stream_t *my_obj) in mm_camera_stream.c is called; it sends the V4L2 stream-on request to the kernel and then waits for data to come back.
int32_t mm_stream_streamon(mm_stream_t *my_obj)
{
···
enum v4l2_buf_type buf_type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
···
rc = ioctl(my_obj->fd, VIDIOC_STREAMON, &buf_type);
···
}
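Putting the three stream events together: at the plain V4L2 level, GET_BUF / REG_BUF / START roughly correspond to the standard request-buffers / queue-buffers / stream-on sequence. The sketch below uses only standard V4L2 ioctls; the buffer count, memory type, and plane setup are illustrative assumptions, not the mm-camera-interface implementation.

/* Hedged sketch: the generic V4L2 sequence behind GET_BUF / REG_BUF / START. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int start_streaming_sketch(int fd, unsigned int num_planes)
{
    /* GET_BUF: ask the driver for buffer slots (count is illustrative). */
    struct v4l2_requestbuffers req;
    memset(&req, 0, sizeof(req));
    req.count  = 4;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
    req.memory = V4L2_MEMORY_USERPTR;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
        return -1;

    /* REG_BUF: hand each (empty) buffer to the driver so it can be filled. */
    for (unsigned int i = 0; i < req.count; i++) {
        struct v4l2_plane planes[VIDEO_MAX_PLANES];
        struct v4l2_buffer buf;
        memset(planes, 0, sizeof(planes));
        memset(&buf, 0, sizeof(buf));
        buf.index    = i;
        buf.type     = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        buf.memory   = V4L2_MEMORY_USERPTR;
        buf.m.planes = planes;
        buf.length   = num_planes;
        /* planes[p].m.userptr / planes[p].length come from the real allocation. */
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
            return -1;
    }

    /* START: begin streaming (what mm_stream_streamon issues above). */
    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
    return ioctl(fd, VIDIOC_STREAMON, &type);
}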
2.2.2 Kernel layer
kernel/drivers/media/platform/msm/camera_v2/camera/camera.c

The ioctl travels down through several layers and finally reaches camera_v4l2_streamon():
static int camera_v4l2_streamon(struct file *filep, void *fh,
enum v4l2_buf_type buf_type)
{
struct v4l2_event event;
int rc;
struct camera_v4l2_private *sp = fh_to_private(fh);
rc = vb2_streamon(&sp->vb2_q, buf_type);
camera_pack_event(filep, MSM_CAMERA_SET_PARM,
MSM_CAMERA_PRIV_STREAM_ON, -1, &event);
rc = msm_post_event(&event, MSM_POST_EVT_TIMEOUT);
···
rc = camera_check_event_status(&event);
return rc;
}
Analysis: the data request is sent via msm_post_event, which then waits for the data callback.
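"Posting an event and waiting" here roughly means the pattern below: the event is queued to the user-space daemon's file handle with the standard V4L2 event API, and the caller then sleeps on a completion until the daemon answers or a timeout hits. This is an illustrative sketch of the mechanism; the real msm_post_event adds command bookkeeping around it, and the command structure shown is a stand-in.

/* Hedged sketch of the post-event-and-wait pattern in the msm kernel code. */
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-event.h>

struct cmd_sketch {
    struct completion wait_complete;   /* completed when the daemon answers */
};

static int post_event_sketch(struct video_device *vdev,
                             struct v4l2_event *event,
                             struct cmd_sketch *cmd,
                             unsigned int timeout_ms)
{
    init_completion(&cmd->wait_complete);

    /* Queue the event on the daemon's V4L2 file handle; this is what wakes
     * up the user-space side blocked in VIDIOC_DQEVENT / poll(). */
    v4l2_event_queue(vdev, event);

    /* Wait until the daemon processes the command and signals completion. */
    if (!wait_for_completion_timeout(&cmd->wait_complete,
                                     msecs_to_jiffies(timeout_ms)))
        return -ETIMEDOUT;

    return 0;
}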
Complete preview flow chart
This concludes the preview flow.
2.3 Camera takePicture flow
In fact, the takePicture flow is very similar to the preview flow.
We use ZSL (Zero Shutter Lag) mode as the entry point:
2.3.1 HAL layer
hardware/qcom/camera/QCamera2/HAL/QCamera2HWI.cpp
int QCamera2HardwareInterface::takePicture()
{
···
//Flow 1
mCameraHandle->ops->start_zsl_snapshot(mCameraHandle->camera_handle,
pZSLChannel->getMyHandle());
···
//Flow 2
rc = pZSLChannel->takePicture(numSnapshots);
···
}
After entering QCamera2HardwareInterface::takePicture, two flows are executed:
1. mCameraHandle->ops->start_zsl_snapshot(···);
2. pZSLChannel->takePicture(numSnapshots);
Flow 1:
After several layers of calls this ends up in mm_channel_start_zsl_snapshot:

hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera_channel.c
int32_t mm_channel_start_zsl_snapshot(mm_channel_t *my_obj)
{
int32_t rc = 0;
mm_camera_cmdcb_t* node = NULL;
node = (mm_camera_cmdcb_t *)malloc(sizeof(mm_camera_cmdcb_t));
if (NULL != node) {
memset(node, 0, sizeof(mm_camera_cmdcb_t));
node->cmd_type = MM_CAMERA_CMD_TYPE_START_ZSL;
/* enqueue to cmd thread */
cam_queue_enq(&(my_obj->cmd_thread.cmd_queue), node);
/* wake up cmd thread */
cam_sem_post(&(my_obj->cmd_thread.cmd_sem));
} else {
CDBG_ERROR("%s: No memory for mm_camera_node_t", __func__);
rc = -1;
}
return rc;
}
Analysis: this function does two things:
- 1. Enqueues the node with cam_queue_enq(&(my_obj->cmd_thread.cmd_queue), node);
- 2. Wakes up the cmd thread with cam_sem_post(&(my_obj->cmd_thread.cmd_sem));
Here node->cmd_type = MM_CAMERA_CMD_TYPE_START_ZSL.
hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera_thread.c
static void *mm_camera_cmd_thread(void *data)
{
···
case MM_CAMERA_CMD_TYPE_START_ZSL:
cmd_thread->cb(node, cmd_thread->user_data);
···
}
Here cmd_thread->cb is the callback function: cmd_thread->cb = mm_channel_process_stream_buf. After a further chain of callbacks it reaches mm_channel_superbuf_skip(ch_obj, &ch_obj->bundle.superbuf_queue) and super_buf = (mm_channel_queue_node_t*)node->data: the buffer is taken out, the node is released from the list, and finally the buffer is queued back to the kernel for the next fill.
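Conceptually, the ZSL "super buffer" is a set of buffers from the bundled streams (for example preview plus snapshot) that carry the same frame index: on a capture request the newest complete set is dispatched, and anything skipped or consumed is queued back to the kernel. The sketch below illustrates only that matching idea; the structures and names are assumptions made for the example, not the mm_camera_channel.c implementation.

/* Hedged sketch of matching bundled-stream buffers into a ZSL super buffer. */
#define SB_NUM_STREAMS 2          /* e.g. preview + snapshot (illustrative) */

struct super_buf_sketch {
    unsigned int frame_idx;                 /* frame id shared by the bundle */
    void        *bufs[SB_NUM_STREAMS];      /* one buffer per bundled stream */
    int          num_filled;
};

/* Called from the cmd thread for every buffer a stream delivers. */
static void superbuf_collect_sketch(struct super_buf_sketch *queue, int q_len,
                                    unsigned int frame_idx, int stream_slot,
                                    void *buf)
{
    struct super_buf_sketch *sb = &queue[frame_idx % q_len];

    if (sb->frame_idx != frame_idx) {       /* slot reused for a newer frame */
        sb->frame_idx  = frame_idx;
        sb->num_filled = 0;
    }
    sb->bufs[stream_slot] = buf;

    if (++sb->num_filled == SB_NUM_STREAMS) {
        /* Complete set: dispatch it to the snapshot callback if a picture is
         * pending, otherwise recycle the buffers back to the kernel (QBUF). */
    }
}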
Flow 2:
Likewise, after several layers of calls this reaches mm_channel_request_super_buf:
hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera_channel.c
int32_t mm_channel_request_super_buf(mm_channel_t *my_obj, uint32_t num_buf_requested)
{
/* set pending_cnt
* will trigger dispatching super frames if pending_cnt > 0 */
/* send cam_sem_post to wake up cmd thread to dispatch super buffer */
node = (mm_camera_cmdcb_t *)malloc(sizeof(mm_camera_cmdcb_t));
if (NULL != node) {
memset(node, 0, sizeof(mm_camera_cmdcb_t));
node->cmd_type = MM_CAMERA_CMD_TYPE_REQ_DATA_CB;
node->u.req_buf.num_buf_requested = num_buf_requested;
/* enqueue to cmd thread */
cam_queue_enq(&(my_obj->cmd_thread.cmd_queue), node);
/* wake up cmd thread */
cam_sem_post(&(my_obj->cmd_thread.cmd_sem));
} else {
CDBG_ERROR("%s: No memory for mm_camera_node_t", __func__);
rc = -1;
}
return rc;
}
Analysis: this function does the same as flow 1:
- 1. Enqueues the node with cam_queue_enq(&(my_obj->cmd_thread.cmd_queue), node);
- 2. Wakes up the cmd thread with cam_sem_post(&(my_obj->cmd_thread.cmd_sem));
static void *mm_camera_cmd_thread(void *data)
{
···
case MM_CAMERA_CMD_TYPE_START_ZSL:
case MM_CAMERA_CMD_TYPE_REQ_DATA_CB:
cmd_thread->cb(node, cmd_thread->user_data);
···
}
This is handled the same way as flow 1, so we won't repeat it.
2.3.2 Kernel layer

On the mm-camera-interface side, the request is turned into a V4L2 control and sent down to the kernel:
int32_t mm_camera_start_zsl_snapshot(mm_camera_obj_t *my_obj)
{
···
rc = mm_camera_util_s_ctrl(my_obj->ctrl_fd,
CAM_PRIV_START_ZSL_SNAPSHOT, &value);
···
}
int32_t mm_camera_util_s_ctrl(int32_t fd, uint32_t id, int32_t *value)
{
···
rc = ioctl(fd, VIDIOC_S_CTRL, &control);
···
}
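For context, the standard way to issue VIDIOC_S_CTRL from user space looks like the sketch below; it uses only standard V4L2 types, the control ID shown is the private one passed down in this flow, and the real helper's elided body also does additional error and status handling.

/* Hedged sketch: generic user-space VIDIOC_S_CTRL call. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int s_ctrl_sketch(int fd, unsigned int id, int *value)
{
    struct v4l2_control control;
    memset(&control, 0, sizeof(control));
    control.id    = id;        /* e.g. CAM_PRIV_START_ZSL_SNAPSHOT */
    control.value = *value;

    int rc = ioctl(fd, VIDIOC_S_CTRL, &control);
    if (rc >= 0)
        *value = control.value;   /* the driver may update the value */
    return rc;
}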
kernel/drivers/media/v4l2-core/v4l2-subdev.c
static long subdev_do_ioctl(struct file *file, unsigned int cmd, void *arg)
{
···
case VIDIOC_S_CTRL:
return v4l2_s_ctrl(vfh, vfh->ctrl_handler, arg);
···
}
Via ioctl(fd, VIDIOC_S_CTRL, &control) and the V4L2 framework, the call reaches the kernel layer; in the end the buffer is queued back to the kernel for the next fill.
Complete takePicture flow chart