1. Introduction
The multimedia subsystem offers developers a wide range of media-related capabilities. This article takes a close look at one of them: video recording. Using the video recording test code shipped with the multimedia subsystem as the entry point, we will walk through the entire recording flow.
2. Directory Structure
foundation/multimedia/camera_framework
├── frameworks
│ ├── js
│ │ └── camera_napi #N-API implementation
│ │ └── src
│ │ ├── input #camera input
│ │ ├── output #camera output
│ │ └── session #session management
│ └── native #native implementation
│ └── camera
│ ├── BUILD.gn
│ ├── src
│ │ ├── input #camera input
│ │ ├── output #camera output
│ │ └── session #session management
├── interfaces #interface definitions
│ ├── inner_api #inner native APIs
│ │ └── native
│ │ ├── camera
│ │ │ └── include
│ │ │ ├── input
│ │ │ ├── output
│ │ │ └── session
│ └── kits #N-API interfaces
│ └── js
│ └── camera_napi
│ ├── BUILD.gn
│ ├── include
│ │ ├── input
│ │ ├── output
│ │ └── session
│ └── @ohos.multimedia.camera.d.ts
└── services #service side
└── camera_service
├── binder
│ ├── base
│ ├── client #IPC client
│ │ └── src
│ └── server #IPC server
│ └── src
└── src
3. Overall Recording Flow
4. Using the Native APIs
In the OpenAtom OpenHarmony (hereinafter "OpenHarmony") system, the multimedia subsystem is exposed to upper-layer JS through N-API interfaces; N-API acts as the bridge between JS and native code. The OpenHarmony source tree also provides examples that call the video recording capability directly from C++, located in the foundation/multimedia/camera_framework/interfaces/inner_api/native/test directory. This article mainly follows the video recording flow in the camera_video.cpp file there.
Let's start with the main() function of camera_video.cpp to get an overview of the video recording flow.
int main(int argc, char **argv)
{
    ......
    // Create the CameraManager instance
    sptr<CameraManager> camManagerObj = CameraManager::GetInstance();
    // Set the callback
    camManagerObj->SetCallback(std::make_shared<TestCameraMngerCallback>(testName));
    // Get the list of supported camera devices
    std::vector<sptr<CameraDevice>> cameraObjList = camManagerObj->GetSupportedCameras();
    // Create a capture session
    sptr<CaptureSession> captureSession = camManagerObj->CreateCaptureSession();
    // Begin configuring the capture session
    captureSession->BeginConfig();
    // Create the CameraInput
    sptr<CaptureInput> captureInput = camManagerObj->CreateCameraInput(cameraObjList[0]);
    sptr<CameraInput> cameraInput = (sptr<CameraInput> &)captureInput;
    // Open the CameraInput
    cameraInput->Open();
    // Set the error callback on the CameraInput
    cameraInput->SetErrorCallback(std::make_shared<TestDeviceCallback>(testName));
    // Add the CameraInput instance to the capture session
    ret = captureSession->AddInput(cameraInput);
    sptr<Surface> videoSurface = nullptr;
    std::shared_ptr<Recorder> recorder = nullptr;
    // Create the video Surface
    videoSurface = Surface::CreateSurfaceAsConsumer();
    sptr<SurfaceListener> videoListener = new SurfaceListener("Video", SurfaceType::VIDEO, g_videoFd, videoSurface);
    // Register the Surface consumer listener
    videoSurface->RegisterConsumerListener((sptr<IBufferConsumerListener> &)videoListener);
    // Video profile configuration
    VideoProfile videoprofile = VideoProfile(static_cast<CameraFormat>(videoFormat), videosize, videoframerates);
    // Create the VideoOutput instance
    sptr<CaptureOutput> videoOutput = camManagerObj->CreateVideoOutput(videoprofile, videoSurface);
    // Set the VideoOutput callback
    ((sptr<VideoOutput> &)videoOutput)->SetCallback(std::make_shared<TestVideoOutputCallback>(testName));
    // Add the videoOutput to the capture session
    ret = captureSession->AddOutput(videoOutput);
    // Commit the session configuration
    ret = captureSession->CommitConfig();
    // Start recording
    ret = ((sptr<VideoOutput> &)videoOutput)->Start();
    sleep(videoPauseDuration);
    MEDIA_DEBUG_LOG("Resume video recording");
    // Resume recording
    ret = ((sptr<VideoOutput> &)videoOutput)->Resume();
    MEDIA_DEBUG_LOG("Wait for 5 seconds before stop");
    sleep(videoCaptureDuration);
    MEDIA_DEBUG_LOG("Stop video recording");
    // Stop recording
    ret = ((sptr<VideoOutput> &)videoOutput)->Stop();
    MEDIA_DEBUG_LOG("Closing the session");
    // Stop the capture session
    ret = captureSession->Stop();
    MEDIA_DEBUG_LOG("Releasing the session");
    // Release the capture session
    captureSession->Release();
    // Close video file
    TestUtils::SaveVideoFile(nullptr, 0, VideoSaveMode::CLOSE, g_videoFd);
    cameraInput->Release();
    camManagerObj->SetCallback(nullptr);
    return 0;
}
That is the overall video recording flow. It is implemented on top of the capabilities of the Camera module and involves several key classes: CaptureSession, CameraInput, and VideoOutput. CaptureSession orchestrates the whole process, while CameraInput and VideoOutput represent the device's input and output respectively.
5. Call Flow
The following sections trace the calls made above in detail, so that we can build a deeper understanding of the overall video recording architecture.
Creating the CameraManager instance
CameraManager::GetInstance() returns the CameraManager instance through which the subsequent interfaces are called. GetInstance uses the singleton pattern, a very common approach in OpenHarmony code.
sptr<CameraManager> &CameraManager::GetInstance()
{
    if (CameraManager::cameraManager_ == nullptr) {
        MEDIA_INFO_LOG("Initializing camera manager for first time!");
        CameraManager::cameraManager_ = new(std::nothrow) CameraManager();
        if (CameraManager::cameraManager_ == nullptr) {
            MEDIA_ERR_LOG("CameraManager::GetInstance failed to new CameraManager");
        }
    }
    return CameraManager::cameraManager_;
}
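Note that the check-then-create above is not guarded by a lock. For comparison only, here is a minimal illustrative sketch (not OpenHarmony code) of the C++11 function-local-static idiom, which gives the same lazy singleton with initialization thread safety guaranteed by the language:

#include <iostream>

class Manager {
public:
    static Manager &GetInstance()
    {
        // Initialization of a function-local static is thread-safe since C++11.
        static Manager instance;
        return instance;
    }
private:
    Manager() { std::cout << "Initializing manager for first time!" << std::endl; }
};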
Getting the list of supported camera devices
Calling CameraManager's GetSupportedCameras() interface returns the list of CameraDevice objects the device supports. Tracing the code shows that serviceProxy_->GetCameras eventually reaches the corresponding interface on the camera service side.
std::vector<sptr<CameraDevice>> CameraManager::GetSupportedCameras()
{
    CAMERA_SYNC_TRACE;
    std::lock_guard<std::mutex> lock(mutex_);
    std::vector<std::string> cameraIds;
    std::vector<std::shared_ptr<Camera::CameraMetadata>> cameraAbilityList;
    int32_t retCode = -1;
    sptr<CameraDevice> cameraObj = nullptr;
    int32_t index = 0;
    if (cameraObjList.size() > 0) {
        cameraObjList.clear();
    }
    if (serviceProxy_ == nullptr) {
        MEDIA_ERR_LOG("CameraManager::GetCameras serviceProxy_ is null, returning empty list!");
        return cameraObjList;
    }
    std::vector<sptr<CameraDevice>> supportedCameras;
    retCode = serviceProxy_->GetCameras(cameraIds, cameraAbilityList);
    if (retCode == CAMERA_OK) {
        for (auto& it : cameraIds) {
            cameraObj = new(std::nothrow) CameraDevice(it, cameraAbilityList[index++]);
            if (cameraObj == nullptr) {
                MEDIA_ERR_LOG("CameraManager::GetCameras new CameraDevice failed for id={public}%s", it.c_str());
                continue;
            }
            supportedCameras.emplace_back(cameraObj);
        }
    } else {
        MEDIA_ERR_LOG("CameraManager::GetCameras failed!, retCode: %{public}d", retCode);
    }
    ChooseDeFaultCameras(supportedCameras);
    return cameraObjList;
}
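As a quick illustrative usage sketch (GetID() is the same accessor used later in CreateCameraInput), the returned list can be enumerated like this:

// Enumerate the supported cameras by ID; cameraObjList[0] is the device
// that main() above passes to CreateCameraInput.
sptr<CameraManager> camManagerObj = CameraManager::GetInstance();
std::vector<sptr<CameraDevice>> cameraObjList = camManagerObj->GetSupportedCameras();
for (const auto &camera : cameraObjList) {
    MEDIA_INFO_LOG("supported camera id: %{public}s", camera->GetID().c_str());
}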
Creating the capture session
Next comes one of the more important steps: creating a capture session via CameraManager's CreateCaptureSession interface. CameraManager does this through serviceProxy_->CreateCaptureSession, which involves OpenHarmony IPC: serviceProxy_ is the local proxy of the remote service, and calls made through it reach the concrete service side, here HCameraService.
sptr<CaptureSession> CameraManager::CreateCaptureSession()
{
    CAMERA_SYNC_TRACE;
    sptr<ICaptureSession> captureSession = nullptr;
    sptr<CaptureSession> result = nullptr;
    int32_t retCode = CAMERA_OK;
    if (serviceProxy_ == nullptr) {
        MEDIA_ERR_LOG("CameraManager::CreateCaptureSession serviceProxy_ is null");
        return nullptr;
    }
    retCode = serviceProxy_->CreateCaptureSession(captureSession);
    if (retCode == CAMERA_OK && captureSession != nullptr) {
        result = new(std::nothrow) CaptureSession(captureSession);
        if (result == nullptr) {
            MEDIA_ERR_LOG("Failed to new CaptureSession");
        }
    } else {
        MEDIA_ERR_LOG("Failed to get capture session object from hcamera service!, %{public}d", retCode);
    }
    return result;
}
The code eventually arrives at HCameraService::CreateCaptureSession, which news an HCaptureSession object and hands it back through the session parameter. So the captureSession obtained earlier is exactly the HCaptureSession created here, and CameraManager::CreateCaptureSession wraps it in a CaptureSession object returned to the application layer. (A sketch of the proxy side of this IPC call follows the service code below.)
int32_t HCameraService::CreateCaptureSession(sptr<ICaptureSession> &session)
{
CAMERA_SYNC_TRACE;
sptr<HCaptureSession> captureSession;
if (streamOperatorCallback_ == nullptr) {
streamOperatorCallback_ = new(std::nothrow) StreamOperatorCallback();
if (streamOperatorCallback_ == nullptr) {
MEDIA_ERR_LOG("HCameraService::CreateCaptureSession streamOperatorCallback_ allocation failed");
return CAMERA_ALLOC_ERROR;
}
}
std::lock_guard<std::mutex> lock(mutex_);
OHOS::Security::AccessToken::AccessTokenID callerToken = IPCSkeleton::GetCallingTokenID();
captureSession = new(std::nothrow) HCaptureSession(cameraHostManager_, streamOperatorCallback_, callerToken);
if (captureSession == nullptr) {
MEDIA_ERR_LOG("HCameraService::CreateCaptureSession HCaptureSession allocation failed");
return CAMERA_ALLOC_ERROR;
}
session = captureSession;
return CAMERA_OK;
}
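For readers unfamiliar with OpenHarmony IPC, here is a simplified sketch of what the proxy side of serviceProxy_->CreateCaptureSession typically looks like. This is illustrative only: the request code constant and error handling are assumptions, not the actual HCameraServiceProxy source.

int32_t HCameraServiceProxy::CreateCaptureSession(sptr<ICaptureSession> &session)
{
    MessageParcel data;
    MessageParcel reply;
    MessageOption option;
    // The interface token lets the service verify the caller's identity.
    data.WriteInterfaceToken(GetDescriptor());
    // Send the request across the IPC boundary to HCameraService.
    int32_t error = Remote()->SendRequest(CAMERA_SERVICE_CREATE_CAPTURE_SESSION, data, reply, option);
    if (error != ERR_NONE) {
        return error;
    }
    // The service side wrote the new HCaptureSession as a remote object;
    // cast it back to the ICaptureSession interface for local use.
    sptr<IRemoteObject> remoteObject = reply.ReadRemoteObject();
    session = iface_cast<ICaptureSession>(remoteObject);
    return reply.ReadInt32();
}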
Beginning capture session configuration
CaptureSession's BeginConfig starts the configuration of the capture session. This call eventually lands in the wrapped HCaptureSession.
int32_t HCaptureSession::BeginConfig()
{
    CAMERA_SYNC_TRACE;
    if (curState_ == CaptureSessionState::SESSION_CONFIG_INPROGRESS) {
        MEDIA_ERR_LOG("HCaptureSession::BeginConfig Already in config inprogress state!");
        return CAMERA_INVALID_STATE;
    }
    std::lock_guard<std::mutex> lock(sessionLock_);
    prevState_ = curState_;
    curState_ = CaptureSessionState::SESSION_CONFIG_INPROGRESS;
    tempCameraDevices_.clear();
    tempStreams_.clear();
    deletedStreamIds_.clear();
    return CAMERA_OK;
}
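BeginConfig drives a small state machine: inputs and outputs may only be added while the session is in the config-in-progress state, and CommitConfig later moves it out of that state. A sketch of the lifecycle (SESSION_CONFIG_INPROGRESS appears in the code above; the other state names are assumptions):

enum class CaptureSessionState {
    SESSION_INIT,               // session created, not yet being configured
    SESSION_CONFIG_INPROGRESS,  // between BeginConfig() and CommitConfig(); AddInput/AddOutput allowed
    SESSION_CONFIG_COMMITTED    // configuration committed; Start()/Stop() allowed
};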
Creating the CameraInput
The application creates the CameraInput via camManagerObj->CreateCameraInput(cameraObjList[0]), where cameraObjList[0] is the first of the supported devices obtained earlier. A CameraInput object is created for the given CameraDevice.
sptr<CameraInput> CameraManager::CreateCameraInput(sptr<CameraDevice> &camera)
{
    CAMERA_SYNC_TRACE;
    sptr<CameraInput> cameraInput = nullptr;
    sptr<ICameraDeviceService> deviceObj = nullptr;
    if (camera != nullptr) {
        deviceObj = CreateCameraDevice(camera->GetID());
        if (deviceObj != nullptr) {
            cameraInput = new(std::nothrow) CameraInput(deviceObj, camera);
            if (cameraInput == nullptr) {
                MEDIA_ERR_LOG("failed to new CameraInput Returning null in CreateCameraInput");
                return cameraInput;
            }
        } else {
            MEDIA_ERR_LOG("Returning null in CreateCameraInput");
        }
    } else {
        MEDIA_ERR_LOG("CameraManager::CreateCameraInput: Camera object is null");
    }
    return cameraInput;
}
Opening the CameraInput
CameraInput's Open method is called to open and start the input device.
void CameraInput::Open()
{
int32_t retCode = deviceObj_->Open();
if (retCode != CAMERA_OK) {
MEDIA_ERR_LOG("Failed to open Camera Input, retCode: %{public}d", retCode);
}
}
Adding the CameraInput instance to the capture session
captureSession's AddInput method adds the created CameraInput object as the session's input, so the capture session knows which device to capture from.
int32_t CaptureSession::AddInput(sptr<CaptureInput> &input)
{
    CAMERA_SYNC_TRACE;
    if (input == nullptr) {
        MEDIA_ERR_LOG("CaptureSession::AddInput input is null");
        return CAMERA_INVALID_ARG;
    }
    input->SetSession(this);
    inputDevice_ = input;
    return captureSession_->AddInput(((sptr<CameraInput> &)input)->GetCameraDevice());
}
The call eventually reaches HCaptureSession's AddInput method, whose core line is tempCameraDevices_.emplace_back(localCameraDevice), inserting the CameraDevice to be added into the tempCameraDevices_ container.
int32_t HCaptureSession::AddInput(sptr<ICameraDeviceService> cameraDevice)
{
CAMERA_SYNC_TRACE;
sptr<HCameraDevice> localCameraDevice = nullptr;
if (cameraDevice == nullptr) {
MEDIA_ERR_LOG("HCaptureSession::AddInput cameraDevice is null");
return CAMERA_INVALID_ARG;
}
if (curState_ != CaptureSessionState::SESSION_CONFIG_INPROGRESS) {
MEDIA_ERR_LOG("HCaptureSession::AddInput Need to call BeginConfig before adding input");
return CAMERA_INVALID_STATE;
}
if (!tempCameraDevices_.empty() || (cameraDevice_ != nullptr && !cameraDevice_->IsReleaseCameraDevice())) {
MEDIA_ERR_LOG("HCaptureSession::AddInput Only one input is supported");
return CAMERA_INVALID_SESSION_CFG;
}
localCameraDevice = static_cast<HCameraDevice*>(cameraDevice.GetRefPtr());
if (cameraDevice_ == localCameraDevice) {
cameraDevice_->SetReleaseCameraDevice(false);
} else {
tempCameraDevices_.emplace_back(localCameraDevice);
CAMERA_SYSEVENT_STATISTIC(CreateMsg("CaptureSession::AddInput"));
}
sptr<IStreamOperator> streamOperator;
int32_t rc = localCameraDevice->GetStreamOperator(streamOperatorCallback_, streamOperator);
if (rc != CAMERA_OK) {
MEDIA_ERR_LOG("HCaptureSession::GetCameraDevice GetStreamOperator returned %{public}d", rc);
localCameraDevice->Close();
return rc;
}
return CAMERA_OK;
}
Creating the video Surface
The Surface is created through Surface::CreateSurfaceAsConsumer.
sptr<Surface> Surface::CreateSurfaceAsConsumer(std::string name, bool isShared)
{
    sptr<ConsumerSurface> surf = new ConsumerSurface(name, isShared);
    GSError ret = surf->Init();
    if (ret != GSERROR_OK) {
        BLOGE("Failure, Reason: consumer surf init failed");
        return nullptr;
    }
    return surf;
}
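main() registers a SurfaceListener on this consumer surface; its implementation is not quoted in this article. As a rough, illustrative sketch (assuming the AcquireBuffer/ReleaseBuffer consumer interface; this is not the actual test code), such a listener typically looks like this:

// Illustrative consumer-side listener: each time the producer (the camera
// video stream) queues a buffer, acquire it, persist the frame, release it.
class VideoSurfaceListener : public IBufferConsumerListener {
public:
    explicit VideoSurfaceListener(sptr<Surface> surface) : surface_(surface) {}

    void OnBufferAvailable() override
    {
        int32_t flushFence = 0;
        int64_t timestamp = 0;
        OHOS::Rect damage;
        sptr<SurfaceBuffer> buffer = nullptr;
        surface_->AcquireBuffer(buffer, flushFence, timestamp, damage);
        if (buffer != nullptr) {
            // buffer->GetVirAddr()/GetSize(): this is where the frame bytes
            // would be appended to the output file.
            surface_->ReleaseBuffer(buffer, -1);
        }
    }

private:
    sptr<Surface> surface_;
};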
Creating the VideoOutput instance
CameraManager's CreateVideoOutput creates the VideoOutput instance.
sptr<VideoOutput> CameraManager::CreateVideoOutput(VideoProfile &profile, sptr<Surface> &surface)
{
    CAMERA_SYNC_TRACE;
    sptr<IStreamRepeat> streamRepeat = nullptr;
    sptr<VideoOutput> result = nullptr;
    int32_t retCode = CAMERA_OK;
    camera_format_t metaFormat;
    metaFormat = GetCameraMetadataFormat(profile.GetCameraFormat());
    retCode = serviceProxy_->CreateVideoOutput(surface->GetProducer(), metaFormat,
        profile.GetSize().width, profile.GetSize().height, streamRepeat);
    if (retCode == CAMERA_OK) {
        result = new(std::nothrow) VideoOutput(streamRepeat);
        if (result == nullptr) {
            MEDIA_ERR_LOG("Failed to new VideoOutput");
        } else {
            std::vector<int32_t> videoFrameRates = profile.GetFrameRates();
            if (videoFrameRates.size() >= 2) { // vaild frame rate range length is 2
                result->SetFrameRateRange(videoFrameRates[0], videoFrameRates[1]);
            }
            POWERMGR_SYSEVENT_CAMERA_CONFIG(VIDEO, profile.GetSize().width, profile.GetSize().height);
        }
    } else {
        MEDIA_ERR_LOG("VideoOutpout: Failed to get stream repeat object from hcamera service! %{public}d", retCode);
    }
    return result;
}
Through IPC, this method eventually reaches HCameraService::CreateVideoOutput(surface->GetProducer(), format, streamRepeat).
HCameraService's CreateVideoOutput mainly creates an HStreamRepeat and passes it back through the out parameter to CameraManager, which wraps the returned HStreamRepeat object to construct the VideoOutput.
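The service-side implementation is not quoted here; as a minimal illustrative sketch based on the description above (the HStreamRepeat constructor signature is an assumption, not the actual source):

// Illustrative sketch only: create the repeating stream for video and hand
// it back through the out parameter, mirroring the description above.
int32_t HCameraService::CreateVideoOutput(const sptr<OHOS::IBufferProducer> &producer, int32_t format,
                                          int32_t width, int32_t height, sptr<IStreamRepeat> &videoOutput)
{
    sptr<HStreamRepeat> streamRepeat = new(std::nothrow) HStreamRepeat(producer, format, width, height, true);
    if (streamRepeat == nullptr) {
        MEDIA_ERR_LOG("HCameraService::CreateVideoOutput HStreamRepeat allocation failed");
        return CAMERA_ALLOC_ERROR;
    }
    videoOutput = streamRepeat;
    return CAMERA_OK;
}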
Adding the videoOutput to the capture session and committing the configuration
This step mirrors the earlier process of adding the CameraInput to the capture session; refer to that flow above.
Starting recording
Recording starts through VideoOutput's Start.
int32_t VideoOutput::Start()
{
    return static_cast<IStreamRepeat *>(GetStream().GetRefPtr())->Start();
}
This in turn calls HStreamRepeat's Start method.
int32_t HStreamRepeat::Start()
{
CAMERA_SYNC_TRACE;
if (streamOperator_ == nullptr) {
return CAMERA_INVALID_STATE;
}
if (curCaptureID_ != 0) {
MEDIA_ERR_LOG("HStreamRepeat::Start, Already started with captureID: %{public}d", curCaptureID_);
return CAMERA_INVALID_STATE;
}
int32_t ret = AllocateCaptureId(curCaptureID_);
if (ret != CAMERA_OK) {
MEDIA_ERR_LOG("HStreamRepeat::Start Failed to allocate a captureId");
return ret;
}
std::vector<uint8_t> ability;
OHOS::Camera::MetadataUtils::ConvertMetadataToVec(cameraAbility_, ability);
CaptureInfo captureInfo;
captureInfo.streamIds_ = {streamId_};
captureInfo.captureSetting_ = ability;
captureInfo.enableShutterCallback_ = false;
MEDIA_INFO_LOG("HStreamRepeat::Start Starting with capture ID: %{public}d", curCaptureID_);
CamRetCode rc = (CamRetCode)(streamOperator_->Capture(curCaptureID_, captureInfo, true));
if (rc != HDI::Camera::V1_0::NO_ERROR) {
ReleaseCaptureId(curCaptureID_);
curCaptureID_ = 0;
MEDIA_ERR_LOG("HStreamRepeat::Start Failed with error Code:%{public}d", rc);
ret = HdiToServiceError(rc);
}
return ret;
}
The core call is streamOperator_->Capture, whose last parameter, true, indicates continuous (streaming) capture.
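For contrast, an illustrative sketch of the same HDI call in both modes (the photo case is an assumption based on the isStreaming semantics, not code from this file):

// isStreaming = true: repeating capture; frames keep flowing until the
// stream is stopped (preview/video).
streamOperator_->Capture(videoCaptureId, captureInfo, true);

// isStreaming = false: one-shot capture, as used for taking a photo.
streamOperator_->Capture(photoCaptureId, captureInfo, false);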
Finishing recording and saving the recorded file
This corresponds to the TestUtils::SaveVideoFile(nullptr, 0, VideoSaveMode::CLOSE, g_videoFd) call in main(), which closes the video file after recording stops and the session is released.
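TestUtils::SaveVideoFile itself is not quoted in this article. A hypothetical sketch following the (buffer, size, mode, fd) contract visible in main() might look like this (the mode names other than CLOSE and the output path are assumptions):

#include <fcntl.h>
#include <unistd.h>
#include <cstdint>

enum class VideoSaveMode { CREATE, APPEND, CLOSE };  // illustrative

int32_t SaveVideoFile(const uint8_t *buffer, size_t size, VideoSaveMode mode, int32_t &fd)
{
    switch (mode) {
        case VideoSaveMode::CREATE:
            // Open the output file once before recording starts.
            fd = open("/data/video.h264", O_RDWR | O_CREAT | O_TRUNC, 0644);  // path is illustrative
            return (fd < 0) ? -1 : 0;
        case VideoSaveMode::APPEND:
            // Append one frame's bytes, as delivered by the surface listener.
            if (fd >= 0 && buffer != nullptr && size > 0) {
                write(fd, buffer, size);
            }
            return 0;
        case VideoSaveMode::CLOSE:
            // main() calls this after Stop()/Release() to close the file.
            if (fd >= 0) {
                close(fd);
                fd = -1;
            }
            return 0;
    }
    return 0;
}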
6. Summary
This article introduced video recording in the OpenHarmony 3.2 Beta multimedia subsystem, first outlining the overall recording flow and then analyzing its main steps in detail. Video recording breaks down into the following steps:
(1) Get the CameraManager instance.
(2) Create the capture session (CaptureSession).
(3) Create a CameraInput instance and add the input device to the CaptureSession.
(4) Create the Surface needed for video recording.
(5) Create a VideoOutput instance and add the output to the CaptureSession.
(6) Commit the capture session configuration.
(7) Call VideoOutput's Start method to record the video.
(8) Finish recording and save the recorded file.
On OpenHarmony 3.2 Beta multimedia development, I have previously shared:
"OpenHarmony 3.2 Beta Source Code Analysis: MediaLibrary"
"OpenHarmony 3.2 Beta Multimedia Series: The Audio/Video Playback Framework"
"OpenHarmony 3.2 Beta Multimedia Series: Audio/Video Playback with GStreamer"
Interested developers are welcome to read them.