Saving the Camera's H264 Video Stream on iOS
Below we walk through capturing camera data and saving it as an H264-format file.
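The snippets throughout this post read and write a handful of properties. As a minimal sketch of the declarations they assume (the exact interface lives in the linked Demo, where the capture code in step 1 sits in a view controller and the encoder is the ESCSaveToH264FileTool class; for brevity this sketch puts everything on the encoder class):

#import <AVFoundation/AVFoundation.h>
#import <VideoToolbox/VideoToolbox.h>

// Assumed class extension; names are taken from the code below.
@interface ESCSaveToH264FileTool () <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *captureSession;
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoDataOutput;
@property (nonatomic, strong) dispatch_queue_t videoDataOutputQueue;
@property (nonatomic, assign) VTCompressionSessionRef EncodingSession;
@property (nonatomic, assign) int64_t frameID;
@property (nonatomic, assign) NSInteger width;
@property (nonatomic, assign) NSInteger height;
@property (nonatomic, assign) NSInteger frameRate;
@property (nonatomic, assign) BOOL initComplete;
@property (nonatomic, strong) NSFileHandle *fileHandle;
@end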
1. Initialize the camera capture code
{
    // Create an AVCaptureDevice object for the video device
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error;
    // Create the video input
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    if (error) {
        NSLog(@"Failed to create the input: %@", error);
        return;
    }
    // Create the capture session
    self.captureSession = [[AVCaptureSession alloc] init];
    // Set the session's output resolution
    [self.captureSession setSessionPreset:AVCaptureSessionPreset1280x720];
    // Add the input
    if (![self.captureSession canAddInput:input]) {
        NSLog(@"Failed to add the input");
        return;
    }
    [self.captureSession addInput:input];
    // Display what the camera captures
    AVCaptureVideoPreviewLayer *layer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
    layer.frame = CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height - 100);
    [self.view.layer addSublayer:layer];
    // Create the output
    AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Add the output to the session
    if ([self.captureSession canAddOutput:videoDataOutput]) {
        [self.captureSession addOutput:videoDataOutput];
        self.videoDataOutput = videoDataOutput;
        // Create the queue the output calls back on
        dispatch_queue_t videoDataOutputQueue = dispatch_queue_create("videoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
        self.videoDataOutputQueue = videoDataOutputQueue;
        // Set the delegate and the queue it is called on
        [self.videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
        // Do not drop late frames; keep every frame for encoding
        self.videoDataOutput.alwaysDiscardsLateVideoFrames = NO;
    }
}
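Note that capture requires camera permission: the app's Info.plist must contain an NSCameraUsageDescription entry, and access can be requested explicitly before the session starts. A minimal sketch using the standard AVFoundation call:

// Request camera access up front; without permission the session delivers no frames.
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
    if (!granted) {
        NSLog(@"Camera access was denied");
    }
}];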
Then we implement the delegate method:
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
Inside this delegate method we receive the CMSampleBufferRef objects we need.
Calling the session's start method begins delivering video data:
[self.captureSession startRunning];
Once we have a CMSampleBufferRef object, we process it.
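As a minimal sketch of that processing step, the delegate can simply forward each buffer to the encoder; encode: is the method defined in step 4, and initComplete is the flag set in step 2 (guarding on it here is our own assumption):

// Forward each captured frame to the VideoToolbox encoder (see step 4).
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (self.initComplete) {
        [self encode:sampleBuffer];
    }
}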
2. Create the VideoToolbox hardware-encoding session
According to Apple's documentation, the camera returns CVPixelBuffer data, which we must compress to H264 ourselves.
Here we use VideoToolbox to compress the data; the x264 library is an alternative. VideoToolbox does hardware encoding and performs better, while x264 is a software encoder with broader compatibility.
VideoToolbox is a C API and is somewhat involved to use. The flow is as follows:
{
    self.frameID = 0;
    int width = (int)self.width;
    int height = (int)self.height;
    // Create the compression session.
    // The eighth parameter is the output callback function.
    OSStatus status = VTCompressionSessionCreate(NULL,
                                                 width,
                                                 height,
                                                 kCMVideoCodecType_H264,
                                                 NULL,
                                                 NULL,
                                                 NULL,
                                                 didCompressH264,
                                                 (__bridge void *)(self),
                                                 &self->_EncodingSession);
    NSLog(@"H264: VTCompressionSessionCreate %d", (int)status);
    if (status != 0) {
        NSLog(@"H264: Unable to create a H264 session");
        self.EncodingSession = NULL;
        return;
    }
    // Encode in real time (avoids latency)
    VTSessionSetProperty(self->_EncodingSession, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    // H264 profile; live streaming generally uses baseline, which avoids the latency added by B-frames
    VTSessionSetProperty(self->_EncodingSession, kVTCompressionPropertyKey_ProfileLevel, kVTProfileLevel_H264_Baseline_AutoLevel);
    // Set the keyframe (GOP size) interval: here, half a second of frames
    int frameInterval = (int)(self.frameRate / 2);
    CFNumberRef frameIntervalRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &frameInterval);
    VTSessionSetProperty(self->_EncodingSession, kVTCompressionPropertyKey_MaxKeyFrameInterval, frameIntervalRef);
    CFRelease(frameIntervalRef);
    // Set the expected frame rate
    int fps = (int)self.frameRate;
    CFNumberRef fpsRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &fps);
    VTSessionSetProperty(self->_EncodingSession, kVTCompressionPropertyKey_ExpectedFrameRate, fpsRef);
    CFRelease(fpsRef);
    // Produce no B-frames
    VTSessionSetProperty(self->_EncodingSession, kVTCompressionPropertyKey_AllowFrameReordering, kCFBooleanFalse);
    // Set the bitrate. If it is not set, the encoder defaults to a very low
    // bitrate and the output looks blurry.
    // Average bitrate, in bits per second (a generous value; tune for your use case)
    int bitRate = width * height * 3 * 4 * 8;
    CFNumberRef bitRateRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bitRate);
    VTSessionSetProperty(self->_EncodingSession, kVTCompressionPropertyKey_AverageBitRate, bitRateRef);
    CFRelease(bitRateRef);
    // Hard cap on the data rate. DataRateLimits takes a CFArray of
    // [bytes, seconds] pairs rather than a bare CFNumber.
    int bytesLimit = width * height * 3 * 4;
    int oneSecond = 1;
    CFNumberRef bytesLimitRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bytesLimit);
    CFNumberRef oneSecondRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &oneSecond);
    const void *limits[] = { bytesLimitRef, oneSecondRef };
    CFArrayRef limitsRef = CFArrayCreate(kCFAllocatorDefault, limits, 2, &kCFTypeArrayCallBacks);
    VTSessionSetProperty(self->_EncodingSession, kVTCompressionPropertyKey_DataRateLimits, limitsRef);
    CFRelease(bytesLimitRef);
    CFRelease(oneSecondRef);
    CFRelease(limitsRef);
    // Tell the encoder to start encoding
    status = VTCompressionSessionPrepareToEncodeFrames(self->_EncodingSession);
    if (status == 0) {
        self.initComplete = YES;
    } else {
        NSLog(@"init compression session prepare to encode frames failure");
    }
}
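The block above reads self.width, self.height, and self.frameRate, so they must be set before it runs. A hypothetical entry point (the method names startEncodeWithWidth:height:fps: and initVideoToolBox are our own, not from the Demo) might look like:

// Hypothetical wrapper: store the dimensions, then run the setup block above.
- (void)startEncodeWithWidth:(NSInteger)width height:(NSInteger)height fps:(NSInteger)fps {
    self.width = width;      // should match the session preset (1280x720 here)
    self.height = height;
    self.frameRate = fps;
    [self initVideoToolBox]; // assume the setup block above lives in this method
}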
3. Implement the callback function
In step 2 we registered didCompressH264 as the encode callback; now we implement it:
void didCompressH264(void *outputCallbackRefCon, void *sourceFrameRefCon, OSStatus status, VTEncodeInfoFlags infoFlags, CMSampleBufferRef sampleBuffer) {
    // Handle the H264-encoded data in this function
}
4. Encode the data
Encode each captured sampleBuffer:
- (void)encode:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Frame timestamp; if it is not set, the resulting timeline comes out too long.
    CMTime presentationTimeStamp = CMTimeMake(self.frameID++, 1000);
    VTEncodeInfoFlags flags;
    OSStatus statusCode = VTCompressionSessionEncodeFrame(_EncodingSession,
                                                          imageBuffer,
                                                          presentationTimeStamp,
                                                          kCMTimeInvalid,
                                                          NULL, NULL, &flags);
    if (statusCode != noErr) {
        if (_EncodingSession != NULL) {
            NSLog(@"H264: VTCompressionSessionEncodeFrame failed with %d", (int)statusCode);
            VTCompressionSessionInvalidate(_EncodingSession);
            CFRelease(_EncodingSession);
            _EncodingSession = NULL;
            NSLog(@"encodingSession release");
            return;
        }
    }
    // NSLog(@"H264: VTCompressionSessionEncodeFrame Success");
}
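Here the presentation timestamp is a simple frame counter over a timescale of 1000. Since we only write raw NAL units to a file, the exact values mainly feed the encoder's rate control; an alternative sketch is to reuse the capture timestamp instead:

// Alternative: take the timestamp straight from the capture buffer.
CMTime presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);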
5. Handle the encoded H264 data in the callback function
Per the H264 format, we first extract and save the SPS and PPS, then write the H264 video data NAL unit by NAL unit:
void didCompressH264(void *outputCallbackRefCon, void *sourceFrameRefCon, OSStatus status, VTEncodeInfoFlags infoFlags, CMSampleBufferRef sampleBuffer) {
    // NSLog(@"didCompressH264 called with status %d infoFlags %d", (int)status, (int)infoFlags);
    if (status != 0) {
        return;
    }
    if (!CMSampleBufferDataIsReady(sampleBuffer)) {
        NSLog(@"didCompressH264 data is not ready ");
        return;
    }
    ESCSaveToH264FileTool *encoder = (__bridge ESCSaveToH264FileTool *)outputCallbackRefCon;
    // A frame is a keyframe if its attachments do not contain the NotSync key
    CFDictionaryRef attachments = (CFDictionaryRef)CFArrayGetValueAtIndex(CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, true), 0);
    bool keyframe = !CFDictionaryContainsKey(attachments, kCMSampleAttachmentKey_NotSync);
    // Extract the sps & pps data from keyframes
    if (keyframe) {
        CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
        size_t spsSize, spsCount;
        const uint8_t *sparameterSet;
        OSStatus statusCode = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 0, &sparameterSet, &spsSize, &spsCount, NULL);
        if (statusCode == noErr) {
            // Found sps and now check for pps
            size_t ppsSize, ppsCount;
            const uint8_t *pparameterSet;
            OSStatus statusCode = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 1, &pparameterSet, &ppsSize, &ppsCount, NULL);
            if (statusCode == noErr) {
                // Found pps
                NSData *sps = [NSData dataWithBytes:sparameterSet length:spsSize];
                NSData *pps = [NSData dataWithBytes:pparameterSet length:ppsSize];
                if (encoder) {
                    [encoder gotSpsPps:sps pps:pps];
                }
            }
        }
    }
    CMBlockBufferRef dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length, totalLength;
    char *dataPointer;
    OSStatus statusCodeRet = CMBlockBufferGetDataPointer(dataBuffer, 0, &length, &totalLength, &dataPointer);
    if (statusCodeRet == noErr) {
        size_t bufferOffset = 0;
        // The first four bytes of each NALU are not the 0x00000001 start code
        // but the NAL unit's length in big-endian byte order (AVCC format)
        static const int AVCCHeaderLength = 4;
        // Loop over the NALUs in the block buffer
        while (bufferOffset < totalLength - AVCCHeaderLength) {
            uint32_t NALUnitLength = 0;
            // Read the NAL unit length
            memcpy(&NALUnitLength, dataPointer + bufferOffset, AVCCHeaderLength);
            // Convert from big-endian to host byte order
            NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);
            NSData *data = [[NSData alloc] initWithBytes:(dataPointer + bufferOffset + AVCCHeaderLength) length:NALUnitLength];
            [encoder gotEncodedData:data isKeyFrame:keyframe];
            // Move to the next NAL unit in the block buffer
            bufferOffset += AVCCHeaderLength + NALUnitLength;
        }
    }
}
Write the SPS and PPS:
- (void)gotSpsPps:(NSData *)sps pps:(NSData *)pps {
    // NSLog(@"gotSpsPps %d %d", (int)[sps length], (int)[pps length]);
    const char bytes[] = "\x00\x00\x00\x01";
    NSData *byteHeader = [NSData dataWithBytes:bytes length:4];
    [self.fileHandle writeData:byteHeader];
    [self.fileHandle writeData:sps];
    [self.fileHandle writeData:byteHeader];
    [self.fileHandle writeData:pps];
}
Write the raw H264 data:
- (void)gotEncodedData:(NSData *)data isKeyFrame:(BOOL)isKeyFrame {
    // NSLog(@"gotEncodedData %d", (int)[data length]);
    if (self.fileHandle != nil) {
        const char bytes[] = "\x00\x00\x00\x01";
        size_t length = (sizeof bytes) - 1; // string literals have an implicit trailing '\0'
        NSData *byteHeader = [NSData dataWithBytes:bytes length:length];
        [self.fileHandle writeData:byteHeader];
        [self.fileHandle writeData:data];
    }
}
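Both write methods assume self.fileHandle is already open; its creation is not shown above. A minimal sketch, assuming an output file named test.h264 in the Documents directory (the file name is our assumption):

// Create (or truncate) the output file and open a handle for writing.
NSString *documents = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES).firstObject;
NSString *filePath = [documents stringByAppendingPathComponent:@"test.h264"];
[[NSFileManager defaultManager] removeItemAtPath:filePath error:nil];
[[NSFileManager defaultManager] createFileAtPath:filePath contents:nil attributes:nil];
self.fileHandle = [NSFileHandle fileHandleForWritingAtPath:filePath];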
6. Stop the encoding session and the capture session when recording ends
Stop the encoding session:
- (void)EndVideoToolBox {
    VTCompressionSessionCompleteFrames(_EncodingSession, kCMTimeInvalid);
    VTCompressionSessionInvalidate(_EncodingSession);
    CFRelease(_EncodingSession);
    _EncodingSession = NULL;
}
Stop the capture session:
[self.captureSession stopRunning];
That ends the recording. Open the app's sandbox to find the recorded H264 file.
It can be played with VLC.
Finally, comments and questions are welcome; the Demo is linked below.
Demo: https://github.com/XMSECODE/ESCCameraToH264Demo