Overview
This article is a set of reading notes on ARKit by Tutorials from Ray Wenderlich, covering a summary of the content along with my impressions.
That's right: this part is all about the AR effects of the iPhone X's front-facing TrueDepth camera! The main features:
- Face detection and tracking
- Real-time facial expression tracking
- Simultaneous capture of color and depth images (see the sketch after this list)
- Light estimation
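On that capture point, here is a minimal sketch of how the two image streams arrive, assuming the view controller is the ARSessionDelegate: every ARFrame carries the color image, while TrueDepth depth data arrives at a lower frame rate and may be nil.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
  // The color image is always present on the frame.
  let colorBuffer: CVPixelBuffer = frame.capturedImage
  // Depth data from the TrueDepth camera updates at a lower rate; it may be nil.
  if let depth = frame.capturedDepthData {
    print("Depth format: \(depth.depthDataType), color: \(CVPixelBufferGetWidth(colorBuffer))x\(CVPixelBufferGetHeight(colorBuffer))")
  }
}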
Initial Configuration
Starting a face-tracking AR session works much like any other ARKit session: just swap the configuration for ARFaceTrackingConfiguration(). Error handling is similar as well:
func resetTracking() {
  // 1: Face tracking needs the TrueDepth camera; check support first.
  guard ARFaceTrackingConfiguration.isSupported else {
    updateMessage(text: "Face Tracking Not Supported.")
    return
  }
  // 2: Prompt the user.
  updateMessage(text: "Looking for a face.")
  // 3: Build the face-tracking configuration.
  let configuration = ARFaceTrackingConfiguration()
  configuration.isLightEstimationEnabled = true /* default setting */
  configuration.providesAudioData = false /* default setting */
  // 4: Restart tracking and remove any existing anchors.
  session.run(configuration, options:
    [.resetTracking, .removeExistingAnchors])
}
func session(_ session: ARSession, didFailWithError error: Error) {
  print("** didFailWithError")
  updateMessage(text: "Session failed.")
}

func sessionWasInterrupted(_ session: ARSession) {
  print("** sessionWasInterrupted")
  updateMessage(text: "Session interrupted.")
}

func sessionInterruptionEnded(_ session: ARSession) {
  print("** sessionInterruptionEnded")
  updateMessage(text: "Session interruption ended.")
}
Face Tracking
When ARKit detects a face, it adds an ARFaceAnchor to the scene, and we can use that anchor for positioning and tracking. If two faces are in view, ARKit tracks only the largest, most recognizable one. It also attaches a face coordinate system to the tracked face.
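To make the positioning concrete, here is a minimal sketch (ARSessionDelegate wiring assumed) that reads the face's world position out of the anchor's transform:
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
  for case let faceAnchor as ARFaceAnchor in anchors {
    // Column 3 of the 4x4 transform holds the translation (world position).
    let position = faceAnchor.transform.columns.3
    print("Face at (\(position.x), \(position.y), \(position.z)), tracked: \(faceAnchor.isTracked)")
  }
}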
We use the scene view's Metal device to create the face geometry, an ARSCNFaceGeometry, and create an SCNNode object, mask, to hold it:
func createFaceGeometry() {
  updateMessage(text: "Creating face geometry.")
  let device = sceneView.device!
  let maskGeometry = ARSCNFaceGeometry(device: device)!
  mask = Mask(geometry: maskGeometry)
}
This method can be called directly in viewDidLoad; there is no need to wait for a face to appear before creating the geometry.
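For context, the setup might look like this in the view controller (a sketch with assumed lifecycle wiring, not the book's exact code):
override func viewDidLoad() {
  super.viewDidLoad()
  sceneView.delegate = self
  createFaceGeometry()   // safe to call before any face is detected
}

override func viewWillAppear(_ animated: Bool) {
  super.viewWillAppear(animated)
  resetTracking()        // start (or restart) the face-tracking session
}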
Finally, add the mask to the view so it shows up:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
  anchorNode = node
  // Attach the mask so it becomes visible.
  setupFaceNodeContent()
}

func setupFaceNodeContent() {
  guard let node = anchorNode else { return }
  node.childNodes.forEach { $0.removeFromParentNode() }
  if let content = mask {
    node.addChildNode(content)
  }
}
The ARSCNFaceGeometry created earlier may be empty, since no face has been recognized yet, so we need to refresh it in the update callback:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
  guard let faceAnchor = anchor as? ARFaceAnchor else { return }
  updateMessage(text: "Tracking your face.")
  // Update the geometry from the anchor.
  mask?.update(withFaceAnchor: faceAnchor)
}

// Tag: ARFaceAnchor Update
func update(withFaceAnchor anchor: ARFaceAnchor) {
  let faceGeometry = geometry as! ARSCNFaceGeometry
  faceGeometry.update(from: anchor.geometry)
}
The default lighting settings look like this:
/* default settings */
sceneView.automaticallyUpdatesLighting = true
sceneView.autoenablesDefaultLighting = false
sceneView.scene.lightingEnvironment.intensity = 1.0
But we can also adjust it to the environment to achieve different effects:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
  // 1: Grab the light estimate for the current frame.
  guard let estimate = session.currentFrame?.lightEstimate else {
    return
  }
  // 2: In ARKit, 1000 represents neutral (medium) light intensity.
  let intensity = estimate.ambientIntensity / 1000.0
  sceneView.scene.lightingEnvironment.intensity = intensity
  // 3: Format both values for display.
  let intensityStr = String(format: "%.2f", intensity)
  let sceneLighting = String(format: "%.2f",
    sceneView.scene.lightingEnvironment.intensity)
  // 4: Log them.
  print("Intensity: \(intensityStr) - \(sceneLighting)")
}
Expression Animation with Blend Shapes
How do you implement Animoji-style expression animation? This is where blend shapes come in. Blend shapes are essentially a dictionary: the keys are ARFaceAnchor.BlendShapeLocation constants, and the values are floats ranging from 0.0 (neutral) to 1.0 (maximum movement). For example, to add a blink effect, read out eyeBlinkLeft, as sketched below.
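A minimal sketch of that blink, where leftEyeNode is a hypothetical SCNNode for the left eye (not from the book):
func updateBlink(withFaceAnchor anchor: ARFaceAnchor) {
  // 0.0 = eye open (neutral), 1.0 = eye fully closed.
  guard let eyeBlinkLeft = anchor.blendShapes[.eyeBlinkLeft] as? Float else { return }
  // Flatten the hypothetical left-eye node vertically as the eye closes.
  leftEyeNode.scale = SCNVector3(1.0, 1.0 - 0.9 * eyeBlinkLeft, 1.0)
}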
This dictionary is refreshed every time the ARFaceAnchor updates:
// - Tag: ARFaceAnchor Update
func update(withFaceAnchor anchor: ARFaceAnchor) {
  blendShapes = anchor.blendShapes
}

// - Tag: BlendShapeAnimation
var blendShapes: [ARFaceAnchor.BlendShapeLocation: Any] = [:] {
  didSet {
    guard
      // Brow
      let browInnerUp = blendShapes[.browInnerUp] as? Float,
      // Right eye
      let eyeLookInRight = blendShapes[.eyeLookInRight] as? Float,
      let eyeLookOutRight = blendShapes[.eyeLookOutRight] as? Float,
      let eyeLookUpRight = blendShapes[.eyeLookUpRight] as? Float,
      let eyeLookDownRight = blendShapes[.eyeLookDownRight] as? Float,
      let eyeBlinkRight = blendShapes[.eyeBlinkRight] as? Float
      else { return }
    // Drive the mask's animation here using the unwrapped coefficients.
  }
}
The finished effect:
Recording with ReplayKit
ReplayKit was introduced in iOS 9 for recording audio, video, and the microphone. It has two main classes:
- RPScreenRecorder: a singleton that starts/stops recording.
- RPPreviewViewController: a preview view controller shown when recording finishes.
Its capabilities include:
- iOS screen recording and broadcasting
- Broadcast pairing
- Fast camera switching
- In-app screen capture
Start Recording
The main code:
let sharedRecorder = RPScreenRecorder.shared()
private var isRecording = false
// Private functions
private func startRecording() {
  // 1: Also capture the microphone.
  self.sharedRecorder.isMicrophoneEnabled = true
  // 2: Start recording and bail out on error.
  sharedRecorder.startRecording(handler: { error in
    guard error == nil else {
      print("There was an error starting the recording: \(String(describing: error?.localizedDescription))")
      return
    }
    // 3: Track the recording state.
    print("Started Recording Successfully")
    self.isRecording = true
    // 4: Update the UI on the main queue.
    DispatchQueue.main.async {
      self.recordButton.setTitle("[ STOP RECORDING ]", for: .normal)
      self.recordButton.backgroundColor = UIColor.red
    }
  })
}
Stop Recording
The main code:
func stopRecording() {
  // 1: Stop capturing the microphone.
  self.sharedRecorder.isMicrophoneEnabled = false
  // 2: Stop recording and bail out on error.
  sharedRecorder.stopRecording(handler: { previewViewController, error in
    guard error == nil else {
      print("There was an error stopping the recording: \(String(describing: error?.localizedDescription))")
      return
    }
    // 3: Present the preview controller.
    if let unwrappedPreview = previewViewController {
      unwrappedPreview.previewControllerDelegate = self
      self.present(unwrappedPreview, animated: true, completion: nil)
    }
  })
  // 4: Reset state and restore the UI.
  self.isRecording = false
  DispatchQueue.main.async {
    self.recordButton.setTitle("[ RECORD ]", for: .normal)
    self.recordButton.backgroundColor = UIColor(red: 0.0039,
      green: 0.5882, blue: 1, alpha: 1.0) /* #0196ff */
  }
}
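One more piece: stopRecording sets previewControllerDelegate to self, so the controller should also adopt RPPreviewViewControllerDelegate to dismiss the preview. A common minimal implementation:
// RPPreviewViewControllerDelegate
func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
  previewController.dismiss(animated: true, completion: nil)
}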
The result looks like this:
In addition, we can implement sharedRecorder's delegate methods to listen for error callbacks and availability changes:
// RPScreenRecorderDelegate methods
func screenRecorder(_ screenRecorder: RPScreenRecorder,
                    didStopRecordingWith previewViewController: RPPreviewViewController?,
                    error: Error?) {
  guard error == nil else {
    print("There was an error recording: \(String(describing: error?.localizedDescription))")
    self.isRecording = false
    return
  }
}

func screenRecorderDidChangeAvailability(_ screenRecorder: RPScreenRecorder) {
  recordButton.isEnabled = sharedRecorder.isAvailable
  if !recordButton.isEnabled {
    self.isRecording = false
  }
}
None of this is particularly difficult to use, so I won't expand on it further here; for the full details, consider buying and reading the official book.
That wraps up the reading notes for Part 4!