Building a Face Recognition Platform with Spring Boot + SeetaFace6

Posted by code2roc on 2024-10-08

Preface

Recently, several of our projects have needed face recognition. Our previous approach was to integrate the Baidu Cloud API, but some upcoming projects will be deployed and used on intranets. Weighing integration complexity, fees, and other factors, we decided to build our own service from open-source components to keep it stable and reliable.

Project repository: https://gitee.com/code2roc/fastface

Design

After researching and comparing several approaches, I settled on a SeetaFace6 + Spring Boot stack, which integrates seamlessly into our applications.

SeetaFace6 is the latest open-source commercial-grade release from SeetaTech. It covers the core face-recognition capabilities: face detection, landmark localization, and face recognition. It also adds liveness detection, quality assessment, and age/gender estimation.

Project page: https://github.com/SeetaFace6Open/index

For integration I use tracy100's SDK wrapper. It supports JDK 8 through JDK 14 on both Windows and Linux, so there is no deployment work to worry about: just add the jar and write your business logic. It also ships Spring-ready bean wrappers, so it works out of the box.

Project page: https://github.com/tracy100/seetaface6SDK

The system only needs to implement the basic features: face registration, face comparison, and face lookup.

Implementation

Adding the jar dependency

        <dependency>
            <groupId>com.seeta.sdk</groupId>
            <artifactId>seeta-sdk-platform</artifactId>
            <scope>system</scope>
            <version>1.2.1</version>
            <systemPath>${project.basedir}/lib/seetaface.jar</systemPath>
        </dependency>

Registering the beans

FaceDetectorProxy is the face-detection bean; it detects whether an image contains any faces.

FaceRecognizerProxy is the face-comparison bean; it computes the similarity between two faces.

FaceLandmarkerProxy is the facial-landmark bean; it locates facial keypoints, with both 5-point and 68-point models.

@Configuration
public class FaceConfig {
    @Value("${face.modelPath}")
    private String modelPath;

    @Bean
    public FaceDetectorProxy faceDetector() throws FileNotFoundException {
        SeetaConfSetting detectorPoolSetting = new SeetaConfSetting(
                new SeetaModelSetting(0, new String[]{modelPath + File.separator + "face_detector.csta"}, SeetaDevice.SEETA_DEVICE_CPU));
        FaceDetectorProxy faceDetectorProxy = new FaceDetectorProxy(detectorPoolSetting);
        return faceDetectorProxy;
    }

    @Bean
    public FaceRecognizerProxy faceRecognizer() throws FileNotFoundException {
        SeetaConfSetting detectorPoolSetting = new SeetaConfSetting(
                new SeetaModelSetting(0, new String[]{modelPath + File.separator + "face_recognizer.csta"}, SeetaDevice.SEETA_DEVICE_CPU));
        FaceRecognizerProxy faceRecognizerProxy = new FaceRecognizerProxy(detectorPoolSetting);
        return faceRecognizerProxy;
    }

    @Bean
    public FaceLandmarkerProxy faceLandmarker() throws FileNotFoundException {
        SeetaConfSetting detectorPoolSetting = new SeetaConfSetting(
                new SeetaModelSetting(0, new String[]{modelPath + File.separator + "face_landmarker_pts5.csta"}, SeetaDevice.SEETA_DEVICE_CPU));
        FaceLandmarkerProxy faceLandmarkerProxy = new FaceLandmarkerProxy(detectorPoolSetting);
        return faceLandmarkerProxy;
    }
}
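The config above reads face.modelPath from the Spring environment via @Value. A minimal application.properties sketch (the directory path below is an assumed example; point it at wherever the .csta model files are unpacked):

```properties
# Directory containing the SeetaFace6 model files
# (face_detector.csta, face_recognizer.csta, face_landmarker_pts5.csta)
face.modelPath=/opt/fastface/models
```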

Before using these beans, the native library must be registered locally, specifying CPU or GPU mode:

LoadNativeCore.LOAD_NATIVE(SeetaDevice.SEETA_DEVICE_CPU)

Face detection

    public FaceEnum.CheckImageFaceStatus getFace(BufferedImage image) throws Exception {
        SeetaImageData imageData = SeetafaceUtil.toSeetaImageData(image);
        SeetaRect[] detects = faceDetectorProxy.detect(imageData);
        if (detects.length == 0) {
            return FaceEnum.CheckImageFaceStatus.NoFace;
        } else if (detects.length == 1) {
            return FaceEnum.CheckImageFaceStatus.OneFace;
        } else {
            return FaceEnum.CheckImageFaceStatus.MoreFace;
        }
    }

Face comparison

    public FaceEnum.CompareImageFaceStatus compareFace(BufferedImage source, BufferedImage compare) throws Exception {
        float[] sourceFeature = extract(source);
        float[] compareFeature = extract(compare);
        if (sourceFeature != null && compareFeature != null) {
            float calculateSimilarity = faceRecognizerProxy.calculateSimilarity(sourceFeature, compareFeature);
            System.out.printf("similarity: %f\n", calculateSimilarity);
            if (calculateSimilarity >= CHECK_SIM) {
                return FaceEnum.CompareImageFaceStatus.Same;
            } else {
                return FaceEnum.CompareImageFaceStatus.Different;
            }
        } else {
            return FaceEnum.CompareImageFaceStatus.LostFace;
        }
    }
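calculateSimilarity returns a score that is compared against the CHECK_SIM threshold. The SDK does not expose its exact formula here, but feature-vector comparison of this kind is commonly a cosine similarity, which is what the sketch below illustrates (the CosineSketch class and method names are mine, for illustration only):

```java
// Cosine similarity between two feature vectors: 1.0 for identical directions,
// 0.0 for orthogonal ones. Face matching compares this score against a threshold.
public class CosineSketch {
    public static float cosineSimilarity(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return (float) (dot / (Math.sqrt(normA) * Math.sqrt(normB)));
    }

    public static void main(String[] args) {
        float[] same = {0.2f, 0.4f, 0.9f};
        // Identical vectors score 1.0 (up to float rounding)
        System.out.println(cosineSimilarity(same, same));
        // Orthogonal vectors score 0.0
        System.out.println(cosineSimilarity(new float[]{1f, 0f}, new float[]{0f, 1f}));
    }
}
```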

Facial landmarks and feature extraction

    private float[] extract(BufferedImage image) throws Exception {
        SeetaImageData imageData = SeetafaceUtil.toSeetaImageData(image);
        SeetaRect[] detects = faceDetectorProxy.detect(imageData);
        if (detects.length > 0) {
            SeetaPointF[] pointFS = faceLandmarkerProxy.mark(imageData, detects[0]);
            float[] features = faceRecognizerProxy.extract(imageData, pointFS);
            return features;
        }
        return null;
    }

Face database

  • Registration
    public long registFace(BufferedImage image) throws Exception {
        long result = -1;
        SeetaImageData imageData = SeetafaceUtil.toSeetaImageData(image);
        SeetaRect[] detects = faceDetectorProxy.detect(imageData);
        if (detects.length > 0) {
            SeetaPointF[] pointFS = faceLandmarkerProxy.mark(imageData, detects[0]);
            result = faceDatabase.Register(imageData, pointFS);
            faceDatabase.Save(dataBasePath);
        }
        return result;
    }
  • Lookup
    public long queryFace(BufferedImage image) throws Exception {
        long result = -1;
        SeetaImageData imageData = SeetafaceUtil.toSeetaImageData(image);
        SeetaRect[] detects = faceDetectorProxy.detect(imageData);
        if (detects.length > 0) {
            SeetaPointF[] pointFS = faceLandmarkerProxy.mark(imageData, detects[0]);
            long[] index = new long[1];
            float[] sim = new float[1];
            result = faceDatabase.QueryTop(imageData, pointFS, 1, index, sim);
            if (result > 0) {
                float similarity = sim[0];
                if (similarity >= CHECK_SIM) {
                    result = index[0];
                } else {
                    result = -1;
                }
            }
        }
        return result;
    }
  • Deletion
    public long deleteFace(long index) throws Exception {
        long result = faceDatabase.Delete(index);
        faceDatabase.Save(dataBasePath);
        return result;
    }
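The register/queryTop/delete flow above can be illustrated with a tiny in-memory stand-in. This is a toy sketch of the semantics only, not the SDK's FaceDatabase implementation; the class, method names, and cosine metric are my assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Toy in-memory stand-in for a face database: register returns an index,
// queryTop returns the best-matching index above a threshold (or -1),
// delete removes an entry.
public class FaceDbSketch {
    private final Map<Long, float[]> store = new HashMap<>();
    private long nextIndex = 0;

    public long register(float[] feature) {
        store.put(nextIndex, feature);
        return nextIndex++;
    }

    public long queryTop(float[] feature, float threshold) {
        long best = -1;
        float bestSim = -1f;
        for (Map.Entry<Long, float[]> e : store.entrySet()) {
            float sim = cosine(e.getValue(), feature);
            if (sim > bestSim) {
                bestSim = sim;
                best = e.getKey();
            }
        }
        // Only accept the top hit if it clears the similarity threshold
        return bestSim >= threshold ? best : -1;
    }

    public boolean delete(long index) {
        return store.remove(index) != null;
    }

    private static float cosine(float[] a, float[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return (float) (dot / (Math.sqrt(na) * Math.sqrt(nb)));
    }

    public static void main(String[] args) {
        FaceDbSketch db = new FaceDbSketch();
        long alice = db.register(new float[]{1f, 0f, 0f});
        db.register(new float[]{0f, 1f, 0f});
        // A near-duplicate of the first face matches above the threshold...
        System.out.println(db.queryTop(new float[]{0.9f, 0.1f, 0f}, 0.62f) == alice);
        // ...and after deletion it no longer does.
        db.delete(alice);
        System.out.println(db.queryTop(new float[]{0.9f, 0.1f, 0f}, 0.62f) == alice);
    }
}
```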

Extension

face-api.js is integrated to implement simple liveness checks (open your mouth, shake your head). The accuracy is not very high, so treat it as a reference option only.

Project page: https://github.com/justadudewhohacks/face-api.js

Loading the models

        Promise.all([
            faceapi.loadFaceDetectionModel('models'),
            faceapi.loadFaceLandmarkModel('models')
        ]).then(startAnalysis);

    function startAnalysis() {
        console.log('Models loaded!');
        var canvas1 = faceapi.createCanvasFromMedia(document.getElementById('showImg'))
        faceapi.detectSingleFace(canvas1).then((detection) => {
            if (detection) {
                faceapi.detectFaceLandmarks(canvas1).then((landmarks) => {
                    console.log('Warm-up inference succeeded!');
                })
            }
        })

    }

Opening the camera

	<video id="video" muted playsinline></video>
    function AnalysisFaceOnline() {
        var videoElement = document.getElementById('video');
        // Check whether the browser supports the getUserMedia API
        if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
            navigator.mediaDevices.getUserMedia({ video: { facingMode: "user" } }) // request the video stream
                .then(function(stream) {
                    videoElement.srcObject = stream; // attach the stream to the <video> element
                    videoElement.play();
                })
                .catch(function(err) {
                    console.error("Failed to access the camera:", err); // handle the error
                });
        } else {
            console.error("Your browser does not support the getUserMedia API");
        }
    }

Capturing frames and computing landmarks

function vedioCatchInit() {
        video.addEventListener('play', function() {
            function captureFrame() {
                if (!video.paused && !video.ended) {
                    // Size the canvas for the video frame
                    canvas.width = 200;
                    canvas.height = 300;
                    // Draw the current video frame onto the canvas
                    context.drawImage(video, 0, 0, canvas.width, canvas.height);
                    // The canvas content can be converted to a data URL here
                    //outputImage.src = canvas.toDataURL('image/png');
                    // and sent to the server or processed further
                    faceapi.detectSingleFace(canvas).then((detection) => {
                        if (detection) {
                            faceapi.detectFaceLandmarks(canvas).then((landmarks) => {
                                // analyze landmark movement here (mouth opening, head shaking)
                            })
                        } else {
                            console.log("no face")
                        }
                    })
                    // Schedule the next call to keep capturing frames
                    setTimeout(captureFrame, 100); // capture a frame every 100 ms
                }
            }
            captureFrame(); // start capturing frames
        });
    }
