Kinect Somatosensory Robot (Part 2): Motion Sensing Recognition

Posted by 凝霜 on 2012-09-27


By 馬冬亮 (凝霜 / Loki)

One Man's War (http://blog.csdn.net/MDL13412)

Background

        Motion-sensing technology falls under NUI (Natural User Interface): it lets users interact with nearby devices or their environment through body language. The main implementation approaches are inertial sensing, optical sensing, and combined inertial-optical sensing. The mature products on the market are Microsoft's Kinect, Sony's PS Move, Nintendo's Wii, and ASUS's Xtion. Since I have no ASUS Xtion on hand I won't evaluate it; the table below compares the other three devices:

[Table: feature comparison of Kinect, PS Move, and Wii]
        The comparison above gives Microsoft's Kinect an overwhelming advantage, so the Kinect was the solution we finally adopted.

Successful Applications I Have Experienced Firsthand

        First is the somatosensory robot we built, which mimics human body movements and could be applied to post-disaster search and rescue;

        Next is the Chinese University of Hong Kong's "Improving Communication Ability of the Disabled - Chinese Sign Language Recognition and Translation System", which is essentially a sign-language translator;

        There is also Shanghai University's 3D cinema, which uses the Kinect to track the viewer's head so that the picture actively adapts to the viewer.

Choosing an Operating System

        The operating system could only be Linux; showing up to an embedded-systems competition with Windows never ends well, haha :-)

        The table below evaluates the common Linux distributions:

[Table: comparison of common Linux distributions]
        Constrained by the dev board's processing speed and graphics performance, we settled on the Fedora 16 distribution.

Choosing a Motion-Sensing Driver Library

        I found only two candidates for the motion-sensing driver library: OpenNI and the Kinect SDK. The latter runs only on Windows, so it was dropped without hesitation; the comparison is shown in the table below:

[Table: OpenNI vs. Kinect SDK comparison]
Code: Initializing the Sensing Device

// Initialize the sensing device
#include <XnCppWrapper.h>  // OpenNI C++ wrapper; the author's NsLib headers are omitted

int main(int argc, char **argv)
{
    XnStatus result;
    xn::Context context;
    xn::ScriptNode scriptNode;
    xn::EnumerationErrors errors;

    // Configure the OpenNI library from an XML file
    result = context.InitFromXmlFile(CONFIG_XML_PATH, scriptNode, &errors);
    if (XN_STATUS_NO_NODE_PRESENT == result)
    {
        // A required production node is missing: report which one and bail out
        XnChar strError[1024];
        errors.ToString(strError, 1024);
        NsLog()->error(strError);
        return 1;
    }
    else if (!NsLib::CheckOpenNIError(result, "Open config XML failed"))
        return 1;

    NsLib::TrackerViewer::createInstance(context, scriptNode);
    NsLib::TrackerViewer &trackerViewer = NsLib::TrackerViewer::getInstance();

    if (!trackerViewer.init())
        return 1;

    // Enters the GLUT main loop and never returns
    trackerViewer.run();
}
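        The helper NsLib::CheckOpenNIError is used throughout but never defined in this post. A minimal sketch of what it presumably does, logging the failure and collapsing the status to a bool (the real implementation likely routes through the NsLog logger rather than std::cerr):

#include <iostream>
#include <XnCppWrapper.h>

namespace NsLib
{
// Return true when the OpenNI call succeeded; otherwise log a readable
// message built with xnGetStatusString() and return false.
bool CheckOpenNIError(XnStatus status, const char *message)
{
    if (XN_STATUS_OK == status)
        return true;

    std::cerr << message << ": " << xnGetStatusString(status) << std::endl;
    return false;
}
} // namespace NsLib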
        The TrackerViewer in the initialization code draws the human-skeleton image with OpenGL; the whole program's synchronization is also handled there. The key code it relies on follows:

// Singleton: only one instance is allowed
TrackerViewer *TrackerViewer::pInstance_ = 0;

void TrackerViewer::createInstance(xn::Context &context,
                                   xn::ScriptNode &scriptNode)
{
    assert(!pInstance_);
    
    pInstance_ = new TrackerViewer(context, scriptNode);
}
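
        The TrackerViewer class declaration itself never appears in the post. Reconstructed from how its members are used below, it presumably contains roughly the following; every name and type here is inferred, and the member functions shown later in the post are omitted:

// Inferred sketch of TrackerViewer's data members (not from the original post)
class TrackerViewer
{
public:
    xn::Context &Context;              // bound to the context from main()
    xn::ScriptNode &ScriptNode;        // keeps the XML script node alive
    xn::DepthGenerator DepthGenerator; // depth stream from the Kinect
    xn::UserGenerator UserGenerator;   // user segmentation and skeleton
    XnMapOutputMode ImageInfo;         // depth resolution, sizes the window
    XnChar StrPose[80];                // name of the calibration pose
    bool NeedPose;                     // pose required before calibration?
    bool SignalExitApp;                // set by the ESC key handler

private:
    TrackerViewer(xn::Context &context, xn::ScriptNode &scriptNode);

    XnCallbackHandle hUserCallbacks_;
    XnCallbackHandle hCalibrationStart_;
    XnCallbackHandle hCalibrationComplete_;
    XnCallbackHandle hCalibrationInProgress_;
    XnCallbackHandle hPoseDetected_;
    XnCallbackHandle hPoseInProgress_;
    bool inited_;
    static TrackerViewer *pInstance_;
    // ... plus the static factory, init/run, and the OpenNI/GLUT callbacks
    // that appear later in this post
};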
// Initialize the TrackerViewer
bool TrackerViewer::init()
{
    if (!initDepthGenerator())
        return false;
    
    if (!initUserGenerator())
        return false;
        
    inited_ = true;
    return true;
}

// Initialize the depth generator
bool TrackerViewer::initDepthGenerator()
{
    XnStatus result;
    
    result = Context.FindExistingNode(XN_NODE_TYPE_DEPTH, DepthGenerator);
    if (!CheckOpenNIError(result, 
                         "No depth generator found. Check your XML"))
        return false;  
    
    return true;
}

// Initialize the skeleton-recognition engine
bool TrackerViewer::initUserGenerator()
{
    XnStatus result;
    
    // Cache the depth-map resolution; initOpenGL() sizes its window from
    // ImageInfo, so this call must not remain commented out
    DepthGenerator.GetMapOutputMode(ImageInfo);
    
    result = Context.FindExistingNode(XN_NODE_TYPE_USER, UserGenerator);
    if (!CheckOpenNIError(result, 
                         "Use mock user generator"))
    {
        result = UserGenerator.Create(Context);
        if (!CheckOpenNIError(result, 
                         "Create mock user generator failed"))
            return false;
    }
    
    result = UserGenerator.RegisterUserCallbacks(User_NewUser, User_LostUser,
                                                 NULL, hUserCallbacks_);
    if (!CheckOpenNIError(result, "Register to user callbacks"))
        return false;
    result = UserGenerator.GetSkeletonCap().RegisterToCalibrationStart(
        UserCalibration_CalibrationStart, NULL, hCalibrationStart_);
    if (!CheckOpenNIError(result, "Register to calibration start"))
        return false;
    result = UserGenerator.GetSkeletonCap().RegisterToCalibrationComplete(
        UserCalibration_CalibrationComplete, NULL, hCalibrationComplete_);
    if (!CheckOpenNIError(result, "Register to calibration complete"))
        return false;

    if (UserGenerator.GetSkeletonCap().NeedPoseForCalibration())
    {
        NeedPose = true;

        if (!UserGenerator.IsCapabilitySupported(XN_CAPABILITY_POSE_DETECTION))
        {
            NsLog()->error("Pose required, but not supported");
            return false;
        }

        result = UserGenerator.GetPoseDetectionCap().RegisterToPoseDetected(
            UserPose_PoseDetected, NULL, hPoseDetected_);
        if (!CheckOpenNIError(result, "Register to Pose Detected"))
            return false;

        // Fetch the name of the pose the user must strike for calibration
        UserGenerator.GetSkeletonCap().GetCalibrationPose(StrPose);
    }

    UserGenerator.GetSkeletonCap().SetSkeletonProfile(XN_SKEL_PROFILE_ALL);
    result = UserGenerator.GetSkeletonCap().RegisterToCalibrationInProgress(
        MyCalibrationInProgress, NULL, hCalibrationInProgress_);
    if (!CheckOpenNIError(result, "Register to calibration in progress"))
        return false;
    result = UserGenerator.GetPoseDetectionCap().RegisterToPoseInProgress(
        MyPoseInProgress, NULL, hPoseInProgress_);
    if (!CheckOpenNIError(result, "Register to pose in progress"))
        return false;
    
    return true;
}

OpenNI Callbacks for User Tracking

        OpenNI notifies the application of events through callbacks. My callback names are mostly self-explanatory; if anything is still unclear, consult the OpenNI documentation. The overall flow: a new user appears, pose detection starts (when a pose is required), a detected pose triggers calibration, and a successful calibration starts skeleton tracking, while a failed calibration restarts the cycle. The code follows:

//------------------------------------------------------------------------------
// OpenNI Callbacks
//------------------------------------------------------------------------------
void XN_CALLBACK_TYPE TrackerViewer::User_NewUser(xn::UserGenerator& generator, 
                                                  XnUserID nId, 
                                                  void* pCookie)
{
    std::cout << "New user: " << nId << std::endl;
    
    if (TrackerViewer::getInstance().NeedPose)
    {
        TrackerViewer::getInstance().UserGenerator
            .GetPoseDetectionCap().StartPoseDetection(
                TrackerViewer::getInstance().StrPose, nId);
    }
    else
    {
        TrackerViewer::getInstance().UserGenerator
            .GetSkeletonCap().RequestCalibration(nId, TRUE);
    }
}

void XN_CALLBACK_TYPE TrackerViewer::User_LostUser(xn::UserGenerator& generator, 
                                                   XnUserID nId, 
                                                   void* pCookie)
{
    std::cout << "Lost user: " << nId << std::endl;
}

void XN_CALLBACK_TYPE TrackerViewer::UserPose_PoseDetected(
                                    xn::PoseDetectionCapability& capability, 
                                    const XnChar* strPose, 
                                    XnUserID nId, 
                                    void* pCookie)
{
    std::cout << "Pose " << TrackerViewer::getInstance().StrPose
        << " detected for user " << nId << std::endl;

    TrackerViewer::getInstance().UserGenerator
        .GetPoseDetectionCap().StopPoseDetection(nId);
    TrackerViewer::getInstance().UserGenerator
        .GetSkeletonCap().RequestCalibration(nId, TRUE);
}

void XN_CALLBACK_TYPE TrackerViewer::UserCalibration_CalibrationStart(
                                    xn::SkeletonCapability& capability, 
                                    XnUserID nId, 
                                    void* pCookie)
{
    std::cout << "Calibration started for user " << nId << std::endl;
}

void XN_CALLBACK_TYPE TrackerViewer::UserCalibration_CalibrationComplete(
                                    xn::SkeletonCapability& capability, 
                                    XnUserID nId, 
                                    XnCalibrationStatus eStatus,
                                    void* pCookie)
{
    if (eStatus == XN_CALIBRATION_STATUS_OK)
    {
        std::cout << "Calibration complete, start tracking user "
            << nId << std::endl;
        TrackerViewer::getInstance().UserGenerator
            .GetSkeletonCap().StartTracking(nId);
    }
    else
    {
        // Calibration failed: go back to pose detection, or re-request
        // calibration directly when no pose is needed
        if (TrackerViewer::getInstance().NeedPose)
        {
            TrackerViewer::getInstance().UserGenerator
                .GetPoseDetectionCap().StartPoseDetection(
                    TrackerViewer::getInstance().StrPose, nId);
        }
        else
        {
            TrackerViewer::getInstance().UserGenerator
                .GetSkeletonCap().RequestCalibration(nId, TRUE);
        }
    }
}

Tracking the User and Displaying the Skeleton Image

// Start tracking users
void TrackerViewer::run()
{
    assert(inited_);
    
    XnStatus result;
    
    result = Context.StartGeneratingAll();
    if (!CheckOpenNIError(result, "Start generating failed"))
        return;
    
    initOpenGL(&NsAppConfig().Argc, NsAppConfig().Argv);
    glutMainLoop();
}
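
        run() hands control to GLUT, so every subsequent frame update happens inside glutDisplay(). If no on-screen viewer were needed, the same synchronization could be driven by a plain polling loop; a minimal headless sketch using the same members (runHeadless is not part of the original code):

// Hypothetical headless variant of run(): poll OpenNI without any OpenGL
void TrackerViewer::runHeadless()
{
    assert(inited_);

    if (!CheckOpenNIError(Context.StartGeneratingAll(),
                          "Start generating failed"))
        return;

    while (!SignalExitApp)
    {
        // Block until the user generator has fresh data, then update all nodes
        Context.WaitOneUpdateAll(UserGenerator);

        XnUserID users[15];
        XnUInt16 nUsers = 15;
        UserGenerator.GetUsers(users, nUsers);

        for (XnUInt16 i = 0; i < nUsers; ++i)
        {
            if (!UserGenerator.GetSkeletonCap().IsTracking(users[i]))
                continue;
            // Joint data for users[i] is now available; Part 3 turns it
            // into joint angles for the robot.
        }
    }
}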
// Initialize OpenGL
void TrackerViewer::initOpenGL(int *argc, char **argv)
{
    glutInit(argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutInitWindowSize(ImageInfo.nXRes, ImageInfo.nYRes);
    glutCreateWindow("User Tracker Viewer");
    //glutFullScreen();
    glutSetCursor(GLUT_CURSOR_NONE);

    glutKeyboardFunc(glutKeyboard);
    glutDisplayFunc(glutDisplay);
    glutIdleFunc(glutIdle);

    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);

    glEnableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
}
//------------------------------------------------------------------------------
// OpenGL Callbacks
//------------------------------------------------------------------------------
void TrackerViewer::glutDisplay()
{
    if (TrackerViewer::getInstance().SignalExitApp)
        exit(0);
    
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Set up the OpenGL viewpoint
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();

    static TrackerViewer &trackerViewer = TrackerViewer::getInstance();

    xn::DepthMetaData depthMD;
    trackerViewer.DepthGenerator.GetMetaData(depthMD);
    glOrtho(0, depthMD.XRes(), depthMD.YRes(), 0, -1.0, 1.0);

    glDisable(GL_TEXTURE_2D);

    trackerViewer.Context.WaitOneUpdateAll(trackerViewer.UserGenerator);

    xn::SceneMetaData sceneMD;
    trackerViewer.DepthGenerator.GetMetaData(depthMD);
    trackerViewer.UserGenerator.GetUserPixels(0, sceneMD);

    DrawDepthMap(depthMD, sceneMD);

    glutSwapBuffers();
}

void TrackerViewer::glutIdle()
{
    if (TrackerViewer::getInstance().SignalExitApp)
        exit(0);
    
    glutPostRedisplay();
}

void TrackerViewer::glutKeyboard(unsigned char key, int x, int y)
{
    switch (key)
    {
    case 27:  // ESC: request a clean exit from within the GLUT callbacks
        TrackerViewer::getInstance().SignalExitApp = true;
        break;
    default:
        break;
    }
}
        The code above implements the GUI's logic. The key points:

// Map the depth image captured by the Kinect into OpenGL's 2D coordinate system
    xn::DepthMetaData depthMD;
    trackerViewer.DepthGenerator.GetMetaData(depthMD);
    glOrtho(0, depthMD.XRes(), depthMD.YRes(), 0, -1.0, 1.0);

    glDisable(GL_TEXTURE_2D);

// Wait for the Kinect to deliver a new frame
    trackerViewer.Context.WaitOneUpdateAll(trackerViewer.UserGenerator);

// Grab the depth image and the per-pixel user labels, in preparation for
// drawing the skeleton points and computing joint angles
    xn::SceneMetaData sceneMD;
    trackerViewer.DepthGenerator.GetMetaData(depthMD);
    trackerViewer.UserGenerator.GetUserPixels(0, sceneMD);

// Draw the skeleton image and compute the joint angles; see "Kinect
// Somatosensory Robot (Part 3): Computing Joint Angles with Space Vectors"
    DrawDepthMap(depthMD, sceneMD);
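
        DrawDepthMap() and the joint-angle math are the subject of Part 3. As a preview of what the freshly updated data makes possible, this is roughly how one joint's real-world position is read from the skeleton capability (a minimal sketch; getJointPosition is a hypothetical helper, not from the original code):

// Fetch a tracked user's joint position in real-world millimeters
bool getJointPosition(xn::UserGenerator &userGenerator, XnUserID user,
                      XnSkeletonJoint joint, XnPoint3D &out)
{
    if (!userGenerator.GetSkeletonCap().IsTracking(user))
        return false;

    XnSkeletonJointPosition pos;
    userGenerator.GetSkeletonCap().GetSkeletonJointPosition(user, joint, pos);

    // Low confidence usually means the joint is occluded or extrapolated
    if (pos.fConfidence < 0.5)
        return false;

    out = pos.position;
    return true;
}

        Called as, for example, getJointPosition(trackerViewer.UserGenerator, nId, XN_SKEL_TORSO, torso), it yields one of the 3D points from which Part 3 builds its space vectors.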

The OpenNI Configuration XML

        This design configures OpenNI with an XML file plus API calls; the XML file used is as follows:

<OpenNI>
	<Licenses>
		<!-- Add application-specific licenses here 
		<License vendor="vendor" key="key"/>
		-->
	</Licenses>
	<Log writeToConsole="false" writeToFile="false">
		<!-- 0 - Verbose, 1 - Info, 2 - Warning, 3 - Error (default) -->
		<LogLevel value="3"/>
		<Masks>
			<Mask name="ALL" on="true"/>
		</Masks>
		<Dumps>
		</Dumps>
	</Log>
	<ProductionNodes>
		<!-- Set global mirror -->
		<GlobalMirror on="true"/>
		
		<!-- Create a depth node and give it a name alias (useful if referenced ahead in this script) -->
		<Node type="Depth" name="MyDepth">
			<Query>
				<!-- Uncomment to filter by vendor name, product name, etc.
				<Vendor>MyVendor inc.</Vendor>
				<Name>MyProduct</Name>
				<MinVersion>1.2.3.4</MinVersion>
				<Capabilities>
					<Capability>Cropping</Capability>
				</Capabilities>
				-->
			</Query>
			<Configuration>
				<MapOutputMode xRes="640" yRes="480" FPS="30"/> 

				<!-- Uncomment to override global mirror
				<Mirror on="false" /> 
				-->
			</Configuration>
		</Node>

	</ProductionNodes>
</OpenNI>
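
        Note the name="MyDepth" alias on the depth node: besides FindExistingNode(), a node created from the XML script can be fetched by that alias. A small sketch, assuming OpenNI 1.x's xn::Context API (the error text is illustrative):

// Look up the depth node the XML created, using its alias "MyDepth"
xn::DepthGenerator depth;
XnStatus status = context.GetProductionNodeByName("MyDepth", depth);
if (XN_STATUS_OK != status)
    NsLog()->error("Depth node 'MyDepth' not found; check the XML");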

Results

[Screenshots of the skeleton-tracking viewer were shown here.]