[OpenCV Learning] Computing the Overlapping Region of Two Images

Posted by 一點心青 on 2013-08-02

Problem description: Given two images, Image1 and Image2, compute their overlapping region and mark that region in both Image1 and Image2.

Algorithm idea:

If the two images overlap, matching them yields a complete panorama, so the problem can be reduced to an image-matching problem.

Image matching alone can blend the two images into a panorama, but it cannot mark the overlapping region in the original images.

If each image is treated as a polygon, computing their overlapping region amounts to computing the intersection of the two polygons.

Polygon intersection yields the vertex set of the overlapping region; the homography matrix then maps these vertices back into the original images, which marks the overlapping region.

Algorithm steps:

1. Match the two images and obtain the homography matrix.

2. Use the homography matrix to compute the transformed corner points of image 2 (see the sketch after this list).

3. Compute the polygon intersection of image 1's corner points and image 2's transformed corner points.

4. Use the inverse of the homography matrix to map the intersection polygon back to its original coordinates in image 2.
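Steps 2 and 4 amount to applying the 3x3 homography (or its inverse) to a handful of polygon vertices. The listing below multiplies the matrix by hand; as a minimal alternative sketch (MapCorners and its parameters are illustrative names, not part of the original code), cv::perspectiveTransform performs the same projective mapping, including the division by the homogeneous coordinate:

#include <opencv2/opencv.hpp>

// Map image 2's corner points into image 1's frame (step 2); step 4 uses H.inv().
// H is the 3x3 homography from image 2 to image 1, size2 is image 2's size.
void MapCorners(const cv::Mat &H, const cv::Size &size2,
                std::vector<cv::Point2f> &cornersInImg1)
{
    std::vector<cv::Point2f> corners2;
    corners2.push_back(cv::Point2f(0.f, 0.f));
    corners2.push_back(cv::Point2f(0.f, (float)size2.height));
    corners2.push_back(cv::Point2f((float)size2.width, (float)size2.height));
    corners2.push_back(cv::Point2f((float)size2.width, 0.f));

    // x' = H * x, followed by division by the third homogeneous coordinate
    cv::perspectiveTransform(corners2, cornersInImg1, H);

    // Step 4 would map the intersection polygon back into image 2:
    // cv::perspectiveTransform(intersectionInImg1, intersectionInImg2, H.inv());
}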

The code implementation is shown below:

// Assumed includes for the OpenCV 2.x API used below (cv::SIFT lives in the nonfree module).
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp>

using namespace cv;
using namespace std;

// Polygon intersection; see the post linked at the end of this article for an implementation.
// The signature here is inferred from the call site below.
bool PolygonClip(const std::vector<cv::Point> &poly1,
                 const std::vector<cv::Point> &poly2,
                 std::vector<cv::Point> &interPoly);

bool ImageOverlap(cv::Mat &img1, cv::Mat &img2,
                  std::vector<cv::Point> &vPtsImg1, std::vector<cv::Point> &vPtsImg2)
{
    cv::Mat g1(img1, Rect(0, 0, img1.cols, img1.rows));
    cv::Mat g2(img2, Rect(0, 0, img2.cols, img2.rows));

    cv::cvtColor(g1, g1, CV_BGR2GRAY);
    cv::cvtColor(g2, g2, CV_BGR2GRAY);

    std::vector<cv::KeyPoint> keypoints_roi, keypoints_img;  /* keypoints found using SIFT */
    cv::Mat descriptor_roi, descriptor_img;                  /* descriptors for SIFT */
    cv::FlannBasedMatcher matcher;                           /* FLANN-based matcher for the keypoints */
    std::vector<cv::DMatch> matches, good_matches;
    cv::SIFT sift;

    sift(g1, Mat(), keypoints_roi, descriptor_roi);          /* keypoints and descriptors of image 1 */
    sift(g2, Mat(), keypoints_img, descriptor_img);          /* keypoints and descriptors of image 2 */
    matcher.match(descriptor_roi, descriptor_img, matches);

    double max_dist = 0;
    double min_dist = 1000;

    //-- Quick calculation of max and min distances between matched keypoints
    for (int i = 0; i < descriptor_roi.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }

    //-- Keep only matches whose distance is below 3 * min_dist
    for (int i = 0; i < descriptor_roi.rows; i++)
    {
        if (matches[i].distance < 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }

    //printf("%ld no. of matched keypoints in right image\n", good_matches.size());
    /* Draw matched keypoints */
    //Mat img_matches;
    //drawMatches(img1, keypoints_roi, img2, keypoints_img,
    //    good_matches, img_matches, Scalar::all(-1),
    //    Scalar::all(-1), vector<char>(),
    //    DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    //imshow("matches", img_matches);

    vector<Point2f> keypoints1, keypoints2;
    for (size_t i = 0; i < good_matches.size(); i++)
    {
        keypoints1.push_back(keypoints_img[good_matches[i].trainIdx].pt);  // matched points in image 2
        keypoints2.push_back(keypoints_roi[good_matches[i].queryIdx].pt);  // corresponding points in image 1
    }
    // Compute the homography matrix (maps image 2 coordinates into image 1)
    Mat H = findHomography(keypoints1, keypoints2, CV_RANSAC);

    // Show the stitched image
    //cv::Mat stitchedImage;
    //int mRows = img2.rows;
    //if (img1.rows > img2.rows)
    //{
    //    mRows = img1.rows;
    //}
    //stitchedImage = Mat::zeros(img2.cols+img1.cols, mRows, CV_8UC3);
    //warpPerspective(img2, stitchedImage, H, Size(img2.cols+img1.cols, mRows));
    //Mat half(stitchedImage, Rect(0, 0, img1.cols, img1.rows));
    //img1.copyTo(half);
    //imshow("stitchedImage", stitchedImage);

    // Corner points of image 1 and image 2
    std::vector<cv::Point> vSrcPtsImg1;
    std::vector<cv::Point> vSrcPtsImg2;

    vSrcPtsImg1.push_back(cv::Point(0, 0));
    vSrcPtsImg1.push_back(cv::Point(0, img1.rows));
    vSrcPtsImg1.push_back(cv::Point(img1.cols, img1.rows));
    vSrcPtsImg1.push_back(cv::Point(img1.cols, 0));

    vSrcPtsImg2.push_back(cv::Point(0, 0));
    vSrcPtsImg2.push_back(cv::Point(0, img2.rows));
    vSrcPtsImg2.push_back(cv::Point(img2.cols, img2.rows));
    vSrcPtsImg2.push_back(cv::Point(img2.cols, 0));

    // Map image 2's corner points into image 1's coordinate frame
    std::vector<cv::Point> vWarpPtsImg2;
    for (size_t i = 0; i < vSrcPtsImg2.size(); i++)
    {
        cv::Mat srcMat = Mat::zeros(3, 1, CV_64FC1);
        srcMat.at<double>(0, 0) = vSrcPtsImg2[i].x;
        srcMat.at<double>(1, 0) = vSrcPtsImg2[i].y;
        srcMat.at<double>(2, 0) = 1.0;

        cv::Mat warpMat = H * srcMat;
        cv::Point warpPt;
        warpPt.x = cvRound(warpMat.at<double>(0, 0) / warpMat.at<double>(2, 0));
        warpPt.y = cvRound(warpMat.at<double>(1, 0) / warpMat.at<double>(2, 0));

        vWarpPtsImg2.push_back(warpPt);
    }

    // Intersect image 1's corner polygon with image 2's warped corner polygon
    if (!PolygonClip(vSrcPtsImg1, vWarpPtsImg2, vPtsImg1))
        return false;

    // Map the intersection polygon back into image 2 with the inverse homography
    for (size_t i = 0; i < vPtsImg1.size(); i++)
    {
        cv::Mat srcMat = Mat::zeros(3, 1, CV_64FC1);
        srcMat.at<double>(0, 0) = vPtsImg1[i].x;
        srcMat.at<double>(1, 0) = vPtsImg1[i].y;
        srcMat.at<double>(2, 0) = 1.0;

        cv::Mat warpMat = H.inv() * srcMat;
        cv::Point warpPt;
        warpPt.x = cvRound(warpMat.at<double>(0, 0) / warpMat.at<double>(2, 0));
        warpPt.y = cvRound(warpMat.at<double>(1, 0) / warpMat.at<double>(2, 0));
        vPtsImg2.push_back(warpPt);
    }
    return true;
}
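The listing above targets the OpenCV 2.x API (cv::SIFT from the nonfree module, CV_BGR2GRAY, CV_RANSAC). As a minimal, non-authoritative sketch, the matching-and-homography part of step 1 could look roughly like this under OpenCV 4.4+, where SIFT is again part of the main module; EstimateHomography and its variable names are illustrative and not part of the original code:

#include <opencv2/opencv.hpp>

// Sketch: estimate the homography that maps img2 coordinates into img1 (OpenCV 4.x API).
cv::Mat EstimateHomography(const cv::Mat &img1, const cv::Mat &img2)
{
    cv::Mat g1, g2;
    cv::cvtColor(img1, g1, cv::COLOR_BGR2GRAY);
    cv::cvtColor(img2, g2, cv::COLOR_BGR2GRAY);

    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kps1, kps2;
    cv::Mat desc1, desc2;
    sift->detectAndCompute(g1, cv::noArray(), kps1, desc1);
    sift->detectAndCompute(g2, cv::noArray(), kps2, desc2);

    cv::FlannBasedMatcher matcher;
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    // Same filtering idea as in the listing above: keep matches within 3 * min distance.
    double minDist = 1000;
    for (size_t i = 0; i < matches.size(); i++)
        if (matches[i].distance < minDist) minDist = matches[i].distance;

    std::vector<cv::Point2f> pts2, pts1;
    for (size_t i = 0; i < matches.size(); i++)
    {
        if (matches[i].distance < 3 * minDist)
        {
            pts2.push_back(kps2[matches[i].trainIdx].pt);  // points in image 2 (train set)
            pts1.push_back(kps1[matches[i].queryIdx].pt);  // corresponding points in image 1 (query set)
        }
    }
    // H maps image 2 coordinates to image 1 coordinates, as in the listing above.
    return cv::findHomography(pts2, pts1, cv::RANSAC);
}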

The polygon intersection routine (PolygonClip) used above is described here: http://www.cnblogs.com/dwdxdy/p/3232110.html
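
With ImageOverlap and PolygonClip in place, marking the overlap (the second half of the problem statement) is just a matter of drawing each returned point set as a closed polygon, for example with cv::polylines. A minimal usage sketch, assuming hypothetical input files image1.jpg and image2.jpg:

#include <opencv2/opencv.hpp>

int main()
{
    // Hypothetical input files; substitute your own overlapping images.
    cv::Mat img1 = cv::imread("image1.jpg");
    cv::Mat img2 = cv::imread("image2.jpg");

    std::vector<cv::Point> overlap1, overlap2;
    if (ImageOverlap(img1, img2, overlap1, overlap2))
    {
        // Draw the overlap region as a closed red polygon on each image.
        std::vector<std::vector<cv::Point> > polys1(1, overlap1);
        std::vector<std::vector<cv::Point> > polys2(1, overlap2);
        cv::polylines(img1, polys1, true, cv::Scalar(0, 0, 255), 2);
        cv::polylines(img2, polys2, true, cv::Scalar(0, 0, 255), 2);

        cv::imshow("Image1 overlap", img1);
        cv::imshow("Image2 overlap", img2);
        cv::waitKey(0);
    }
    return 0;
}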

Finally, the result of running the program is illustrated in the figure below:
