[PYTHON] Image matching & point cloud alignment with PatchMatch

Let's play with PatchMatch.

What is PatchMatch? (Roughly)


A rough overview: PatchMatch is an algorithm that finds, for every pixel in one image, a similar pixel in another image.

In Photoshop it is used for things like content-aware fill, erasing a specific object from a picture.

The basic idea is as follows.

1. Randomly initialize a correspondence between every pixel of the source image and the target image.
Eventually, these pairs of correspondences converge to sets of similar pixels.

2. Based on the idea that adjacent pixels are likely to belong to the same part of the scene,
compare the similarity of the current match with the similarity of the adjacent pixels' matches,
and if a neighbor's candidate scores higher, update the correspondence to it.

3. Repeat the above until the correspondences converge (a rough sketch follows below).
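
As a reference, here is a minimal sketch of this loop that I wrote for illustration. It is not the implementation used later in this article; the patch size, iteration count, SAD cost, and same-sized input images are all assumptions:

import numpy as np

def sad(A, B, ay, ax, by, bx, p):
    # Sum of absolute differences between the p x p patches at (ay, ax) and (by, bx).
    pa = A[ay:ay + p, ax:ax + p].astype(np.float32)
    pb = B[by:by + p, bx:bx + p].astype(np.float32)
    return np.abs(pa - pb).sum()

def patchmatch(A, B, iters=5, p=3):
    h, w = A.shape[0] - p, A.shape[1] - p
    # 1. Random initialization of the nearest-neighbor field f: pixel in A -> pixel in B.
    f = np.stack([np.random.randint(0, h, (h, w)),
                  np.random.randint(0, w, (h, w))], axis=-1)
    cost = np.array([[sad(A, B, y, x, f[y, x][0], f[y, x][1], p)
                      for x in range(w)] for y in range(h)])
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                # 2. Propagation: adopt an adjacent pixel's match if it scores better.
                for dy, dx in ((-1, 0), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if ny >= 0 and nx >= 0:
                        by = min(f[ny, nx][0] - dy, h - 1)
                        bx = min(f[ny, nx][1] - dx, w - 1)
                        c = sad(A, B, y, x, by, bx, p)
                        if c < cost[y, x]:
                            f[y, x], cost[y, x] = (by, bx), c
                # 3. Random search: sample candidates in a shrinking window around the best match.
                r = max(h, w)
                while r >= 1:
                    by = np.clip(f[y, x][0] + np.random.randint(-r, r + 1), 0, h - 1)
                    bx = np.clip(f[y, x][1] + np.random.randint(-r, r + 1), 0, w - 1)
                    c = sad(A, B, y, x, by, bx, p)
                    if c < cost[y, x]:
                        f[y, x], cost[y, x] = (by, bx), c
                    r //= 2
    return f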

PatchMatch is sometimes used like template matching. Beyond that, the same assumption that adjacent pixels belong to the same surface ("the same surface has the same depth") is what COLMAP uses to estimate depth, so the idea appears to be widely applicable.

Try matching two images.

The important point in this article is that PatchMatch lets you identify similar parts of two images at the pixel level.


When doing SfM, the term feature point matching often comes up, but since that process depends on detected feature points, it is easily affected by the environment, and the processing fails when few feature points are found.

PatchMatch should let us increase the number of matches. With that in mind, let's play with the following processing.

1. Run PatchMatch on the two images, computing for every pixel the most similar pixel using SAD (sum of absolute differences).
2. Run RANSAC on the matched pairs to remove outliers (see the sketch after step 5 below).

Let's see how many correspondences PatchMatch can produce and how accurate they are. After that, I will try building point clouds and aligning them using the RGB-D images and the camera intrinsics. If the matching works well, the alignment may also work well without being affected by noise.

3. Build point clouds from the set of extracted pixels.
4. Align the extracted point clouds.
5. Align the full point clouds using the transformation obtained in step 4.
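
Step 2 above boils down to a single OpenCV call. A minimal sketch with synthetic data (the point arrays here are placeholders; in the real pipeline they come from PatchMatch):

import cv2
import numpy as np

# placeholder matched pixel coordinates, shaped (N, 1, 2)
rng = np.random.default_rng(0)
src_pts = rng.uniform(0, 640, (200, 1, 2)).astype(np.float32)
dst_pts = src_pts + np.float32([5, 3])          # true motion: a simple shift
dst_pts[:20] = rng.uniform(0, 640, (20, 1, 2))  # inject some outliers

# fit a homography with RANSAC; mask marks inliers (1) and outliers (0)
M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
print(int(mask.sum()), "of", len(src_pts), "matches kept")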

Code

https://github.com/MingtaoGuo/PatchMatch

The core processing borrows from the code above. Used as-is, it turns Maria's face into an avatar, so I modified it to use only the raw execution result of PatchMatch.

reconstruction.py



import cv2
import numpy as np


def reconstruction(f, A, B, AdepthPath="", BdepthPath=""):
    A_h = np.size(A, 0)
    A_w = np.size(A, 1)
    temp = np.zeros_like(A)

    srcA=[]
    dstB=[]

    colorA=np.zeros_like(A)
    colorB=np.zeros_like(A)

    for i in range(A_h):
        for j in range(A_w):
            colorA[i, j, :] = A[i, j, :]
            colorB[i, j, :] = B[f[i, j][0], f[i, j][1], :]

            temp[i, j, :] = B[f[i, j][0], f[i, j][1], :]
            srcA.append([i,j])
            dstB.append([f[i, j][0], f[i, j][1]])

    # colorB reconstructs image A using the similar pixels of image B found by
    # PatchMatch; colorA is image A itself, saved for comparison.
    cv2.imwrite('colorB.jpg', colorB)
    cv2.imwrite('colorA.jpg', colorA)

    src=np.array(srcA)
    dst=np.array(dstB)
    print(src.shape)
    print(dst.shape)

    srcA_pts = src.reshape(-1, 1, 2)
    dstB_pts = dst.reshape(-1, 1, 2)
    # Estimate a homography with RANSAC; mask marks the inlier matches
    print(srcA_pts.shape)
    print(dstB_pts.shape)

    M, mask = cv2.findHomography(srcA_pts, dstB_pts, cv2.RANSAC, 5.0)
    matchesMask = mask.ravel().tolist()

    im_h = cv2.hconcat([A,B])
    cv2.imwrite('outputMerge.jpg', im_h)

    outputA = np.zeros_like(A)
    outputB = np.zeros_like(A)

    for i in range(srcA_pts.shape[0]):
        if mask[i] == 0: continue

        srcIm = [srcA_pts[i][0][0],srcA_pts[i][0][1]]
        dstIm = [dstB_pts[i][0][0],dstB_pts[i][0][1]]

        outputA[srcIm[0],srcIm[1],:]=A[srcIm[0],srcIm[1],:]
        outputB[dstIm[0],dstIm[1],:]=B[dstIm[0],dstIm[1],:]

    im_h = cv2.hconcat([outputA, outputB])
    cv2.imwrite('outputMatch.jpg', im_h)

    im_h = cv2.hconcat([A, B])

    for i in range(srcA_pts.shape[0]):
        if mask[i] == 0: continue

        srcIm = [srcA_pts[i][0][0],srcA_pts[i][0][1]]
        dstIm = [dstB_pts[i][0][0],dstB_pts[i][0][1]]
        # cv2.line expects (x, y) = (column, row); offset B's x by A's width in the concatenated image
        cv2.line(im_h, (int(srcIm[1]), int(srcIm[0])),
                 (int(dstIm[1]) + A_w, int(dstIm[0])),
                 (0, 255, 0), thickness=1, lineType=cv2.LINE_4)

    cv2.imwrite('outputMatchAddLine.jpg', im_h)

    if AdepthPath != "":
        # If depth data and camera intrinsics are available, build point clouds
        # and align them using the matching information.
        # If you only need the PatchMatch matching, the processing below is unnecessary.

        import open3d as o3d
        def CPD_rejister(source, target):
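            # Coherent Point Drift (CPD): rigid registration between two point clouds.
            # probreg's registration_cpd returns a parameter object whose rot / t / scale
            # attributes describe the estimated rigid transformation.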
            from probreg import cpd
            import copy
            tf_type = 'rigid'
            tf_param, _, _ = cpd.registration_cpd(source, target, tf_type)

            result = copy.deepcopy(source)
            result.points = tf_param.transform(result.points)

            return result, tf_param.rot, tf_param.t, tf_param.scale

        def Register(pclMVS_Main, pclMVS_Target):
            # Step 1: run CPD to register the target cloud onto the main cloud.
            result, rot, t, scale = CPD_rejister(pclMVS_Target, pclMVS_Main)

            # Step 2: compose the estimated rotation/translation/scale into a 4x4 transform.
            lastRow = np.array([[0, 0, 0, 1]])
            ret_R = np.array(rot)
            ret_t = np.array([t])
            ret_R = scale * ret_R

            transformation = np.concatenate((ret_R, ret_t.T), axis=1)
            transformation = np.concatenate((transformation, lastRow), axis=0)
            return transformation, rot, t, scale

        def getPLYfromNumpy_RGB(nplist, colorList):
            # nplist = np.array(nplist)
            pcd = o3d.geometry.PointCloud()
            pcd.points = o3d.utility.Vector3dVector(nplist)
            pcd.colors = o3d.utility.Vector3dVector(colorList)
            return pcd

        def numpy2Dto1D(arr):
            if not isinstance(arr, np.ndarray):
                arr = np.array(arr)
            if arr.shape == (3, 1):
                return np.array([arr[0][0], arr[1][0], arr[2][0]])
            if arr.shape == (2, 1):
                return np.array([arr[0][0], arr[1][0]])
            else:
                assert False, "numpy2Dto1D:Not compatible"

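        # Back-project pixel (u, v) with depth d into camera coordinates using the
        # pinhole model: X = (u - cx) * d / fx, Y = (v - cy) * d / fy, Z = d.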
        def TransformPointI2C(pixel2D, K):
            X = float(float(pixel2D[0] - K[0][2]) * pixel2D[2] / K[0][0])
            Y = float(float(pixel2D[1] - K[1][2]) * pixel2D[2] / K[1][1])
            Z = pixel2D[2]
            CameraPos3D = np.array([[X], [Y], [Z]])
            return CameraPos3D

        def getDepthCSPerView(path):
            import csv
            #depth data acquisition
            in_csvPath = path
            with open(in_csvPath) as f:
                reader = csv.reader(f)
                csvlist = [row for row in reader]

            return csvlist

        #depth data(CSV)
        Adepthlist = getDepthCSPerView(AdepthPath)
        Bdepthlist = getDepthCSPerView(BdepthPath)
        pclA=[]
        pclB=[]
        colorA=[]
        colorB=[]

        ALLpclA=[]
        ALLpclB=[]
        ALLcolorA=[]
        ALLcolorB=[]

        # Restore only points within a 1.5 m depth range to the point cloud
        depth_threshold = 1.5  # meters
        depth_scale = 0.0002500000118743628  # depth units -> meters (RealSense depth scale)
        threshold = depth_threshold / depth_scale

        #RGB camera internal parameters
        # width: 1280, height: 720, ppx: 648.721, ppy: 365.417, fx: 918.783, fy: 919.136,
        retK = np.array([[918.783, 0, 648.721],
                         [0, 919.136, 365.417],
                         [0, 0, 1]])


        cnt = 0

        for y in range(len(Adepthlist)):
            for x in range(len(Adepthlist[0])):
                ADepth = float(Adepthlist[y][x])
                BDepth = float(Bdepthlist[y][x])

                if (ADepth == 0 or ADepth > threshold) or (BDepth == 0 or BDepth > threshold):
                    continue

                AXYZ = TransformPointI2C([x,y, ADepth], retK)
                BXYZ = TransformPointI2C([x,y, BDepth], retK)

                ALLpclA.append(numpy2Dto1D(AXYZ))
                ALLpclB.append(numpy2Dto1D(BXYZ))
                # use the pixel's own color (note: OpenCV images are BGR)
                color = A[y][x]
                ALLcolorA.append([float(color[0] / 255), float(color[1] / 255), float(color[2] / 255)])

                color = B[y][x]
                ALLcolorB.append([float(color[0] / 255), float(color[1] / 255), float(color[2] / 255)])

        ALLpclA = getPLYfromNumpy_RGB(ALLpclA,ALLcolorA)
        ALLpclB = getPLYfromNumpy_RGB(ALLpclB,ALLcolorB)
        o3d.io.write_point_cloud("ALL_pclA_Before.ply", ALLpclA)
        o3d.io.write_point_cloud("ALL_pclB_Before.ply", ALLpclB)

        for i in range(srcA_pts.shape[0]):
            if mask[i] == 0: continue

            srcIm = [srcA_pts[i][0][0], srcA_pts[i][0][1]]
            dstIm = [dstB_pts[i][0][0], dstB_pts[i][0][1]]

            ADepth = float(Adepthlist[int(srcIm[0])][int(srcIm[1])])
            BDepth = float(Bdepthlist[int(dstIm[0])][int(dstIm[1])])


            if (ADepth == 0 or ADepth > threshold) or (BDepth == 0 or BDepth > threshold):
                continue

            AXYZ = TransformPointI2C([int(srcIm[1]), int(srcIm[0]), ADepth], retK)
            BXYZ = TransformPointI2C([int(dstIm[1]), int(dstIm[0]), BDepth], retK)

            pclA.append(numpy2Dto1D(AXYZ))
            pclB.append(numpy2Dto1D(BXYZ))
            color = A[int(srcIm[0])][int(srcIm[1])]
            colorA.append([float(color[0] / 255), float(color[1] / 255), float(color[2] / 255)])

            color = B[int(dstIm[0])][int(dstIm[1])]
            colorB.append([float(color[0] / 255), float(color[1] / 255), float(color[2] / 255)])


        pclA = getPLYfromNumpy_RGB(pclA,colorA)
        pclB = getPLYfromNumpy_RGB(pclB,colorB)

        o3d.io.write_point_cloud("pclA_Before.ply", pclA)
        o3d.io.write_point_cloud("pclB_Before.ply", pclB)

        trans, rot, t, scale = Register(pclA, pclB)
        pclB.transform(trans)

        o3d.io.write_point_cloud("pclA_After.ply", pclA)
        o3d.io.write_point_cloud("pclB_After.ply", pclB)

        ALLpclB.transform(trans)
        o3d.io.write_point_cloud("ALL_pclA_After.ply", ALLpclA)
        o3d.io.write_point_cloud("ALL_pclB_After.ply", ALLpclB)

Result: Image matching

I will try it.

outputMerge.jpg

The two input images, concatenated, are shown above. Drawing the matched pairs as lines gives:

outputMatchAddLine.jpg

I can't really make sense of this, so only the pixels that were matched and kept as inliers by RANSAC are extracted and displayed.

outputMatch.jpg

It is still not much clearer, but the shapes do seem to match. I feel this could be used for tracking, motion detection, and the like.

Result: Point cloud alignment

I am not sure the result will be as good as the previous one, but let's align the point clouds of the two viewpoints based on the extracted correspondences.

3. Build point clouds from the set of extracted pixels.
4. Align the extracted point clouds.
5. Align the full point clouds using the transformation obtained in step 4.

This:

無題.png

becomes this:

無題.png

Hmm! The result is mediocre! That's all.

[Addition]

無題.png

Since the depth in the previous data was not accurate, I validated with different data.

Even though the point cloud is full of noise, perhaps thanks to the successful 2D matching, the alignment came out well.

This time I used only RANSAC to remove outliers during matching. Since I went to the trouble of building point clouds, next time I will also try removing outliers based on the 3D distance between matched points (a rough sketch below).
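
A possible approach, sketched here for illustration: after alignment, measure the 3D distance between each matched pair and drop pairs whose distance is far from the median. The filter and its threshold k are my assumptions, not something implemented above:

import numpy as np

def filter_matches_by_distance(ptsA, ptsB, k=2.5):
    # ptsA, ptsB: (N, 3) arrays of matched 3D points after alignment
    d = np.linalg.norm(ptsA - ptsB, axis=1)
    med = np.median(d)
    mad = np.median(np.abs(d - med)) + 1e-9  # robust estimate of spread
    keep = np.abs(d - med) < k * mad         # k is a hand-tuned threshold
    return ptsA[keep], ptsB[keep], keep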
