[Python] Learn with the Pixel 4 camera: Depth map 2 (Dark shading correction)

Introduction

This article extracts only the code portion covered in the series I am publishing on Note. If you are interested in the technical background, please see the Note articles. Some of the code is slightly modified from the references; since I judged that merely modifying it gives me no claim to originality, I have not posted that code here (apologies that this makes the article harder to follow). Only the parts I wrote from scratch are listed this time. The references are very instructive and explain the implementation in detail, so please consult them.

Reference: "RAW development from scratch, which you can do with Python and Colab"

Importance of image preparation

As with machine learning and other image processing, we rarely use images straight from the camera. In their raw state they contain noise, unintended information, and frame-to-frame variation, and these effects make it likely that the intended result will not be obtained. Some pre-processing is therefore needed to suppress this variability and get the intended result.

Types of pre-processing performed

The following pre-processing steps are applied in order. They are collected in the Dark class, and the functions that appear below are implemented as methods of that class.

1. Dark shading correction
2. Noise processing
3. Edge enhancement processing
4. Conversion to the specified format and output

What is Dark Shading Correction?

If you shoot with a smartphone or a cheap camera module, the output value drops toward the edges of the image due to the influence of the lens. If such an image is used as-is, the edges stay dark, so a gain must be applied to the edges beforehand to lift the output. This lifting process is called dark shading correction. Here it is applied to Dual-Pixel images, based on the reference materials above.
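The correction itself amounts to a per-pixel multiplication of the image by a gain map. As a minimal illustrative sketch (the function name, white level, and dtype handling here are my assumptions, not the article's actual code):

```python
import numpy as np

def apply_shading_gain(img: np.ndarray, gain_map: np.ndarray,
                       white_level: int = 1023) -> np.ndarray:
    """Lift an image by a per-pixel gain map, clipping to the sensor range."""
    lifted = img.astype(np.float32) * gain_map
    return np.clip(lifted, 0, white_level).astype(img.dtype)
```

Pixels near the center keep a gain close to 1.0, while edge pixels get a larger gain, so the darkened edges are lifted back toward the center brightness.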

Radial distance extraction from the image center

First, we extract the shading profile from a reference image. With proper equipment you could align the optical axis exactly, but assuming no such facility is available, I wrote the function with some positional flexibility.

dark.py


    def _calc_radials(self, kernel_size: int = 32,
                      offset_x: int = 0, offset_y: int = 0) -> [int]:
        ''' Calculate radials
        Parameters
        ----------
        kernel_size: int
            kernel size, default value is 32
        offset_x: int
            offset for the x-axis, to allow flexible center positioning
        offset_y: int
            offset for the y-axis, to allow flexible center positioning

        Returns
        -------
        radials: [int]
            radial distance of each position
        '''
        radials = []
        for y in range(0, self._img_height, kernel_size):
            for x in range(0, self._img_width - kernel_size, kernel_size):
                # The core calculation is almost identical to the reference
                # material (p. 60-), so it is omitted here; see the references.
                pass

        return radials
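Since the core of the calculation is omitted (it follows the reference material, p. 60-), here is a purely illustrative standalone sketch of what such a radial calculation could look like, assuming a Euclidean distance from the offset image center. The free-function form and all details are my guess, not the article's implementation:

```python
import numpy as np

def calc_radials(img_width: int, img_height: int, kernel_size: int = 32,
                 offset_x: int = 0, offset_y: int = 0) -> list:
    """Radial distance of each kernel block position from the (offset) center."""
    center_x = img_width // 2 + offset_x
    center_y = img_height // 2 + offset_y
    radials = []
    # scan blocks in the same order as the profile extraction below
    for y in range(0, img_height, kernel_size):
        for x in range(0, img_width - kernel_size, kernel_size):
            r = np.sqrt((x - center_x) ** 2 + (y - center_y) ** 2)
            radials.append(int(r))
    return radials
```

The offsets simply shift the assumed optical center, which is the "degree of freedom" mentioned above for images shot without optical-axis alignment.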

Extracting the average value of each block in the image

Next is a function that extracts a value for each point from the reference image. The average value inside each kernel is computed, following the references. The difference from the references is that the PGM image is not a Bayer array but luminance-only, which simplifies retrieving the data from the array.

dark.py


    def _extract_shading_profile(self, img: np.ndarray, kernel_size: int = 32):
        ''' Extract the shading profile from an image
        Parameters
        ----------
        img: np.ndarray
            reference image input
        kernel_size: int
            kernel size for profile extraction; must match the kernel size
            used in the radial calculation
        Returns
        -------
        val: array
            extracted profile values
        '''
        val = [[], [], []]
        for y in range(0, self._img_height, kernel_size):
            for x in range(0, self._img_width - kernel_size, kernel_size):
                # The core calculation is almost identical to the reference
                # material (p. 61-), so it is omitted here; see the references.
                pass
        return val
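Again the core is omitted (reference material, p. 61-). As an illustration only, a single-channel block-mean extraction might look like the sketch below; I use one flat list since the PGM input is luminance-only, whereas the article's method keeps a three-element `val`, presumably one list per channel. The function name and details are assumptions:

```python
import numpy as np

def extract_block_means(img: np.ndarray, kernel_size: int = 32) -> list:
    """Mean of each kernel_size x kernel_size block, scanned in the same
    order as the radial calculation so the two lists line up."""
    height, width = img.shape
    means = []
    for y in range(0, height, kernel_size):
        for x in range(0, width - kernel_size, kernel_size):
            block = img[y:y + kernel_size, x:x + kernel_size]
            means.append(float(block.mean()))
    return means
```

Because the scan order matches `calc_radials`, element i of the means pairs with element i of the radial list for the fitting step that follows.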

Profile approximation curve calculation

Next, fit an approximation curve using the radials and the profile extracted from the reference image. This is also a slight modification of the reference materials.

dark.py


    def _proximate_shading(self, radials: [int], profiles: []):
        ''' Approximate the shading profile from the data
        Parameters
        ----------
        radials: [int]
            input radial data
        profiles: []
            input profile data
        Returns
        -------
        None (the fitted parameters are kept on the instance)
        '''
        # The core calculation is almost identical to the reference
        # material (p. 63-), so it is omitted here; see the references.
        pass
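The omitted fit (reference material, p. 63-) plausibly amounts to a linear least-squares fit, given that the class stores slope/offset pairs in `LinearPoly` holders. A hedged standalone sketch using `np.polyfit` (the function name and return shape are my assumptions):

```python
import numpy as np

def fit_shading(radials, profile):
    """Least-squares linear fit of block means against radial distance.
    Returns (slope, offset), the two values a LinearPoly-style holder stores."""
    slope, offset = np.polyfit(radials, profile, 1)
    return float(slope), float(offset)
```

A negative slope is expected here: brightness falls off as the radius from the optical center grows, and inverting this fitted line is what produces the lifting gain.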

Gain Map calculation

We are nearly there at this point. Now create the actual gain map from the approximation results above. For versatility, the gain is applied to the entire image: in rare cases, when shooting in a dark place, an analog gain is applied to the whole image, and this is taken into account. The functions implemented so far are called in sequence to form the data flow, and their results are combined to compute and output the final gain map. The function also determines whether the image is the left or the right one; the reason is explained later.

dark.py


    def _gain_map(self, left: bool = True, kernal_size: int = 32,
                  analog_gain=[1.0, 1.0, 1.0], offset=(0, 0)) -> np.ndarray:
        # select the image for the requested side
        img = self._left_img
        if not left:
            img = self._right_img

        # calculate radials
        offset_x, offset_y = offset
        radials = self._calc_radials(kernal_size, offset_x=offset_x, offset_y=offset_y)

        # extract raw data profiles
        profiles = self._extract_shading_profile(img, kernel_size=kernal_size)

        # fit the shading profile
        self._proximate_shading(radials=radials, profiles=profiles)

        # create gain map
        return self._create_dksh_gain_map(analog_gain=analog_gain)
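`_create_dksh_gain_map` itself is not shown in the article. As a hypothetical stand-in only, a gain map could be built from a fitted linear profile by taking the ratio of the fitted center brightness to the fitted brightness at each pixel's radius, then scaling by the analog gain (single channel, scalar gain; every name and detail below is assumed, not the article's code):

```python
import numpy as np

def create_gain_map(width: int, height: int, slope: float, offset: float,
                    analog_gain: float = 1.0) -> np.ndarray:
    """Per-pixel gain from a linear shading fit: fitted center brightness
    divided by fitted brightness at each radius, times the analog gain."""
    center_x, center_y = width // 2, height // 2
    y, x = np.mgrid[0:height, 0:width]
    radius = np.sqrt((x - center_x) ** 2 + (y - center_y) ** 2)
    fitted = slope * radius + offset          # fitted shading profile
    gain = (offset / fitted) * analog_gain    # offset == fitted value at r=0
    return gain.astype(np.float32)
```

With a negative slope the fitted brightness shrinks toward the edges, so the gain grows above 1.0 there, which is exactly the edge lifting described earlier.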

After various experiments, I found that the left and right images differ enough that they cannot share the same dark shading gain map. It is therefore necessary to specify in advance whether the image to be corrected is the left or the right one. The following is implemented with that in mind.

dark.py


    def get_gain_map(self, kernal_size: int = 32, analog_gain=[1.0, 1.0, 1.0],
                     left_offset=(0, 0), right_offset=(0, 0)) -> (np.ndarray, np.ndarray):
        # left side gain map
        left_gain_map = self._gain_map(left=True, kernal_size=kernal_size,
                                       analog_gain=analog_gain, offset=left_offset)

        # right side gain map
        right_gain_map = self._gain_map(left=False, kernal_size=kernal_size,
                                        analog_gain=analog_gain, offset=right_offset)

        return left_gain_map, right_gain_map

Dark class body definition

This has been a brief walk-through of how the dark shading correction part is implemented. Finally, I would like to close by showing the definition of the Dark class itself.

dark.py


import cv2
import numpy as np
import helper

class LinearPoly():
    def __init__(self):
        self.slope = 0.0
        self.offset = 0.0

class Dark():
    """ Dark image handler
    """
    def __init__(self, path: str, ext: str='pgm', dsize= (2016, 1512)):
        # make file path list
        self._file_path_list = helper.make_file_path_list(path, ext)
        self._file_path_list.sort()

        # image size
        self._img_width, self._img_height = dsize
        self._img_center_x = self._img_width // 2
        self._img_center_y = self._img_height // 2

        # read dark images
        self._dark_imgs = helper.read_img_from_path(self._file_path_list, self._img_width, self._img_height)

        # recognize position, left or right
        for idx, loc in enumerate(self._file_path_list):
            if helper.loc_detector_from_name(loc):
                self._left_img = self._dark_imgs[idx]
            else:
                self._right_img = self._dark_imgs[idx]

        # dark shading fitting data
        self._dksh_para = [LinearPoly(), LinearPoly(), LinearPoly()]
