**Easily create a high-precision 3D image from a single photo.** The GitHub repository is here: https://github.com/vt-vl-lab/3d-photo-inpainting
The paper, "3D Photography using Context-aware Layered Depth Inpainting" by Meng-Li Shih et al., is here: https://arxiv.org/pdf/2004.04727.pdf
According to the abstract (excerpted below), the method performs
**novel view synthesis with hallucinated color and depth structure in regions occluded in the original view**.
Paper abstract (machine translation, lightly cleaned up):
We propose a method for converting a single RGB-D input image into a 3D photo: a multi-layer representation for novel view synthesis that contains hallucinated color and depth structures in regions occluded in the original view. Using a Layered Depth Image with explicit pixel connectivity as the underlying representation, we present a learning-based inpainting model that synthesizes new local color and depth content into the occluded region in a spatially context-aware manner. The resulting 3D photos can be efficiently rendered with motion parallax using standard graphics engines. We validate the effectiveness of the method on a wide range of challenging everyday scenes and show fewer artifacts compared with the state of the art.
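As a rough mental model of the Layered Depth Image (LDI) mentioned in the abstract, here is a toy sketch in Python. This is my own illustration, not the repo's actual data structure: the idea is simply that each pixel holds a *list* of (color, depth) samples, so content occluded behind the front surface can still be stored and rendered when the viewpoint moves.

```python
# Toy illustration of a Layered Depth Image: per pixel, a list of
# (color, depth) samples instead of a single value. NOT the repo's
# actual implementation; just a mental model.
H, W = 2, 2
ldi = [[[] for _ in range(W)] for _ in range(H)]

# Front surface at depth 1.0 everywhere (a red plane).
for y in range(H):
    for x in range(W):
        ldi[y][x].append(((255, 0, 0), 1.0))

# A hallucinated background sample hidden behind pixel (0, 0):
# this is the kind of content the inpainting model synthesizes.
ldi[0][0].append(((0, 255, 0), 3.0))

def layers_at(ldi, y, x):
    """Return the samples at a pixel, sorted near-to-far by depth."""
    return sorted(ldi[y][x], key=lambda s: s[1])
```

A renderer walking this structure front-to-back would reveal the green background sample at (0, 0) once the red front layer is displaced by parallax.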
↓ **The theme of this article is just the following one point.**
Regarding "Can the hidden area really be seen?": without special restoration technology it obviously cannot, so there is little to verify there. The actual situation is shown below. Here is an example that **does not work**.
(Original image)
⇒ Nothing remarkable here. What do you usually do when you run into failure cases like this?
The title asks "**Can you really see the hidden area?**" ⇒ **Of course, you cannot actually see it.** ⇒ This is, in essence, technology that plausibly fills in the invisible parts. I imagine it will become widespread in the future. (As a technology, or as an interface?)
I was not sure how to treat a technology that has failure cases like this, so I made that point explicit in this article. (Perhaps some people dislike this kind of behavior and stay away from it?)
**If you have any comments, please let me know.**
Related article: I tried to easily create a high-precision 3D image from one photo [2] (processing the depth with numpy).
【1】
- With a GeForce GTX 1050 Ti (4.0 GB of dedicated GPU memory), **processing may fail due to insufficient GPU memory**. This does **not depend on the image size**; it seems to be related to the amount of memory needed to represent the 3D structure of the scene content, or something similar. I have not found a solution, though changing parameters might help. The red car image used about 3.4 GB of dedicated GPU memory, and the white one about 2.7 GB. As a workaround for this memory problem, using Google Colab may be the easiest option for now.
- I showed editing the depth information via PNG, but the pipeline's native format is presumably numpy. If you can manipulate the data (numpy arrays) directly, it may be easier to edit it that way.
- The depth information exported as PNG seems to be normalized somewhere: changing the depth range in the PNG appeared to make no difference to the result.
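As a minimal sketch of editing depth directly as a numpy array rather than via PNG: the normalization shown here is my assumption about what the PNG export does, not confirmed behavior of the repo, and a synthetic depth map stands in for the file the pipeline actually writes.

```python
import numpy as np

# Synthetic stand-in for a depth map produced by the pipeline
# (in practice you would np.load(...) the array the repo writes out).
depth = np.linspace(1.0, 10.0, 64 * 64, dtype=np.float32).reshape(64, 64)

def normalize_depth(d):
    """Rescale depth to [0, 1]. This mirrors the normalization the
    PNG export *appears* to apply (an assumption, not confirmed),
    which would explain why changing the PNG range has no effect."""
    return (d - d.min()) / (d.max() - d.min())

norm = normalize_depth(depth)

# Example edit: clamp everything beyond 80% of the range, flattening
# the far background onto a single plane before re-running rendering.
edited = np.minimum(norm, 0.8)
```

Because the array is edited before any normalization step, changes made this way should survive, unlike edits to the already-normalized PNG.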
【3】
- For images that are entirely a flat, wall-like surface, the vertical surface does not seem to be detected well. (It seems like an easy case, so this is puzzling.) I tried several depth estimation methods, but have not yet found one that handles it well.
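One cheap sanity check for this failure mode (my own heuristic, not from the paper or the repo) is to look at the gradients of the estimated depth map: a frontal, wall-like surface should come out as nearly constant depth, so large gradients suggest the estimator has invented slant where there is none.

```python
import numpy as np

def looks_like_frontal_wall(depth, tol=1e-3):
    """Heuristic: a frontal wall should have nearly constant depth,
    i.e. tiny gradients everywhere. Returns True if the mean absolute
    gradient magnitude is below `tol` (threshold chosen arbitrarily)."""
    gy, gx = np.gradient(depth.astype(np.float64))
    return float(np.abs(gy).mean() + np.abs(gx).mean()) < tol

# A constant-depth map: what a correct estimate of a frontal wall
# should look like.
flat_wall = np.full((32, 32), 5.0)

# A surface receding to one side: what a bad estimate might look like.
slanted = np.tile(np.linspace(1.0, 5.0, 32), (32, 1))

print(looks_like_frontal_wall(flat_wall))  # → True
print(looks_like_frontal_wall(slanted))    # → False
```

Running a check like this on the depth output makes it easy to spot, before rendering, whether a wall-like input has been mis-estimated.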