Depth Estimation from Light Field


Unsupervised Disparity Estimation from Light Field

We investigated disparity estimation from a light field using a convolutional neural network (CNN). Most previous methods for this task adopt a supervised learning framework, in which the predicted disparity map is compared directly with the corresponding ground-truth disparity map during training. However, light field data with ground-truth disparity maps are scarce and rarely available for real-world scenes, and this lack of training data limits the generality of methods trained on it.

To tackle this problem, we took a simple plug-and-play approach that remakes a supervised method into an unsupervised (self-supervised) one: we replaced the loss function of the original method with one that does not depend on ground-truth disparity maps. Specifically, our loss function indirectly evaluates the accuracy of a disparity map through weighted warping errors among the input light field views. It is also designed to be aware of the edge alignment between the image and the disparity map. Thanks to this unsupervised learning framework, our method can exploit more abundant training datasets (even those without ground-truth disparity maps) than the original supervised method.

We evaluated our method on computer-generated scenes (the 4D Light Field Benchmark) and on real-world scenes captured with Lytro Illum cameras. Our method achieved state-of-the-art performance among unsupervised methods on the benchmark, and it estimated disparity maps more accurately than the original supervised method on various real-world scenes.
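To illustrate the two ingredients of such a loss, the following is a minimal NumPy sketch (not our actual implementation): a photometric term that warps neighboring views toward the center view according to the estimated disparity and weights the per-view errors softly (down-weighting views with large errors, e.g., in occluded regions), plus an edge-aware smoothness term that relaxes where the center image has strong gradients. All function names, the nearest-neighbor warping, and the exponential weighting are illustrative assumptions.

```python
import numpy as np

def warp_view(view, disparity, du, dv):
    """Warp a neighboring view toward the center view (illustrative).

    Under the Lambertian assumption, the point at center-view pixel
    (y, x) with disparity d appears at (y + dv*d, x + du*d) in the
    view offset by (du, dv). Nearest-neighbor sampling keeps the
    sketch short; a real implementation would interpolate.
    """
    h, w = disparity.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + dv * disparity).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + du * disparity).astype(int), 0, w - 1)
    return view[src_y, src_x]

def unsupervised_loss(center, views, offsets, disparity, lam=0.1):
    """Weighted warping error plus edge-aware smoothness (a sketch)."""
    errors = []
    for view, (du, dv) in zip(views, offsets):
        warped = warp_view(view, disparity, du, dv)
        errors.append(np.abs(warped - center).mean(axis=-1))  # per-pixel L1
    errors = np.stack(errors)                 # shape: (n_views, H, W)
    weights = np.exp(-errors)                 # soft occlusion weighting
    weights /= weights.sum(axis=0, keepdims=True)
    photometric = (weights * errors).sum(axis=0).mean()

    # Edge-aware smoothness: penalize disparity gradients, but less so
    # where the center image itself has strong gradients, so disparity
    # edges stay aligned with image edges.
    gray = center.mean(axis=-1)
    d_dx = np.abs(np.diff(disparity, axis=1))
    d_dy = np.abs(np.diff(disparity, axis=0))
    i_dx = np.abs(np.diff(gray, axis=1))
    i_dy = np.abs(np.diff(gray, axis=0))
    smooth = (d_dx * np.exp(-i_dx)).mean() + (d_dy * np.exp(-i_dy)).mean()
    return photometric + lam * smooth
```

Because the loss consumes only the light field views themselves, a network can be trained by minimizing it without any ground-truth disparity, which is the essence of the plug-and-play replacement described above.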

Project Members

Toshiaki FUJII (Professor)

Keita TAKAHASHI (Associate Professor)

Taisei IWATSUKI (Former Graduate Student: -- 2021.3)

Publications

Supplementary Materials