Supplementary video (mainly focused on experimental results)
We propose Coded-E2LF (coded event to light field), a computational imaging method for acquiring a 4-D light field using a coded aperture and a stationary event-only camera. A previous work adopted an imaging system similar to ours, but it captured both events and intensity images for light field reconstruction. In contrast, our method is purely event-based, which relaxes the restrictions on hardware implementation. We also introduce several advances over the previous work that theoretically support and practically improve light field reconstruction from events alone. In particular, we clarify the key role of a black pattern in the aperture coding patterns. Finally, we implemented our method on real imaging hardware to demonstrate its effectiveness in capturing real 3-D scenes. To the best of our knowledge, we are the first to demonstrate that a 4-D light field with pixel-level accuracy can be reconstructed from events alone.
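As background for the event-only setting, the sketch below illustrates an idealized event-camera model: a pixel fires an event whenever its log intensity moves by a contrast threshold from the level at its last event. This is a minimal simulation for intuition only, not the paper's measurement model; the function name, threshold value, and reset behavior are assumptions for illustration.

```python
import numpy as np

def simulate_events(intensities, threshold=0.2, eps=1e-6):
    """Idealized event-camera model (illustrative sketch).

    intensities: list of 2-D arrays, one per time step.
    A pixel emits an event (+1 or -1) when its log intensity
    deviates from the reference level by `threshold`; the
    reference then resets at that pixel.
    """
    ref = np.log(intensities[0] + eps)
    event_frames = []
    for frame in intensities[1:]:
        cur = np.log(frame + eps)
        diff = cur - ref
        pol = np.where(diff >= threshold, 1,
                       np.where(diff <= -threshold, -1, 0)).astype(np.int8)
        event_frames.append(pol)
        # Reset the reference only at pixels that fired an event.
        ref = np.where(pol != 0, cur, ref)
    return event_frames
```

For example, a pixel whose intensity doubles produces a log change of about 0.69, which exceeds the assumed threshold of 0.2 and yields a positive event.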
Tomoya Tsuchida (Graduate Student)
Keita Takahashi (Associate Professor)
Chihiro Tsutake (Assistant Professor)
Toshiaki Fujii (Professor)
Hajime Nagahara (Professor, Osaka University)
Tomoya Tsuchida, Keita Takahashi, Chihiro Tsutake, Toshiaki Fujii, Hajime Nagahara: "Coded-E2LF: Coded Aperture Light Field Imaging from Events", accepted to IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2026, June 2026. [ arXiv version ]
Our software (using Python + PyTorch) for the above paper is available. Please see the "readme.txt" file for the terms of use and usage instructions. [Get our software ]
Supplementary video (mainly focused on experimental results)
Paper summary
We propose a computational imaging method for time-efficient light-field acquisition that combines a coded aperture with an event-based camera. Unlike conventional coded-aperture imaging, our method applies a sequence of coding patterns during a single exposure of an image frame. The parallax information, which corresponds to the differences among the coding patterns, is recorded as events. The image frame and events, all of which are measured in a single exposure, are jointly used to computationally reconstruct a light field. We also designed an algorithm pipeline for our method that is end-to-end trainable on the basis of deep optics and compatible with real camera hardware. We experimentally showed that our method achieves more accurate reconstruction than several other single-exposure imaging methods. We also developed a hardware prototype with the potential to complete the measurement on the camera within 22 msec, and demonstrated that light fields of real 3-D scenes can be obtained with convincing visual quality.
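The measurement principle above can be sketched as a simple forward model: each coded frame is a sum of light-field views weighted by the aperture transmittance of the current pattern, and switching patterns changes the pixel intensities, which an event camera records as log-intensity changes. This is an illustrative simplification, not the paper's implementation; the function names, the per-view transmittance representation, and the event threshold are assumptions.

```python
import numpy as np

def coded_aperture_frame(light_field, code):
    """Sum of views weighted by per-view aperture transmittance.

    light_field: array of shape (V, H, W), one image per view.
    code: array of shape (V,), transmittance of each aperture region.
    Returns an (H, W) coded measurement.
    """
    return np.tensordot(code, light_field, axes=1)

def events_between_codes(light_field, codes, threshold=0.2, eps=1e-6):
    """Events triggered by switching coding patterns (sketch).

    For each consecutive pair of patterns, a pixel fires an event
    where the log-intensity change exceeds the contrast threshold.
    """
    prev = np.log(coded_aperture_frame(light_field, codes[0]) + eps)
    event_frames = []
    for code in codes[1:]:
        cur = np.log(coded_aperture_frame(light_field, code) + eps)
        diff = cur - prev
        pol = (np.sign(diff) * (np.abs(diff) >= threshold)).astype(np.int8)
        event_frames.append(pol)
        prev = cur  # simplification: reference updates at every pattern switch
    return event_frames
```

In this toy model, halving all transmittances between two patterns changes every pixel's log intensity by about 0.69, so every pixel would emit an event; the real system instead uses designed patterns whose differences encode the parallax among views.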
Shuji Habuchi (Graduate Student: --2025)
Keita Takahashi (Associate Professor)
Chihiro Tsutake (Assistant Professor)
Toshiaki Fujii (Professor)
Hajime Nagahara (Professor, Osaka University)
Shuji Habuchi, Keita Takahashi, Chihiro Tsutake, Toshiaki Fujii, Hajime Nagahara: "Time-Efficient Light-Field Acquisition Using Coded Aperture and Events", Proc. of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024, pp. 24923--24933, June 2024. [ arXiv version ] [ CVF version ] [ CVPR virtual poster ]
Our software (using Python + PyTorch) for the above paper is available. Please see the "ReadME.txt" file for the terms of use and usage instructions. [Get our software ]
2025/1/9 New: Version 2 of our software now includes the training script. Please see the "ReadME.txt" file for the terms of use and usage instructions. [Get our software ]