Monocular Depth Estimation

Adversarial Attack on Monocular Depth Estimation

Thanks to the excellent learning capability of deep convolutional neural networks (CNNs), monocular depth estimation using CNNs has achieved great success in recent years. However, depth estimation from a single monocular image is essentially an ill-posed problem, so this approach is likely to have inherent vulnerabilities. To reveal this limitation, we propose an adversarial patch attack on monocular depth estimation. More specifically, we generate artificial patterns (adversarial patches) that fool the target methods into estimating an incorrect depth for the regions where the patterns are placed. The attack can also be mounted in the real world by physically placing printed patterns in real scenes. We further analyze the behavior of monocular depth estimation under attack by visualizing the activation levels of the intermediate layers and the regions potentially affected by the adversarial patches.
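
The core idea can be illustrated with a short optimization loop: the patch pixels are treated as trainable parameters and updated by gradient descent so that the predicted depth under the patch moves toward an attacker-chosen value. The sketch below is a hypothetical PyTorch example, not our actual implementation; the model handle, patch size, placement, and target depth are all placeholder assumptions.

```python
# Minimal sketch of an adversarial patch attack on a monocular depth
# estimator (PyTorch). `model` is assumed to map (B, 3, H, W) images to
# (B, 1, H, W) depth maps; all hyperparameters below are illustrative.
import torch
import torch.nn.functional as F


def apply_patch(images, patch, top, left):
    """Paste the patch onto a batch of images at a fixed location."""
    patched = images.clone()
    h, w = patch.shape[-2:]
    patched[..., top:top + h, left:left + w] = patch
    return patched


def optimize_patch(model, images, steps=500, lr=0.01,
                   patch_size=64, top=80, left=120, target_depth=0.0):
    """Optimize a patch so the model predicts `target_depth` under it."""
    # Freeze the target model; only the patch pixels are optimized.
    for p in model.parameters():
        p.requires_grad_(False)
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = apply_patch(images, patch.clamp(0, 1), top, left)
        depth = model(patched)
        # Depth predicted inside the patched region only.
        region = depth[..., top:top + patch_size, left:left + patch_size]
        # Push the predicted depth in that region toward the target value.
        loss = F.mse_loss(region, torch.full_like(region, target_depth))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return patch.detach().clamp(0, 1)
```

For a physically realizable attack, the optimization would typically also randomize the patch placement, scale, rotation, and lighting during training so that the printed pattern remains effective when placed in real scenes.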

Project Members

Toshiaki FUJII (Professor)

Keita TAKAHASHI (Associate Professor)

Ryutaroh MATSUMOTO (Associate Professor @ Titech)

Koichiro YAMANAKA (Graduate Student)


Supplementary Materials