OmniFusion

360 Monocular Depth Estimation via Geometry-Aware Fusion

Abstract

A well-known challenge in applying deep-learning methods to omnidirectional images is spherical distortion. In dense regression tasks such as depth estimation, where structural details are required, applying a vanilla CNN layer to the distorted 360 image results in undesired information loss. In this paper, we propose a 360 monocular depth estimation pipeline, OmniFusion, to tackle the spherical distortion issue. Our pipeline transforms a 360 image into less-distorted perspective patches (i.e., tangent images), obtains patch-wise predictions via CNN, and then merges the patch-wise results into the final output. To handle the discrepancy between patch-wise predictions, a major issue affecting merging quality, we propose a new framework with the following key components. First, we propose a geometry-aware feature fusion mechanism that combines 3D geometric features with 2D image features to compensate for the patch-wise discrepancy. Second, we employ a self-attention-based transformer architecture to conduct a global aggregation of patch-wise information, which further improves consistency. Last, we introduce an iterative depth refinement mechanism that further refines the estimated depth based on more accurate geometric features. Experiments show that our method greatly mitigates the distortion issue and achieves state-of-the-art performance on several 360 monocular depth estimation benchmark datasets.
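To make the tangent-image representation concrete, the sketch below samples one perspective patch from an equirectangular (ERP) image via the inverse gnomonic projection, which is the standard way to obtain such patches. This is a simplified, nearest-neighbor illustration of the general idea, not the authors' implementation; the function name `tangent_patch` and its parameters (field of view, patch size) are hypothetical choices for this example.

```python
import numpy as np

def tangent_patch(erp, lon0, lat0, fov_deg=80.0, size=65):
    """Sample a square perspective (tangent) patch centered at
    (lon0, lat0) radians from an ERP image, using the inverse
    gnomonic projection and nearest-neighbor lookup."""
    H, W = erp.shape[:2]
    half = np.tan(np.radians(fov_deg) / 2.0)

    # Regular grid on the tangent plane touching the sphere at (lon0, lat0).
    x = np.linspace(-half, half, size)
    xx, yy = np.meshgrid(x, -x)          # yy points "up" in the patch
    rho = np.sqrt(xx ** 2 + yy ** 2)
    c = np.arctan(rho)                   # angular distance from the center
    rho = np.where(rho == 0, 1e-12, rho) # avoid division by zero at center

    # Inverse gnomonic projection: plane coords -> spherical (lon, lat).
    lat = np.arcsin(np.cos(c) * np.sin(lat0)
                    + yy * np.sin(c) * np.cos(lat0) / rho)
    lon = lon0 + np.arctan2(
        xx * np.sin(c),
        rho * np.cos(lat0) * np.cos(c) - yy * np.sin(lat0) * np.sin(c))

    # Spherical coords -> ERP pixel indices (lon wraps around).
    u = ((lon / (2 * np.pi) + 0.5) * W) % W
    v = (0.5 - lat / np.pi) * H
    ui = np.clip(u.astype(int), 0, W - 1)
    vi = np.clip(v.astype(int), 0, H - 1)
    return erp[vi, ui]
```

Calling this N times with patch centers spread over the sphere yields the set of near-distortion-free inputs the pipeline feeds to the CNN; merging the patch-wise depth predictions back onto the ERP grid uses the forward projection in the same way.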

General diagram of OmniFusion.
Fig. Our method, OmniFusion, produces high-quality dense depth from a monocular ERP input. It uses a set of N perspective patches (i.e., tangent images) to represent the ERP image, and fuses the image features with 3D geometric features to improve the estimation of the merged depth map. The corresponding camera poses of the tangent images are shown in the middle row.
Fig. Qualitative results on Stanford2D3D, Matterport3D and 360D.
Fig. Qualitative comparisons regarding individual components. The top row shows the visual comparisons in depth maps, and the bottom row shows the visual comparisons of the corresponding error maps between the predicted depth maps. The middle two rows show the close-up views of the highlighted areas in the top and bottom rows, respectively.
OmniFusion Result Example in 3D.
Fig. 3D point cloud reconstructed from estimated depth using OmniFusion.

Cite

@misc{arxiv.2203.00838,
    doi = {10.48550/ARXIV.2203.00838},
    url = {https://arxiv.org/abs/2203.00838},
    author = {Li, Yuyan and Guo, Yuliang and Yan, Zhixin and Huang, Xinyu and Duan, Ye and Ren, Liu},
    title = {OmniFusion: 360 Monocular Depth Estimation via Geometry-Aware Fusion},
    publisher = {arXiv},
    year = {2022},
}