Depth completion (DC) aims to predict a dense depth map from an RGB image and sparse depth observations. Existing DC methods generalize poorly to new datasets or unseen sparse depth patterns, limiting their practical applications. We propose OMNI-DC, a highly robust DC model that generalizes well across diverse scenarios. Our method incorporates several novel designs in the model architecture and loss functions, and handles sparse depth maps of varying densities. Moreover, we train OMNI-DC on a mixture of synthetic datasets with a scale normalization technique. To evaluate our model, we establish a new evaluation protocol named Robust-DC for zero-shot testing under various sparse depth patterns. Experimental results on Robust-DC and conventional benchmarks show that OMNI-DC significantly outperforms all baselines.
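The scale normalization mentioned above can be illustrated with a minimal sketch. The idea (details here are our assumption, not the paper's exact formulation) is to divide both the sparse input and the ground-truth depth of each sample by a robust scale statistic, such as the median of the valid sparse depths, so that training samples drawn from datasets with very different metric scales become comparable. The function name `normalize_scale` is hypothetical.

```python
import numpy as np

def normalize_scale(sparse_depth, dense_gt):
    """Rescale a training sample to a canonical depth scale.

    Divides both the sparse depth input and the dense ground truth by the
    median of the valid (non-zero) sparse depths, so samples from mixed
    datasets share a common scale. Returns the normalized maps and the
    scale factor (needed to restore metric depth at inference time).
    """
    valid = sparse_depth > 0
    scale = np.median(sparse_depth[valid])
    return sparse_depth / scale, dense_gt / scale, scale

# Toy 2x2 sample: zeros mark pixels with no sparse observation.
sparse = np.array([[0.0, 2.0], [4.0, 0.0]])
gt = np.array([[1.0, 2.0], [4.0, 8.0]])
norm_sparse, norm_gt, s = normalize_scale(sparse, gt)
```

After normalization, the median of the valid sparse depths is exactly 1, regardless of whether the source dataset measures depth in meters, centimeters, or arbitrary synthetic units.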
Our model builds on OGNI-DC but introduces a novel multi-resolution differentiable depth integration layer (Multi-res DDI) that explicitly models long-range depth relationships. We train OMNI-DC on five large-scale synthetic datasets covering indoor, outdoor, and urban scenes with diverse depth patterns. Finally, we use a probability-based Laplacian loss to model depth uncertainty during training.
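A Laplacian uncertainty loss of this kind is commonly implemented as the negative log-likelihood of a Laplace distribution, where the network predicts both a depth mean and a per-pixel scale parameter. The sketch below shows this standard form, NLL = |d_gt − d_pred| / b + log(2b); it is an illustration of the general technique, not the paper's exact loss, and the function name `laplace_nll` is hypothetical.

```python
import numpy as np

def laplace_nll(pred_depth, pred_log_b, gt_depth, mask):
    """Negative log-likelihood of a Laplace distribution over depth.

    pred_depth : predicted depth mean per pixel.
    pred_log_b : predicted log of the Laplace scale b (log keeps b > 0).
    gt_depth   : ground-truth depth.
    mask       : boolean mask of pixels with valid ground truth.
    """
    b = np.exp(pred_log_b)
    nll = np.abs(gt_depth - pred_depth) / b + np.log(2.0 * b)
    return float(nll[mask].mean())

gt = np.array([1.0, 2.0, 3.0])
mask = np.ones(3, dtype=bool)
# A perfect prediction is penalized only by the log(2b) entropy term,
# so it scores strictly lower than a prediction with residual error.
good = laplace_nll(gt, np.zeros(3), gt, mask)
bad = laplace_nll(gt + 1.0, np.zeros(3), gt, mask)
```

Because the residual is divided by the predicted scale b, the network can down-weight pixels it is uncertain about, while the log(2b) term prevents it from inflating b everywhere.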
@article{zuo2024omni,
  title={OMNI-DC: Highly Robust Depth Completion with Multiresolution Depth Integration},
  author={Zuo, Yiming and Yang, Willow and Ma, Zeyu and Deng, Jia},
  journal={arXiv preprint arXiv:2411.19278},
  year={2024}
}