[OcCo] Unsupervised Point Cloud Pre-Training
via Occlusion Completion
ICCV, 2021



We describe a simple pre-training approach for point clouds. It works in three steps: 1. Mask all points occluded in a camera view; 2. Learn an encoder-decoder model to reconstruct the occluded points; 3. Use the encoder weights as initialisation for downstream point cloud tasks. We find that even when we construct a single pre-training dataset (from ModelNet40), this pre-training method improves accuracy across different datasets and encoders, on a wide range of downstream tasks. Specifically, we show that our method outperforms previous pre-training methods in object classification, and both part-based and semantic segmentation tasks. We study the pre-trained features and find that they lead to wide downstream minima, have high transformation invariance, and have activations that are highly correlated with part labels.
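Step 1 of the pipeline can be illustrated with a minimal sketch: pick a random view direction, project the cloud onto a coarse pixel grid, and treat points behind the front surface in each cell as occluded. This is a simplified z-buffer stand-in for the paper's camera-based occlusion; the function name, grid size, and depth tolerance are illustrative, not the actual implementation.

```python
import numpy as np

def occlude(points, grid=32, depth_tol=0.05, rng=None):
    """Split a point cloud (N, 3) into (visible, occluded) via a simple
    z-buffer from a random viewpoint. A rough stand-in for OcCo's
    camera-view occlusion step; all parameters are illustrative."""
    rng = rng if rng is not None else np.random.default_rng()

    # Random rotation = random camera view direction.
    R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    p = points @ R.T

    # Project onto the xy-plane and bucket into a coarse pixel grid.
    xy = p[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    cells = np.floor((xy - lo) / (hi - lo + 1e-9) * grid).astype(int)
    keys = cells[:, 0] * grid + cells[:, 1]

    # Per-cell nearest depth; points within depth_tol of it are "visible",
    # everything behind is "occluded" and becomes the completion target.
    zmin = np.full(grid * grid, np.inf)
    np.minimum.at(zmin, keys, p[:, 2])
    visible = p[:, 2] <= zmin[keys] + depth_tol
    return points[visible], points[~visible]
```

Steps 2 and 3 then amount to training any completion network (encoder plus decoder) to predict the occluded set from the visible set, discarding the decoder, and loading the encoder weights as the initialisation for the downstream classification or segmentation model.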

Overview Video (TBD)

Related links

Multi-View Partial (MVP) Challenge, joint with Sensing, Understanding and Synthesizing Humans Workshop @ ICCV '21.

Workshop on Weakly Supervised Learning @ ICLR '21.



We would like to thank Qingyong Hu, Shengyu Huang, Matthias Niessner, Kilian Q. Weinberger, and Trevor Darrell for valuable discussions and feedback. The website template is borrowed from Michaël Gharbi and Ben Mildenhall.