The Wayback Machine - https://web.archive.org/web/20200614122226/https://github.com/XiaohangZhan/deocclusion
Code for our CVPR 2020 work "Self-Supervised Scene De-occlusion".

Paper

Xiaohang Zhan, Xingang Pan, Bo Dai, Ziwei Liu, Dahua Lin, Chen Change Loy, "Self-Supervised Scene De-occlusion", accepted to CVPR 2020 as an Oral Paper. [Project page].

For further information, please contact Xiaohang Zhan.

Demo Video

  • Watch the full demo video on YouTube or bilibili. The demo video contains vivid explanations of the idea and interesting applications.

  • Below is an application of scene de-occlusion: image manipulation.

Requirements

  • python: 3.7

  • pytorch>=0.4.1

  • install pycocotools:

    pip install "git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI"
  • others:

    pip install -r requirements.txt
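
The pytorch>=0.4.1 requirement above can be sanity-checked at runtime. A minimal sketch of a version check (illustrative only; note that naive string comparison would rank "0.10.0" below "0.4.1", hence the tuple parse):

```python
def parse_version(v):
    """Turn a version string like '0.4.1' into a comparable tuple of ints.
    Suffixes such as '1.0.0a0' are truncated at the first non-numeric part."""
    parts = []
    for piece in v.split("."):
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def meets_requirement(installed, required="0.4.1"):
    # Tuples compare element-wise, so (0, 10, 0) correctly ranks above (0, 4, 1).
    return parse_version(installed) >= parse_version(required)
```

For example, `meets_requirement("1.1.0")` passes while `meets_requirement("0.3.1")` does not.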

Run Demos

  1. Download the released models here and put the released folder under the deocclusion root directory.

  2. Run demos/demo_cocoa.ipynb or demos/demo_kins.ipynb. There are some test examples for demos/demo_cocoa.ipynb in the repo, so you don't have to download the COCOA dataset if you just want to try a few samples.

Data Preparation

COCOA dataset proposed in Semantic Amodal Segmentation.

  1. Download COCO2014 train and val images from here and unzip.

  2. Download COCOA annotations from here and untar.

  3. Ensure the COCOA folder looks like:

    COCOA/
      |-- train2014/
      |-- val2014/
      |-- annotations/
        |-- COCO_amodal_train2014.json
        |-- COCO_amodal_val2014.json
        |-- COCO_amodal_test2014.json
        |-- ...
    
  4. Create symbolic link:

    cd deocclusion
    mkdir data
    cd data
    ln -s /path/to/COCOA
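
Before training, the layout above can be verified programmatically. A minimal sketch (the dataset root path and the exact set of required entries are assumptions taken from the tree shown above):

```python
import os

# Entries the COCOA tree above is expected to contain, relative to the dataset root.
REQUIRED = [
    "train2014",
    "val2014",
    "annotations/COCO_amodal_train2014.json",
    "annotations/COCO_amodal_val2014.json",
    "annotations/COCO_amodal_test2014.json",
]

def missing_entries(root):
    """Return the required entries absent under `root` (empty list = layout OK)."""
    return [rel for rel in REQUIRED if not os.path.exists(os.path.join(root, rel))]
```

Running `missing_entries("data/COCOA")` should return `[]` once the symbolic link is in place.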
    

KINS dataset proposed in Amodal Instance Segmentation with KINS Dataset.

  1. Download left color images of object data in KITTI dataset from here and unzip.

  2. Download KINS annotations from here corresponding to this commit.

  3. Ensure the KINS folder looks like:

    KINS/
      |-- training/image_2/
      |-- testing/image_2/
      |-- instances_train.json
      |-- instances_val.json
    
  4. Create symbolic link:

    cd deocclusion/data
    ln -s /path/to/KINS
    

LVIS dataset

  1. Download training and validation sets from here.

Train

Train PCNet-M

  1. Train (taking COCOA for example).

    sh experiments/COCOA/pcnet_m/train.sh # you may have to set --nproc_per_node=#YOUR_GPUS
    
  2. Monitor training status and visual results using tensorboard.

    sh tensorboard.sh $PORT
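
The `--nproc_per_node` value mentioned in the training script's comment should match your GPU count. A sketch of deriving it from the environment (assumes GPUs are exposed via `CUDA_VISIBLE_DEVICES`; the flag name comes from the comment in the script above):

```python
import os

def launch_flag(default=8):
    """Build the --nproc_per_node flag from CUDA_VISIBLE_DEVICES,
    falling back to `default` when the variable is unset or empty."""
    visible = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    n = len([d for d in visible.split(",") if d.strip() != ""]) or default
    return "--nproc_per_node=%d" % n
```

For example, with `CUDA_VISIBLE_DEVICES=0,1,2` this yields `--nproc_per_node=3`.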
    

Train PCNet-C

  1. Download the image inpainting model pre-trained with partial convolution here, and save it to pretrains/partialconv.pth.

  2. Convert the model to accept 4-channel inputs.

    python tools/convert_pcnetc_pretrain.py
  3. Train (taking COCOA for example).

    sh experiments/COCOA/pcnet_c/train.sh # you may have to set --nproc_per_node=#YOUR_GPUS
    
  4. Monitor training status and visual results using tensorboard (same as for PCNet-M).
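
Step 2 above widens the inpainting network's first convolution from 3 input channels to 4. The general idea can be sketched as follows (a numpy illustration, not the repo's actual `tools/convert_pcnetc_pretrain.py`; zero-initialising the extra channel is an assumption of this sketch):

```python
import numpy as np

def widen_first_conv(weight, new_in_channels=4):
    """Expand a conv weight of shape (out_c, in_c, kH, kW) along the input-channel
    axis, keeping the pretrained channels and zero-initialising the new ones."""
    out_c, in_c, kh, kw = weight.shape
    if new_in_channels <= in_c:
        return weight
    widened = np.zeros((out_c, new_in_channels, kh, kw), dtype=weight.dtype)
    widened[:, :in_c] = weight  # copy the pretrained RGB filters unchanged
    return widened
```

A `(64, 3, 7, 7)` pretrained weight thus becomes `(64, 4, 7, 7)`, with the fourth channel starting at zero.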

Evaluate

  • Execute:

    sh tools/test_cocoa.sh
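
Amodal completion quality is commonly scored by mask IoU. A minimal sketch of that metric (illustrative only; the actual evaluation protocol is whatever `tools/test_cocoa.sh` runs):

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union of two boolean masks of the same shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, gt).sum() / union
```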

Bibtex

@inproceedings{zhan2020self,
 author = {Zhan, Xiaohang and Pan, Xingang and Dai, Bo and Liu, Ziwei and Lin, Dahua and Loy, Chen Change},
 title = {Self-Supervised Scene De-occlusion},
 booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
 month = {June},
 year = {2020}
}

Acknowledgement

  1. We used the code and models of GCA-Matting in our demo.

  2. We modified some code from pytorch-inpainting-with-partial-conv to train the PCNet-C.
