[CVPR 2021] Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion

Overview

Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion (MiVOS)

Ho Kei Cheng, Yu-Wing Tai, Chi-Keung Tang

CVPR 2021

[arXiv] [Paper PDF] [Project Page] [Demo] [Papers with Code]

demo1 demo2 demo3

Credit (left to right): DAVIS 2017, Academy of Historical Fencing, Modern History TV

We manage the project using three different repositories, corresponding to the three modules in the paper title. This is the main repo; see also Mask-Propagation and Scribble-to-Mask.

Overall structure and capabilities

| Capability | MiVOS | Mask-Propagation | Scribble-to-Mask |
| --- | :---: | :---: | :---: |
| DAVIS/YouTube semi-supervised evaluation | | ✔️ | |
| DAVIS interactive evaluation | ✔️ | | |
| User interaction GUI tool | ✔️ | | |
| Dense correspondences | | ✔️ | |
| Train propagation module | | ✔️ | |
| Train S2M (interaction) module | | | ✔️ |
| Train fusion module | ✔️ | | |
| Generate more synthetic data | ✔️ | | |

Framework

framework

Requirements

We used these packages/versions in developing this project. Higher versions of the same packages will likely also work. This is not an exhaustive list -- other common Python packages (e.g., Pillow) are expected and not listed.

Refer to the official PyTorch guide for installing PyTorch/torchvision. The rest can be installed by:

pip install PyQt5 davisinteractive progressbar2 opencv-python networkx gitpython gdown Cython

Quick start

  1. python download_model.py to get all the required models.
  2. python interactive_gui.py --video [path to video] or python interactive_gui.py --images [path to a folder of images]. A video has been prepared for you at examples/example.mp4.
  3. If you need to label more than one object, additionally specify --num_objects [number of objects]. See the example command after this list.
  4. There are instructions in the GUI. You can also watch the demo videos for some ideas.
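
For example, to label two objects in the provided example video:

python interactive_gui.py --video examples/example.mp4 --num_objects 2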

Main Results

DAVIS/YouTube semi-supervised results

DAVIS Interactive Track

All results are generated using the unmodified official DAVIS interactive bot, without saving masks (--save_mask not specified), on an RTX 2080Ti. We follow the official protocol.
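
For orientation, here is a minimal sketch of the davisinteractive evaluation loop that scripts like eval_interactive_davis.py build on. The session parameters mirror the ones used in this repo; predict_masks is a placeholder, not the real model:

# Minimal sketch of the official davisinteractive evaluation loop.
from davisinteractive.session import DavisInteractiveSession

def predict_masks(sequence, scribbles):
    # Placeholder for the MiVOS interaction-to-mask + propagation pipeline.
    raise NotImplementedError

with DavisInteractiveSession(davis_root='DAVIS/2017/trainval',
                             max_nb_interactions=8, max_time=8 * 30) as sess:
    while sess.next():
        # Scribbles provided by the bot for the current interaction round.
        sequence, scribbles, new_sequence = sess.get_scribbles()
        sess.submit_masks(predict_masks(sequence, scribbles))
    summary = sess.get_global_summary()  # AUC-J&F / J&F-at-time summary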

Precomputed results, with the JSON summary: [Google Drive] [OneDrive]

Results from eval_interactive_davis.py:

| Model | AUC-J&F | J&F @ 60s |
| --- | :---: | :---: |
| Baseline | 86.0 | 86.6 |
| (+) Top-k | 87.2 | 87.8 |
| (+) BL30K pretraining | 87.4 | 88.0 |
| (+) Learnable fusion | 87.6 | 88.2 |
| (+) Difference-aware fusion (full model) | 87.9 | 88.5 |

Pretrained models

python download_model.py should get you all the models that you need. (pip install gdown required.)

[OneDrive Mirror]

Training

Data preparation

Datasets should be arranged in the following layout. You can use download_datasets.py (same as the one in Mask-Propagation) to get the DAVIS dataset, and manually download and extract fusion_data ([OneDrive]) and BL30K. A quick layout check is sketched after the tree below.

├── BL30K
├── DAVIS
│   └── 2017
│       ├── test-dev
│       │   ├── Annotations
│       │   └── ...
│       └── trainval
│           ├── Annotations
│           └── ...
├── fusion_data
└── MiVOS
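
A quick sanity check for this layout (a minimal sketch; set root to the directory that contains the folders above):

# Sanity-check the expected dataset layout before training (sketch).
from pathlib import Path

root = Path('.')  # directory containing BL30K/, DAVIS/, fusion_data/, MiVOS/
for rel in ['BL30K', 'DAVIS/2017/trainval/Annotations',
            'DAVIS/2017/test-dev/Annotations', 'fusion_data']:
    print(f'{rel}: {"OK" if (root / rel).is_dir() else "MISSING"}')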

BL30K

BL30K is a synthetic dataset rendered using Blender with ShapeNet's data. We break the dataset into six segments, each with approximately 5K videos. The videos are organized in the same format as DAVIS and YouTubeVOS, so dataloaders for those datasets can be used directly (see the sketch below). Each video is 160 frames long, with a frame resolution of 768×512. There are 3-5 objects per video, and each object follows a random smooth trajectory -- we greedily optimized the trajectories to minimize object intersection (not guaranteed), so occlusions are still possible (and happen often in practice). See generation/blender/generate_yaml.py for details.
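
Since the layout mirrors DAVIS/YouTubeVOS, frames and annotations can be enumerated the usual way. A sketch, assuming the standard JPEGImages/Annotations folder structure that those dataloaders expect:

# List BL30K videos in the DAVIS-style layout (sketch; folder names assumed).
from pathlib import Path

bl30k = Path('BL30K')
for video in sorted((bl30k / 'JPEGImages').iterdir()):
    frames = sorted(video.glob('*.jpg'))  # 160 frames per video, 768x512
    masks = sorted((bl30k / 'Annotations' / video.name).glob('*.png'))
    print(video.name, len(frames), len(masks))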

We found that using roughly half of the data is sufficient to reach full performance (although we still used all of it), while using less than one-sixth (5K videos) is insufficient.

Download

You can either use the automatic script download_bl30k.py or download it manually below. Note that each segment is about 115GB in size -- 700GB in total. You are going to need ~1TB of free disk space to run the script (including extraction buffer).

Google Drive is much faster in my experience. Your mileage might vary.

Manual download: [Google Drive] [OneDrive]

Generation

  1. Download ShapeNet.
  2. Install Blender (we used 2.82).
  3. Download a collection of background and texture images. We used this repo (we specified "non-commercial reuse" in the script); the list of keywords is provided in generation/blender/*.json.
  4. Generate a list of configuration files (generation/blender/generate_yaml.py).
  5. Run rendering on the configurations; see here (not documented in detail -- ask if you have a question). A hypothetical invocation is sketched after this list.
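
Rendering is typically invoked with Blender in headless mode. A hypothetical example (only --background and --python are real Blender flags; the script path and --config argument are illustrative, so check generation/blender for the actual entry point):

blender --background --python generation/blender/render.py -- --config config_0001.yaml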

Fusion data

We run the propagation module over some data to obtain real (model-generated) outputs for training the fusion module. See the script generate_fusion.py.

Alternatively, you can download pre-generated fusion data (the fusion_data [OneDrive] link under Data preparation above).
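
For intuition, the learned fusion replaces naive blending of the two mask streams propagated from different interaction rounds. Below is a toy sketch of distance-weighted linear blending, the kind of baseline the difference-aware fusion module improves on (illustrative only; see generate_fusion.py and the paper for the real pipeline):

# Toy baseline: blend two propagated probability maps by temporal distance.
import numpy as np

def linear_blend(prob_a, prob_b, dist_a, dist_b):
    # prob_a/prob_b: mask probabilities propagated from two interacted frames
    # dist_a/dist_b: temporal distance of the current frame from those frames
    w_a = dist_b / (dist_a + dist_b + 1e-8)  # nearer interaction weighs more
    return np.clip(w_a * prob_a + (1.0 - w_a) * prob_b, 0.0, 1.0)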

Training commands

These commands are to train the fusion module only.

CUDA_VISIBLE_DEVICES=[a,b] OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port [cccc] --nproc_per_node=2 train.py --id [defg] --stage [h]

We implemented training with Distributed Data Parallel (DDP) with two 11GB GPUs. Replace a, b with the GPU ids, cccc with an unused port number, defg with a unique experiment identifier, and h with the training stage (0/1).

The model is trained progressively in stages (0: BL30K; 1: DAVIS). After each stage finishes, we start the next one by loading the trained weights. A pretrained propagation model is required to train the fusion module.

One concrete example is:

Pre-training on the BL30K dataset: CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 7550 --nproc_per_node=2 train.py --load_prop saves/propagation_model.pth --stage 0 --id retrain_s0

Main training: CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 7550 --nproc_per_node=2 train.py --load_prop saves/propagation_model.pth --stage 1 --id retrain_s012 --load_network [path_to_trained_s0.pth]

Credit

f-BRS: https://github.com/saic-vul/fbrs_interactive_segmentation

ivs-demo: https://github.com/seoungwugoh/ivs-demo

deeplab: https://github.com/VainF/DeepLabV3Plus-Pytorch

STM: https://github.com/seoungwugoh/STM

BlenderProc: https://github.com/DLR-RM/BlenderProc

Citation

Please cite our paper if you find this repo useful!

@inproceedings{MiVOS_2021,
  title={Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion},
  author={Cheng, Ho Kei and Tai, Yu-Wing and Tang, Chi-Keung},
  booktitle={CVPR},
  year={2021}
}

Contact: [email protected]

Comments
  • Some problems when training Fusion

    Hello, I encountered some problems when retraining the fusion model. Some key parameter guidelines for training fusion are not given in the code repository. Could you provide them? Specifically: (1) generate_fusion.py: the parameter "separation" is not documented.

    Could you provide descriptions of the relevant fusion-training parameters and the commands to run, so that I can reproduce the results of your paper?

    Also, when I try to train (python train.py), I hit a code error in fusion_dataset.py: (1) is there a mistake in how self.vid_to_instance is assigned? The code raises an error at self.videos = [v for v in self.videos if v in self.vid_to_instance] (line 60 in fusion_dataset.py).

    opened by nazimii 12
  • Process killed

    I tried MiVOS + STCN on a 1.5-minute 4K video that was downsampled to 480p, and the program crashed.

    What are the steps to reformat/sample a 4k video to make it work for this tool?

    Also can this tool run on multiple GPUs?

    opened by zdhernandez 11
  • Fine-tune guidance

    Hi, really loved the work. I'm trying to fine-tune the downloaded models (from download_model.py) on another domain. I was wondering if you could help me with where to put the data and which command to run for training.

    Thank you

    opened by be-redAsmara 8
  • RuntimeError: "slow_conv_dilated<>" not implemented for 'BFloat16' (example.mp4)

    Hello! I followed the Quick start instructions with these settings: python interactive_gui.py --video .\example\example.mp4. As I don't have a GPU, I changed the map location to 'cpu'. When I select the "click" radio button and click on the object to create the mask, a runtime error is thrown (see screenshot). Could you give me some suggestions? Looking forward to your reply.

    opened by xwhkkk 6
  • --images with mem_profile 2 | RuntimeError: All input tensors must be on the same device. Received cpu and cuda:0

    To replicate:

    • create a folder with only one image
    • run: python interactive_gui.py --mem_profile 2 --images ./example/test_folder/
    • select the "click" radio button
    • click on the image to create a mask
    • select the "scribble" radio button
    • "scribble" an area in the picture
    • a runtime error is thrown (see screenshot)
    opened by zdhernandez 5
  • Overlay and mask files not equal to the size of the original input image

    @hkchengrex Using one image larger than 1K resolution in one folder, with the command python interactive_gui.py --mem_profile 2 --images ./example/test_folder/:

    • clicking on an object produces the mask
    • clicking "save" saves the overlay and masks
    • both overlay and mask files are reduced to a fixed resolution of width 480px, height 640px

    Q: Can we keep the size of the output files equal to the input size of the original image?
    Q: Can we add a flag to choose between the current behavior and preserving the resolution of the input image?

    opened by zdhernandez 4
  • Getting 'ValueError: Davis root folder must be named "DAVIS"' when I try to run eval_interactive_davis.py

    Traceback (most recent call last):
      File "/home/bereket/Desktop/IRCAD-Data/MiVOS/MiVOS-MiVOS-STCN/eval_interactive_davis.py", line 76, in <module>
        with DavisInteractiveSession(davis_root=davis_path+'/trainval', report_save_dir='../output', max_nb_interactions=8, max_time=8*30) as sess:
      File "/home/bereket/anaconda3/envs/ivos/lib/python3.9/site-packages/davisinteractive/session/session.py", line 89, in __enter__
        samples, max_t, max_i = self.connector.start_session(
      File "/home/bereket/anaconda3/envs/ivos/lib/python3.9/site-packages/davisinteractive/connector/local.py", line 29, in start_session
        self.service = EvaluationService(davis_root=davis_root)
      File "/home/bereket/anaconda3/envs/ivos/lib/python3.9/site-packages/davisinteractive/evaluation/service.py", line 27, in __init__
        self.davis = Davis(davis_root=davis_root)
      File "/home/bereket/anaconda3/envs/ivos/lib/python3.9/site-packages/davisinteractive/dataset/davis.py", line 93, in __init__
        raise ValueError('Davis root folder must be named "DAVIS"')
    ValueError: Davis root folder must be named "DAVIS"

    opened by be-redAsmara 4
  • Processing long videos at high resolution

    Hello! Thank you for the amazing framework!

    I have an issue when processing a long, high-resolution video: I run out of GPU memory. As I understand it, MiVOS tries to upload all images directly to the GPU, so it can't handle videos that are too long or too high-resolution. Is there a way to fix this? Maybe modify the code to work with data chunks?

    Thank you in advance!

    opened by devidlatkin 4
  • Has anyone met the following problem when running interactive_gui.py?

    Traceback (most recent call last):
      File "interactive_gui.py", line 23, in <module>
        from PyQt5.QtWidgets import (QWidget, QApplication, QMainWindow, QComboBox, QGridLayout,
    ImportError: /usr/lib/x86_64-linux-gnu/libQt5Core.so.5: version `Qt_5.15' not found (required by /home/fg/anaconda3/envs/MiVOS/lib/python3.7/site-packages/PyQt5/QtWidgets.abi3.so)

    opened by Starboy-at-earth 4
  • static dataset in download_dataset.py

    I noticed that there is a static dataset in download_dataset.py; where is this static dataset used?

    Also, in readme.md you say you use BL30K to train the fusion model, and BL30K is very large (600GB). So you used a 600GB dataset to pretrain the fusion model?

    opened by nazimii 3
  • Temporal Information

    Hi, I am interested in your project and I would like to go into detail on one aspect related to temporal information. Are you training your model on video datasets? Do you get temporal information from the dataset, or has your model been trained on single images considering only spatial information?

    Thank you so much. Best, Francesca

    opened by FrancescaCi 3
  • CPU profile (--mem_profile 2) throws CUDA out of memory for one image with multiple objects when the propagate button is clicked

    @hkchengrex To replicate:

    • load only one image (3024 by 4032) in the folder ./example/test_folder/
    • run: python interactive_gui.py --mem_profile 2 --images ./example/test_folder/ --resolution -1 --num_objects 4
    • click on one object to create the overlay of the first object (red)
    • select num keypad 2 and click a different object (to produce an overlay of a different color)
    • select num keypad 3 and click a different object (to produce an overlay of a different color)
    • click "propagate"; an error is thrown (see screenshot)

    Even though I am working with only one image, clicking "Save" does what it is supposed to (saves the overlay and mask). But clicking "Propagate" should not throw a CUDA error when --mem_profile is set to 2, right? It should not have used the GPU.

    opened by zdhernandez 7