LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation (NeurIPS 2021 Datasets and Benchmarks Track)

Overview

LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation

by Junjue Wang, Zhuo Zheng, Ailong Ma, Xiaoyan Lu, and Yanfei Zhong


This is the official implementation of LoveDA from our NeurIPS 2021 paper "LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation".

Citation

If you use LoveDA in your research, please cite our NeurIPS 2021 paper.

    @inproceedings{wang2021loveda,
        title={Love{DA}: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation},
        author={Junjue Wang and Zhuo Zheng and Ailong Ma and Xiaoyan Lu and Yanfei Zhong},
        booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
        year={2021},
        url={https://openreview.net/forum?id=bLBIbVaGDu}
    }

Dataset

Coming Soon!

Comments
  • bad cbst result

    Hello, we re-ran cbst_train with the default settings you provide, but got bad results as shown in the figure, even worse than the source-only method. I wonder about the stability of CBST training, and I would appreciate it if you could provide the training log for CBST. Thank you very much! (attachment: Screenshot 2021-11-11 112454.png)

    bug 
    opened by Luffy03 14
  • About the accuracy of the CodaLab website

    Why is the domain adaptation mIoU on the CodaLab site so high? Shouldn't the "Oracle" mIoU reported in the paper be the upper bound for this domain adaptation task?

    question 
    opened by Hcshenziyang 6
  • Results submitted to Codalab

    The results submitted to CodaLab get a zero score and zero ExecutionTime. I wonder whether something is wrong with CodaLab or whether it is just my own mistake. The output class indices are 0-6 with 1024×1024 pixels.

    question 
    opened by Luffy03 6
  • Invitation to incorporate the LoveDA dataset into MMSegmentation

    Hi, I am a member of OpenMMLab, which develops MMSegmentation. Our vision is to provide up-to-date methods and datasets (i.e., benchmarks) for researchers and the community around the world.

    First, congratulations on the acceptance at NeurIPS'21. I think this dataset and benchmark will definitely help the remote sensing image field, where semantic segmentation plays an important role.

    Frankly speaking, right now we do not have many human resources. Would you like to help us incorporate your dataset into MMSegmentation? We appreciate all contributors and users; here are our contributing details.

    I think if LoveDA is provided by MMSegmentation, it could let more people use and cite this excellent work, especially those who want to establish a standard segmentation benchmark.

    Looking forward to your reply. Wish you all the best.

    Best,

    good first issue 
    opened by MengzhangLI 6
  • Potential shift in class labels

    Following up on the discussion from #23, I was wondering whether, in the context of the semantic segmentation task, there could be a shift in class labels between the data on which the pretrained model hrnetw32.pth was trained and the data provided in this repo.

    Here I have visualised the true and predicted segmentations on training image 1338 for two different COLOR_MAPs from the repo (render.py and data.loveda.py):

    (Screenshots "Screenshot 2022-03-26 at 10 06 23" and "Screenshot 2022-03-26 at 10 06 31": true vs. predicted segmentations)

    Based on the input image, we can see that the colours are correct for the top-left and bottom-right visualisations. Also, the black colour in the top-right image corresponds to the label IGNORE with RGB values (0,0,0), while in the bottom-left the black colour has RGB values (7,7,7). This seems to be because the COLOR_MAP in data.loveda.py only has 7 classes, indexed 0-6, so agriculture, which has label 7 in the mask images, is not colour-mapped.

    This seems to be related to the difference between labels in the current repo:

    Category labels: background – 1, building – 2, road – 3, water – 4, barren – 5, forest – 6, agriculture – 7. The no-data regions were assigned 0, which should be ignored. The provided data loader will help you construct your pipeline.

    and the ones described on CodaLab:

    Classes indexes: Background - 0, Building - 1, Road - 2, Water - 3, Barren - 4, Forest - 5, Agriculture - 6

    Could this class-label offset be the cause, or is there an alternative explanation that I have not thought of?
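
    If the offset is real, the usual fix is a one-line shift when converting repo-style masks to the CodaLab convention. A minimal sketch, assuming 0 marks no-data in the repo masks; the function name and ignore value are hypothetical:

    import numpy as np

    # Assumed conventions from the discussion above (not the repo's API):
    # repo masks:  0 = no-data (ignore), 1..7 = background..agriculture
    # CodaLab:     0..6 = background..agriculture
    IGNORE = 255  # assumed ignore index for the shifted mask

    def repo_to_codalab(mask: np.ndarray) -> np.ndarray:
        out = mask.astype(np.int16) - 1   # shift classes 1..7 down to 0..6
        out[mask == 0] = IGNORE           # keep no-data pixels out of scoring
        return out.astype(np.uint8)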

    question 
    opened by keliive 3
  • Dataset links for Google drive return a 404 error

    The links to the Google Drive copy of the dataset mentioned in the README.md of this repository, as well as on the competition page, are broken as of 30-01-2022 and return a 404 error. Please update them with working links.

    opened by AnkushMalaker 3
  • The different resolutions in training and testing

    I found that in the training process the input resolution is 512×512, while in the test phase the input resolution is 1024×1024. Could you please tell me why?
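
    For context, here is a sketch of the usual reasoning behind such a split: random 512×512 crops of the 1024×1024 tiles save GPU memory and act as augmentation during training, while inference runs on the full tile for exact scoring. The transforms below are an assumption, not the repo's exact pipeline:

    import torchvision.transforms as T

    # Train on random 512x512 crops; evaluate on the full 1024x1024 tile.
    train_tf = T.Compose([T.RandomCrop(512), T.ToTensor()])
    test_tf = T.ToTensor()  # full-resolution input, no cropping
    # (For segmentation, the same crop must also be applied to the mask.)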

    question 
    opened by Luffy03 3
  • Meaning of line 228 in the Unsupervised_Domian_Adaptation/utils/tools.py

    Hello,

    Thank you very much for making your excellent work open to the public.

    May I ask the meaning of line 228 in tools.py for Unsupervised Domain Adaptation? I found that running bash ./scripts/predict_cbst.sh raises AttributeError: 'NoneType' object has no attribute 'info'. This bug comes from line 228 together with the default setting _default_logger=None, so I wonder what this line is for. I would also like to let you know that after commenting out line 228, the command runs successfully.
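
    A guarded logging helper would avoid the crash without deleting the call. A minimal sketch, assuming line 228 is a plain .info(...) call on _default_logger:

    import logging

    _default_logger = None  # mirrors the default reported above

    def log_info(msg: str) -> None:
        # Fall back to a module logger instead of crashing with
        # AttributeError when no default logger has been configured.
        (_default_logger or logging.getLogger(__name__)).info(msg)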

    Many thanks for your help.

    opened by simonep1052 2
  • [Request] Release codalab evaluation script

    Would it be possible to release the evaluation script from CodaLab? The file format details are a bit confusing. For example, if I set empty regions as transparent or embed a color palette within the image, the evaluation script shows a warning:

    /opt/conda/lib/python2.7/site-packages/PIL/Image.py:870: UserWarning: Palette images with Transparency   expressed in bytes should be converted to RGBA images
      'to RGBA images')
    

    Even if I remove the color palette, I get the following error:

    Traceback (most recent call last):
      File "/tmp/codalab/tmpS_IrwU/run/program/evaluate.py", line 157, in <module>
        metric.forward(gt[valid_inds], mask[valid_inds])
      File "/tmp/codalab/tmpS_IrwU/run/program/evaluate.py", line 22, in forward
        cm = sparse.coo_matrix((v, (y_true, y_pred)), shape=(self.num_classes, self.num_classes), dtype=np.float32)
      File "/opt/conda/lib/python2.7/site-packages/scipy/sparse/coo.py", line 182, in __init__
        self._check()
      File "/opt/conda/lib/python2.7/site-packages/scipy/sparse/coo.py", line 219, in _check
        nnz = self.nnz
      File "/opt/conda/lib/python2.7/site-packages/scipy/sparse/coo.py", line 196, in getnnz
        raise ValueError('row, column, and data array must all be the '
    ValueError: row, column, and data array must all be the same length
    

    I made sure all my images are 1024 × 1024 with a single uint8 channel. The class ids have been assigned as per the specification, with empty regions assigned the value 15.

    Classes indexes

    Background - 0
    Building - 1
    Road - 2
    Water - 3
    Barren - 4
    Forest - 5
    Agriculture - 6
    

    So, it would be helpful to see the evaluation script in order to generate compatible prediction images.
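
    Until the script is released, here is a sketch of writing a mask that avoids the pitfalls reported above; the constraints are inferred from this thread, not confirmed by the organisers:

    import numpy as np
    from PIL import Image

    def save_prediction(pred: np.ndarray, path: str) -> None:
        # pred: (1024, 1024) uint8 array of class ids in 0..6 (list above).
        # Out-of-range values (e.g. 15 for empty regions) may break the
        # sparse confusion matrix in evaluate.py, so remap them first.
        assert pred.shape == (1024, 1024) and pred.dtype == np.uint8
        assert pred.max() <= 6
        # Plain single-channel PNG: no palette and no alpha channel, since
        # both reportedly confuse the CodaLab evaluation script.
        Image.fromarray(pred, mode='L').save(path)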

    opened by digital-idiot 2
  • Can you provide the pre-training weights of the adversarial learning?

    Hi, I would like to use the visualized results of AdaptSeg and CLAN for comparison. Could you provide the pretrained weights (Rural to Urban) of these two networks?

    opened by csliujw 2
  • Running pretrained model without CUDA

    Hi,

    Is there a way to run ./scripts/predict_test.sh without CUDA?

    I am using the LoveDA dataset and pretrained model weights hrnetw32.pth as described in the README.

    Initially I got the error urllib.error.HTTPError: HTTP Error 403: Forbidden, which I fixed by setting pretrained=False as recommended here: https://github.com/Junjue-Wang/LoveDA/issues/9.

    Then when rerunning the predict_test.sh, I got the error:

    Traceback (most recent call last):
      File "predict.py", line 52, in <module>
        predict_test(args.ckpt_path, args.config_path, args.out_dir)
      File "predict.py", line 38, in predict_test
        model.cuda()
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 680, in cuda
        return self._apply(lambda t: t.cuda(device))
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
        module._apply(fn)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
        module._apply(fn)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
        module._apply(fn)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 593, in _apply
        param_applied = fn(param)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 680, in <lambda>
        return self._apply(lambda t: t.cuda(device))
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/cuda/__init__.py", line 208, in _lazy_init
        raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled
    

    I then commented out line 38: https://github.com/Junjue-Wang/LoveDA/blob/4d574ce08f84cbc8d27becf2bd9dce8fbb7f50f8/Semantic_Segmentation/predict.py#L38 and, after rerunning predict_test.sh, got the output:

    Load model!
    INFO:data.loveda:./LoveDA/Val/Urban/images_png -- Dataset images: 0
    INFO:data.loveda:./LoveDA/Val/Rural/images_png -- Dataset images: 0
    INFO:ever.core.logger:HRNetEncoder: pretrained = False
    0it [00:00, ?it/s]
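
    A device-agnostic replacement for line 38 would avoid both the assertion and the need to comment anything out. A minimal sketch, assuming a standard torch.nn.Module; note that the zero image counts above also suggest the LoveDA paths need checking:

    import torch
    import torch.nn as nn

    model = nn.Conv2d(3, 7, kernel_size=3)  # stand-in for the HRNet model
    # Use the GPU only when CUDA is actually available; otherwise run on CPU.
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = model.to(device)  # replaces the unconditional model.cuda()
    x = torch.randn(1, 3, 512, 512, device=device)  # inputs need the same device
    y = model(x)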
    
    question 
    opened by keliive 2
  • bash eval_hrnetw32.sh Error!

    Traceback (most recent call last):
      File "/home/libowen/LoveDA-master/Semantic_Segmentation/predict.py", line 52, in <module>
        predict_test(args.ckpt_path, args.config_path, args.out_dir)
      File "/home/libowen/LoveDA-master/Semantic_Segmentation/predict.py", line 37, in predict_test
        model.load_state_dict(model_state_dict)
      File "/home/libowen/.conda/envs/bw/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1667, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for HRNetFusion:
        Missing key(s) in state_dict: "backbone.hrnet.conv1.weight", "backbone.hrnet.bn1.weight", "backbone.hrnet.bn1.bias", "backbone.hrnet.bn1.running_mean", ...
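
    Missing backbone.* keys usually point to a key-prefix mismatch between the checkpoint and the model. A hedged sketch of the common workaround; the module. prefix and the nested 'model' key are assumptions:

    import torch

    def load_checkpoint(model: torch.nn.Module, ckpt_path: str) -> None:
        ckpt = torch.load(ckpt_path, map_location='cpu')
        state = ckpt.get('model', ckpt)  # weights are sometimes nested
        # Strip a leading 'module.' left over from DataParallel, if any.
        state = {k[len('module.'):] if k.startswith('module.') else k: v
                 for k, v in state.items()}
        # strict=False reports, rather than raises on, remaining mismatches.
        missing, unexpected = model.load_state_dict(state, strict=False)
        print('missing keys:', missing)
        print('unexpected keys:', unexpected)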

    question 
    opened by kukujoyyo 1
  • Predict.py Problem

    I downloaded the pretrained weights and used predict.py to test some images, but hit this bug. What is the problem with the fuse_layers?

    File "test4/Road/LoveDA-master/Semantic_Segmentation/module/baseline/base_hrnet/_hrnet.py", line 394, in forward y = y + self.fuse_layers[i][j](x[j]) RuntimeError: The size of tensor a (500) must match the size of tensor b (504) at non-singleton dimension 3

    question 
    opened by Acid-knight 3
  • Can we run this work with one GPU?

    **Can we run this work with one GPU? If so, how should the parameters be set?**

    I've got the issue below:

    PS F:\Models\LoveDA-master\Semantic_Segmentation> bash ./scripts/train_hrnetw32.sh
    NOTE: Redirects are currently not supported in Windows or MacOs.
    Init Trainer
    Set Seed Torch
    Traceback (most recent call last):
      File "train.py", line 79, in <module>
        trainer = er.trainer.get_trainer('th_amp_ddp')()
      File "D:\ProgramData\Anaconda3\lib\site-packages\ever\api\trainer\th_amp_ddp_trainer.py", line 77, in __init__
        torch.cuda.set_device(self.args.local_rank)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\cuda\__init__.py", line 311, in set_device
        device = _get_device_index(device)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\cuda\_utils.py", line 34, in _get_device_index
        return _torch_get_device_index(device, optional, allow_cpu)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\_utils.py", line 537, in _get_device_index
        'or an integer, but got:{}'.format(device))
    ValueError: Expected a torch.device with a specified index or an integer, but got:None
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 108252) of binary: D:\ProgramData\Anaconda3\python.exe
    Traceback (most recent call last):
      File "D:\ProgramData\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "D:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "D:\ProgramData\Anaconda3\Scripts\torchrun.exe\__main__.py", line 7, in <module>
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\elastic\multiprocessing\errors\__init__.py", line 345, in wrapper
        return f(*args, **kwargs)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\run.py", line 724, in main
        run(args)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\run.py", line 718, in run
        )(*cmd_args)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\launcher\api.py", line 131, in __call__
        return launch_agent(self._config, self._entrypoint, list(args))
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\launcher\api.py", line 247, in launch_agent
        failures=result.failures,
    torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

    train.py FAILED

    Failures: <NO_OTHER_FAILURES>

    Root Cause (first observed failure):
    [0]:
      time       : 2022-11-13_13:10:33
      host       : KWPAACQRFTY8V05
      rank       : 0 (local_rank: 0)
      exitcode   : 1 (pid: 108252)
      error_file : <N/A>
      traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
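
    The ValueError shows local_rank arriving as None, so for a single-GPU run a defensive default is the usual workaround. A minimal sketch, assuming the trainer can simply target device 0; torchrun normally exports the LOCAL_RANK environment variable:

    import os
    import torch

    # Fall back to device 0 when the launcher passes no rank; prefer the
    # LOCAL_RANK environment variable set by torchrun over a None argument.
    local_rank = int(os.environ.get('LOCAL_RANK', 0))
    if torch.cuda.is_available():
        torch.cuda.set_device(local_rank)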

    question 
    opened by kukujoyyo 1
  • no such file problem when training ST 2urban scripts

    When training the self-training 2urban scripts, such as CBST_train.py and IAST_train.py, there is a problem: 'FileNotFoundError: No such file: '/home/xxx/ssuda/UDA/log/cbst/2urban/pseudo_label/3814.png''. I guess this is because the batch size is set to 2, and as expected the problem is solved when the batch size is changed to 1.

    So, I wonder whether this is a bug or expected behaviour?

    Thanks for your excellent work!

    question 
    opened by lyhnsn 2
Releases

v0.2.0-alpha

Owner

Kingdrone (Deep learning in RS)