Argoverse 2 API

Official GitHub repository for the Argoverse 2 family of datasets.

If you have any questions or run into any problems with either the data or API, please feel free to open a GitHub issue!

TL;DR

  • Install the API: pip install av2
  • Read the instructions in DOWNLOAD.md to download the data.

Getting Started

Setup

The easiest way to install the API is via pip by running the following command:

pip install av2
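
To quickly verify the install (an optional smoke test, not an official step):

    python -c "import av2"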

Datasets

The Argoverse 2 family consists of four distinct datasets:

Dataset Name                   | Scenarios | Camera Imagery | Lidar | Maps | Additional Information
Sensor                         | 1,000     | ✓              | ✓     | ✓    | Sensor Dataset README
Lidar                          | 20,000    |                | ✓     | ✓    | Lidar Dataset README
Motion Forecasting             | 250,000   |                |       | ✓    | Motion Forecasting Dataset README
Map Change (Trust, but Verify) | 1,045     | ✓              | ✓     | ✓    | Map Change Dataset README

Please see DOWNLOAD.md for detailed instructions on how to download each dataset.
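
For reference, the data is hosted in a public S3 bucket and fetched with s5cmd. A typical command (shown here for the tbv subset, as echoed in the comments below; the target directory is a placeholder) looks like:

    s5cmd --no-sign-request cp "s3://argoai-argoverse/av2/tbv/*" target-directory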

Map API

Please refer to the map README for additional details about the common format for vector and raster maps that we employ across all AV2 datasets.
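
As a quick illustration, here is a minimal sketch of loading a single log's vector map with the API (the map directory path is a placeholder; set build_raster=True to additionally load raster layers):

    from pathlib import Path

    from av2.map.map_api import ArgoverseStaticMap

    log_map_dirpath = Path("data/sensor/val/<log_id>/map")  # placeholder path
    avm = ArgoverseStaticMap.from_map_dir(log_map_dirpath, build_raster=False)

    # Query vector map entities for this log.
    lane_segments = avm.get_scenario_lane_segments()
    ped_crossings = avm.get_scenario_ped_crossings()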

Compatibility Matrix

Python Version | linux | macOS | windows
3.8            | ✓     | ✓     | ✓
3.9            | ✓     | ✓     | ✓
3.10           | ✓     | ✓     | ✓

Testing

All incoming pull requests are tested using nox as part of the CI process. This ensures that the latest version of the API is always stable on all supported platforms. You can run the full suite of automated checks and tests locally using the following command:

nox -r

Contributing

Have a cool feature you'd like to add? Found an unhandled corner case? The Argoverse team welcomes contributions from the open source community. Please open a PR using the following template!

Citing

Please use the following citation when referencing the Argoverse 2 Sensor, Lidar, or Motion Forecasting Datasets:

@INPROCEEDINGS { Argoverse2,
  author = {Benjamin Wilson and William Qi and Tanmay Agarwal and John Lambert and Jagjeet Singh and Siddhesh Khandelwal and Bowen Pan and Ratnesh Kumar and Andrew Hartnett and Jhony Kaesemodel Pontes and Deva Ramanan and Peter Carr and James Hays},
  title = {Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting},
  booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)},
  year = {2021}
}

Use the following citation when referencing the Argoverse 2 Map Change Dataset:

@INPROCEEDINGS { TrustButVerify,
  author = {John Lambert and James Hays},
  title = {Trust, but Verify: Cross-Modality Fusion for HD Map Change Detection},
  booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)},
  year = {2021}
}

License

All code provided within this repository is released under the MIT license and bound by the Argoverse terms of use; please see LICENSE and NOTICE for additional details.

Comments
  • Downloading the tbv dataset.

    I'm trying to download the tbv dataset, and there seem to be two sets of instructions for doing so. Do these two methods produce the same result?

    One here:

    1. https://github.com/argoai/argoverse2-api/blob/main/DOWNLOAD.md

       s5cmd --no-sign-request cp s3://argoai-argoverse/av2/tbv/* target-directory

    And another here:

    2. https://github.com/argoai/argoverse2-api/blob/main/src/av2/datasets/tbv/README.md

       SHARD_DIR={DESIRED PATH FOR TAR.GZ files}
       s5cmd cp s3://argoai-argoverse/av2/tars/tbv/*.tar.gz ${SHARD_DIR}

    When I try 1, I get the error "s5cmd is hitting the max open file limit allowed by your OS. Either increase the open file limit or try to decrease the number of workers with '-numworkers' parameter".

    When I try 2, I get the error "Error session: fetching region failed: NoCredentialProviders: no valid providers in chain. Deprecated."

    Method 1 downloads roughly half of the dataset before failing, while method 2 doesn't initiate the download at all. I will probably continue with 1, but 2 is probably faster. I'm using Ubuntu 18.04.
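
    For what it's worth, each error message hints at its own fix: the first can be worked around by raising the shell's open-file limit or lowering s5cmd's concurrency, and the second likely occurs because the command omits --no-sign-request, so s5cmd goes looking for AWS credentials. A sketch (the limit and worker count are illustrative, not recommendations from the maintainers):

        # Raise the per-process open-file limit for the current shell session.
        ulimit -n 8192

        # Or lower s5cmd's concurrency, as the first error message suggests.
        s5cmd --no-sign-request -numworkers 16 cp "s3://argoai-argoverse/av2/tbv/*" target-directory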
    opened by tom-bu 13
  • What is the format of the submission for 3D object detection competition?

    The Submission Guidelines say nothing about the submission format; could you give more details, or provide a sample submission? Thank you very much!

    opened by fangjin-cool 7
  • questions for visualization

    Dear all:

    When I run the 'generate_sensor_dataset_visualizations.py' file, it always reports the error: No such file or directory. I checked the difference and found that the erroneous path is '/.../argv2/SensorDataset/sensor/SensorDataset_val/5589de60-1727-3e3f-9423-33437fc5da4b/sensors/lidar/315967919259399000.feather' while the true path is '/.../argv2/SensorDataset/sensor/val/5589de60-1727-3e3f-9423-33437fc5da4b/sensors/lidar/315967919259399000.feather'. Is there a parameter in the program that needs to be adjusted, or something else? Hoping for your reply, and thanks so much.

    opened by tommygojerry 5
  • Argoverse 2.0 vs Argoverse 1.1 API

    Hi folks,

    I am trying to run my model on Argoverse 2.0; it was previously trained using 1.1 and its corresponding API. However, after installing and cloning the API and checking the tutorials, dataloaders, etc., this API looks quite a bit smaller than Argoverse 1.1's, and the organization also seems to be different (e.g. where are the CSVs with the trajectories?). Where can I find all the required documentation?
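
    For anyone with the same question: AV2 motion forecasting scenarios are stored as parquet files rather than CSVs. A minimal loading sketch (the scenario path is a placeholder):

        from pathlib import Path

        from av2.datasets.motion_forecasting.scenario_serialization import (
            load_argoverse_scenario_parquet,
        )

        scenario_path = Path("val/<scenario_id>/scenario_<scenario_id>.parquet")  # placeholder
        scenario = load_argoverse_scenario_parquet(scenario_path)
        print(scenario.scenario_id, len(scenario.tracks))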

    opened by Cram3r95 5
  • lane label annotation method inquiry

    Hi, since there is no information about how the lane markings are labeled in the Argoverse 2 dataset, I wonder whether these lane-marking labels are annotated in the originally collected point cloud (i.e. labeled in 3D space), or annotated on the image by projecting the point cloud onto the corresponding image.

    Hope you can help me figure this out; thanks in advance :)

    question 
    opened by Mollylulu 4
  • Similarity argoverse 1 / argoverse 2

    Hey, the Argoverse 2 dataset comes with new and richer scenes. Comparing the scenes of AV1 to AV2 in the respective cities: how similar would you consider them? In short: would you say training with Argoverse 2 covers all the relevant data to perform well on Argoverse 1? I would be particularly interested in the motion forecasting dataset. Looking forward to your answer! Thanks a lot!

    question 
    opened by odunkel 4
  • Motion forecasting: Focal agent not always observed over the full scenario length

    Hey everyone,

    I had a look at the motion forecasting dataset, and there seems to be an issue with the trajectories of the focal agent. According to the paper, the focal agent should always be observed over the full 11 seconds, which corresponds to 110 observations: "Within each scenario, we mark a single track as the 'focal agent'. Focal tracks are guaranteed to be fully observed throughout the duration of the scenario and have been specifically selected to maximize interesting interactions with map features and other nearby actors (see Section 3.3.2)"

    However, this is not the case for some scenarios (~3% of them). One example: scenario '0215552f-6951-47e5-8cf6-3d1351d28957' of the validation set has a focal trajectory with only 104 observations.

    Can you reproduce my problem? Is this intended, or can we expect it to be fixed in the near future?
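
    For reference, a quick check of a focal track's observed length (a sketch; attribute names follow the av2 motion forecasting schema, and the scenario path is a placeholder):

        from pathlib import Path

        from av2.datasets.motion_forecasting.scenario_serialization import (
            load_argoverse_scenario_parquet,
        )

        scenario = load_argoverse_scenario_parquet(
            Path("val/<scenario_id>/scenario_<scenario_id>.parquet")  # placeholder
        )
        focal = next(t for t in scenario.tracks if t.track_id == scenario.focal_track_id)
        print(len(focal.object_states))  # 110 expected for a fully observed track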

    Looking forward to hearing from you!

    Best regards

    SchDevel

    bug 
    opened by SchDevel 4
  • How to evaluate 3D object detection on validation split?

    Thanks for your excellent work! I would like to know how to evaluate 3D object detection on the validation split. I notice there is a PR about this. When will the stable version be released? I am looking forward to it!

    opened by Abyssaledge 4
  • Is it possible to extract the route information?

    Hi, thank you for providing the outstanding dataset.

    I am particularly interested in the motion forecasting dataset, and I have a question: is it possible to extract the route of the self-driving vehicle in each scenario?

    opened by panda2020-sky 4
  • Error with generate_sensor_dataset_visualizations.py

    Hi, when I run python tutorials/generate_sensor_dataset_visualizations.py -d /xxx/av2, I get the error: FileNotFoundError: [Errno 2] Failed to open local file '/xxx/av2/test/0c6e62d7-bdfa-3061-8d3d-03b13aa21f68/annotations.feather'. Detail: [errno 2] No such file or directory. The test set has no labels, so why is it not filtered out in the code? What is the correct command to run this script? Thanks.

    question 
    opened by DuZzzs 3
  • Follow up for https://github.com/argoai/av2-api/issues/77

    Hi,

    Sorry for the delay. Thank you for your help! I went through the dataset API and was able to isolate individual point clouds.

    [Image: Joint (L), Top (R)]

    [Image: Top (L), Bottom (R)]

    Does this look sensible? Here is the code snippet:

        from pathlib import Path

        import numpy as np

        from av2.datasets.sensor.sensor_dataloader import SensorDataloader

        dataset = SensorDataloader(
            Path(settings.argoverse_dataset_root),  # user's own config object
            with_annotations=True,
            with_cache=True,
        )
        for index, data_frame in enumerate(dataset):
            sweep = data_frame.sweep                       # has lidar info
            annotations = data_frame.annotations           # has boxes
            pose = data_frame.timestamp_city_SE3_ego_dict

            # get the lidar - both sensors combined into a single point cloud
            pcl_joint = sweep.xyz

            # append reflectances and laser numbers
            pcl_joint = np.hstack([pcl_joint,
                                   np.expand_dims(sweep.intensity, -1),
                                   np.expand_dims(sweep.laser_number, -1)])

            # laser number [0, 31] -> top lidar, [32, 63] -> bottom lidar
            r_up = np.where(pcl_joint[:, -1] < 32)
            pcl_up = pcl_joint[r_up]      # top lidar point cloud

            r_down = np.where(pcl_joint[:, -1] >= 32)
            pcl_down = pcl_joint[r_down]  # bottom lidar point cloud

    Please let me know if this is the correct way, just to be sure.

    Best Regards Sambit

    opened by SM1991CODES 2
  • centerline of static map

    I noticed that there are two ways to get the centerline of a lane_segment. First, we can read the data directly from the raw map file. Second, we can use the ArgoverseStaticMap method get_lane_segment_centerline. I want to know the difference between these two methods.
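
    If it helps, here is a minimal sketch of the API route (the map directory is a placeholder). As far as I can tell, get_lane_segment_centerline infers the centerline from the segment's left and right boundaries and resamples it to a fixed number of waypoints, whereas the raw map file stores the boundary geometry itself:

        from pathlib import Path

        from av2.map.map_api import ArgoverseStaticMap

        avm = ArgoverseStaticMap.from_map_dir(Path("<log_id>/map"), build_raster=False)  # placeholder
        for ls in avm.get_scenario_lane_segments():
            centerline = avm.get_lane_segment_centerline(ls.id)  # (N, 3) waypoints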

    opened by ChevinB 0
  • Interestingness score

    Hey,

    you roughly explained the interestingness score in your paper and in the supplementary material. Are you planning to share more details about the process of selecting interesting scenarios, or is this confidential?

    I am looking forward to your answer.

    Best regards

    opened by odunkel 0
  • Path issue in from_map_dir function of map_api

    The vector_data_json_path variable seems to resolve to the wrong path (when a relative path is passed in the Map_Tutorial notebook).

    Setting it to just vector_data_fname seems to work for me, instead of log_map_dirpath / vector_data_fname.

    Could you check it out, please?

    Thanks!

    opened by Shivanshu17 1
  • Pytorch Dataloader.

    PR Summary

    Testing

    In order to ensure this PR works as intended, it is:

    • [ ] unit tested.
    • [ ] other or not applicable (additional detail/rationale required)

    Compliance with Standards

    As the author, I certify that this PR conforms to the following standards:

    • [ ] Code changes conform to PEP8 and docstrings conform to the Google Python style guide.
    • [ ] A well-written summary explains what was done and why it was done.
    • [ ] The PR is adequately tested and the testing details and links to external results are included.
    opened by benjaminrwilson 0
  • timestamps_ns in motion forecast dataset

    I tried to convert timestamps_ns assuming Unix epoch format, and all scenarios seem to refer to dates and times in the year 1980. Has there been any deliberate anonymization of the timestamps, or am I doing the conversion wrong?
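
    Converting a sample sensor timestamp from the threads above lands almost exactly at the GPS epoch (1980-01-06), which is consistent with timestamps that were deliberately re-based so that only relative timing within a log is meaningful; the exact anonymization scheme is an assumption here, not documented fact. A quick sanity check:

        from datetime import datetime, timezone

        ts_ns = 315967919259399000  # a lidar timestamp from the threads above
        dt = datetime.fromtimestamp(ts_ns / 1e9, tz=timezone.utc)
        print(dt)  # early 1980, right at the GPS epoch

        gps_epoch = datetime(1980, 1, 6, tzinfo=timezone.utc)
        print((dt - gps_epoch).total_seconds(), "seconds after the GPS epoch")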

    Thanks in advance!

    opened by sun1612 0
Releases (v0.2.1)
  • v0.2.1(Jun 2, 2022)

    What's Changed

    • Add UNKNOWN lane mark type to map schema by @wqi in https://github.com/argoai/av2-api/pull/58
    • Competition announcements by @benjaminrwilson in https://github.com/argoai/av2-api/pull/57
    • Add additional 3D object detection submission details. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/63

    Full Changelog: https://github.com/argoai/av2-api/compare/v0.2.0...v0.2.1

  • v0.2.0(May 5, 2022)

    • Evaluation code is now available for 3D object detection and motion forecasting.

    What's Changed

    • Update README.md by @benjaminrwilson in https://github.com/argoai/av2-api/pull/6
    • Add gifs to TbV readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/10
    • Fix broken link to Argoverse website in motion forecasting readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/13
    • add support for rendering LaneMarkType.SOLID_DASH_WHITE in EgoViewMapRenderer by @senselessdev1 in https://github.com/argoai/av2-api/pull/9
    • Replace TbV gifs to illustrate map changes more clearly by @senselessdev1 in https://github.com/argoai/av2-api/pull/15
    • Update README.md by @benjaminrwilson in https://github.com/argoai/av2-api/pull/16
    • Fix typo in Sensor Dataset readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/19
    • Improve TbV Download Instructions by @senselessdev1 in https://github.com/argoai/av2-api/pull/14
    • Add city distribution for logs to Sensor Dataset Readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/22
    • Clarify which datasets certain tutorials apply to by @senselessdev1 in https://github.com/argoai/av2-api/pull/24
    • Add get_city_name() method to dataloader, to fetch name of city where a log was captured. by @senselessdev1 in https://github.com/argoai/av2-api/pull/27
    • Small formatting fixes. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/33
    • Fix map tutorial issues. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/35
    • Update ci.yml by @benjaminrwilson in https://github.com/argoai/av2-api/pull/5
    • 3D Object Detection Evaluation by @benjaminrwilson in https://github.com/argoai/av2-api/pull/31
    • Add converter between AV2 city coordinate systems, and WGS84 and UTM by @senselessdev1 in https://github.com/argoai/av2-api/pull/28
    • Add get_ordered_log_lidar_timestamps() method to Sensor / TbV dataloa… by @senselessdev1 in https://github.com/argoai/av2-api/pull/29
    • Add TbV log clustering by scene (i.e. spatial location). by @senselessdev1 in https://github.com/argoai/av2-api/pull/26
    • 3D Detection Eval docstrings + typing fixes. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/40
    • Add integration test to verify that TbV download was successful by @senselessdev1 in https://github.com/argoai/av2-api/pull/23
    • Sensor Dataset Visualization by @benjaminrwilson in https://github.com/argoai/av2-api/pull/39
    • Add dataclass for AV2 MF challenge submissions by @wqi in https://github.com/argoai/av2-api/pull/41
    • Add Brier metrics to motion forecasting evaluation module by @wqi in https://github.com/argoai/av2-api/pull/44
    • Detection evaluation tweaks by @benjaminrwilson in https://github.com/argoai/av2-api/pull/48
    • v0.1.0 -> v0.1.1 by @benjaminrwilson in https://github.com/argoai/av2-api/pull/49
    • Update setup.cfg to add pypi metadata by @wqi in https://github.com/argoai/av2-api/pull/51
    • Update init.py by @benjaminrwilson in https://github.com/argoai/av2-api/pull/52

    New Contributors

    • @benjaminrwilson made their first contribution in https://github.com/argoai/av2-api/pull/6
    • @senselessdev1 made their first contribution in https://github.com/argoai/av2-api/pull/10
    • @wqi made their first contribution in https://github.com/argoai/av2-api/pull/41

    Full Changelog: https://github.com/argoai/av2-api/compare/v0.1.0...v0.2.0

  • v0.1.0(Mar 17, 2022)
