MS in Data Science capstone project. Studying attacks on autonomous vehicles.

Overview

Surveying Attack Models for CAVs

Guide to Installing CARLA and Collecting Data

Our project focuses on surveying attack models for Connected Autonomous Vehicles (CAVs). The primary tool we will be using throughout the project is CARLA, a vehicle simulation platform. This document serves as a guide to getting CARLA running on your system and showcases a script we adapted from the original CARLA installation. It closely follows the quick installation guide from the CARLA leaderboard challenge.

Requirements

CARLA runs best on Ubuntu 18.04 and on Windows. The following requirements are for an Ubuntu system:

  • Python 3
  • Anaconda
  • pip installer
  • A GPU with at least 6 GB of memory
  • At least 20 GB of free disk space

Installation

  1. Download the CARLA 0.9.10.1 release found here and unzip the package into a folder named CARLA.

  2. Clone the leaderboard repo:

    git clone -b stable --single-branch https://github.com/carla-simulator/leaderboard.git

  3. Change into the leaderboard directory and run the following:

    pip3 install -r requirements.txt

  4. Clone the scenario runner repo:

    git clone -b leaderboard --single-branch https://github.com/carla-simulator/scenario_runner.git

  5. cd into the scenario runner directory and run the following:

    pip3 install -r requirements.txt

  6. Now the environment variables need to be defined. Open a fresh terminal and edit the bash profile:

    gedit ~/.bashrc

  7. Add the following to the bash profile, replacing the PATH_TO_* placeholders with the actual paths on your system:

    export CARLA_ROOT=PATH_TO_CARLA_ROOT
    export SCENARIO_RUNNER_ROOT=PATH_TO_SCENARIO_RUNNER
    export LEADERBOARD_ROOT=PATH_TO_LEADERBOARD
    export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla/":"${SCENARIO_RUNNER_ROOT}":"${LEADERBOARD_ROOT}":"${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg":${PYTHONPATH}

  8. Source the profile so the changes take effect (see the sanity check after these steps):

    source ~/.bashrc
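
To verify that the paths were picked up, a minimal sanity check is to import the CARLA Python API from a fresh terminal. The file name check_carla.py is just a suggestion, and this assumes you run it with Python 3.7, the version the bundled egg was built for:

    # check_carla.py - minimal sanity check that the CARLA egg is on PYTHONPATH
    import carla

    print("CARLA Python API loaded from:", carla.__file__)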

Running CARLA

To run CARLA, simply open a terminal, change into your CARLA root directory, and run the following:

./CarlaUE4.sh

This should open an environment that looks like this:
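
With the simulator window open, you can also confirm that external scripts can reach the server. Below is a minimal sketch, assuming CARLA is listening on the default host and port (localhost:2000); the file name connect_test.py is just a suggestion:

    # connect_test.py - connect a client to the running CARLA server
    import carla

    client = carla.Client("localhost", 2000)   # default host and port
    client.set_timeout(10.0)                   # seconds to wait for the server
    world = client.get_world()
    print("Connected to map:", world.get_map().name)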

Collecting Data

For our project, we are primarily concerned with collecting LiDAR, radar, camera, and GPS data. Thus far, we have a basic test Python script that logs the latitude and longitude of a vehicle placed in the environment.

The test script can be found under tutorials/alana_test.py. It should be placed under PythonAPI in your CARLA root directory.

The output will be a .csv file; a sample can be found under tutorials/outputgnss.csv. It should look like:

time,latitude,longitude,altitude
8.076389706286136,0.001497075674478765,0.0013884266232332388,2.5777945518493652
12.463837534713093,0.0015216552946100137,0.0013888545621243858,2.003244638442993
16.73477230645949,0.0015536632594717048,0.0013894208066981615,1.2948381900787354
21.952862125413958,0.001598166573884896,0.0013902203478743502,0.5753694176673889
27.885887853393797,0.001605972963332647,0.001390361257925631,0.47810444235801697
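
For reference, below is a simplified sketch of how such a GNSS logger can be put together with the CARLA Python API. It is an illustration of the general approach, not the contents of tutorials/alana_test.py; the vehicle filter, the 30-second collection window, and the output path are assumptions made for the example:

    # gnss_logger.py - simplified sketch of a GNSS-logging script (illustrative only)
    import csv
    import time

    import carla

    client = carla.Client("localhost", 2000)
    client.set_timeout(10.0)
    world = client.get_world()
    blueprints = world.get_blueprint_library()

    # Spawn a vehicle at the first recommended spawn point and let it drive itself.
    vehicle_bp = blueprints.filter("vehicle.*")[0]
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(vehicle_bp, spawn_point)
    vehicle.set_autopilot(True)

    # Attach a GNSS sensor to the vehicle and record every measurement it produces.
    gnss_bp = blueprints.find("sensor.other.gnss")
    gnss = world.spawn_actor(gnss_bp, carla.Transform(), attach_to=vehicle)

    rows = []
    gnss.listen(lambda data: rows.append(
        [data.timestamp, data.latitude, data.longitude, data.altitude]))

    try:
        time.sleep(30)  # collect readings for roughly 30 seconds
    finally:
        gnss.stop()
        gnss.destroy()
        vehicle.destroy()

    # Write the readings in the same column order as outputgnss.csv.
    with open("outputgnss.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time", "latitude", "longitude", "altitude"])
        writer.writerows(rows)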
Owner

Isabela Caetano, Master's student studying data science