MS in Data Science capstone project. Studying attacks on autonomous vehicles.

Surveying Attack Models for CAVs

Guide to Installing CARLA and Collecting Data

Overview

Our project focuses on surveying attack models for Connected Autonomous Vehicles (CAVs). The primary tool we will be using throughout the project is CARLA, a vehicle simulation platform. This document serves as a guide to getting CARLA running on your system and also shows off a script we adapted from the original CARLA install. It closely follows the quick installation instructions from the CARLA leaderboard challenge.

Requirements

CARLA runs best on Ubuntu 18.04 and on Windows. The following requirements are for an Ubuntu system:

  • Python 3
  • Anaconda
  • pip installer
  • GPU with at least 6 GB of memory
  • 20 GB of disk space

Installation

  1. Download the CARLA 0.9.10.1 release (from the CARLA GitHub releases page) and unzip the package into a folder named CARLA.

  2. Download the leaderboard repo:

    git clone -b stable --single-branch https://github.com/carla-simulator/leaderboard.git

  3. Change into the leaderboard directory and install its Python dependencies:

    pip3 install -r requirements.txt

  4. Clone the scenario runner repo:

    git clone -b leaderboard --single-branch https://github.com/carla-simulator/scenario_runner.git

  5. cd into the scenario_runner directory and run the following:

    pip3 install -r requirements.txt

  6. Now the environment variables need to be defined. Open a fresh terminal and open your ~/.bashrc file:

    gedit ~/.bashrc

  7. Add the following to the file:

    export CARLA_ROOT=PATH_TO_CARLA_ROOT
    export SCENARIO_RUNNER_ROOT=PATH_TO_SCENARIO_RUNNER
    export LEADERBOARD_ROOT=PATH_TO_LEADERBOARD
    export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla/":"${SCENARIO_RUNNER_ROOT}":"${LEADERBOARD_ROOT}":"${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg":${PYTHONPATH}
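For example, if CARLA was unzipped to ~/CARLA and the two repositories were cloned into your home directory, the first three variables would look like this (the paths below are illustrative; substitute your own):

    export CARLA_ROOT=$HOME/CARLA
    export SCENARIO_RUNNER_ROOT=$HOME/scenario_runner
    export LEADERBOARD_ROOT=$HOME/leaderboard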

  8. Source the file so the changes take effect:

    source ~/.bashrc
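To confirm the variables were picked up, check that the carla egg can be imported (the egg is built for Python 3.7, so other Python versions may fail here):

    python3 -c "import carla; print(carla.__file__)"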

Running CARLA

To run CARLA, simply open a terminal, change into your CARLA root directory, and run the following:

    ./CarlaUE4.sh
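If the simulator runs slowly on your GPU, CARLA's launcher also accepts a reduced rendering quality flag:

    ./CarlaUE4.sh -quality-level=Low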

This should open a window rendering the default CARLA town environment.

Collecting Data

For our project, we are primarily concerned with collecting LiDAR, radar, camera, and GPS data. Thus far, we have a basic Python test script that logs the latitude and longitude of a vehicle placed in the environment.

The test script can be found under tutorials/alana_test.py. It should be placed under PythonAPI in your CARLA root directory.
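For reference, here is a minimal sketch of what a GNSS logger like this can look like with the CARLA Python API. It is not the contents of alana_test.py; the blueprint choice, drive duration, and output filename are illustrative:

    import csv
    import time

    import carla

    # Connect to a running CARLA server (start ./CarlaUE4.sh first)
    client = carla.Client('localhost', 2000)
    client.set_timeout(10.0)
    world = client.get_world()

    # Spawn a vehicle and let the autopilot drive it around
    blueprints = world.get_blueprint_library()
    vehicle_bp = blueprints.filter('vehicle.*')[0]
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(vehicle_bp, spawn_point)
    vehicle.set_autopilot(True)

    # Attach a GNSS sensor to the vehicle and record its measurements
    gnss_bp = blueprints.find('sensor.other.gnss')
    gnss = world.spawn_actor(gnss_bp, carla.Transform(), attach_to=vehicle)

    rows = []

    def on_gnss(event):
        # GnssMeasurement carries a timestamp plus latitude/longitude/altitude
        rows.append([event.timestamp, event.latitude, event.longitude, event.altitude])

    gnss.listen(on_gnss)
    time.sleep(30)  # collect data for 30 seconds (illustrative)
    gnss.stop()

    # Write the log in the same column order as the sample output below
    with open('outputgnss.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['time', 'latitude', 'longitude', 'altitude'])
        writer.writerows(rows)

    gnss.destroy()
    vehicle.destroy()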

The output will be a .csv file, a sample of which can be found under tutorials/outputgnss.csv. It should look like the following (latitude and longitude are close to zero because CARLA's default maps are geo-referenced near the origin):

time                latitude                latitude                altitude
8.076389706286136   0.001497075674478765    0.0013884266232332388   2.5777945518493652
12.463837534713093  0.0015216552946100137   0.0013888545621243858   2.003244638442993
16.73477230645949   0.0015536632594717048   0.0013894208066981615   1.2948381900787354
21.952862125413958  0.001598166573884896    0.0013902203478743502   0.5753694176673889
27.885887853393797  0.001605972963332647    0.001390361257925631    0.47810444235801697