Aerial Imagery dataset for fire detection: classification and segmentation using Unmanned Aerial Vehicle (UAV)

Overview

Aerial Imagery dataset for fire detection: classification and segmentation using Unmanned Aerial Vehicle (UAV)

Title

FLAME (Fire Luminosity Airborne-based Machine learning Evaluation) Dataset

Paper

You can find the article related to this code at Elsevier (Computer Networks), or the preprint on the arXiv website.

Dataset

  • The dataset is uploaded on IEEE DataPort. You can find it on IEEE DataPort or through its DOI. An IEEE account is free, so you can create one and access the dataset files without any payment or subscription.

  • A table listing all available data items of the dataset is provided on the dataset page.

  • This project uses items 7, 8, 9, and 10 from the dataset. Items 7 and 8 are used for the "Fire_vs_NoFire" image classification, and items 9 and 10 are used for the fire segmentation.

  • If you clone this repository on your local drive, please download item 7 from the dataset and unzip it in the directory /frames/Training/... for the training phase of the "Fire_vs_NoFire" image classification. The directory looks like this:

Repository/frames/Training
                    ├── Fire/*.jpg
                    ├── No_Fire/*.jpg
  • For testing your trained model, please use item 8 and unzip it in the directory /frames/Test/... . The directory looks like this:
Repository/frames/Test
                    ├── Fire/*.jpg
                    ├── No_Fire/*.jpg
  • Items 9 and 10 should be unzipped in the directories frames/Segmentation/Data/Images/... and frames/Segmentation/Data/Masks/..., respectively. The directory looks like this:
Repository/frames/Segmentation/Data
                                ├── Images/*.jpg
                                ├── Masks/*.png
  • Please remove the other README files from those directories and make sure that only images are there. (A minimal data-loading sketch is given after this list.)
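
As an illustration only (this is not the repository's own loader), the layout above can be read directly with Keras' ImageDataGenerator. The paths follow this README, while the target size, batch size, and validation split below are assumed placeholder values; the actual settings live in config.py.

# Minimal sketch, assuming the directory layout above.
# Target size, batch size, and split are placeholder values, not the repository's settings.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_gen = datagen.flow_from_directory(
    'frames/Training',          # contains Fire/ and No_Fire/ subfolders
    target_size=(254, 254),     # assumed; see config.py for the actual value
    batch_size=32,
    class_mode='binary',
    subset='training')

val_gen = datagen.flow_from_directory(
    'frames/Training',
    target_size=(254, 254),
    batch_size=32,
    class_mode='binary',
    subset='validation')

The same pattern applies to frames/Test when evaluating the trained model (without the validation split).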

Model

  • The binary fire classification model of this project is based on the Xception network:

(Figure: Xception-based classification network architecture)
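
As a hedged illustration of this design (not the exact network used in the repository; the classification head, input resolution, and training settings below are assumptions), an Xception-based binary classifier can be assembled in Keras roughly like this:

# Sketch of an Xception-based "Fire_vs_NoFire" classifier; head layers are assumed.
from tensorflow.keras.applications import Xception
from tensorflow.keras import layers, models

base = Xception(weights=None, include_top=False, input_shape=(254, 254, 3))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid'),   # Fire vs No_Fire
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])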

  • The fire segmentation model of this project is based on the U-Net:

(Figure: U-Net segmentation network architecture)
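
Likewise, purely as a sketch (the depth, filter counts, and 512 x 512 input size are assumptions rather than the repository's exact configuration), a compact U-Net for binary fire masks can be written as:

# Sketch of a small U-Net for fire mask segmentation; depth and filters are assumed.
from tensorflow.keras import layers, models

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return layers.Conv2D(filters, 3, padding='same', activation='relu')(x)

inputs = layers.Input(shape=(512, 512, 3))        # assumed input resolution

c1 = conv_block(inputs, 32)
p1 = layers.MaxPooling2D()(c1)
c2 = conv_block(p1, 64)
p2 = layers.MaxPooling2D()(c2)
c3 = conv_block(p2, 128)                          # bottleneck

u2 = layers.Conv2DTranspose(64, 2, strides=2, padding='same')(c3)
c4 = conv_block(layers.concatenate([u2, c2]), 64)
u1 = layers.Conv2DTranspose(32, 2, strides=2, padding='same')(c4)
c5 = conv_block(layers.concatenate([u1, c1]), 32)

outputs = layers.Conv2D(1, 1, activation='sigmoid')(c5)   # binary fire mask
unet = models.Model(inputs, outputs)
unet.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])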

Sample

  • A short sample video of the dataset is available on YouTube.

Requirements

  • os
  • re
  • cv2
  • copy
  • tqdm
  • scipy
  • pickle
  • numpy
  • random
  • itertools
  • Keras 2.4.0
  • scikit-image
  • Tensorflow 2.3.0
  • matplotlib.pyplot

Code

This code was run and tested with Python 3.6 on a Linux (Ubuntu 18.04) machine with no issues. There is a config.py file in this directory which contains all the configuration parameters, such as Mode, image target size, epochs, batch size, train/validation ratio, etc. All dependency files are available in the root directory of this repository.
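
To give a feel for it, here is an assumed sketch of the kind of parameters config.py exposes; only Mode is named explicitly in this README, and the other names and values are illustrative, not the repository's actual settings.

# Hypothetical illustration of config.py-style parameters.
# Only 'Mode' is documented in this README; the other names/values are assumptions.
Mode = 'Training'              # 'Training', 'Classification', or 'Segmentation'
target_size = (254, 254)       # image target size (assumed value)
epochs = 40                    # assumed
batch_size = 32                # assumed
train_validation_ratio = 0.8   # assumed train/validation split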

  • To run the training phase for the "Fire_vs_NoFire" image classification, change the Mode value to 'Training' in the config.py file, like this:
Mode = 'Training'

Make sure that you have copied and unzipped the data into the correct directory.

  • To run the test phase for the "Fire_vs_NoFire" image classification, change the Mode value to 'Classification' in the config.py file, like this:
Mode = 'Classification'

Make sure that you have copied and unzipped the data into the correct directory.

  • To run the test phase for the fire segmentation, change the Mode value to 'Segmentation' in the config.py file, like this:
Mode = 'Segmentation'

Make sure that you have copied and unzipped the data into the correct directory.

Then, after setting your parameters, just run the main.py file:

python main.py

Results

  • Fire classification accuracy:

(Figure: Fire classification accuracy)

  • Fire classification Confusion Matrix:

  • Fire segmentation metrics and evaluation:

(Figure: Fire segmentation metrics and evaluation)

  • Comparison between generated masks and ground truth masks:

(Figure: Generated masks compared with ground truth masks)

  • Federated Learning sample
    To consider future challenges, we defined a new federated learning sample on a local node (NVIDIA Jetson Nano, 4 GB RAM). The Jetson Nano is available in two versions: 1) a 4 GB RAM developer kit, and 2) a 2 GB RAM developer kit. In this implementation, the 4 GB version is used, with a 128-core Maxwell GPU, a quad-core ARM A57 CPU @ 1.43 GHz, 4 GB of LPDDR4 RAM, and 32 GB of microSD storage. To test the Jetson Nano for federated learning, items 9 and 10 from the dataset are used for the fire segmentation. Since the Jetson Nano has limited RAM, we assumed that each drone has access to only a portion of the FLAME dataset: only 500 fire images and masks are considered for the training and validation phase on the drone. As we aimed at learning a model on a smaller subset of the FLAME dataset and then running inference with it, the default TensorFlow version is used here. Also, the dimension of each input image and mask is reduced to 128 x 128 x 3 rather than 512 x 512 x 3. To save more RAM, all peripherals were turned off and only WiFi was kept active for the Secure Shell (SSH) connection. The setup of this node is shown below:
    (Figure: Jetson Nano node setup)
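
As a minimal sketch of the reduced pipeline described above (500 image/mask pairs resized to 128 x 128 x 3), with paths assumed from the segmentation directories earlier in this README rather than taken from the node's actual code:

# Sketch: load a 500-pair subset of the segmentation data at 128 x 128 for the node.
# Paths and the subset size follow this README; everything else is an assumption.
import glob
import numpy as np
from skimage.io import imread
from skimage.transform import resize

image_paths = sorted(glob.glob('frames/Segmentation/Data/Images/*.jpg'))[:500]
mask_paths  = sorted(glob.glob('frames/Segmentation/Data/Masks/*.png'))[:500]

# Resize every image/mask to 128 x 128 to fit the node's limited RAM.
images = np.array([resize(imread(p), (128, 128, 3)) for p in image_paths],
                  dtype=np.float32)
masks  = np.array([resize(imread(p, as_gray=True), (128, 128))[..., np.newaxis]
                   for p in mask_paths], dtype=np.float32)

A 128 x 128 variant of the U-Net sketched earlier can then be trained on these arrays to stay within the node's 4 GB of RAM.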

Citation

If you find it useful, please cite our paper as follows:

@article{shamsoshoara2021aerial,
  title={Aerial Imagery Pile burn detection using Deep Learning: the FLAME dataset},
  author={Shamsoshoara, Alireza and Afghah, Fatemeh and Razi, Abolfazl and Zheng, Liming and Ful{\'e}, Peter Z and Blasch, Erik},
  journal={Computer Networks},
  pages={108001},
  year={2021},
  publisher={Elsevier}
}

Other related repositories and articles

License

For academic and non-commercial usage.

Owner
Ph.D. in Informatics and Computing from Northern Arizona University; M.Sc. in Informatics; M.Sc. in Electrical Engineering; B.Sc. in Electrical Engineering