A custom DeepStack model for detecting 16 human actions.

DeepStack_ActionNET

Overview

This repository provides a custom DeepStack model, trained on the ActionNET dataset, that can be used to create an object detection API for detecting the 16 human actions present in that dataset. The dataset itself, with its YOLO-format annotations, is also included in this repository.

>> Watch Video Demo

  • Download DeepStack Model and Dataset
  • Create API and Detect Actions
  • Discover more Custom Models
  • Train your own Model

Download DeepStack Model and Dataset

You can download the pre-trained DeepStack_ActionNET model and the annotated dataset via the links below.

Create API and Detect Actions

The trained model can detect the following actions in images and videos:

  • calling
  • clapping
  • cycling
  • dancing
  • drinking
  • eating
  • fighting
  • hugging
  • kissing
  • laughing
  • listening-to-music
  • running
  • sitting
  • sleeping
  • texting
  • using-laptop

To start detecting, follow the steps below.

  • Install DeepStack: Install the DeepStack AI Server by following the instructions in DeepStack's documentation at https://docs.deepstack.cc

  • Download Custom Model: Download the trained custom model actionnetv2.pt from this GitHub release. Create a folder on your machine and move the downloaded model into it. DeepStack serves each custom model at an endpoint named after the model file, so rename the file to actionnet.pt to match the endpoint used in the examples below.

    E.g. a path on a Windows machine: C:\Users\MyUser\Documents\DeepStack-Models, which makes your model file path C:\Users\MyUser\Documents\DeepStack-Models\actionnet.pt

  • Run DeepStack: To run the DeepStack AI Server with the custom ActionNET model, run the command that applies to your machine, as detailed in DeepStack's documentation.

    For example, on Windows you run the command below:

    deepstack --MODELSTORE-DETECTION "C:\Users\MyUser\Documents\DeepStack-Models" --PORT 80

    On a Linux machine, run:

    sudo docker run -v /home/MyUser/Documents/DeepStack-Models:/modelstore/detection -p 80:5000 deepquestai/deepstack

    Once DeepStack starts, a log in your Terminal/Console will confirm that the model has been loaded.

    This means DeepStack is running your custom actionnet.pt model and is ready to detect actions in images via the API endpoint http://localhost:80/v1/vision/custom/actionnet or http://your_machine_ip:80/v1/vision/custom/actionnet

  • Detect actions in an image: You can detect actions in an image by sending a POST request to the URL mentioned above, with the parameter image set to an image file, using any programming language or a tool like Postman. For the purpose of this repository, sample Python code is provided via the steps below, and a direct-API sketch follows this list.

    • A sample image can be found at images/test.jpg in this repository

    • Install Python, then install the DeepStack Python SDK via the command below

      pip install deepstack_sdk
    • Run the Python file detect.py in this repository.

      python detect.py
    • After the code runs, you will find a new image in images/test_detected.jpg with the detections visualized, and the following results printed in the Terminal/Console.

      Name: dancing
      Confidence: 0.91482425
      x_min: 270
      x_max: 516
      y_min: 18
      y_max: 480
      -----------------------
      

    • You can try running action detection on other images.
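
If you prefer to call the API directly rather than use the SDK, the minimal sketch below sends the same request with Python's requests library. It assumes DeepStack is listening on localhost port 80 with the actionnet.pt model loaded as described above, and it reads the predictions field of DeepStack's documented custom-model JSON response.

    import requests

    # POST the sample image to the custom model endpoint
    # (assumes DeepStack is running on localhost:80 with actionnet.pt loaded).
    with open("images/test.jpg", "rb") as image_file:
        response = requests.post(
            "http://localhost:80/v1/vision/custom/actionnet",
            files={"image": image_file},
        )

    # Each prediction carries a label, a confidence score, and a bounding box.
    for prediction in response.json()["predictions"]:
        print("Name:", prediction["label"])
        print("Confidence:", prediction["confidence"])
        print("x_min:", prediction["x_min"])
        print("x_max:", prediction["x_max"])
        print("y_min:", prediction["y_min"])
        print("y_max:", prediction["y_max"])
        print("-----------------------")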

Discover more Custom Models

For more custom DeepStack models that have been trained and are ready to use, visit the Custom Models sample page in DeepStack's documentation: https://docs.deepstack.cc/custom-models-samples/

Train your own Model

If you would like to train a custom model yourself, follow the instructions below.

  • Prepare and Annotate: Collect images of, and annotate, the object(s) you plan to detect, as detailed here
  • Train your Model: Train the model as detailed here
Releases
  • v2 (Aug 26, 2021)

    Version 2 of the DeepStack custom model for an object detection API that detects human actions in images and videos. It detects the following actions:

    • calling
    • clapping
    • cycling
    • dancing
    • drinking
    • eating
    • fighting
    • hugging
    • kissing
    • laughing
    • listening-to-music
    • running
    • sitting
    • sleeping
    • texting
    • using-laptop

    Download the model actionnetv2.pt from the Assets section (below) in this release.

    This model is a YOLOv5x DeepStack custom model trained for 150 epochs, generating a best model with the following evaluation results:

    mAP@0.5: 0.995
    mAP@0.5:0.95: 0.913

    Source code (tar.gz)
    Source code (zip)
    actionnetv2.pt (169.41 MB)
  • v1 (Aug 14, 2021)

    A DeepStack custom model for an object detection API that detects human actions in images and videos. It detects the following actions:

    • calling
    • clapping
    • cycling
    • dancing
    • drinking
    • eating
    • fighting
    • hugging
    • kissing
    • laughing
    • listening-to-music
    • running
    • sitting
    • sleeping
    • texting
    • using-laptop

    Download the model actionnet.pt from the Assets section (below) in this release.

    This model is a YOLOv5x DeepStack custom model trained for 150 epochs, generating a best model with the following evaluation results:

    mAP@0.5: 0.9858
    mAP@0.5:0.95: 0.8051

    Source code (tar.gz)
    Source code (zip)
    actionnet.pt (169.41 MB)
Owner

MOSES OLAFENWA
Software Engineer @Microsoft, a self-taught computer programmer, Deep Learning and Computer Vision researcher and developer. Creator of ImageAI.