Face recognition. Redefined.

Overview




FaceFinder

Use a powerful CNN to identify faces in images!

TABLE OF CONTENTS
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgements

About The Project


There is plenty of face recognition software on GitHub, but most of it prioritizes speed over accuracy and relies on lightweight models such as HOG. FaceFinder takes the opposite approach: it uses a large CNN to make more accurate predictions.

Here's why:

  • Several modern technologies make use of face recognition, and its importance is constantly increasing.
  • You shouldn't have to train a full neural net of your own every time you want to perform face recognition.
  • FaceFinder's code runs approximately 3.7 times faster than the average implementation.

If you're making an app of your own and want it to perform face recognition, this is your go-to option.

Commonly used resources that I find helpful are listed in the acknowledgements.


Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

  • Latest versions of pip and setuptools
    pip install --upgrade pip setuptools
  • Conda
    pip install conda
  • Dlib
    python -m conda install dlib
  • Required packages
    pip install -r requirements.txt

Installation

  1. Ensure you're in your home directory:

    cd ~

    When you clone the repository, it will appear as a subfolder of your home folder. You can move it elsewhere later if you want.

  2. Clone the repo:

    git clone https://github.com/BleepLogger/FaceFinder

    Clone the repository by its URL.

  3. Navigate to cloned repository:

    cd FaceFinder

    The commands in the following steps should be run from inside the cloned repository.

  4. To run the program, execute tasks.py with command line arguments:

    python Scripts/tasks.py --data-dir '<data folder path>' --input_image '<path to image>'

    Replace <data folder path> and <path to image> with the real paths; they're just placeholders.

Usage

To run it from the command line, you will need to pass two arguments.

python Scripts/tasks.py --data-dir '<data folder path>' --input_image '<path to image>'

Replace <data folder path> and <path to image> with the real paths.
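As an illustration of this two-flag interface (not the actual contents of Scripts/tasks.py), a script could parse the arguments with argparse roughly like this:

    # Hypothetical sketch of the CLI interface; Scripts/tasks.py may differ.
    import argparse

    parser = argparse.ArgumentParser(description="Classify the face in an input image.")
    parser.add_argument("--data-dir", required=True,
                        help="Directory with one labelled subdirectory per person")
    parser.add_argument("--input_image", required=True,
                        help="Path to the image to classify")
    args = parser.parse_args()

    print("Data directory:", args.data_dir)   # --data-dir is exposed as args.data_dir
    print("Input image:", args.input_image)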

The program needs a data directory containing subdirectories of images, each labelled with the name of the person whose face appears in them. It also needs an input image, which will be classified.

So if you want to check whether an image is an image of your mom or your dad, then this is how you could do it:

  1. Create a directory called dataset/ in the FaceFinder directory in ~.
  2. Create two subdirectories, dataset/mom and dataset/dad.
  3. Add images of your mother to the mom subdir and your father to your dad subdir.
  4. Take a photo of either your mom or your dad, whichever one you want to classify. Name it 2bclassified.jpg and put it in the FaceFinder directory.
  5. Run this command:
    python Scripts/tasks.py --data-dir 'dataset/' --input_image '2bclassified.jpg'

Then, after about 20 minutes of processing (6-7 minutes if you have a GPU), a window will open displaying your image, with a box highlighting the detected face and a label reading either "Mom" or "Dad".
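Conceptually, a CNN-based pipeline like this encodes every labelled image, encodes the face in the input image, and picks the label whose encodings are closest. The following is only a minimal sketch of that idea, built on the face_recognition library and Pillow (an assumption for illustration, not FaceFinder's actual code):

    # Minimal sketch of a CNN-based face classification pipeline.
    # Assumption for illustration only; not FaceFinder's actual implementation.
    import os
    import face_recognition
    from PIL import Image, ImageDraw

    DATA_DIR = "dataset/"            # labelled subdirectories, e.g. dataset/mom, dataset/dad
    INPUT_IMAGE = "2bclassified.jpg"

    # Encode every labelled image in the data directory.
    known_encodings, known_labels = [], []
    for label in os.listdir(DATA_DIR):
        person_dir = os.path.join(DATA_DIR, label)
        if not os.path.isdir(person_dir):
            continue
        for filename in os.listdir(person_dir):
            image = face_recognition.load_image_file(os.path.join(person_dir, filename))
            encodings = face_recognition.face_encodings(image)
            if encodings:
                known_encodings.append(encodings[0])
                known_labels.append(label)

    # Detect the face in the input image with the CNN detector and encode it.
    image = face_recognition.load_image_file(INPUT_IMAGE)
    locations = face_recognition.face_locations(image, model="cnn")
    encodings = face_recognition.face_encodings(image, known_face_locations=locations)

    # Draw a box and the closest-matching label around each detected face.
    pil_image = Image.fromarray(image)
    draw = ImageDraw.Draw(pil_image)
    for (top, right, bottom, left), encoding in zip(locations, encodings):
        distances = face_recognition.face_distance(known_encodings, encoding)
        label = known_labels[int(distances.argmin())] if len(distances) else "Unknown"
        draw.rectangle([left, top, right, bottom], outline="red", width=3)
        draw.text((left, bottom + 5), label.capitalize(), fill="red")
    pil_image.show()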

You will have to build dlib from source with CUDA enabled if you want your GPU to be used; see the dlib documentation for the build instructions.
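Once dlib is built from source, you can quickly confirm whether the installed build will actually use the GPU (a small check relying on dlib's DLIB_USE_CUDA flag):

    # Check that the installed dlib build was compiled with CUDA support.
    import dlib

    print("dlib version:", dlib.__version__)
    print("CUDA enabled:", dlib.DLIB_USE_CUDA)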

Roadmap

See the open issues for a list of proposed features (and known issues).

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Kanav Bhasin - @kanav_bhasin - [email protected]

Project Link: https://github.com/BleepLogger/FaceFinder


Thank you!