aws-rekognition-facecompare

Overview

This repository compares a selfie with photos from identity documents and reports whether the selfie matches.

This code was written in a Python notebook on Amazon SageMaker.

Set up:

  1. Create a notebook instance in SageMaker
     • Notebook instance type: ml.t2.medium
     • Volume size: 5 GB EBS
  2. Create a role for SageMaker with the following policies:
     • AmazonS3FullAccess
     • AmazonRekognitionFullAccess
     • AmazonSageMakerFullAccess
  3. Create an S3 bucket
  4. Inside the bucket, create a folder to hold the dataset images (a boto3 sketch for steps 3 and 4 follows this list)
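For steps 3 and 4, the bucket and the dataset folder can also be created from Python with boto3. This is only a sketch; the bucket name, folder name and region below are placeholders:

import boto3

s3_client = boto3.client('s3')

# Placeholders: use your own bucket name, folder name and region
bucket_name = 'my-facecompare-bucket'
dataset_prefix = 'dataset-CI/'

# Outside us-east-1 a LocationConstraint must be provided
s3_client.create_bucket(
    Bucket=bucket_name,
    CreateBucketConfiguration={'LocationConstraint': 'us-east-2'}
)

# S3 has no real folders; an empty object whose key ends in '/' appears as a folder in the console
s3_client.put_object(Bucket=bucket_name, Key=dataset_prefix)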

Code Explanation

boto3 is needed to create the AWS clients for S3 and Rekognition. The io module's BytesIO lets us keep image data as bytes in an in-memory buffer, just as we would with a variable, so images can be loaded directly from S3. Pillow is needed for image plotting.

import boto3
import io
from PIL import Image, ImageDraw, ExifTags, ImageColor

rekognition_client = boto3.client('rekognition')
s3_resource = boto3.resource('s3')

# S3 bucket created during set up (placeholder name, replace with your own)
BUCKET = 'my-facecompare-bucket'

In this notebook I use two functions of AWS Rekognition:

  • detect_faces : Detects faces in an image. It also evaluates quality metrics and returns landmarks for facial features such as the eye positions (a sketch of a call follows this list).
  • compare_faces : Evaluates the similarity between the face in a source image and the faces in a target image.
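detect_faces is not used in the comparison walkthrough below, so here is a minimal sketch of how it can be called on the source image stored in S3 (it assumes the BUCKET variable defined above):

# Minimal detect_faces call on the source image stored in S3
detect_response = rekognition_client.detect_faces(
    Image={'S3Object': {'Bucket': BUCKET, 'Name': 'dataset-CI/imgsource.jpg'}},
    Attributes=['ALL']  # also return landmarks, pose and quality metrics
)
for face in detect_response['FaceDetails']:
    print(face['BoundingBox'], face['Confidence'])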

Use case

Here I explain how to compare two images.

The compare function

IMG_SOURCE ="dataset-CI/imgsource.jpg"
IMG_TARGET ="dataset-CI/img20.jpg"
response = rekognition_client.compare_faces(
                SourceImage={
                    'S3Object': {
                        'Bucket': BUCKET,
                        'Name': IMG_SOURCE
                    }
                },
                TargetImage={
                    'S3Object': {
                        'Bucket': BUCKET,
                        'Name': IMG_TARGET                    
                    }
                }
)

response

{'SourceImageFace': {'BoundingBox': {'Width': 0.3676206171512604,
   'Height': 0.5122320055961609,
   'Left': 0.33957839012145996,
   'Top': 0.18869829177856445},
  'Confidence': 99.99957275390625},
 'FaceMatches': [{'Similarity': 99.99634552001953,
   'Face': {'BoundingBox': {'Width': 0.14619407057762146,
     'Height': 0.26241832971572876,
     'Left': 0.13103649020195007,
     'Top': 0.40437373518943787},
    'Confidence': 99.99955749511719,
    'Landmarks': [{'Type': 'eyeLeft',
      'X': 0.17260463535785675,
      'Y': 0.5030772089958191},
     {'Type': 'eyeRight', 'X': 0.23902645707130432, 'Y': 0.5023221969604492},
     {'Type': 'mouthLeft', 'X': 0.17937719821929932, 'Y': 0.5977044105529785},
     {'Type': 'mouthRight', 'X': 0.23477530479431152, 'Y': 0.5970458984375},
     {'Type': 'nose', 'X': 0.20820103585720062, 'Y': 0.5500822067260742}],
    'Pose': {'Roll': 0.4675966203212738,
     'Yaw': 1.592366099357605,
     'Pitch': 8.6331205368042},
    'Quality': {'Brightness': 85.35185241699219,
     'Sharpness': 89.85481262207031}}}],
 'UnmatchedFaces': [],
 'ResponseMetadata': {'RequestId': '3ae9032d-de8a-41ef-b22f-f95c70eed783',
  'HTTPStatusCode': 200,
  'HTTPHeaders': {'x-amzn-requestid': '3ae9032d-de8a-41ef-b22f-f95c70eed783',
   'content-type': 'application/x-amz-json-1.1',
   'content-length': '911',
   'date': 'Wed, 26 Jan 2022 17:21:53 GMT'},
  'RetryAttempts': 0}}

If the source face matches a face in the target image, the JSON response contains a non-empty "FaceMatches" array; target faces that do not match are listed in the "UnmatchedFaces" array. The bounding boxes in the response are expressed as ratios of the image dimensions, so the drawing code below multiplies them by the image width and height to obtain pixel coordinates.
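Before drawing anything, the result can be read directly from the response. A minimal sketch reusing the response obtained above:

# Read the comparison result directly from the response
if response['FaceMatches']:
    similarity = response['FaceMatches'][0]['Similarity']
    print('The selfie matches the document photo, similarity: {0:.2f}%'.format(similarity))
else:
    print('The selfie does not match any face in the document photo')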

# Analyze the source image
s3_object = s3_resource.Object(BUCKET,IMG_SOURCE)
s3_response = s3_object.get()
stream = io.BytesIO(s3_response['Body'].read())
image=Image.open(stream)
imgWidth, imgHeight = image.size  
draw = ImageDraw.Draw(image)  

box = response['SourceImageFace']['BoundingBox']
left = imgWidth * box['Left']
top = imgHeight * box['Top']
width = imgWidth * box['Width']
height = imgHeight * box['Height']

print('Left: ' + '{0:.0f}'.format(left))
print('Top: ' + '{0:.0f}'.format(top))
print('Face Width: ' + "{0:.0f}".format(width))
print('Face Height: ' + "{0:.0f}".format(height))

points = (
    (left,top),
    (left + width, top),
    (left + width, top + height),
    (left , top + height),
    (left, top)

)
draw.line(points, fill='#00d400', width=2)

image.show()
Left: 217
Top: 121
Face Width: 235
Face Height: 328

[Output image: source image with the detected face outlined in green]

# Analyze the target image
s3_object = s3_resource.Object(BUCKET,IMG_TARGET)
s3_response = s3_object.get()
stream = io.BytesIO(s3_response['Body'].read())
image=Image.open(stream)
imgWidth, imgHeight = image.size  
draw = ImageDraw.Draw(image)
if len(response['UnmatchedFaces']) > 0:
    for face in response['UnmatchedFaces']:
        box = face['BoundingBox']
        left = imgWidth * box['Left']
        top = imgHeight * box['Top']
        width = imgWidth * box['Width']
        height = imgHeight * box['Height']
        print('UnmatchedFaces')
        print('Left: ' + '{0:.0f}'.format(left))
        print('Top: ' + '{0:.0f}'.format(top))
        print('Face Width: ' + "{0:.0f}".format(width))
        print('Face Height: ' + "{0:.0f}".format(height))

        points = (
            (left,top),
            (left + width, top),
            (left + width, top + height),
            (left , top + height),
            (left, top)

        )
        draw.line(points, fill='#ff0000', width=2)
        
if len(response['FaceMatches']) > 0:
    for face in response['FaceMatches']:
        face_match = face['Face']
        box = face_match['BoundingBox']
        left = imgWidth * box['Left']
        top = imgHeight * box['Top']
        width = imgWidth * box['Width']
        height = imgHeight * box['Height']
        print('FaceMatches')
        print('Left: ' + '{0:.0f}'.format(left))
        print('Top: ' + '{0:.0f}'.format(top))
        print('Face Width: ' + "{0:.0f}".format(width))
        print('Face Height: ' + "{0:.0f}".format(height))

        points = (
            (left,top),
            (left + width, top),
            (left + width, top + height),
            (left , top + height),
            (left, top)

        )
        draw.line(points, fill='#00d400', width=2)        
image.show()
FaceMatches
Left: 671
Top: 1553
Face Width: 749
Face Height: 1008

[Output image: target image with the matched face outlined in green]
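To reuse the comparison in an application, the calls above can be wrapped in a small helper. This is only a sketch with a hypothetical function name; compare_faces also accepts an optional SimilarityThreshold parameter (80 by default), which is used here:

# Hypothetical helper that returns the best similarity found, or None if there is no match
def best_match_similarity(bucket, source_key, target_key, threshold=80):
    result = rekognition_client.compare_faces(
        SourceImage={'S3Object': {'Bucket': bucket, 'Name': source_key}},
        TargetImage={'S3Object': {'Bucket': bucket, 'Name': target_key}},
        SimilarityThreshold=threshold
    )
    matches = result['FaceMatches']
    if not matches:
        return None
    return max(face['Similarity'] for face in matches)

similarity = best_match_similarity(BUCKET, IMG_SOURCE, IMG_TARGET)
print('Match' if similarity is not None else 'No match', similarity)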
