DaProfiler allows you to gather e-mail addresses, social media accounts, addresses, workplaces and more on your target using web scraping and Google dorking techniques.

Overview


Made with Python. GPLv3 license. Open source.

For educational purposes only.

DaProfiler allows you to gather e-mail addresses, social media accounts, addresses, workplaces and more on a target based in France only, using web scraping and Google dorking techniques. The particularity of this program is its ability to find your target's e-mail addresses.


DaProfiler

DaProfiler allows you to create a profile on your target, based in France only. The particularity of this program is its ability to find a target's e-mail addresses through searches on Skype and Pinterest, and by generating candidate e-mail address combinations followed by a check on whether each address exists (beware of false negatives: the results displayed do not necessarily relate to the target you are looking for if another person has the same first and last name). DaProfiler can also check the words of an Instagram bio to find interesting information such as: e-mail addresses, Paypal.me profiles, sexual orientation, city, school, age, ethnicity, religion, hobbies and more.
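As an illustration of the combination step, here is a minimal sketch (not DaProfiler's actual code; the function name, patterns and provider domains are assumptions) that generates candidate addresses from a first and last name, which a tool like this would then verify one by one:

# Minimal sketch of the e-mail combination idea (hypothetical helper, not DaProfiler's code).
# It builds common first/last-name patterns against a few example providers;
# each candidate would then be checked for existence, which is where false negatives appear.
def candidate_emails(first, last, domains=("gmail.com", "outlook.fr", "laposte.net")):
    first, last = first.lower(), last.lower()
    patterns = (
        f"{first}.{last}", f"{first}{last}", f"{last}.{first}",
        f"{first[0]}{last}", f"{first}_{last}",
    )
    return [f"{p}@{d}" for p in patterns for d in domains]

print(candidate_emails("amelie", "delacroix"))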

Official website: www.cnil.me/daprofiler/

Install

Python 3.8 required

git clone https://github.com/TheRealDalunacrobate/DaProfiler.git
cd DaProfiler
pip install -r requirements.txt

Use

profiler.py -n [NAME] -ln [LAST NAME] -l True -O txt_file.txt
(Target Name) (Target Last Name) (Enable Terminal Logging) (Output to txt_file.txt)

=====================================================================

usage: profiler.py [-h] [-n NAME] [-l LOGGING] [-ln LASTNAME] [-O OUTPUT]

  -h, --help            show this help message and exit
  -n NAME, --name NAME  Victim name
  -ln LASTNAME, --lastname LASTNAME
                        Last name of victim
  -u UPDATE, --update UPDATE
                        Update DaProfiler (Optional)

WARNING!

If you are not logged in to the hub when you search for people, your search will be saved to the hub. Please register with the hub and use -pp to keep your search private.

Demo

alt text

API

Source         Service type   Subscription     Key in code
Leakcheck.net  Breach search  Premium          No
apilayer.net   Phone info     Free (in code)   Yes

Add your premium API keys:

  • Go to modules\api_modules, open the API module you need (e.g. Leakcheck), replace "YOUR_KEY" with your own key, then save and quit your text editor (see the sketch below).
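The snippet below is only a guess at what such a module contains (the exact file layout inside modules\api_modules is not documented here); it simply shows the placeholder being swapped for a real key:

# Hypothetical content of an API module such as the Leakcheck one (actual layout may differ):
API_KEY = "YOUR_KEY"  # replace this placeholder string with your own premium key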

Hub

This branch of DaProfiler supports the OSINT-HUB integration.

create "./user/key.txt" with your osint-hub key inside see exemple.txt

Args                    Description                            Stable
-hubR, --hub-register   Register a user on the hub             true
-hubL, --hub-login      Log in to the hub                      true
-hubU, --hub-username   Set your username                      true
-hubP, --hub-password   Set your password                      true
-hubS, --hub-search     Also search on the hub                 true
-pp, --push-private     Set your search to private             true
-pg, --push-group       Set your search to group visibility    true

Examples

Register with the hub

python profiler.py --hub-register True --hub-username rabbit --hub-password Aqwzsx1234

Search including the hub

python profiler.py -n amelie -ln delacroix --hub-search True

Note that a public search uses your participation coins :p
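To keep a search off the public hub instead, combine it with the private flag from the table above (the True value syntax is assumed to match the other flags):

python profiler.py -n amelie -ln delacroix --hub-search True --push-private True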

Hub info

Made by: Alice Snow
A Node.js client can be used to explore the hub: Here
www.cnil.me is the official hub access website

The admin can delete all your data whenever he wants, without needing a reason!

Educational purposes only.

Contact

Mail: [email protected]
Discord: Dalunacrobate#6166
Discord server: Here

Contributions

All suggestions are welcome.

Code parts used under license and authors

Owner
Dalunacrobate (new account; I lost access to github.com/Dalunacrobate)