💬 Python scripts to parse Messenger, Hangouts, WhatsApp and Telegram chat logs into DataFrames.

Chatistics

Overview

Python 3 scripts to convert chat logs from various messaging platforms into Pandas DataFrames. Chatistics can also generate histograms and word clouds from the chat logs.

Changelog

10 Jan 2020: UPDATED ALL THE THINGS! Thanks to mar-muel and manueth, pretty much everything has been updated and improved, and WhatsApp is now supported!

21 Oct 2018: Updated Facebook Messenger and Google Hangouts parsers to make them work with the new exported file formats.

9 Feb 2018: Telegram support added thanks to bmwant.

24 Oct 2016: Initial release supporting Facebook Messenger and Google Hangouts.

Support Matrix

Platform             Direct Chat   Group Chat
Facebook Messenger   ✔             ✘
Google Hangouts      ✔             ✘
Telegram             ✔             ✘
WhatsApp             ✔             ✔

Exported data

Data exported for each message regardless of the platform:

Column                 Content
timestamp              UNIX timestamp (in seconds)
conversationId         Conversation ID, unique per platform
conversationWithName   Name of the other person in a direct conversation, or name of the group conversation
senderName             Name of the sender
outgoing               Boolean indicating whether the message is outgoing (i.e. sent by the archive owner)
text                   Text of the message
language               Language of the conversation as inferred by langdetect
platform               Platform (see support matrix above)
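
For illustration, a single message row in this schema might look as follows (the values here are invented; real rows are produced by the parsers):

import pandas as pd

# Invented example values, only meant to illustrate the schema above.
row = {
    "timestamp": 1578614400,            # UNIX timestamp in seconds
    "conversationId": "abc123",         # platform-specific conversation ID
    "conversationWithName": "Jane Doe",
    "senderName": "Jane Doe",
    "outgoing": False,                  # received, i.e. not sent by the archive owner
    "text": "Hello!",
    "language": "en",                   # inferred by langdetect
    "platform": "whatsapp",
}
df = pd.DataFrame([row])
print(df.dtypes)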

Exporting your chat logs

1. Download your chat logs

Google Hangouts

Warning: Google Hangouts archives can take a long time to be ready for download - up to one hour in our experience.

  1. Go to Google Takeout: https://takeout.google.com/settings/takeout
  2. Request an archive containing your Hangouts chat logs
  3. Download the archive, then extract the file called Hangouts.json
  4. Move it to ./raw_data/hangouts/

Facebook Messenger

Warning: Facebook archives can take a very long time to be ready for download - up to 12 hours! They can also weigh several gigabytes. If you want to get started quickly, request an archive covering just a few months of data; it shouldn't take more than a few minutes to generate.

  1. Go to the page "Your Facebook Information": https://www.facebook.com/settings?tab=your_facebook_information
  2. Click on "Download Your Information"
  3. Select the date range you want. The format must be JSON. Media won't be used, so you can set the quality to "Low" to speed things up.
  4. Click on "Deselect All", then scroll down to select "Messages" only
  5. Click on "Create File" at the top of the list. It will take Facebook a while to generate your archive.
  6. Once the archive is ready, download and extract it, then move the contents of the messages folder into ./raw_data/messenger/

WhatsApp

Unfortunately, WhatsApp only lets you export your conversations from your phone, one conversation at a time.

  1. On your phone, open the chat conversation you want to export
  2. On Android, tap on ⋮ > More > Export chat. On iOS, tap on the interlocutor's name > Export chat
  3. Choose "Without Media"
  4. Send the chat to yourself, e.g. via email
  5. Unpack the archive and add the individual .txt files to the folder ./raw_data/whatsapp/

Telegram

The Telegram API works differently: you will first need to set up Chatistics, then query your chat logs programmatically. This process is documented below. Exporting Telegram chat logs is very fast.

2. Setup Chatistics

First, install the required Python packages using conda:

conda env create -f environment.yml
conda activate chatistics

You can now parse the messages by running python parse.py <platform>, as shown below.

By default the parsers will try to infer your own name (i.e. your username) from the data. If this fails, you can pass your name explicitly with the --own-name argument (see the example after the commands below). The name should match your name exactly as it appears on that chat platform.

# Google Hangouts
python parse.py hangouts

# Facebook Messenger
python parse.py messenger

# WhatsApp
python parse.py whatsapp
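
If automatic name detection fails, pass your display name explicitly with --own-name (the name below is just a placeholder):

python parse.py whatsapp --own-name "John Doe"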

Telegram

  1. Create your Telegram application to access chat logs (instructions). You will need api_id and api_hash which we will now set as environment variables.
  2. Run cp secrets.sh.example secrets.sh and fill in the values for the environment variables TELEGRAM_API_ID, TELEGRAM_API_HASH and TELEGRAM_PHONE (your phone number including country code). See the example below.
  3. Run source secrets.sh
  4. Execute the parser script using python parse.py telegram
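
Since secrets.sh is sourced into your shell, it is expected to contain export statements along these lines (all values below are placeholders, not real credentials):

export TELEGRAM_API_ID=123456
export TELEGRAM_API_HASH=0123456789abcdef0123456789abcdef
export TELEGRAM_PHONE=+14155550123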

The pickle files will now be ready for analysis in the data folder!

For more options use the -h argument on the parsers (e.g. python parse.py telegram --help).
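
As a minimal sketch of how you might start exploring the parsed data with pandas (assuming the parsers wrote one or more .pkl files into ./data/; exact filenames vary by platform and run):

import glob
import pandas as pd

# Load every pickle the parsers produced and combine them into one DataFrame.
frames = [pd.read_pickle(path) for path in glob.glob("data/*.pkl")]
df = pd.concat(frames, ignore_index=True)

# The timestamp column holds UNIX seconds; convert it for time-based analysis.
df["datetime"] = pd.to_datetime(df["timestamp"], unit="s")

# Quick sanity checks: messages per platform and the ten most frequent senders.
print(df["platform"].value_counts())
print(df["senderName"].value_counts().head(10))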

3. All done! Play with your data

Chatistics can print the chat logs as raw text. It can also create histograms, showing how many messages each interlocutor sent, or generate word clouds based on word density and a base image.

Export

You can view the data on stdout (the default) or export it to CSV, JSON, or a pickled DataFrame.

python export.py

You can use the same filter options as described below (under Histograms) in combination with an output format option:

  -f {stdout,json,csv,pkl}, --format {stdout,json,csv,pkl}
                        Output format (default: stdout)
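
For example, to export everything as CSV:

python export.py -f csv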

Histograms

Plot all messages with:

python visualize.py breakdown

Among other options you can filter messages as needed (also see python visualize.py breakdown --help):

  --platforms {telegram,whatsapp,messenger,hangouts}
                        Use data only from certain platforms (default: ['telegram', 'whatsapp', 'messenger', 'hangouts'])
  --filter-conversation
                        Limit by conversations with this person/group (default: [])
  --filter-sender
                        Limit to messages sent by this person/group (default: [])
  --remove-conversation
                        Remove all messages from conversations with these persons/groups (default: [])
  --remove-sender
                        Remove all messages by this sender (default: [])
  --contains-keyword
                        Filter by messages which contain certain keywords (default: [])
  --outgoing-only       
                        Limit by outgoing messages (default: False)
  --incoming-only       
                        Limit by incoming messages (default: False)

For example, to see all messages exchanged between you and Jane Doe:

python visualize.py breakdown --filter-conversation "Jane Doe"

To see the messages sent to you by the top 10 people with whom you talk the most:

python visualize.py breakdown -n 10 --incoming-only

You can also plot the conversation densities using the --as-density flag.
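
For example, to plot your conversation with Jane Doe as a density over time:

python visualize.py breakdown --filter-conversation "Jane Doe" --as-density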

Word Cloud

You will need a mask file to render the word cloud. The white bits of the image will be left empty, the rest will be filled with words using the color of the image. See the WordCloud library documentation for more information.

python visualize.py cloud -m raw_outlines/users.jpg

You can filter which messages to include using the same flags as for the histograms.
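
For example, to build a word cloud from a single conversation only:

python visualize.py cloud -m raw_outlines/users.jpg --filter-conversation "Jane Doe"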

Development

Install dev environment using

conda env create -f environment_dev.yml

Run tests from project root using

python -m pytest

Improvement ideas

  • Parsers for more chat platforms: Discord? Signal? Pidgin? ...
  • Handle group chats on more platforms.
  • See open issues for more ideas.

Pull requests are welcome!

Projects using Chatistics

"Meet your Artificial Self: Generate text that sounds like you" workshop

Credits

Owner: Florian