💬 Python scripts to parse Messenger, Hangouts, WhatsApp and Telegram chat logs into DataFrames.

Overview

Chatistics

Python 3 scripts to convert chat logs from various messaging platforms into Pandas DataFrames. Can also generate histograms and word clouds from the chat logs.

Changelog

10 Jan 2020: UPDATED ALL THE THINGS! Thanks to mar-muel and manueth, pretty much everything has been updated and improved, and WhatsApp is now supported!

21 Oct 2018: Updated Facebook Messenger and Google Hangouts parsers to make them work with the new exported file formats.

9 Feb 2018: Telegram support added thanks to bmwant.

24 Oct 2016: Initial release supporting Facebook Messenger and Google Hangouts.

Support Matrix

Platform Direct Chat Group Chat
Facebook Messenger
Google Hangouts
Telegram
WhatsApp

Exported data

Data exported for each message, regardless of the platform:

Column                  Content
timestamp               UNIX timestamp (in seconds)
conversationId          Conversation ID, unique per platform
conversationWithName    Name of the other person in a direct conversation, or name of the group conversation
senderName              Name of the sender
outgoing                Boolean; True if the message was sent by you (the archive owner)
text                    Text of the message
language                Language of the conversation as inferred by langdetect
platform                Platform (see support matrix above)
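
For reference, once a parser has run (see below), the resulting DataFrame can be inspected directly with pandas. A minimal sketch, assuming a pickle has been written to the ./data folder (the exact filename depends on which parser you ran and is only a placeholder here):

import pandas as pd

# Load a parsed chat log; the parsers write pickle files to ./data
# (the filename below is a placeholder - adjust it to your own output)
df = pd.read_pickle('./data/messenger.pkl')

# The columns follow the schema above
print(df[['timestamp', 'conversationWithName', 'senderName', 'outgoing', 'text', 'language', 'platform']].head())

# The UNIX timestamp (in seconds) converts cleanly to a datetime
df['datetime'] = pd.to_datetime(df['timestamp'], unit='s')
print(df['datetime'].min(), df['datetime'].max())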

Exporting your chat logs

1. Download your chat logs

Google Hangouts

Warning: Google Hangouts archives can take a long time to be ready for download - up to one hour in our experience.

  1. Go to Google Takeout: https://takeout.google.com/settings/takeout
  2. Request an archive containing your Hangouts chat logs
  3. Download the archive, then extract the file called Hangouts.json
  4. Move it to ./raw_data/hangouts/

Facebook Messenger

Warning: Facebook archives can take a very long time to be ready for download - up to 12 hours! They can also weigh several gigabytes. To get started quickly, request an archive containing just a few months of data; it shouldn't take more than a few minutes to generate.

  1. Go to the page "Your Facebook Information": https://www.facebook.com/settings?tab=your_facebook_information
  2. Click on "Download Your Information"
  3. Select the date range you want. The format must be JSON. Media won't be used, so you can set the quality to "Low" to speed things up.
  4. Click on "Deselect All", then scroll down to select "Messages" only
  5. Click on "Create File" at the top of the list. It will take Facebook a while to generate your archive.
  6. Once the archive is ready, download and extract it, then move the contents of the messages folder into ./raw_data/messenger/ (a sketch of what these files look like follows this list)
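
If you want to sanity-check the export before parsing it, the Facebook archive typically contains one folder per conversation, each with files named like message_1.json. A minimal sketch for peeking at one of them (the path is a made-up example, and the field names reflect the usual Facebook export format, not Chatistics internals):

import json

# Placeholder path - point it at an actual conversation folder in your export
path = './raw_data/messenger/inbox/janedoe_abc123/message_1.json'
with open(path, encoding='utf-8') as f:
    conversation = json.load(f)

# Facebook's export usually holds a participant list plus a list of messages
print([p['name'] for p in conversation['participants']])
for message in conversation['messages'][:5]:
    # timestamp_ms is in milliseconds; 'content' is absent for media-only messages
    print(message['timestamp_ms'], message.get('sender_name'), message.get('content'))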

WhatsApp

Unfortunately, WhatsApp only lets you export your conversations from your phone, one conversation at a time.

  1. On your phone, open the chat conversation you want to export
  2. On Android, tap the menu button > More > Export chat. On iOS, tap on the interlocutor's name > Export chat
  3. Choose "Without Media"
  4. Send the chat to yourself, e.g. via email
  5. Unpack the archive and add the individual .txt files to the folder ./raw_data/whatsapp/
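
The exported .txt files are plain text with one message per line, in a format that depends on your phone's locale and app version. Purely as an illustration of what to expect (this is not how Chatistics parses them), a line often looks roughly like the one below and can be split with a regular expression:

import re

# Example line in one common English-locale export format; yours may differ
line = '12/31/19, 9:15 PM - Jane Doe: Happy new year!'

# Hypothetical pattern for that particular format: date, time, sender, text
pattern = re.compile(r'^(?P<date>[\d/.-]+), (?P<time>[\d: ]+[AP]M) - (?P<sender>[^:]+): (?P<text>.*)$')
match = pattern.match(line)
if match:
    print(match.group('sender'), '->', match.group('text'))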

Telegram

The Telegram API works differently: you will first need to setup Chatistics, then query your chat logs programmatically. This process is documented below. Exporting Telegram chat logs is very fast.

2. Setup Chatistics

First, install the required Python packages using conda:

conda env create -f environment.yml
conda activate chatistics

You can now parse the messages by running python parse.py with the platform as argument, as shown below.

By default, the parsers try to infer your own name (i.e. your username) from the data. If this fails, you can pass it explicitly with the --own-name argument; the name must match your name exactly as it is used on that chat platform.

# Google Hangouts
python parse.py hangouts

# Facebook Messenger
python parse.py messenger

# WhatsApp
python parse.py whatsapp
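
If name inference fails, the same commands accept the --own-name flag described above (the name below is a placeholder):

python parse.py messenger --own-name "John Doe"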

Telegram

  1. Create your Telegram application to access chat logs (instructions). You will need api_id and api_hash which we will now set as environment variables.
  2. Run cp secrets.sh.example secrets.sh and fill in the values for the environment variables TELEGRAM_API_ID, TELEGRAM_API_HASH and TELEGRAM_PHONE (your phone number, including country code) - a sketch of this file follows the list
  3. Run source secrets.sh
  4. Execute the parser script using python parse.py telegram
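
As noted in step 2, secrets.sh is just a file of environment variable assignments that gets sourced before parsing. A minimal sketch with placeholder values, assuming the usual export VAR=value form (your real api_id and api_hash come from the Telegram application you created in step 1):

# secrets.sh - placeholder values, replace them with your own
export TELEGRAM_API_ID=1234567
export TELEGRAM_API_HASH=0123456789abcdef0123456789abcdef
export TELEGRAM_PHONE=+14155550123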

The pickle files will now be ready for analysis in the data folder!

For more options use the -h argument on the parsers (e.g. python parse.py telegram --help).

3. All done! Play with your data

Chatistics can print the chat logs as raw text. It can also create histograms, showing how many messages each interlocutor sent, or generate word clouds based on word density and a base image.

Export

You can print the data to stdout (default) or export it to CSV, JSON, or a DataFrame pickle.

python export.py

You can combine the filter options described under Histograms below with an output format option:

  -f {stdout,json,csv,pkl}, --format {stdout,json,csv,pkl}
                        Output format (default: stdout)
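
For example, to export everything as CSV, optionally narrowed down with one of the filter flags documented under Histograms below:

python export.py -f csv
python export.py -f csv --filter-conversation "Jane Doe"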

Histograms

Plot all messages with:

python visualize.py breakdown

Among other options you can filter messages as needed (also see python visualize.py breakdown --help):

  --platforms {telegram,whatsapp,messenger,hangouts}
                        Use data only from certain platforms (default: ['telegram', 'whatsapp', 'messenger', 'hangouts'])
  --filter-conversation
                        Limit by conversations with this person/group (default: [])
  --filter-sender
                        Limit to messages sent by this person/group (default: [])
  --remove-conversation
                        Exclude conversations with these persons/groups (default: [])
  --remove-sender
                        Remove all messages by this sender (default: [])
  --contains-keyword
                        Filter by messages which contain certain keywords (default: [])
  --outgoing-only
                        Limit to outgoing messages (default: False)
  --incoming-only
                        Limit to incoming messages (default: False)

E.g., to see all the messages exchanged between you and Jane Doe:

python visualize.py breakdown --filter-conversation "Jane Doe"

To see the messages sent to you by the top 10 people with whom you talk the most:

python visualize.py breakdown -n 10 --incoming-only

You can also plot the conversation densities using the --as-density flag.
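
For instance, combined with the filters above (the name is a placeholder):

python visualize.py breakdown --filter-conversation "Jane Doe" --as-density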

Word Cloud

You will need a mask file to render the word cloud. The white parts of the image will be left empty; the rest will be filled with words using the colors of the image. See the WordCloud library documentation for more information.

python visualize.py cloud -m raw_outlines/users.jpg

You can filter which messages are used with the same flags as for the histograms.
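
The mask behaviour described above comes from the underlying WordCloud library. Purely as a reference, here is a minimal standalone sketch of how such a mask is typically applied (it assumes the wordcloud, numpy and Pillow packages and is not Chatistics' own code):

import numpy as np
from PIL import Image
from wordcloud import WordCloud, ImageColorGenerator

# Load the mask image: white pixels stay empty, the rest is filled with words
mask = np.array(Image.open('raw_outlines/users.jpg'))

text = 'hello world hello chat logs hello data'  # placeholder text
wc = WordCloud(mask=mask, background_color='white').generate(text)

# Recolor the words using the colors of the original image
wc.recolor(color_func=ImageColorGenerator(mask))
wc.to_file('cloud.png')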

Development

Install the dev environment using:

conda env create -f environment_dev.yml

Run the tests from the project root using:

python -m pytest

Improvement ideas

  • Parsers for more chat platforms: Discord? Signal? Pidgin? ...
  • Handle group chats on more platforms.
  • See open issues for more ideas.

Pull requests are welcome!

Projects using Chatistics

"Meet your Artificial Self: Generate text that sounds like you" workshop
