💬 Python scripts to parse Messenger, Hangouts, WhatsApp and Telegram chat logs into DataFrames.

Overview

Chatistics

Python 3 scripts to convert chat logs from various messaging platforms into Pandas DataFrames. Can also generate histograms and word clouds from the chat logs.

Changelog

10 Jan 2020: UPDATED ALL THE THINGS! Thanks to mar-muel and manueth, pretty much everything has been updated and improved, and WhatsApp is now supported!

21 Oct 2018: Updated Facebook Messenger and Google Hangouts parsers to make them work with the new exported file formats.

9 Feb 2018: Telegram support added thanks to bmwant.

24 Oct 2016: Initial release supporting Facebook Messenger and Google Hangouts.

Support Matrix

Platform Direct Chat Group Chat
Facebook Messenger ✔ ✘
Google Hangouts ✔ ✘
Telegram ✔ ✘
WhatsApp ✔ ✔

Exported data

Data exported for each message regardless of the platform:

Column Content
timestamp UNIX timestamp (in seconds)
conversationId A conversation ID, unique by platform
conversationWithName Name of the other person in a direct conversation, or name of the group conversation
senderName Name of the sender
outgoing Boolean indicating whether the message is outgoing, i.e. sent by the owner of the logs
text Text of the message
language Language of the conversation as inferred by langdetect
platform Platform (see support matrix above)
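As a quick illustration, the schema above maps directly onto a pandas DataFrame. The rows below are made-up examples, not real output:

```python
import pandas as pd

# Two fabricated messages following the documented schema.
messages = pd.DataFrame([
    {"timestamp": 1578654321, "conversationId": "abc123",
     "conversationWithName": "Jane Doe", "senderName": "Me",
     "outgoing": True, "text": "Hello!", "language": "en",
     "platform": "whatsapp"},
    {"timestamp": 1578654390, "conversationId": "abc123",
     "conversationWithName": "Jane Doe", "senderName": "Jane Doe",
     "outgoing": False, "text": "Hi there", "language": "en",
     "platform": "whatsapp"},
])

# The timestamp column holds UNIX seconds, so it converts directly.
messages["datetime"] = pd.to_datetime(messages["timestamp"], unit="s", utc=True)

# Example: count messages per sender.
counts = messages.groupby("senderName")["text"].count()
```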

Exporting your chat logs

1. Download your chat logs

Google Hangouts

Warning: Google Hangouts archives can take a long time to be ready for download - up to one hour in our experience.

  1. Go to Google Takeout: https://takeout.google.com/settings/takeout
  2. Request an archive containing your Hangouts chat logs
  3. Download the archive, then extract the file called Hangouts.json
  4. Move it to ./raw_data/hangouts/

Facebook Messenger

Warning: Facebook archives can take a very long time to be ready for download - up to 12 hours! They can also weigh several gigabytes. If you want to get started quickly, request an archive covering just a few months of data; it shouldn't take more than a few minutes to generate.

  1. Go to the page "Your Facebook Information": https://www.facebook.com/settings?tab=your_facebook_information
  2. Click on "Download Your Information"
  3. Select the date range you want. The format must be JSON. Media won't be used, so you can set the quality to "Low" to speed things up.
  4. Click on "Deselect All", then scroll down to select "Messages" only
  5. Click on "Create File" at the top of the list. It will take Facebook a while to generate your archive.
  6. Once the archive is ready, download and extract it, then move the content of the messages folder into ./raw_data/messenger/

WhatsApp

Unfortunately, WhatsApp only lets you export conversations from your phone, one at a time.

  1. On your phone, open the chat conversation you want to export
  2. On Android, tap on ⋮ > More > Export chat. On iOS, tap on the interlocutor's name > Export chat
  3. Choose "Without Media"
  4. Send the chat to yourself, e.g. via email
  5. Unpack the archive and add the individual .txt files to the folder ./raw_data/whatsapp/
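For reference, the exported .txt files contain plain text with one message per line. A minimal parsing sketch is shown below; the regex matches one common Android date layout and is only an assumption — the actual format varies by locale and device, and this is not the parser Chatistics ships with:

```python
import re

# Matches lines shaped like: "12/01/20, 14:32 - Jane Doe: Hi there"
# (one common Android export layout; other locales differ).
LINE_RE = re.compile(
    r"^(?P<date>\d{1,2}/\d{1,2}/\d{2,4}), (?P<time>\d{1,2}:\d{2}) - "
    r"(?P<sender>[^:]+): (?P<text>.*)$"
)

def parse_line(line):
    """Return the message fields as a dict, or None for non-message lines."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None
```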

Telegram

The Telegram API works differently: you will first need to set up Chatistics, then query your chat logs programmatically. This process is documented below. Exporting Telegram chat logs is very fast.

2. Set up Chatistics

First, install the required Python packages using conda:

conda env create -f environment.yml
conda activate chatistics

You can now parse your chat logs using the parse.py script.

By default the parsers will try to infer your own name (i.e. your username) from the data. If this fails, you can pass it explicitly with the --own-name argument; it should match your name exactly as it appears on that chat platform.

# Google Hangouts
python parse.py hangouts

# Facebook Messenger
python parse.py messenger

# WhatsApp
python parse.py whatsapp

Telegram

  1. Create your Telegram application to access chat logs (instructions). You will need api_id and api_hash which we will now set as environment variables.
  2. Run cp secrets.sh.example secrets.sh and fill in the values for the environment variables TELEGRAM_API_ID, TELEGRAM_API_HASH and TELEGRAM_PHONE (your phone number including the country code).
  3. Run source secrets.sh
  4. Execute the parser script using python parse.py telegram

The pickle files will now be ready for analysis in the data folder!

For more options use the -h argument on the parsers (e.g. python parse.py telegram --help).
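Once parsing has finished, the pickles can be loaded back into pandas for your own analysis. A minimal sketch, assuming the parsers write .pkl files into ./data/ (the exact file names depend on which parsers you ran):

```python
from pathlib import Path
import pandas as pd

def load_chat_data(data_dir="./data"):
    """Concatenate every DataFrame pickle in data_dir into one DataFrame."""
    frames = [pd.read_pickle(p) for p in Path(data_dir).glob("*.pkl")]
    return pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()
```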

3. All done! Play with your data

Chatistics can print the chat logs as raw text. It can also create histograms, showing how many messages each interlocutor sent, or generate word clouds based on word density and a base image.

Export

You can view the data on stdout (the default) or export it to csv, json, or a DataFrame pickle.

python export.py

You can combine the filter options listed under Histograms below with an output format option:

  -f {stdout,json,csv,pkl}, --format {stdout,json,csv,pkl}
                        Output format (default: stdout)
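Whichever format you pick, the export can be read straight back into pandas. A toy round trip through the csv format, in memory and with made-up data:

```python
import io
import pandas as pd

# Fabricated one-row export.
df = pd.DataFrame({"senderName": ["Me"], "text": ["hello"], "outgoing": [True]})

# Write to an in-memory csv buffer, then read it back.
buf = io.StringIO()
df.to_csv(buf, index=False)
buf.seek(0)
restored = pd.read_csv(buf)
```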

Histograms

Plot all messages with:

python visualize.py breakdown

Among other options you can filter messages as needed (also see python visualize.py breakdown --help):

  --platforms {telegram,whatsapp,messenger,hangouts}
                        Use data only from certain platforms (default: ['telegram', 'whatsapp', 'messenger', 'hangouts'])
  --filter-conversation
                        Limit by conversations with this person/group (default: [])
  --filter-sender
                        Limit to messages sent by this person (default: [])
  --remove-conversation
                        Remove messages from conversations with this person/group (default: [])
  --remove-sender
                        Remove all messages by this sender (default: [])
  --contains-keyword
                        Filter by messages which contain certain keywords (default: [])
  --outgoing-only
                        Limit to outgoing messages (default: False)
  --incoming-only
                        Limit to incoming messages (default: False)

E.g., to see all the messages sent between you and Jane Doe:

python visualize.py breakdown --filter-conversation "Jane Doe"

To see the messages sent to you by the top 10 people with whom you talk the most:

python visualize.py breakdown -n 10 --incoming-only

You can also plot the conversation densities using the --as-density flag.

Word Cloud

You will need a mask file to render the word cloud. The white parts of the image will be left empty; the rest will be filled with words in the image's colors. See the WordCloud library documentation for more information.

python visualize.py cloud -m raw_outlines/users.jpg

You can filter which messages to use using the same flags as with histograms.
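A mask is just an image array: WordCloud treats pure-white pixels (value 255) as off-limits and places words everywhere else. A toy sketch of that convention, using a hand-built 4x4 array instead of a real image file:

```python
import numpy as np

# Start from an all-white mask (nothing drawable)...
mask = np.full((4, 4), 255, dtype=np.uint8)

# ...then darken the centre; words would only land in this square.
mask[1:3, 1:3] = 0

# Pixels that are not pure white are available for word placement.
drawable = (mask != 255)
```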

Development

Install dev environment using

conda env create -f environment_dev.yml

Run tests from project root using

python -m pytest

Improvement ideas

  • Parsers for more chat platforms: Discord? Signal? Pidgin? ...
  • Handle group chats on more platforms.
  • See open issues for more ideas.

Pull requests are welcome!

Projects using Chatistics

Meet your Artificial Self: Generate text that sounds like you workshop

Credits

Owner: Florian