Overview

onlyfans-scraper

Supports Python 3.8–3.9.

A command-line program to download media, like and unlike posts, and more from creators on OnlyFans.

Installation

You can install this program by entering the following in your terminal:

pip install onlyfans-scraper

If you're on macOS/Linux, then do this instead:

pip3 install onlyfans-scraper

Upgrading

In order to upgrade onlyfans-scraper, run the following in your terminal:

pip install --upgrade onlyfans-scraper

Or, a shorter version:

pip install -U onlyfans-scraper

Setup

Before you can fully use it, you need to fill out some fields in an auth.json file. This file will be created for you when you run the program for the first time.

These are the fields:

{
    "auth": {
        "app-token": "33d57ade8c02dbc5a333db99ff9ae26a",
        "sess": "",
        "auth_id": "",
        "auth_uniq_": "",
        "user_agent": "",
        "x-bc": ""
    }
}

It's really not that bad. I'll show you in the next sections how to get these bits of info.

Step One: Creating the 'auth.json' File

You first need to run the program in order for the auth.json file to be created. To run it, simply type onlyfans-scraper in your terminal and hit enter. Because you don't have an auth.json file, the program will create one for you and then ask you to enter some information. Now we need to get that information.

Step Two: Getting Your Auth Info

If you've already used DIGITALCRIMINAL's OnlyFans script, you can simply copy and paste the auth information from there to here.

Go to your notification area on OnlyFans. Once you're there, open your browser's developer tools. If you don't know how to do that, consult the following chart:

Operating System    Keys
macOS               Alt + Cmd + I
Windows             Ctrl + Shift + I
Linux               Ctrl + Shift + I

Once you have your browser's developer tools open, your screen should look like the following:

Click on the Network tab at the top of the browser tools:

Then click on XHR sub-tab inside of the Network tab:

Once you're inside of the XHR sub-tab, refresh the page while you have your browser's developer tools open. After the page reloads, you should see a section titled init appear:

When you click on init, you should see a large sidebar appear. Make sure you're in the Headers section:

After that, scroll down until you see a subsection called Request Headers. You should then see three important fields inside the Request Headers subsection: Cookie, User-Agent, and x-bc.

Inside of the Cookie field, you will see a couple of important bits:

  • sess=
  • auth_id=
  • auth_uid_=

Your auth_uid_ will only appear if you have 2FA (two-factor authentication) enabled. Also, keep in mind that your auth_uid_ will have numbers after the final underscore and before the equal sign (that's your auth_id).

For each of these, you need everything after the equals sign and before the semicolon.
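
If you'd rather not pick the values out by hand, a quick Python snippet like the one below (not part of onlyfans-scraper; the cookie string is a made-up example) splits a copied Cookie header into those pieces:

# Paste your own Cookie header here; this one is a placeholder.
cookie_header = "sess=abc123; auth_id=456789; auth_uid_456789=987654"

cookies = {}
for part in cookie_header.split(";"):
    name, _, value = part.strip().partition("=")
    cookies[name] = value

sess = cookies.get("sess", "")
auth_id = cookies.get("auth_id", "")
# The auth_uid_ cookie name ends with your auth_id and only exists if 2FA is enabled.
auth_uid = cookies.get("auth_uid_" + auth_id, "")

print("sess:", sess)
print("auth_id:", auth_id)
print("auth_uid_:", auth_uid)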

Once you've copied the value of your sess cookie, go back to the program, paste it in, and hit enter. Do the same for the auth_id value, and then for the auth_uid_ value (leave this blank if you don't use 2FA!).

Once you do that, the program will ask for your user agent. You should be able to find your user agent in a field called User-Agent below the Cookie field. Copy it and paste it into the program and hit enter.

After it asks for your user agent, it will ask for your x-bc token. You should also be able to find this in the Request Headers section.

You're all set and you can now use onlyfans-scraper.

Usage

Whenever you want to run the program, all you need to do is type onlyfans-scraper in your terminal:

onlyfans-scraper

That's it. It's that simple.

Once the program launches, all you need to do is follow the on-screen directions. The first time you run it, it will ask you to fill out your auth.json file (directions for that in the section above).

You will need to use your arrow keys to select an option:

If you choose to download content, you will have three options: having a list of all of your subscriptions printed, manually entering a username, or scraping all accounts that you're subscribed to.

Liking/Unliking Posts

You can also use this program to like all of a user's posts or to remove your likes from their posts. Just select the corresponding option from the main menu and enter the creator's username.

The program likes posts at a rate of around one post per second. This rate may change in the future, but OnlyFans is strict about how quickly you can like posts.
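
For illustration only, the throttling is conceptually just a delay between requests, as in this sketch (like_post is a hypothetical helper standing in for the actual API call):

import time

def like_all(post_ids, like_post, delay=1.0):
    # Call the (hypothetical) like_post helper once per post and sleep
    # between calls so the rate stays around one post per second.
    for post_id in post_ids:
        like_post(post_id)
        time.sleep(delay)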

Migrating Databases

If you've used DIGITALCRIMINAL's script, you might've liked how his script prevented duplicates from being downloaded each time you ran it on a user. This is done through database files.

This program also uses a database file to prevent duplicates. To make it easier for users to transition from his program to this one, this program will migrate the data from those databases for you (only IDs and filenames).

To use it, select the last option (Migrate an old database) and enter the path to the directory that contains the database files (Posts.db, Archived.db, etc.).

For example, if you have a directory that looks like the following:

Users
|__ home
    |__ .sites
        |__ OnlyFans
            |__ melodyjai
                |__ Metadata
                    |__ Archived.db
                    |__ Messages.db
                    |__ Posts.db

Then the path you enter should be /Users/home/.sites/OnlyFans/melodyjai/Metadata. The program will detect the .db files in that directory, ask you which username those .db files belong to, and then move the relevant data over.
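
If you're curious what the migration actually carries over, the sketch below reads IDs and filenames out of an old Posts.db; the table and column names ("medias", "media_id", "filename") are assumptions based on DIGITALCRIMINAL's script and may differ in your file:

import sqlite3

# Path from the example above; adjust it to wherever your Metadata folder lives.
db_path = "/Users/home/.sites/OnlyFans/melodyjai/Metadata/Posts.db"

conn = sqlite3.connect(db_path)
# Only the IDs and filenames are of interest here.
rows = conn.execute("SELECT media_id, filename FROM medias").fetchall()
conn.close()

for media_id, filename in rows:
    print(media_id, filename)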

Bugs/Issues/Suggestions

If you run into any trouble while using this script, or if you're confused about how to get something running, feel free to open an issue or start a discussion. I don't bite :D

If you would like a feature added to the program or have some ideas, start a discussion!
