Scrape all the media from an OnlyFans account - Updated regularly

Overview

OnlyFans DataScraper (Python 3.9.X)


Mandatory Tutorial

Read the #FAQ at the bottom of this page before submitting an issue.

Running the app via docker

Build and run the image, mounting the appropriate directories:

docker build -t only-fans . && docker run -it --rm --name onlyfans -v ${PWD}/.settings:/usr/src/app/.settings -v ${PWD}/.profiles:/usr/src/app/.profiles -v ${PWD}/.sites:/usr/src/app/.sites only-fans

Running on Linux

https://github.com/DIGITALCRIMINAL/OnlyFans/discussions/889

Running the app locally

From the project folder, open CMD/Terminal and run the command below:

pip install --upgrade --user -r requirements.txt

Start:

python start_ofd.py | python3 start_ofd.py | double click start_ofd.py


Open and edit:

.profiles/default/auth.json

[auth]

You have to fill in the following:

  • {"cookie":"your_cookie"}
  • {"user-agent":"your_user-agent"}

Go to www.onlyfans.com and log in, then open your browser's network debugger and copy the cookie and user-agent values from a request to onlyfans.com.


Your auth config should look similar to the example below.
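A hedged sketch of the relevant fields (exact keys and structure vary between versions, so treat this as illustrative rather than canonical):

{
    "auth": {
        "cookie": "your_cookie",
        "user-agent": "your_user-agent",
        "email": "",
        "password": "",
        "active": true
    }
}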

If you want to auth via browser, add your email and password.

If you get auth attempt errors, only YOU can fix it unless you're willing to let me into your account so I can see if it's working or not. All issues about auth errors will be closed automatically. It's spam at this point, there's like 1000s of them and I don't care for anyone who can't use the search function lmao.

Note: If active is set to False, the script will ignore the profile.

USAGE

python start_ofd.py | python3 start_ofd.py | double click start_ofd.py

Enter in inputs as prompted by console.

OPTIONAL

Open:

config.json (open it with a text editor)

[settings]

profile_directories:

Where your account information is stored (auth.json).

Default = [".profiles"]

If you fill this in, remember to use forward slashes ("/") only.

download_directories:

Where downloaded content is stored.

Default = [".sites"]

If you fill this in, remember to use forward slashes ("/") only.

You can add multiple directories, and the script will automatically roll over to the next directory when the current one is full, as in the sketch below.
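For example, a hedged sketch of a download_directories entry with two directories (the second path is hypothetical):

"download_directories": [".sites", "D:/Backup/.sites"]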

metadata_directories:

Where metadata content is stored.

Default = [".sites"]

If you fill this in, remember to use forward slashes ("/") only.

Automatic rollover isn't supported yet.

path_formatting:

Overview for file_directory_format, filename_format and metadata_directory_format

{site_name} = The site you're scraping.

{first_letter} = First letter of the model you're scraping.

{post_id} = The post's ID.

{media_id} = The media's ID.

{profile_username} = Your account's username.

{model_username} = The model's username.

{api_type} = Posts, Messages, etc.

{media_type} = Images, Videos, etc.

{filename} = The media's filename.

{value} = Whether the content is Paid or Free.

{text} = The media's text.

{date} = The post's creation date.

{ext} = The media's file extension.

Don't use the text variable. If you do, enjoy emojis in your filepaths and errors lmao.
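These templates appear to behave like ordinary Python format strings, so substitution works roughly as follows (a minimal sketch, not the script's actual code):

# Hypothetical illustration; the real script's internals may differ.
template = "{site_name}/{model_username}/{api_type}/{value}/{media_type}"

path = template.format(
    site_name="OnlyFans",
    model_username="belledelphine",
    api_type="Posts",
    value="Free",
    media_type="Images",
)
print(path)  # OnlyFans/belledelphine/Posts/Free/Images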

file_directory_format:

This puts each media file into a folder.

The list below contains unique identifiers that you must include.

You can choose one or more.

Default = "{site_name}/{model_username}/{api_type}/{value}/{media_type}"
Default Translated = "OnlyFans/belledelphine/Posts/Free/Images"

{model_username} = belledelphine

filename_format:

Usage: Format for a filename

The list below contains unique identifiers that you must include.

You must choose one or more.

Default = "{filename}.{ext}"
Default Translated = "5fb5a5e4b4ce6c47ce2b4_source.mp4"

{filename} = 5fb5a5e4b4ce6c47ce2b4_source
{media_id} = 133742069

metadata_directory_format:

Usage: Filepath for metadata. It's tied to download_directories, so ignore metadata_directories in the config.

The list below contains unique identifiers that you must include.

You must choose one or more.

Default = "{site_name}/{model_username}/Metadata"
Default Translated = "OnlyFans/belledelphine/Metadata"

{model_username} = belledelphine

text_length:

Usage: When you use {text} in filename_format, input a number to limit how many characters the text can occupy.

Default = ""
Ideal = "50"
Max = "255"

The ideal is actually 0.

video_quality:

Usage: Select the resolution of the video.

Default = "source"
720p = "720" | "720p"
240p = "240" | "240p"

auto_site_choice:

Types: str|int

Usage: You can automatically choose which site you want to scrape.

Default = ""

OnlyFans = "onlyfans"

auto_media_choice:

Types: list|int|str|bool

Usage: You can automatically choose which media type you want to scrape.

Default = ""

Inputs: Images, Videos, etc.
Inputs: 0, 1, etc.

auto_model_choice:

Types: list|int|str|bool

Default = false

If set to true, the script will scrape all the names.

auto_api_choice:

Default = true

If set to false, you'll be given the option to scrape individual APIs.

jobs:

"scrape_names" - This will scrape your standard content
"scrape_paid_content" - This will scrape paid content

If a job is set to false, it will be skipped (see the sketch below).
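A hedged sketch of how this might look in config.json (only the two job names above are documented; the exact nesting is an assumption):

"jobs": {
    "scrape_names": true,
    "scrape_paid_content": false
}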

export_type:

Default = "json"

JSON = "json"

You can export an archive to different formats (not anymore lol).

overwrite_files:

Default = false

If set to true, any file with the same name will be redownloaded.

date_format:

Default = "%d-%m-%Y"

If you live in the USA and you want to use the incorrect format, use the following:

"%m-%d-%Y"

max_threads:

Default = -1

When the number is set below 1, all available threads will be used.
Set a number higher than 0 to limit the thread count.
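The documented behaviour amounts to something like this (a sketch, not the script's actual code):

import os

def resolve_max_threads(max_threads: int) -> int:
    # Values below 1 mean "use all available threads".
    if max_threads < 1:
        return os.cpu_count() or 1
    return max_threads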

min_drive_space:

Default = 0
Type: Float

Space is calculated in GB.
0.5 is 500 MB, 1 is 1 GB, etc.
When a drive goes below the minimum drive space, the script will move on to the next drive, or loop until the drive is back above the minimum space.
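A check like the one described can be written with shutil.disk_usage (a minimal sketch; the function name and the per-directory check are assumptions):

import shutil

def has_enough_space(directory: str, min_drive_space: float) -> bool:
    # disk_usage reports bytes; the config value is in GB.
    free_gb = shutil.disk_usage(directory).free / (1024 ** 3)
    return free_gb >= min_drive_space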

webhooks:

Default = []

Supported webhooks:
Discord

Data is sent whenever you've completely downloaded a model.
You can also put in your own custom URL and parse the data.
Need another webhook? Open an issue.
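A Discord webhook is an ordinary HTTP POST with a JSON body, so a custom endpoint can parse whatever you send. A minimal sketch using requests (the exact payload the script sends is not documented here; "content" is simply Discord's generic message field, and the URL is a placeholder):

import requests

def send_webhook(url: str, message: str) -> None:
    # Discord webhooks accept a JSON body with a "content" field.
    response = requests.post(url, json={"content": message}, timeout=10)
    response.raise_for_status()

send_webhook("https://discord.com/api/webhooks/<id>/<token>",
             "Finished downloading example_model")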

exit_on_completion:

Default = false

If set to true, the scraper will run once and exit upon completion; otherwise it will give you the option to run again. This is useful when the scraper is executed by a cron job or another script.

infinite_loop:

Default = true

If set to false, the script will run once and ask you to input anything to continue.

loop_timeout:

Default = 0

When infinite_loop is set to true, this sets the time in seconds to pause the loop between runs.

boards:

Default = []
Example = ["s", "gif"]

Input the board names that you want to automatically scrape.

ignored_keywords:

Default = []
Example = ["ignore", "me"]

The script will ignore any content that contains these words.

ignore_type:

Default = ""
a = "paid"
b = "free"

This setting excludes paid or free accounts from your subscription list.

Example: "ignore_type": "paid"

This choice excludes any accounts that you've paid for.

export_metadata:

Default = true

Set to false if you don't want to save metadata.

blacklist_name:

Default = ""

This setting excludes blacklisted usernames when you choose the "scrape all" option.

Go to https://onlyfans.com/my/lists and create a new list; you can name it whatever you want but I called mine "Blacklisted".

Add the list's name to the config.

Example: "blacklist_name": "Blacklisted"

You can create as many lists as you want.

FAQ

Before troubleshooting, make sure you're using Python 3.9 and the latest commit of the script.

Error: Access Denied / Auth Loop

Quadruple-check that the cookies and user agent are correct. Remove 2FA.

AttributeError: type object 'datetime.datetime' has no attribute 'fromisoformat'

fromisoformat only exists in Python 3.7 and above, so upgrade your Python.

I can't see the ".settings" folder

Make sure you can see hidden files:

Windows Tutorial

Mac Tutorial

Linux

I'm getting authed into the wrong account

Enjoy the free content. | This has been patched lol.

I'm using Linux OS and something isn't working.

Script was built on Windows 10. If you're using Linux you can still submit an issue and I'll try my best to fix it.

Am I able to bypass paywalls with this script?

Hell yeah! My open source script can bypass paywalls for free. Tutorial

Do OnlyFans or OnlyFans models know I'm using this script?

OnlyFans may know that you're using this script, but I try to keep it as anon as possible.

Generally, models will not know unless OnlyFans tells them, but there is identifiable information in the metadata folder (it contains your IP address), so don't share it unless you're using a proxy/VPN or just don't care.

Do you collect session information?

No. The code is on GitHub, which allows you to audit the codebase yourself. You can use Wireshark or any other network analysis program to verify that the outgoing connections correspond to the modules you chose.

Disclaimer (lmao):

OnlyFans is a registered trademark of Fenix International Limited.

The contributors of this script aren't in any way affiliated with, sponsored by, or endorsed by Fenix International Limited.

The contributors of this script are not responsible for the end users' actions... lmao.

Comments
  • Session Lock | Please refresh the page error


    Running the latest version and got this error:

    Auth (V1) Attempt 1/10
    Please refresh the page
    Auth (V1) Attempt 2/10
    Please refresh the page
    Auth (V1) Attempt 3/10
    Please refresh the page
    Auth (V1) Attempt 4/10
    Please refresh the page
    Auth (V1) Attempt 5/10
    Please refresh the page
    Auth (V1) Attempt 6/10
    Please refresh the page
    Auth (V1) Attempt 7/10
    Please refresh the page
    Auth (V1) Attempt 8/10
    Please refresh the page
    Auth (V1) Attempt 9/10
    Please refresh the page
    Auth (V1) Attempt 10/10
    Please refresh the page

    bug patching fixed 
    opened by Macmasteri 153
  • Refresh page


    I fill in the auth.json, but when I start the script it says that I should refresh the page. Does this happen to anyone else? Is this a bug, or am I doing something wrong?

    opened by CapitanLiteral 127
  • Auth (V1) Attempt 1/10 Please refresh the page


    Hey

    I know this has been reported before, but apparently it was fixed - yet I received it today.

    1. I cloned the repo today.
    2. I successfully scraped two models
    3. It stopped working and gave me
    Auth (V1) Attempt 1/10
    Please refresh the page
    Auth (V1) Attempt 2/10
    Please refresh the page
    Auth (V1) Attempt 3/10
    Please refresh the page
    Auth (V1) Attempt 4/10
    Please refresh the page
    Auth (V1) Attempt 5/10
    Please refresh the page
    Auth (V1) Attempt 6/10
    Please refresh the page
    Auth (V1) Attempt 7/10
    Please refresh the page
    Auth (V1) Attempt 8/10
    Please refresh the page
    Auth (V1) Attempt 9/10
    Please refresh the page
    Auth (V1) Attempt 10/10
    Please refresh the page
    

    Exact same code, exact same everything; two minutes apart, it was working until it wasn't.

    I've logged out of OF and back in again, then updated the credentials in .profiles, but there is no fix.

    Is there a fix? What am I doing wrong? Is this bug back?

    Thanks

    bug fixed 
    opened by idiotabroad 71
  • FileNotFoundError


    Think I'm having an issue with setting the metadata

    Scraping [text]. Should take less than a minute.
    2020-11-09 22:21:34,452 ERROR errors [Errno 2] No such file or directory: 'G:\\Scripts\\OnlyFans\\.sites\\OnlyFans-new2\\evenink_cosplay\\Archived\\Images\\24346375 (20-05-2020)\\2880x2160_6598ac9f823fcd75e2f3a40b32aee332.jpg'
    Traceback (most recent call last):
      File "C:\Python\Python39\lib\shutil.py", line 803, in move
        os.rename(src, real_dst)
    FileNotFoundError: [WinError 3] The system cannot find the path specified: 'G:\\Scripts\\OnlyFans\\.sites\\OnlyFans-new2\\evenink_cosplay\\Archived\\Posts\\Images\\24346375 (20-05-2020)\\2880x2160_6598ac9f823fcd75e2f3a40b32aee332.jpg' -> 'G:\\Scripts\\OnlyFans\\.sites\\OnlyFans-new2\\evenink_cosplay\\Archived\\Images\\24346375 (20-05-2020)\\2880x2160_6598ac9f823fcd75e2f3a40b32aee332.jpg'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "G:\Scripts\OnlyFans\datascraper\main_datascraper.py", line 120, in start_datascraper
        x = main_helper.process_names(
      File "G:\Scripts\OnlyFans\helpers\main_helper.py", line 505, in process_names
        result = module.start_datascraper(
      File "G:\Scripts\OnlyFans\modules\onlyfans.py", line 128, in start_datascraper
        results = prepare_scraper(
      File "G:\Scripts\OnlyFans\modules\onlyfans.py", line 542, in prepare_scraper
        main_helper.export_archive(
      File "G:\Scripts\OnlyFans\helpers\main_helper.py", line 195, in export_archive
        datas2 = ofrenamer.start(archive_path, json_settings)
      File "G:\Scripts\OnlyFans\extras\OFRenamer\start.py", line 144, in start
        metadata.valid = fix_metadata(
      File "G:\Scripts\OnlyFans\extras\OFRenamer\start.py", line 122, in fix_metadata
        old_folders = pool.starmap(start, product(
      File "C:\Python\Python39\lib\multiprocessing\pool.py", line 372, in starmap
        return self._map_async(func, iterable, starmapstar, chunksize).get()
      File "C:\Python\Python39\lib\multiprocessing\pool.py", line 771, in get
        raise self._value
      File "C:\Python\Python39\lib\multiprocessing\pool.py", line 125, in worker
        result = (True, func(*args, **kwds))
      File "C:\Python\Python39\lib\multiprocessing\pool.py", line 51, in starmapstar
        return list(itertools.starmap(args[0], args[1]))
      File "G:\Scripts\OnlyFans\extras\OFRenamer\start.py", line 90, in start
        filepath, old_filepath = update(filepath)
      File "G:\Scripts\OnlyFans\extras\OFRenamer\start.py", line 87, in update
        shutil.move(filepath, new_format)
      File "C:\Python\Python39\lib\shutil.py", line 817, in move
        copy_function(src, real_dst)
      File "C:\Python\Python39\lib\shutil.py", line 432, in copy2
        copyfile(src, dst, follow_symlinks=follow_symlinks)
      File "C:\Python\Python39\lib\shutil.py", line 261, in copyfile
        with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
    FileNotFoundError: [Errno 2] No such file or directory: 'G:\\Scripts\\OnlyFans\\.sites\\OnlyFans-new2\\evenink_cosplay\\Archived\\Images\\24346375 (20-05-2020)\\2880x2160_6598ac9f823fcd75e2f3a40b32aee332.jpg'
    
    
    bug fixed 
    opened by valdearg 65
  • KeyError: 'medias'


    Hi there, I'm receiving the following error since the latest commit:

    Type: Profile 0.00B [00:00, ?B/s]
    Type: Stories
    Scraping [photo]. Should take less than a minute.
    Scraping [photo]. Should take less than a minute.
    Traceback (most recent call last):
      File "G:\Photo_OF_Scraper\OnlyFans\start_ofd.py", line 43, in <module>
        apis = main_datascraper.start_datascraper(json_config, site_name_lower)
      File "G:\Photo_OF_Scraper\OnlyFans\datascraper\main_datascraper.py", line 98, in start_datascraper
        names = main_helper.process_names(
      File "G:\Photo_OF_Scraper\OnlyFans\helpers\main_helper.py", line 625, in process_names
        result = module.start_datascraper(
      File "G:\Photo_OF_Scraper\OnlyFans\modules\onlyfans.py", line 147, in start_datascraper
        results = prepare_scraper(
      File "G:\Photo_OF_Scraper\OnlyFans\modules\onlyfans.py", line 704, in prepare_scraper
        unrefined_set = pool.starmap(media_scraper, product(
      File "C:\Users\oehy2\AppData\Local\Programs\Python\Python39-32\lib\multiprocessing\pool.py", line 372, in starmap
        return self._map_async(func, iterable, starmapstar, chunksize).get()
      File "C:\Users\oehy2\AppData\Local\Programs\Python\Python39-32\lib\multiprocessing\pool.py", line 771, in get
        raise self._value
      File "C:\Users\oehy2\AppData\Local\Programs\Python\Python39-32\lib\multiprocessing\pool.py", line 125, in worker
        result = (True, func(*args, **kwds))
      File "C:\Users\oehy2\AppData\Local\Programs\Python\Python39-32\lib\multiprocessing\pool.py", line 51, in starmapstar
        return list(itertools.starmap(args[0], args[1]))
      File "G:\Photo_OF_Scraper\OnlyFans\modules\onlyfans.py", line 1069, in media_scraper
        found_medias = [x for x in post["medias"]
    KeyError: 'medias'

    patching fixed 
    opened by Kurosaki22 55
  • renamer script not working


    I'm on version 5.1, and when I try to download a model's page it shows me this error:

    2020-07-01 12:30:34,711 ERROR errors [WinError 3] The system cannot find the path specified: 'D:\\Downloads\\.tools\\OnlyFans\\.sites\\OnlyFans\\harleymichelle\\Posts\\Free\\Images'
    Traceback (most recent call last):
      File "D:\Downloads\.tools\OnlyFans\datascraper\main_datascraper.py", line 186, in start_datascraper
        result = x.start_datascraper(
      File "D:\Downloads\.tools\OnlyFans\modules\onlyfans.py", line 92, in start_datascraper
        results = prepare_scraper(
      File "D:\Downloads\.tools\OnlyFans\modules\onlyfans.py", line 512, in prepare_scraper
        export_archive(metadata_set, archive_directory, json_settings)
      File "D:\Downloads\.tools\OnlyFans\helpers\main_helper.py", line 101, in export_archive
        datas2 = ofrenamer.start(archive_path, json_settings)
      File "D:\Downloads\.tools\OnlyFans\extras\OFRenamer\start.py", line 79, in start
        metadata.valid = fix_metadata(
      File "D:\Downloads\.tools\OnlyFans\extras\OFRenamer\start.py", line 64, in fix_metadata
        files = os.listdir(folder)
    FileNotFoundError: [WinError 3] The system cannot find the path specified: 'D:\\Downloads\\.tools\\OnlyFans\\.sites\\OnlyFans\\harleymichelle\\Posts\\Free\\Images'
    

    That is with "sort_free_paid_posts": false. If I set it to true, it simply re-downloads everything from scratch.

    I also tried to run the renamer script directly and I get this error message:

    Traceback (most recent call last):
      File "start.py", line 148, in <module>
        start(metadata_filepath, json_settings)
      File "start.py", line 79, in start
        metadata.valid = fix_metadata(
      File "start.py", line 64, in fix_metadata
        files = os.listdir(folder)
    FileNotFoundError: [WinError 3] The system cannot find the path specified: 'D:\\Downloads\\.tools\\OnlyFans\\.sites\\OnlyFans\\harleymichelle\\Posts\\Free\\Images'
    

    I had downloaded all her content before upgrading to the new version, so nothing is sorted into free or paid folders (p.s. the readme says that if I set "sort_free_paid_posts": false, it will be incompatible ... how so/incompatible with what?)

    Here is the filename format I was using previously:

    "directory": "./downloads",
     "file_name_format": "{username}/{date}_~_{text}_~_{file_name}.{ext}",
    

    And per suggestions I had seen, my new filename format:

    "download_path": "./{site_name}",
    "file_name_format": "{username}/{date}_{post_id}_{media_id}~~~{file_name}.{ext}",
    

    As an aside, while I will keep {text} out of the file-naming going forward, how do you all add any kind of context to the files so that you know what each file is/about? (b/g scenes, g/g scenes, solo scenes, etc)

    bug patching 
    opened by docholllidae 52
  • Subscriptions Not Showing


    Hello,

    Everything seems to connect properly and run smoothly up to AUTH, but I get these lines after running the PY:

    Scraping Paid Content
    Scraping Subscriptions
    There's nothing to scrape.
    Archive Completed in 0.0 Minutes
    Pausing scraper for 0 seconds.

    I have a few subscriptions active.

    If you need any help on the documentation I can support you on that side. Love doing it!

    opened by pcormier01 44
  • FileNotFound Error


    Updated to the latest commit today, spammed with thousands of errors like the below:

    Traceback (most recent call last):
      File "C:\Python\Python39\lib\shutil.py", line 806, in move
        os.rename(src, real_dst)
    FileNotFoundError: [WinError 3] The system cannot find the path specified: 'G:\\Scripts\\OnlyFans\\.sites\\OnlyFans-new2\\mikomihokina\\Archived\\Posts\\Videos\\22966993 (11-05-2020)\\5eb8a51f03f2f558973f7_source.mp4' -> 'G:\\Scripts\\OnlyFans\\.sites\\OnlyFans-new2\\mikomihokina\\Posts\\Videos\\22966993 (11-05-2020)\\5eb8a51f03f2f558973f7_source.mp4'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "G:\Scripts\OnlyFans\extras\OFRenamer\start.py", line 82, in fix_directories
        moved = shutil.move(old_filepath, new_filepath)
      File "C:\Python\Python39\lib\shutil.py", line 820, in move
        copy_function(src, real_dst)
      File "C:\Python\Python39\lib\shutil.py", line 435, in copy2
        copyfile(src, dst, follow_symlinks=follow_symlinks)
      File "C:\Python\Python39\lib\shutil.py", line 264, in copyfile
        with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
    FileNotFoundError: [Errno 2] No such file or directory: 'G:\\Scripts\\OnlyFans\\.sites\\OnlyFans-new2\\mikomihokina\\Posts\\Videos\\22966993 (11-05-2020)\\5eb8a51f03f2f558973f7_source.mp4'
    Traceback (most recent call last):
      File "C:\Python\Python39\lib\shutil.py", line 806, in move
        os.rename(src, real_dst)
    FileNotFoundError: [WinError 3] The system cannot find the path specified: 'G:\\Scripts\\OnlyFans\\.sites\\OnlyFans-new2\\mikomihokina\\Archived\\Posts\\Images\\51744068 (23-09-2020)\\1080x1440_fc4762ef6c62c461ce2734cb205dcc91.jpg' -> 'G:\\Scripts\\OnlyFans\\.sites\\OnlyFans-new2\\mikomihokina\\Posts\\Images\\51744068 (23-09-2020)\\1080x1440_fc4762ef6c62c461ce2734cb205dcc91.jpg'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "G:\Scripts\OnlyFans\extras\OFRenamer\start.py", line 82, in fix_directories
        moved = shutil.move(old_filepath, new_filepath)
      File "C:\Python\Python39\lib\shutil.py", line 820, in move
        copy_function(src, real_dst)
      File "C:\Python\Python39\lib\shutil.py", line 435, in copy2
        copyfile(src, dst, follow_symlinks=follow_symlinks)
      File "C:\Python\Python39\lib\shutil.py", line 264, in copyfile
        with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
    FileNotFoundError: [Errno 2] No such file or directory: 'G:\\Scripts\\OnlyFans\\.sites\\OnlyFans-new2\\mikomihokina\\Posts\\Images\\51744068 (23-09-2020)\\1080x1440_fc4762ef6c62c461ce2734cb205dcc91.jpg'
    

    Ran the scraper yesterday and no errors.

    bug patching fixed 
    opened by valdearg 37
  • file path wont leave .sites folder


    So the following path (line 37):

    (screenshot of the configured path format)

    returns this path:

    (screenshot of the resulting path)

    instead of the intended path:

    (screenshot of the intended path)

    The file path had worked as intended before the 3.6 update. What changed?

    opened by James-MF-Thomas 33
  • no media from messages is being downloaded


    A few days ago, I downloaded a new version of your program when the login process was changed and I was no longer able to log in (cheers for updating your program so quickly). But now I am unable to download media from messages. I am unsure whether the media does not get scraped or whether there's a problem downloading it, but it seems like it's not getting scraped.

    fixed 
    opened by LootScooper 33
  • Auth Error


    Not sure if I have edited the json file correctly or not, but I keep getting this error; any help would be appreciated.

    Traceback (most recent call last):
      File "onlyfans.py", line 14, in <module>
        j_directory = json_data['directory']+"/Users/"
    KeyError: 'directory'

    bug fixed 
    opened by elwoodflame 33
  • Only downloading avatars and headers


    Just started using this today and for some reason it is only downloading the avatars and headers of the models' content. All other folders are empty. Does anyone know how to fix this? Or is this project just broken rn?

    opened by 4nx13ty 1
  • Python 3.9 and 3.11 issues


    Last night the scraper started crashing on me. I downloaded a fresh copy and started getting a message to use Python version 3.9. I uninstalled version 3.11, tried running the scraper again, and was given this message:

    Traceback (most recent call last):
      File "C:\Users\Admin\Downloads\Compressed\OnlyFans-master\start_ofd.py", line 12, in <module>
        main_test.check_config()
      File "C:\Users\Admin\Downloads\Compressed\OnlyFans-master\tests\main_test.py", line 23, in check_config
        import helpers.main_helper as main_helper
      File "C:\Users\Admin\Downloads\Compressed\OnlyFans-master\helpers\main_helper.py", line 1, in <module>
        from database.databases.user_data.models.api_table import api_table
      File "C:\Users\Admin\Downloads\Compressed\OnlyFans-master\database\databases\user_data\models\api_table.py", line 6, in <module>
        import sqlalchemy
    ModuleNotFoundError: No module named 'sqlalchemy'

    Since then I've gone back and forth installing different versions of Python and trying my best to troubleshoot, but this is all pretty new to me, so I think I've gotten as far as I can on my own. Does anybody have any ideas about what I could do to fix this?

    opened by frustra1 0
  • Onlyfans scraper doesn't work


    After checking all the model names, retrieving data, and showing the option to choose, the create_highlight class fails. -> Python 3.10.8

    What could be happening?

    opened by X4oX1 0
  • I am in need of some assistance


    Let me start by saying I know nothing about Python or coding.

    I have been doing my best to go through the steps and I am trying to run the "start_ofd.py" file, but when I double-click on it I just get a flickering window. So I attempted to run it in Command Prompt, which did work, and that brought me to the following information:

    C:\Users\spudl\Downloads\OnlyFans-master\OnlyFans-master>python start_ofd.py
    Traceback (most recent call last):
      File "C:\Users\spudl\Downloads\OnlyFans-master\OnlyFans-master\start_ofd.py", line 9, in <module>
        from helpers.main_helper import OptionsFormat
      File "C:\Users\spudl\Downloads\OnlyFans-master\OnlyFans-master\helpers\main_helper.py", line 21, in <module>
        import classes.make_settings as make_settings
      File "C:\Users\spudl\Downloads\OnlyFans-master\OnlyFans-master\classes\make_settings.py", line 6, in <module>
        from yarl import URL
    ModuleNotFoundError: No module named 'yarl'

    I then looked up how to fix this particular issue of "ModuleNotFoundError: No module named 'yarl'", then opened up PowerShell and ran the following command:

    PS C:\Users\spudl\Downloads\OnlyFans-master\OnlyFans-master> python -m pip install yarl
    Collecting yarl
      Downloading yarl-1.8.2-cp310-cp310-win_amd64.whl (56 kB)
        ---------------------------------------- 56.1/56.1 kB 3.1 MB/s eta 0:00:00
    Requirement already satisfied: idna>=2.0 in c:\users\spudl\appdata\local\programs\python\python310\lib\site-packages (from yarl) (3.4)
    Collecting multidict>=4.0
      Downloading multidict-6.0.3-cp310-cp310-win_amd64.whl (28 kB)
    Installing collected packages: multidict, yarl
    Successfully installed multidict-6.0.3 yarl-1.8.2

    [notice] A new release of pip available: 22.3 -> 22.3.1
    [notice] To update, run: python.exe -m pip install --upgrade pip

    I did perform the upgrade and ran python start_ofd.py again, but it just keeps coming back with different file errors; this is currently where I'm at:

    C:\Users\spudl\Downloads\OnlyFans-master\OnlyFans-master>python start_ofd.py
    Traceback (most recent call last):
      File "C:\Users\spudl\Downloads\OnlyFans-master\OnlyFans-master\start_ofd.py", line 9, in <module>
        from helpers.main_helper import OptionsFormat
      File "C:\Users\spudl\Downloads\OnlyFans-master\OnlyFans-master\helpers\main_helper.py", line 35, in <module>
        from apis.fansly import fansly as Fansly
      File "C:\Users\spudl\Downloads\OnlyFans-master\OnlyFans-master\apis\fansly\fansly.py", line 3, in <module>
        from apis.api_streamliner import StreamlinedAPI
      File "C:\Users\spudl\Downloads\OnlyFans-master\OnlyFans-master\apis\api_streamliner.py", line 6, in <module>
        import apis.fansly.classes as fansly_classes
      File "C:\Users\spudl\Downloads\OnlyFans-master\OnlyFans-master\apis\fansly\classes\__init__.py", line 1, in <module>
        from apis.fansly.classes import auth_model, extras, user_model
      File "C:\Users\spudl\Downloads\OnlyFans-master\OnlyFans-master\apis\fansly\classes\auth_model.py", line 26, in <module>
        from dateutil.relativedelta import relativedelta
    ModuleNotFoundError: No module named 'dateutil'

    I have no idea where to go from here. Bear in mind I am using Python version 3.10.1, as the latest version was not working with the latest 7.4.1 release on here. Any help would be greatly appreciated. Thank you in advance.

    opened by spudly1987 0
  • THE SCREEN CLOSES - ERRORS IN LINES 66, 44, 646, 52, 131, 55, 963


    Hi,

    I've been having this error for several days: the screen closes itself and I can't download anything. It closes at the last moment, when it starts downloading. The error is this one:

    (screenshot of the error output)

    The errors are in lines 66, 44, 646, 52, 131, 55, 963.

    Could anybody help me?

    Thanks.

    opened by djmarioka 0
  • Support for Python 3.10


    In tests\main_test.py there's a function that checks the Python version. Sadly, there's logic in it that converts the version into a float:

    PS C:\Users\Administrator\Downloads\Pan\OF\OnlyFans\tests> python
    Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import sys
    >>> version_info = sys.version_info
    >>> version_info
    sys.version_info(major=3, minor=10, micro=6, releaselevel='final', serial=0)
    >>> python_version = f"{version_info.major}.{version_info.minor}"
    >>> python_version
    '3.10'
    >>> python_version = float(python_version)
    >>> python_version
    3.1
    >>>
    

    That way, it creates a warning:

    Execute the script with Python 3.9
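    Comparing sys.version_info tuples instead of floats would avoid the 3.10 -> 3.1 pitfall; a sketch of a possible fix (not the project's actual code):

    import sys

    # Tuple comparison handles 3.10 correctly, unlike float("3.10") == 3.1.
    if sys.version_info[:2] != (3, 9):
        print("Execute the script with Python 3.9")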

    opened by edayasaki 0
Releases(v7.4.1)
  • v7.4.1(Jul 20, 2021)

    Fixed 2FA | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/343db2019837187b5123925d094d0899ac9f3c2e
    Script supports multiple blacklists | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/f2472c6c58a1a69ddf658b7ea0f8bb8f56afe3a0

    Source code(tar.gz)
    Source code(zip)
  • v7.4(Jun 19, 2021)

    Added updater script | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/5db4e167bf8fa89628bacade56ee421b644b7850
    Fixed a bunch of things | https://github.com/DIGITALCRIMINAL/OnlyFans/compare/v7.3...master

    The authentication process has been changed. You just have to copy the cookie and user-agent values. No more doing it one by one.

    You'll probably need to upgrade your requirements. The new command to upgrade requirements is in the readme.

    Source code(tar.gz)
    Source code(zip)
  • v7.3(Jun 13, 2021)

    Fixed archived content | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/5f865962dfa3d87af69602287757cad2e4ec0d71
    Fixed scraping paid content | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/d2732859d24931f34cf833e6ef6fc73b7a3cd1b1
    Fixed multiple infinite loops | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/afe026969f11aa7dd5fdaabb261f39c5baa99d29 | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/cd72afd08159d421765f8d23678c08f54b0faf43
    Fixed script not downloading profile headers | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/040af16fd75bc9da8d7ce3df5a19436adb66f6ae
    Fixed custom filenames not being downloaded (OFRenamer Fix) | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/d5adabcd5c05f0e23cef104e8301a0ae03e675b6
    Fixed script not setting correct creation date on files | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/599138bbb0bcef2beb6aaa352ca3739936cf465d

    Script now uses asyncio | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/f49a5d6e9a9633148736db3e19f037d26a829a49
    Script is now much faster for creators scraping their own profile | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/660ec5cb881848848f9051f2284414d47d9ac72e

    I fixed a bunch of other stuff | https://github.com/DIGITALCRIMINAL/OnlyFans/compare/v7.2...master

    Known Issues: Some OLD json metadata files are broken


    I'm sure you're tired of downloading the latest commit every time the script breaks. I'll be adding an "upgrader script" soon, which will upgrade to the latest commit.

    Update script is now implemented, download from master if you want to use it.

    Source code(tar.gz)
    Source code(zip)
  • v7.2(May 16, 2021)

    Reinstall the requirements

    Not Working:

    Download this version instead https://github.com/DIGITALCRIMINAL/OnlyFans/archive/f49a5d6e9a9633148736db3e19f037d26a829a49.zip

    Source code(tar.gz)
    Source code(zip)
  • v7.1(May 5, 2021)

    The script will now automatically fetch header rules on every API request, so there's no need for you to keep coming back to download every new commit once you get session locked.

    This doesn't mean you won't get session locked. When OF updates their algorithm, we have to figure it out and create the rules manually (soon to be automatically); once we do, the script will continue to work without you having to download a new commit.

    No more download fatigue.

    Release no longer works, use the latest commit.

    Source code(tar.gz)
    Source code(zip)
  • v7.01(May 4, 2021)

    This release isn't working. Download the latest commit instead: https://github.com/DIGITALCRIMINAL/OnlyFans/archive/refs/heads/master.zip

    Source code(tar.gz)
    Source code(zip)
  • v7(May 2, 2021)

    Fixed signed (auth) requests | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/9e15ee17a3a4b808183660dbe51ebeffd0090454
    Fixed missing subscriptions and posts | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/ba1c41d83894ebc88a366ba93d680530705dc28b
    Fixed the issue where multiple authentications wouldn't work | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/1d212a837e322e2f200d21ba2a6313e610121a68
    Added ability to stop the script from formatting media and deleting empty directories | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/1d212a837e322e2f200d21ba2a6313e610121a68

    I'm sorry that I missed v6.9, maybe we'll see v69.

    Source code(tar.gz)
    Source code(zip)
  • v6.4.6(Apr 23, 2021)

    Added ability to turn off OFRenamer. This stops the script from renaming files. | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/1fd0ec6ecd7d81f7d25bb63385f610b6452ecca3
    Script now handles 404'ed auths. (No longer produces "There's nothing to scrape") | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/3dd1b0b3d8e995c5f438b13186e5cca408c9699c

    THIS VERSION IS BROKEN, EVERYTHING BEFORE V7.0 (NOT RELEASED YET) WILL NOT WORK.

    Download the latest commit here: https://github.com/DIGITALCRIMINAL/OnlyFans/archive/refs/heads/master.zip

    Source code(tar.gz)
    Source code(zip)
  • v6.4.5(Apr 16, 2021)

    Fixed AttributeError: 'dict' object has no attribute 'name' error | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/87237e29835b9bb7e25349e81fd4e9eabbacdf17
    Removed deepdiff and VS Tools as a requirement | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/62e9a7eb6e68dcafe3fcf07fe20221d7c14c0cf5

    Source code(tar.gz)
    Source code(zip)
  • v.6.4.4(Mar 30, 2021)

    Updated requests to stop users running into the "Firefox Profile" error | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/6b51094c84a28a0fe110a3af025ccf737785fc3b

    v6.4.3 is essentially the same as this release without the single update above.

    Source code(tar.gz)
    Source code(zip)
  • v6.4.3(Mar 17, 2021)

    Changed how "Missing Posts" works | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/d394feb3ddda7b8bbcb1e9797343c2543e2aeb3d
    Fixed 2FA | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/bc246596e990050a2a3c776d8dbca82718e1ad80
    Updated SQLAlchemy requirements | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/5b1da56fb62b6aabd6b331d1c633c8829de727e9
    Auth via browser if email and password are filled | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/60f6a5b36cd75d4469bbaaf44d062cdf48cad008
    You must download geckodriver to auth via browser: https://github.com/mozilla/geckodriver/releases/tag/v0.27.0

    Source code(tar.gz)
    Source code(zip)
  • v6.4.2(Mar 7, 2021)

    Surely, the filename error has to be fixed now...

    Added {first_letter} path format | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/04882f688cd95f2119ead70cf48d9764250e5a1d

    Potentially fixed (well, bypassed) filename for the 5th(?) time | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/50b052920492609069e353f8f503037f4ba63b36

    Source code(tar.gz)
    Source code(zip)
  • v6.4.1(Feb 16, 2021)

    Fixed "valid" error | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/945c45e3c7359a01a9ae09c83a8f815c978f5d56 Fixed "linked" error | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/3f3055080ee6d94089517d48b7b6488d9927e759 Potential fix for "media" key | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/3c7f6b925490a2625a3140fb20d8b1a82fd6de8a

    Source code(tar.gz)
    Source code(zip)
  • v6.4(Feb 15, 2021)

    Fixes:
    Fixed OFRenamer 67 times | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/5bafbd418b67f10829642262f0c8228d12f58479
    Fixed FileNotFound Error | https://github.com/DIGITALCRIMINAL/OnlyFans/commit/b7a273d2a6b660d26c3c746f457d701f0da095bd

    Features: Added a download progress bar (yay). Improved session handling for proxies that rotate IP every N seconds.

    I fixed a lot more stuff, just look at the commits. https://github.com/DIGITALCRIMINAL/OnlyFans/compare/v6.3.2...master

    Source code(tar.gz)
    Source code(zip)
  • v6.3.2(Feb 4, 2021)

  • v6.3.1(Feb 4, 2021)

  • v6.3(Feb 4, 2021)

    General: Any paid post that has image previews will be marked as "Free". https://github.com/DIGITALCRIMINAL/OnlyFans/issues/772

    Fixes:
    Incorrect favorite count
    OFRenamer (detecting dupes)

    Database: Added preview column to the media table https://github.com/DIGITALCRIMINAL/OnlyFans/commit/ed448d9edb8c54cd693c6f5ba97e794b2d280aab

    Source code(tar.gz)
    Source code(zip)
  • v6.2.5(Feb 1, 2021)

    General: Reduced docker size

    Features: You can now store metadata in a separate directory. Metadata storage rollover isn't supported yet, so keep all metadata in a single directory. When I do add it, the script will search multiple directories that contain the unique identifier ({username}) for the OLDEST metadata.

    Source code(tar.gz)
    Source code(zip)
  • v6.2.4(Jan 25, 2021)

    Features: Added ability to choose your video quality (Config).

    Fixes:
    Fixed blacklist
    Fixed Error in Subscriptions
    Fixed .DS_store error
    Fixed AttributeError: 'str' object has no attribute 'get'

    General: Script no longer changes the auth.active to False

    Source code(tar.gz)
    Source code(zip)
  • v6.2.3(Jan 11, 2021)

  • v6.2.2(Jan 10, 2021)

  • v6.2.1(Jan 10, 2021)

  • v6.2(Jan 9, 2021)

  • v6.1.2(Dec 30, 2020)

  • v6.1.1(Dec 22, 2020)

  • v6.1(Dec 22, 2020)

  • v6.0(Dec 14, 2020)

  • v5.9.1(Dec 3, 2020)

  • v5.9(Nov 30, 2020)

  • v5.8.1(Nov 25, 2020)

Owner
CRIMINAL
I have your e-girl's SSN
薅薅乐 - JD test script

薅薅乐 Installation: using Docker, one-command install: docker run -d --name jd classmatelin/hhl:latest. Usage: enter the container: docker exec -it jd bash; get JD_COOKIES: python get_jd_cookies.py,

ClassmateLin 575 Dec 28, 2022
Google Maps crawler using Selenium

Google Maps Crawler using Selenium Built as part of the Antifragile Dev Project Selenium crawler that browses Google Maps as a regular user and stores

Guilherme Latrova 46 Dec 16, 2022
Scrapy-based cyber security news finder

Cyber-Security-News-Scraper Scrapy-based cyber security news finder Goal To keep up to date on the constant barrage of information within the field of

2 Nov 01, 2021
Pelican plugin that adds site search capability

Search: A Plugin for Pelican This plugin generates an index for searching content on a Pelican-powered site. Why would you want this? Static sites are

22 Nov 21, 2022
ChromiumJniGenerator - Jni Generator module extracted from Chromium project

ChromiumJniGenerator - Jni Generator module extracted from Chromium project

allenxuan 4 Jun 12, 2022
Dude is a very simple framework for writing web scrapers using Python decorators

Dude is a very simple framework for writing web scrapers using Python decorators. The design, inspired by Flask, was to easily build a web scraper in just a few lines of code. Dude has an easy-to-lea

Ronie Martinez 326 Dec 15, 2022
A simple Discord scraper for discord bots

A simple Discord scraper for Discord bots. That includes sending a guild's member ids to a file, a mass inviter for joining servers your bot is in, and fetching all the servers of the bot (w/MemberCoun

3zg 1 Jan 06, 2022
A web scraping pipeline project that retrieves TV and movie data from two sources, then transforms and stores data in a MySQL database.

New to Streaming Scraper An in-progress web scraping project built with Python, R, and SQL. The scraped data are movie and TV show information. The go

Charles Dungy 1 Mar 28, 2022
Simply scrape / download all the media from a fansly account.

Simply scrape / download all the media from a fansly account. Providing updates as long as it's continuously gaining popularity, so hit the ⭐ button!

Mika C. 334 Jan 01, 2023
A tool to easily scrape youtube data using the Google API

YouTube data scraper To easily scrape any data from the youtube homepage, a youtube channel/user, search results, playlists, and a single video itself

7 Dec 03, 2022
FilmMikirAPI - A simple rest-api which is used for scrapping on the Kincir website using the Python and Flask package

FilmMikirAPI - A simple rest-api which is used for scrapping on the Kincir website using the Python and Flask package

UserGhost411 1 Nov 17, 2022
🥫 The simple, fast, and modern web scraping library

About gazpacho is a simple, fast, and modern web scraping library. The library is stable, actively maintained, and installed with zero dependencies. I

Max Humber 692 Dec 22, 2022
A Python-oriented tool to scrape WhatsApp group links using Google dorks; it scrapes WhatsApp group links from Google results and gives working links.

WaGpScraper A Python Oriented tool to Scrap WhatsApp Group Link using Google Dork it Scraps Whatsapp Group Links From Google Results And Gives Working

Muhammed Rizad 27 Dec 18, 2022
Simple web scraper bot to scrape webpages using Requests, html5lib and Beautifulsoup.

WebScrapperRoBot Simple web scraper bot to scrape webpages using Requests, html5lib and Beautifulsoup. Mark your Star ⭐ ⭐ What is Web Scraping ? Web s

Nuhman Pk 53 Dec 21, 2022
VG-Scraper is a Python program using the BeautifulSoup module, which allows anyone to scrape content off a website. This program lets you put in a number through an input, where each number corresponds to one news article.

VG-Scraper VG-Scraper is a convenient program where you can find all the news articles instead of finding one yourself. Installing [Linux] Open a term

3 Feb 13, 2022
A Web Scraper built with beautiful soup, that fetches udemy course information. Get udemy course information and convert it to json, csv or xml file

Udemy Scraper A Web Scraper built with beautiful soup, that fetches udemy course information. Installation Virtual Environment Firstly, it is recommen

Aditya Gupta 15 May 17, 2022
👁️ Tool for Data Extraction and Web Requests.

httpmapper 👁️ Project • Technologies • Installation • How it works • License Project 🚧 For educational purposes. This is a project that I developed,

15 Dec 05, 2021
A repository with scraping code and soccer dataset from understat.com.

UNDERSTAT - SHOTS DATASET As many people interested in soccer analytics know, Understat is an amazing source of information. They provide Expected Goa

douglasbc 48 Jan 03, 2023
OSTA web scraper, for checking the status of school buses in Ottawa

OSTA-La-Vista OSTA web scraper, for checking the status of school buses in Ottawa. Getting Started Using a Raspberry Pi, download Python 3, and option

1 Jan 28, 2022
A crawler of doubamovie

豆瓣电影 A crawler of doubamovie. A small, entry-level application of the Scrapy framework, crawling data for the top 1000 films on the Douban movie ranking. In spider.py, the start_requests method is a Scrapy method, and we override it: def start_requests(self):

Cats without dried fish 1 Oct 05, 2021