A Simple, Lightweight, Statically-Typed Python3 API Wrapper for GogoAnime.

Overview

AniKimi API

A Simple, Lightweight, Statically-Typed Python3 API wrapper for GogoAnime.
The v2 of gogoanimeapi (deprecated).
Made with JavaScript and Python3.

Features of AniKimi

  • Custom URL changing option.
  • Statically typed: no more annoying JSON responses.
  • Autocomplete supported by most IDEs.
  • Complete solution.
  • Faster responses.
  • Lower CPU consumption.

Installing

Using PyPI

$ pip3 install anikimiapi

Getting Started

Pre-Requisites

  • Getting Required Tokens

    • Visit the GogoAnime website.
    • Log in or sign up using your email or Google account.
    • Add the browser extension named Get cookies.txt.
    • On the GogoAnime website, right-click and select "Get cookies.txt".
    • A .txt file will be downloaded.
    • In the .txt file, find the entries named "gogoanime" and "auth".
    • Copy the respective tokens to the right of those names (see the sketch after this list for reading them programmatically).
    • Keep them safe, since they are your private credentials.
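
If you prefer to read the tokens out of the downloaded file programmatically, the sketch below uses Python's standard http.cookiejar to parse the Netscape-format cookies.txt. The file name cookies.txt is an assumption; point it at whatever file your browser actually saved.

from http.cookiejar import MozillaCookieJar

# "cookies.txt" is an assumed file name; adjust it to the file the
# Get cookies.txt extension downloaded for you.
jar = MozillaCookieJar("cookies.txt")
jar.load(ignore_discard=True, ignore_expires=True)

# Keep only the two cookies the API needs.
tokens = {cookie.name: cookie.value for cookie in jar
          if cookie.name in ("gogoanime", "auth")}

print("gogoanime token:", tokens.get("gogoanime"))
print("auth token:", tokens.get("auth"))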

Diving into the API

Authorize the API

To authorize the API, use the AniKimi class. You can also import it from other files; cross imports are supported. However, all API requests should be made through this class only.

from anikimiapi import AniKimi

# Initialize AniKimi class
anime = AniKimi(
    gogoanime_token="the saved gogoanime token",
    auth_token="the saved auth token",
    host="https://gogoanime.pe/"  
)

Note: If GogoAnime changes their domain, pass the new domain via the host parameter; otherwise, leave it out. This parameter is optional and defaults to https://gogoanime.pe/
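
For example, if the domain ever moves, only the host value needs to change. The domain below is purely hypothetical; substitute whatever domain GogoAnime is currently using.

from anikimiapi import AniKimi

# The host below is a made-up example domain, not a real mirror.
anime = AniKimi(
    gogoanime_token="the saved gogoanime token",
    auth_token="the saved auth token",
    host="https://example-gogoanime-mirror.tld/"
)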

Getting Anime search results

You can search for anime using the search_anime method. It returns the search results as a list of ResultObject, each of which contains two attributes: title and animeid.

from anikimiapi import AniKimi

anime = AniKimi(
    gogoanime_token="the saved gogoanime token",
    auth_token="the saved auth token"
)

# Search Anime
results = anime.search_anime(query="tokikaku kawaii")

for result in results:
    print(result.title)
    print(result.animeid)

Note: If no search results are found, the API will raise a NoSearchResultsError. Make sure to handle it.
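
A minimal sketch of handling that case is shown below. The import path for NoSearchResultsError is an assumption; adjust it to wherever the library actually exposes its exceptions.

from anikimiapi import AniKimi
# Assumed import path for the exception; change it if the library
# exposes NoSearchResultsError somewhere else.
from anikimiapi.error_handlers import NoSearchResultsError

anime = AniKimi(
    gogoanime_token="the saved gogoanime token",
    auth_token="the saved auth token"
)

try:
    results = anime.search_anime(query="a very obscure title")
except NoSearchResultsError:
    print("No results found, try another query.")
else:
    for result in results:
        print(result.title, result.animeid)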

Getting details of a specific Anime

You can get basic information about a specific anime by its animeid using the get_details method. It will return the anime details as a MediaInfoObject.

The MediaInfoObject contains the following attributes:

  • title
  • other_names
  • season
  • year
  • status
  • genres
  • episodes
  • image_url
  • summary

from anikimiapi import AniKimi

anime = AniKimi(
    gogoanime_token="the saved gogoanime token",
    auth_token="the saved auth token"
)

# Get anime Details
details = anime.get_details(animeid="clannad-dub")
print(details.title)
print(details.genres) # And many more...

Note: If an invalid animeid is given, the API will raise an InvalidAnimeIdError. Make sure to handle it.

Getting the Anime Links

You can get the streamable and downloadable links of a specific episode of an anime by its animeid and episode_num using the get_episode_link method. It will return the anime links as a MediaLinksObject.

The MediaLinksObject holds each link only if it is available; otherwise, the corresponding attribute is None. It has the following attributes:

  • link_hdp
  • link_360p
  • link_480p
  • link_720p
  • link_1080p
  • link_streamsb
  • link_xstreamcdn
  • link_streamtape
  • link_mixdrop
  • link_mp4upload
  • link_doodstream

from anikimiapi import AniKimi

anime = AniKimi(
    gogoanime_token="the saved gogoanime token",
    auth_token="the saved auth token"
)

# Getting Anime Links
anime_link = anime.get_episode_link(animeid="clannad-dub", episode_num=3)

print(anime_link.link_hdp)
print(anime_link.link_720p)
print(anime_link.link_streamsb) # And many more...

Note: If an invalid animeid or episode_num is passed, the API will raise an InvalidAnimeIdError. Make sure to handle it.

If the given gogoanime_token and auth_token are invalid, the API will raise an InvalidTokenError. So, be careful with those.
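
Since any of the MediaLinksObject attributes listed above may be None, a small helper like the following can pick the first available direct-link quality. This is a sketch that continues the get_episode_link example above and uses only the attribute names documented in this section.

# Continuing from the get_episode_link example above.
# Preference order for direct links; every attribute may be None.
preferred = ["link_1080p", "link_720p", "link_480p", "link_360p", "link_hdp"]

best_link = next(
    (getattr(anime_link, name) for name in preferred
     if getattr(anime_link, name) is not None),
    None,
)

print(best_link)  # None if no direct link was available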

Getting a List of anime by Genre

You can also get a list of anime by genre using the get_by_genres method. This method will return the results as a list of ResultObject.

Currently, the following genres are supported:

  • action
  • adventure
  • cars
  • comedy
  • dementia
  • demons
  • drama
  • dub
  • ecchi
  • fantasy
  • game
  • harem
  • hentai - Temporarily Unavailable
  • historical
  • horror
  • josei
  • kids
  • magic
  • martial-arts
  • mecha
  • military
  • music
  • mystery
  • parody
  • police
  • psychological
  • romance
  • samurai
  • school
  • sci-fi
  • seinen
  • shoujo
  • shoujo-ai
  • shounen-ai
  • shounen
  • slice-of-life
  • space
  • sports
  • super-power
  • supernatural
  • thriller
  • vampire
  • yaoi
  • yuri

from anikimiapi import AniKimi

anime = AniKimi(
    gogoanime_token="the saved gogoanime token",
    auth_token="the saved auth token"
)

# Getting anime list by genres
gen = anime.get_by_genres(genre_name="romance", page=1)

for result in gen:
    print(result.title)
    print(result.animeid)

Note: If an invalid genre_name or page is passed, the API will raise an InvalidGenreNameError. Make sure to handle it.
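
If you want more than one page of results, a simple loop over the page parameter will do. This is a sketch continuing the example above and assuming page numbering starts at 1, as shown there.

# Collect the first three pages of romance results into one list.
all_results = []
for page in range(1, 4):
    all_results.extend(anime.get_by_genres(genre_name="romance", page=page))

for result in all_results:
    print(result.title, result.animeid)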

Getting List of Airing Anime (v2 API New Feature)

You can get a list of currently airing anime using the get_airing_anime method. This method will return the results as a list of ResultObject.

from anikimiapi import AniKimi

anime = AniKimi(
    gogoanime_token="the saved gogoanime token",
    auth_token="the saved auth token"
)

# Getting Airing Anime List
airing = anime.get_airing_anime(count=15)
for i in airing:
    print(i.title)
    print(i.animeid)

Note: If the value of count exceeds 20, the API will raise an AiringIndexError. So, pass a value less than or equal to 20.
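
A defensive sketch that clamps the requested count to the documented maximum of 20 (the requested value of 30 below is just an example), continuing the example above:

# Clamp the request so it never exceeds the documented limit of 20.
requested = 30
airing = anime.get_airing_anime(count=min(requested, 20))

for result in airing:
    print(result.title, result.animeid)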

Copyrights ©2021 BaraniARR;

Licensed under the GNU GPLv3 License;

Comments
  • [Feature] [TOKEN extraction method] #Add to README.md

    Description

    It's kind of a pain to extract the token using a PC. These are the methods I know of for extracting the token and cookie data:

    Methods

    1. Using Android debug mode. It's a bit of a pain, but see how-do-i-extract-and-view-cookies-from-android-chrome.

    2. Using a third-party Android browser. It's easy and convenient; even a coding novice can do it. App in the Play Store.

    3. Using eruda, a dev console for all browsers. It's easy; you can find the repo here. Just copy the code below:

    javascript:(function () { var script = document.createElement('script'); script.src="//cdn.jsdelivr.net/npm/eruda"; document.body.appendChild(script); script.onload = function () { eruda.init() } })();

    Bookmark it, name it "Inspect element", and then you can use it in any browser.

    You can refer to Stack Overflow for this.

    Keep going on!! I like your work 🍭

    opened by heartlog 2
  • suggestion/enhancement: auth tokens should only be used for returning direct download links

    The download links other than link_hdp, link_1080p, link_720p, link_480p, and link_360p are not direct (i.e., they are just iframe/embed links).

    Those links can be scraped without requiring auth tokens.

    My suggestion is that only the methods returning these direct download links should require the auth tokens. This would make the library more accessible, like the previous version, since all other info could be acquired without instantiating the class with auth tokens.

    @BaraniARR, this will obviously require changing a bunch of things, like creating two different download methods and media-links objects, but it would be worth it, in my opinion.

    opened by ryan-k8 1
  • Pip import error

    Importing the module failed with the error: No module named anikimiapi. Try publishing an update to PyPI.

    Or

    Do you have an __init__.py? Check it and try to correct it. For imports to walk through your directories, every directory must have an __init__.py file.

    opened by heartlog 1
  • got rid of page_num param from get_by_genres method

    I implemented this todo from anikimi support (https://github.com/BaraniARR/anikimiapi/projects/2#card-68131537). It is now possible to get the required number of anime for a genre without meddling with page_num.

    Because of this, the updated method now has another parameter, limit, plus two helper functions inside for better readability and code structure (see below):

    def get_by_genres(self, genre_name, limit=60) -> list:
        """limit (``int``):
            The limit for the number of anime you want from the results. Defaults to 60 (i.e., 3 pages)."""

        def page_anime_scraper(soup_object) -> list:
            """A helper function to scrape anime results from a page source."""
            ...

        def pagination_helper(current_page_source: str, url, limit: int) -> None:
            """A recursive helper function which successively scrapes anime from the following pages
            (if there are any) until the limit is reached."""
            ...
    
    
    opened by ryan-k8 0
  • Always getting 'InvalidAnimeIdError' error

    Trying to get a link to any anime, including using the example code, results in an InvalidAnimeIdError. My code is as follows:

    results = anikimi.search_anime(query=search)
    for i in results:
        animeid = i.animeid

    anime_link = anikimi.get_episode_link_advanced(animeid=animeid, episode_num=1)

    await ctx.send(anime_link.link_720p)
    await ctx.send(anime_link.link_1080p)

    Error is as follows:

    Command raised an exception: InvalidAnimeIdError: Invalid animeid or episode_num given

    This API looks really easy to use, would love to be able to get it working.

    opened by Nova-69 2
  • fixed error/issue: host giving 403 error code on every request

    Context: #11 #12. I have fixed this by adding a user_agent attribute to the constructor of the main class, which is passed as the headers option for every request made to the host. The main gist of the changes ⬇️

    The constructor of the main class:

    def __init__(
            self,
            gogoanime_token: str,
            auth_token: str,
            host: str = "https://gogoanime.pe/",
            user_agent: dict = {'User-Agent': 'Mozilla/5.0'},
    ):
    

    Now every GET request made to the host looks something like this:

    requests.get(animelink, headers=self.user_agent)
    # or
    session.get(url, headers=self.user_agent)
    
    opened by ryan-k8 0
  • host gives 403 error code on every request

    As mentioned in #11, this was simply because the host website has been updated and returns a 403 error code if the user-agent is not set in the request headers. I am making a PR to fix this.

    opened by ryan-k8 0
  • 'NoneType' object has no attribute 'find'

    There seems to be an error in the API:

    Traceback (most recent call last):
      File "/Users/user/Library/Python/3.9/lib/python/site-packages/anikimiapi/anikimi.py", line 218, in get_episode_link_advanced
        source_url = lnk.find("li").a
    AttributeError: 'NoneType' object has no attribute 'find'

    opened by tanner02 2
  • get_download_link not working

    The get_download_link option is not working.

    Every time you try to get the download link, it returns a gogo-cdn.com link that shows a "403: Forbidden" error when opened.

    This error could be on my side, but please see if you can do anything about it. I am using the newest version.

    opened by KrychaTech 1
  • [Enhancement] [Suggestion] Getting the required cookies using simple javascript

    [Just a small friendly enhancement for getting the prerequisites :)]

    After logging in or signing up on the GogoAnime website, the code below can be pasted into the console tab of the developer tools to get the required cookies directly, without needing to install any extension. This could be added to the README as an alternative way to get the cookies. I am a Python dev, though; it's a simple thing, so I thought it would be better to create an issue for this rather than forking and opening a PR.

    // Extract the "gogoanime" and "auth" cookies from document.cookie
    const value = `; ${document.cookie}`;
    const cookies = ['gogoanime', 'auth'];
    for (let cookie in cookies) {
        const parts = value.split(`; ${cookies[cookie]}=`);
        if (parts.length === 2) {
            console.log(`"${cookies[cookie]}": "${parts.pop().split(';').shift()}"`);
        }
    }
    

    Example: (screenshot omitted)

    opened by FireHead90544 1
Releases (v0.1.4-beta)
Owner
Python programmer since my student days. Specialist in Telegram bots and problem solving.