A powerful Annex BUBT, BUBT Soft, and BUBT website scraping script.

Overview

Annex BUBT Scraping Script

I believe this is the first public repository on GitHub that provides free Annex BUBT, BUBT Soft, and BUBT website scraping API scripts. While I was working on my 3rd-year project, my friend Abdullah Xayed wrote a web scraping project for me, and I now maintain it.

Important Note

Some of the API scripts can break BUBT's security system, so I am not sharing those particular scripts, for security reasons. Please do not call any of the demo endpoints provided here in production. The API scripts themselves are already included, so host them on your own web server and use your own deployment for production.

API Response & Type

BUBT API:

| Name | Method | Description | Example |
| --- | --- | --- | --- |
| Student Verify | GET | Verify BUBT students | `/global_file/getData.php?id=?&type=?` |
| Faculty Verify | GET | Verify BUBT faculty | `/global_file/getData.php?id=?&type=?` |
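
For illustration, here is a minimal Python sketch of calling the Student Verify endpoint once you have hosted the script yourself. The `BASE_URL` and the exact `type` value are assumptions, so adjust them to match your own deployment:

```python
import requests

# Assumed base URL of your own deployment of the API scripts (placeholder).
BASE_URL = "https://your-server.example.com"

def verify_student(student_id: str) -> dict:
    """Call the Student Verify endpoint and return the parsed JSON."""
    resp = requests.get(
        f"{BASE_URL}/global_file/getData.php",
        params={"id": student_id, "type": "student"},  # "type" value is an assumption
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = verify_student("17181103084")
    print(data.get("sis_std_name"), data.get("sis_std_prgrm_sn"))
```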

Abdullah Xayed API: (v1)

| Name | Method | Description | Example |
| --- | --- | --- | --- |
| Annex Login | GET | Log in to Annex with a student ID and password | `/bubt/v1/login?id=?&pass=?` |
| Annex Result | GET | Get student results from Annex by session ID | `/bubt/v1/prevCourses?phpsessid=?` |
| Annex Fees | GET | Get student fees from Annex by session ID | `/bubt/v1/fees?phpsessid=?` |
| Annex Routine | GET | Get student routine by student ID (the routine has shifted from Annex to BUBT Soft) | `/bubt/v1/routine?id=?` |
| All Events | GET | Get all events from the BUBT website | `/bubt/v1/allEvent?` |
| Event Details | GET | Get an event's details by its event URL | `/bubt/v1/eventDetails?url=?` |
| All Notices | GET | Get all notices from the BUBT website | `/bubt/v1/allNotice?` |
| Notice Details | GET | Get a notice's details by its notice URL | `/bubt/v1/noticeDetails?url=?` |
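
The Annex endpoints are session-based: `login` returns a `PHPSESSID`, which subsequent calls pass as a query parameter. Below is a minimal sketch of that flow; `BASE_URL` is a placeholder for wherever you host the v1 scripts:

```python
import requests

BASE_URL = "https://your-server.example.com"  # placeholder for your own deployment

def annex_login(student_id: str, password: str) -> str:
    """Log in to Annex and return the PHPSESSID on success."""
    resp = requests.get(
        f"{BASE_URL}/bubt/v1/login",
        params={"id": student_id, "pass": password},
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != "success":
        raise RuntimeError("Annex login failed")
    return body["PHPSESSID"]

def annex_results(phpsessid: str) -> dict:
    """Fetch previous course results using the session ID from login."""
    resp = requests.get(
        f"{BASE_URL}/bubt/v1/prevCourses",
        params={"phpsessid": phpsessid},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    sid = annex_login("17181103084", "your-password")
    print(annex_results(sid)["status"])
```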

Sample JSON Data

BUBT API:

Student Verify:

{
  "sis_std_id": "17181103084",
  "sis_std_name": "Md. Imam Hossain",
  "sis_std_prgrm_sn": "B.Sc. Engg. in CSE",
  "sis_std_prgrm_id": "006",
  "sis_std_intk": "37",
  "sis_std_email": "[email protected]",
  "sis_std_father": "Mahbub Rashid",
  "sis_std_gender": "M",
  "sis_std_LocGuardian": "Mahbub Rashid",
  "sis_std_Bplace": "Vasantek, Dhaka",
  "sis_std_Status": "R",
  "sis_std_blood": "",
  "gazo": "data:image/jpeg;base64,"
}

Faculty Verify:

[
  {
    "EmpId": "18020331033",
    "DemoId": "18020331033",
    "EmpName": "Md. Ahsanul Haque",
    "DOB": "1996-06-21T00:00:00",
    "PermanentAddress": "South Atapara, Bogura Sadar-5800, Bogura",
    "FatherName": "Md. Abdul Awal",
    "ECName": "Md. Abdul Awal",
    "ECNo": "01711936404",
    "ECRelation": "Father",
    "Gender": "Male",
    "DeptName": "Department of Computer Science & Engineering",
    "PosName": "Lecturer",
    "BloodGroup": "A+",
    "StatusId": "1",
    "EmpImage": "data:image/jpeg;base64,"
  }
]

Abdullah Xayed API: (v1)

Annex Login:

{
  "PHPSESSID": "7d1755fe6c32b74d321fe3d3ba69a4ad",
  "status": "success"
}

Annex Result:

{
  "data": [
    {
      "cgpa": "3.22",
      "results": [
        {
          "code": "ENG 101",
          "credit": "3",
          "grade": "B-",
          "title": "English Language-I",
          "type": "Theory"
        }
      ],
      "semester": "Fall, 2017-18",
      "sgpa": "3.22"
    }
  ],
  "status": "success"
}
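
As a sketch, the per-semester records in `data` can be flattened like this; the field names are taken directly from the sample above:

```python
def print_transcript(result_json: dict) -> None:
    """Print each semester's SGPA and course grades from an Annex Result response."""
    for semester in result_json.get("data", []):
        print(f"{semester['semester']}: SGPA {semester['sgpa']} (CGPA {semester['cgpa']})")
        for course in semester.get("results", []):
            print(f"  {course['code']} {course['title']}: {course['grade']} ({course['credit']} cr)")
```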

Annex Fees:

{
  "data": [
    {
      "Demand": "44195",
      "Due": "0",
      "Paid": "44195",
      "Remarks": "Semester Charge+Tuition Fees+Others",
      "Semester": "Fall, 2017-18",
      "Waiver": "0",
      "payments": [
        {
          "Account_Code": "319",
          "Payment_Amount": "15600",
          "Payment_No": "1",
          "Reciept_No": "18888",
          "Waiver": "0"
        },
        {
          "Account_Code": "319",
          "Payment_Amount": "28595",
          "Payment_No": "2",
          "Reciept_No": "43019",
          "Waiver": "0"
        }
      ]
    }
  ],
  "result": {
    "Total_Demand": "384816",
    "Total_Due": "7442",
    "Total_Paid": "353923",
    "Total_Waiver": "23451"
  },
  "status": "success"
}
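
A quick consistency check on a fees response, assuming the string amounts parse as integers as in the sample: each semester's `Paid` should equal the sum of its `payments` (in the sample, 15600 + 28595 = 44195):

```python
def check_fees(fees_json: dict) -> None:
    """Verify that each semester's Paid amount matches the sum of its payments."""
    for sem in fees_json.get("data", []):
        paid = int(sem["Paid"])
        payments_total = sum(int(p["Payment_Amount"]) for p in sem.get("payments", []))
        status = "OK" if paid == payments_total else "MISMATCH"
        print(f"{sem['Semester']}: paid {paid}, payments sum {payments_total} -> {status}")
```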

Annex Routine:

{
  "data": [
    {
      "Building": "",
      "Day": "Saturday",
      "Intake": "",
      "Room_No": "",
      "Schedule": "08:30 AM to 10:00 AM",
      "Section": "",
      "Subject_Code": "",
      "Teacher_Code": ""
    }
  ],
  "status": "success"
}

All Events:

{
  "data": [
    {
      "published_on": "5 Aug 2021",
      "title": "International Conference on Science and Contemporary Technologies (ICSCT) Opened at BUBT",
      "url": "https://www.bubt.edu.bd/home/event_details/200"
    }
  ],
  "status": "success"
}
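
Since `allEvent` returns each event's `url` and `eventDetails` takes that URL as its `url` parameter, the two endpoints chain naturally. A minimal sketch, again with a placeholder `BASE_URL`:

```python
import requests

BASE_URL = "https://your-server.example.com"  # placeholder for your own deployment

def latest_event_details() -> dict:
    """Fetch the event list, then the details of the first event in it."""
    events = requests.get(f"{BASE_URL}/bubt/v1/allEvent", timeout=10).json()
    first_url = events["data"][0]["url"]
    details = requests.get(
        f"{BASE_URL}/bubt/v1/eventDetails",
        params={"url": first_url},
        timeout=10,
    )
    details.raise_for_status()
    return details.json()
```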

All Notices:

{
  "data": [
      {
        "category": "Exam Related",
        "published_on": "8 Oct 2021",
        "title": "Defense Notice",
        "url": "https://www.bubt.edu.bd/home/notice_details/665"
      }
  ],
  "status": "success"
}

Event Details:

{
  "data": {
    "description": "Bangladesh University of Business and Technology (BUBT) organized a virtual Orientation Program for Spring 2021 Students on April 22, 2021....",
    "downloads": [
      {
        "url": ""
      }
    ],
    "images": [
      {
        "url": "https://www.bubt.edu.bd/assets/frontend/media/1619504011BUBT_22_04__2021.jpg"
      }
    ],
    "pubDate": "25 Apr 2021",
    "title": "Virtual Orientation for Spring 2021 Students at BUBT"
  },
  "status": "success"
}

Notice Details:

{
  "data": {
    "description": "Defense Notice\nThis is to notify the intern students that their Online Internship Defense will be held in Google Meet...",
    "downloads": [
      {
        "url": ""
      }
    ],
    "images": [
      {
        "url": ""
      }
    ],
    "pubDate": "8 Oct 2021",
    "title": "Defense Notice"
  },
  "status": "success"
}

🧑 Author

Md. Imam Hossain

You can also follow my GitHub profile to stay updated on my latest projects.

If you like the repo, kindly support it by giving it a star!

Copyright (c) 2020 MD. IMAM HOSSAIN
