A web scraper built with Beautiful Soup that fetches Udemy course information and converts it to a JSON, CSV, or XML file.

Overview


Udemy Scraper


A web scraper built with Beautiful Soup that fetches Udemy course information.

Installation

Virtual Environment

First, it is recommended to install and run this inside a virtual environment. You can do so using the virtualenv package and then activating it.

pip install virtualenv

virtualenv somerandomname

Activating for *nix

source somerandomname/bin/activate

Activating for Windows

somerandomname\Scripts\activate

Package Installation

pip install -r requirements.txt

Chrome setup

Be sure to have Chrome installed along with the matching version of ChromeDriver. A Windows binary is already provided; if you need the Linux binary, you can download it from the ChromeDriver downloads page.
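Wiring the driver binary into Selenium can be sketched as follows. This is a hypothetical illustration assuming Selenium 4's `Service` API, with the binary path as a placeholder; it is not the project's actual bootstrap code:

```python
def make_driver(driver_path="./chromedriver.exe", headless=True):
    """Build a Chrome WebDriver pointed at a local chromedriver binary."""
    # Imported lazily so the sketch can be read without Selenium installed.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.chrome.service import Service

    options = Options()
    if headless:
        options.add_argument("--headless")
    return webdriver.Chrome(service=Service(driver_path), options=options)
```

On Linux, point `driver_path` at the Linux binary instead.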

Approach

Web scraping most sites is fairly easy; however, some sites are not scrape-friendly. Scraping in itself is perfectly legal, but there have been lawsuits over it, and some companies (*cough* Amazon *cough*) consider scraping their website illegal even though they scrape other websites themselves. And then there are sites like Udemy that actively try to prevent people from scraping them.

Using BS4 on its own doesn't return the required results, so I had to drive a browser engine with Selenium to fetch the course information. Initially even that didn't work out, but then I realised the courses were being fetched asynchronously, so I had to add a bit of delay. As a result, fetching the data can be a bit slow initially.
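The delay described above is the "wait until the asynchronous content has arrived" pattern, which Selenium's `WebDriverWait` also implements. A minimal stdlib-only sketch of the idea, with an illustrative fake loader standing in for the browser:

```python
import time

def wait_until(predicate, timeout=10.0, poll=0.25):
    """Poll `predicate` until it returns a truthy value or `timeout` expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Illustration: a value that becomes available "asynchronously".
state = {"html": None}

def fake_async_load():
    state["html"] = "<div class='course-card'>...</div>"

fake_async_load()
page = wait_until(lambda: state["html"], timeout=2.0)
```

Unlike a fixed `time.sleep()`, this returns as soon as the condition holds, so a fast page load doesn't pay the full delay.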

Functionality

As of this commit, the script can search Udemy for the term you input and fetch the course link along with the other overview details such as the description, instructor, duration, rating, etc.

Here is a JSON representation of the data it can fetch as of now:

{
  "query": "The Complete Angular Course: Beginner to Advanced",
  "link": "https://udemy.com/course/the-complete-angular-master-class/",
  "title": "The Complete Angular Course: Beginner to Advanced",
  "headline": "The most comprehensive Angular 4 (Angular 2+) course. Build a real e-commerce app with Angular, Firebase and Bootstrap 4",
  "instructor": "Mosh Hamedani",
  "rating": "4.5",
  "duration": "29.5 total hours",
  "no_of_lectures": "376 lectures",
  "tags": ["Development", "Web Development", "Angular"],
  "no_of_rating": "23,910",
  "no_of_students": "96,174",
  "course_language": "English",
  "objectives": [
    "Establish yourself as a skilled professional developer",
    "Build real-world Angular applications on your own",
    "Troubleshoot common Angular errors",
    "Master the best practices",
    "Write clean and elegant code like a professional developer"
  ],
  "Sections": [
    {
      "name": "Introduction",
      "lessons": [{ "name": "Introduction" }, { "name": "What is Angular" }],
      "no_of_lessons": 12
    },
    {
      "name": "TypeScript Fundamentals",
      "lessons": [
        { "name": "Introduction" },
        { "name": "What is TypeScript?" }
      ],
      "no_of_lessons": 18
    },
    {
      "name": "Angular Fundamentals",
      "lessons": [
        { "name": "Introduction" },
        { "name": "Building Blocks of Angular Apps" }
      ],
      "no_of_lessons": 10
    }
  ],
  "requirements": [
    "Basic familiarity with HTML, CSS and JavaScript",
    "NO knowledge of Angular 1 or Angular 2 is required"
  ],
  "description": "\nAngular is one of the most popular frameworks for building client apps with HTML, CSS and TypeScript. If you want to establish yourself as a front-end or a full-stack developer, you need to learn Angular.\n\nIf you've been confused or frustrated jumping from one Angular 4 tutoria...",
  "target_audience": [
    "Developers who want to upgrade their skills and get better job opportunities",
    "Front-end developers who want to stay up-to-date with the latest technology"
  ],
  "banner": "https://foo.com/somepicture.jpg"
}
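A course dictionary in this shape can be serialized with the standard library alone. The following is a minimal sketch of the JSON/CSV conversion; the field selection is illustrative, not the package's actual export code:

```python
import csv
import io
import json

course = {
    "title": "The Complete Angular Course: Beginner to Advanced",
    "instructor": "Mosh Hamedani",
    "rating": "4.5",
    "duration": "29.5 total hours",
}

# JSON: a direct dump of the dictionary.
as_json = json.dumps(course, indent=2)

# CSV: one header row, then one row per course (a single course here).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(course))
writer.writeheader()
writer.writerow(course)
as_csv = buf.getvalue()
```

Nested fields such as `Sections` would need flattening or a separate file before they fit a CSV row, which is one reason the package also offers XML export.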

Usage

In order to use the scraper, import it as a module and then create a new course instance like so:

from udemyscraper import UdemyCourse

This will import the UdemyCourse class. You can then create an instance of it and pass in the search query, preferably the exact course name.

from udemyscraper import UdemyCourse

javascript_course = UdemyCourse("Javascript course for beginners")

This will create an empty instance of UdemyCourse. To fetch the data, you need to call the fetch_course method.

javascript_course.fetch_course()

Now that you have the course, you can access all of the course's data as shown here.

print(javascript_course.Sections[2].lessons[1].name) # Prints the 3rd section's 2nd lesson's name
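The nested access above implies a structure along these lines. This is a hypothetical sketch of the shape, using illustrative section and lesson names, not the package's actual classes:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Lesson:
    name: str

@dataclass
class Section:
    name: str
    lessons: List[Lesson] = field(default_factory=list)

@dataclass
class Course:
    title: str
    Sections: List[Section] = field(default_factory=list)

course = Course(
    title="Javascript course for beginners",
    Sections=[
        Section("Introduction", [Lesson("Welcome"), Lesson("Setup")]),
        Section("Basics", [Lesson("Variables"), Lesson("Functions")]),
        Section("DOM", [Lesson("Selectors"), Lesson("Events")]),
    ],
)

print(course.Sections[2].lessons[1].name)  # same access pattern as above
```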
Comments
  • pip install fails


    Describe the bug Unable to install udemyscraper via pip install

    To Reproduce ERROR: Cannot install udemyscraper==0.8.1 and udemyscraper==0.8.2 because these package versions have conflicting dependencies.

    The conflict is caused by: udemyscraper 0.8.2 depends on getopt2==0.0.3 udemyscraper 0.8.1 depends on getopt2==0.0.3

    Desktop (please complete the following information):

    • OS: MAC OS
    bug 
    opened by nuggetsnetwork 5
  • udemyscraper timesout


    Describe the bug When running the sample code all I get is timeout.

To Reproduce Steps to reproduce the behavior: Run the sample code:

from udemyscraper import UdemyCourse

course = UdemyCourse()
course.fetch_course('learn javascript')
print(course.title)

    Current behavior Timed out waiting for page to load or could not find a matching course

    OS: MACOS

    bug duplicate 
    opened by nuggetsnetwork 3
  • Switch to browser explicit wait


    EXPERIMENTAL! Needs Testing.

time.sleep() introduces an unnecessary wait, even if the page has already loaded.

By using expected_conditions, we can proceed as soon as the element loads. Using the Python time library, I calculated the time taken by the search and course pages to load to be approximately 2 seconds each.

Theoretically, the change should have reduced execution time by 5 seconds (3+4-2). However, the gain was only 3 seconds instead of the expected 5.

This behavior seems unexpected for the moment, unless we can find where the missing 2 seconds went. For reference, the original version using time.sleep() took 17 seconds to execute.

    (All times are measured for my internet connection, by executing the given example in readme file)

Possibly needs further digging. I haven't yet had time to read the full code.

    bug optimization 
    opened by flyingcakes85 3
  • Use explicit wait for search query


Here 4 seconds have been hardcoded; it would be better to wait for the search results to load and then get the source code.

A basic way to do this would be to check whether the search element is visible; once it is visible, we can proceed to fetch the source code. This way, if you have a really fast connection you wouldn't need to wait as long, and vice versa.

    bug optimization 
    opened by sortedcord 3
  • Classes Frequently Keep Changing


It seems that on the search page the classes of the elements keep changing, so it would be best to fetch only the course URL there and then scrape all the other data from the course page itself.

    bug 
    opened by sortedcord 3
  • Serialize to xml


    Experimental!!

    Export the entire dictionary to a xml file using the dict2xml library.

• [x] Make branch even with refactor base
    • [x] Switch to dicttoxml from dict2xml
    • [x] Object arrays of sections and lessons are not grouped under one root called Sections or Lessons. This is also the case for all of the other arrays.
    • [x] Rename List item
    • [x] Rename root tag to course
    enhancement area: module 
    opened by sortedcord 2
  • Automatically fetch drivers


    Setup a way to automatically fetch browser drivers based on user's choice (chromium/firefox) corresponding to the installed browser version.

    The hard part will be to find the version of the browser installed.

    enhancement help wanted 
    opened by sortedcord 2
  • Timed out waiting for page to load or could not find a matching course


    Whenever I try to scrape a course from udemy I get this error-

    on 1: Timed out waiting for page to load or could not find a matching course
    Scraping Course |████████████████████████████████████████| 1 in 29.5s (0.03/s)
    

It was working a couple of times before, but now it doesn't.

    Steps to reproduce the behavior:

    1. This happens both when using the script and the module
    2. I used query argument
    3. Output- image

    Desktop (please complete the following information):

    • OS: Windows 10
    • Browser: Chromium
    • Version: 92

I did check by manually opening Chromium and searching for the course, but when I use the scraper it doesn't work.

    bug good first issue wontfix area: module 
    opened by sortedcord 1
  • Optimize element search


Some tests have shown that it is much more efficient to use CSS selectors than find, especially compared to nested finds, which tend to be far slower and more time-consuming. It would be better to replace all of the find calls with select and use a direct path.

    optimization 
    opened by sortedcord 1
  • 🌐 Added browser selection argument


Instead of editing the source code to select which browser you would like to use, you can now specify it when initializing the UdemyCourse class, or simply pass an argument when using the standalone script.

        -b  --browser       Allows you to select the browser you would like to use for Scraping
                            Values: "chrome" or "firefox". Defaults to chrome if no argument is passed.
    

Also provided a geckodriver.exe binary.

    enhancement optimization 
    opened by sortedcord 1
  • Implementation of Command Line Arguments


I assume that the main udemyScraper.py file will be used as a module, so I made a separate file, main.py, for such operations. As of now, only some basic arguments have been added; more will be added in the future.

        -h  --help          Displays information about udemyscraper and its usage
        -v  --version       Displays the version of the tool
        -n  --no-warn       Disables the warning when initializing the udemyscourse class
    
    enhancement 
    opened by sortedcord 1
Releases(0.8.2)
  • 0.8.2(Oct 2, 2021)

  • Beta(Aug 29, 2021)

The long-awaited (at least by me) distribution update for udemyscraper. Find this project on PyPI - https://pypi.org/project/udemyscraper/

    Added

    • Udemyscraper can now export multiple courses to csv files!

    • course_to_csv takes an array as an input and dumps each course to a single csv file.
    • Udemyscraper can now export courses to xml files!

• course_to_xml is a function that can be used to export the course object to an XML file with the appropriate tags and format.
    • udemyscraper.export submodule for exporting scraped course.
    • Support for Microsoft Edge (Chromium Based) browser.
    • Support for Brave Browser.

    Changes

• Udemyscraper.py has been refactored into 5 different files:

      • __init__.py - Contains the code which will run when imported as a library
      • metadata.py - Contains metadata of the package such as the name, version, author, etc. Used by setup.py
      • output.py - Contains functions for outputting the course information.
• udscraperscript.py - The script file which will run when you want to use udemyscraper as a script.
      • utils.py - Contains utility related functions for udemyscraper.
    • Now using udemyscraper.export instead of udemyscraper.output.

      • quick_display function has been replaced with print_course function.
    • Now using setup.py instead of setup.cfg

• Deleted the src folder, which is now replaced by the udemyscraper folder as the source directory for all the modules

    • Installation Process

Since udemyscraper is now to be used as a package, the installation process has naturally also changed significantly.

      Installation process is documented here

    • Renamed the browser_preference key in Preferences dictionary to browser

    • Relocated browser determination to utils as set_browser function.

    • Removed requirements.txt and pyproject.toml

    Fixed

    • Fixed cache argument bug.
    • Fixed importing preferences bug.
    • Fixed Banner Image scraping.
    • Fixed Progressbar exception handling.
    • Fixed recognition of chrome as a valid browser.
    • Preferences will not be printed while using the script.
    • Fixed browser key error
    Source code(tar.gz)
    Source code(zip)
    udemyscraper-0.8.1-py3-none-any.whl(31.19 KB)
    udemyscraper-0.8.1.tar.gz(4.87 MB)
Owner
Aditya Gupta
🎓 Student🎨 Front end Dev & Part time weeb ϞϞ(๑⚈ ․̫ ⚈๑)∩