Web Scraping images using Selenium and Python

Overview

About this document

This is a markdown document about web scraping images and videos using Selenium and Python. It summarizes a presentation that was divided into two parts: a general presentation and a workshop (the workshop is the tutorial that follows). Author: Nafaa BOUGRAINE

Markdown is a lightweight markup language based on the formatting conventions that people naturally use in emails. As John Gruber writes on the Markdown site:

The overriding design goal for Markdown's formatting syntax is to make it as readable as possible. The idea is that a Markdown-formatted document should be publishable as-is, as plain text, without looking like it's been marked up with tags or formatting instructions.

Introduction :

Selenium is a Python library and tool used to automate web browsers for a variety of tasks. One such task is web scraping, which can extract useful data and information that might otherwise be unavailable.

Why use Selenium?

Many tools can be used to scrape data from a website; the three most popular are Scrapy, BeautifulSoup, and Selenium, and each has its own strengths. Selenium stands out because it can handle automation in a complex way. For example, to scrape Instagram we first need to log in to an Instagram account, and Selenium can perform that login automatically. Selenium is useful whenever you have to perform an action on a website, such as (see the sketch after this list):

  • Clicking on buttons
  • Filling forms
  • Scrolling
  • Taking a screenshot
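
For a feel of what these look like in practice, here is a minimal sketch of those four actions. The URL and selectors are hypothetical, and the Selenium 3 style API used throughout this tutorial is assumed:

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")  # hypothetical page
driver.find_element_by_css_selector("button[type='submit']").click()      # clicking a button
driver.find_element_by_name("q").send_keys("selenium")                    # filling a form field
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")  # scrolling
driver.save_screenshot("page.png")                                        # taking a screenshot
driver.quit()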

Installation :

We will use Chrome in our example, so make sure you have it installed on your local machine.

Download webdriver:

One of the tools we must prepare before running the Selenium program is a driver binary: chromedriver (for Chrome) or geckodriver (for Firefox). Chrome users can download chromedriver from the official ChromeDriver downloads page.
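
If hunting down the right driver version is a hassle, one alternative (not part of the original workshop) is the third-party webdriver-manager package, which downloads a driver matching your installed Chrome automatically:

pip install webdriver-manager

from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager

# webdriver-manager fetches a matching chromedriver and returns its path
driver = webdriver.Chrome(ChromeDriverManager().install())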

Installing the required libraries:

First, we must install the selenium library from our terminal, as shown below:

pip install selenium

Once that is done, note that the other modules used in this tutorial, time, os, urllib, and argparse, ship with the Python standard library and need no installation. The only additional third-party package is requests:

pip install requests
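
One caveat: the snippets in this tutorial use the find_element_by_* methods, which were deprecated in Selenium 4 and later removed. If pip pulled in a newer version, pinning an older release (a suggested pin, adjust as you see fit) keeps the code below working as-is:

pip install selenium==3.141.0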

Great! Our scraping environment is now ready, so let's code!

Importing the libraries :

Here is the code for importing the required libraries:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time, urllib.request
import requests
import os 
import argparse

Setting the PATH code :

The PATH variable tells Selenium where the chromedriver executable lives, connecting our code to the browser. Here is the code:

# Configure environment variables path for chromedriver.exe
PATH = r"C:\Users\DELL\Desktop\Scrapping photos clothes\chromedriver.exe"
driver = webdriver.Chrome(PATH)

Alternatively, if you saved the webdriver in your project's root folder (or somewhere on your system PATH), you can simply type the following and skip the file path:

driver = webdriver.Chrome()
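
As a side note, if you would rather not watch the browser window at all, Chrome can also run headless. This is an optional tweak, not used in the workshop:

from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")   # run Chrome without a visible window
driver = webdriver.Chrome(PATH, options=options)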

Start Connection and Login :

Get the website :

After setting the PATH variable, we must open Instagram's website, which is our scraping target. The code is below:

# Navigate to Instagram page
driver.get("https://www.instagram.com/")

This will launch Chrome in headful mode (a regular, visible Chrome window controlled by our Python code). You should see a banner stating that the browser is being controlled by automated software.

Login and Searchbox handling :

Once Instagram's home page has loaded, we must log in with a username and password. After that, we navigate to the target account by typing its name into the search box at the top of the page, which means we must locate the search box element so we can fill it automatically.

#login and searchbox function
    def start_connection(self):
        driver.get("https://www.instagram.com/")
        time.sleep(3)
        username=driver.find_element_by_css_selector("input[name='username']")
        password=driver.find_element_by_css_selector("input[name='password']")
        username.clear()
        password.clear()
        username.send_keys(self.user)
        password.send_keys(self.pwd)
        driver.find_element_by_css_selector("button[type='submit']").click()
        # dismiss the "save your login info?" pop-up
        # ('Plus tard' is the French UI label for the "Not Now" button)
        time.sleep(5)
        driver.find_element_by_xpath("//button[contains(text(), 'Plus tard')]").click()
        # dismiss the "turn on notifications" pop-up
        time.sleep(5)
        driver.find_element_by_xpath("//button[contains(text(), 'Plus tard')]").click()
        # locate the search box ('Rechercher' is French for "Search")
        time.sleep(5)
        searchbox=driver.find_element_by_css_selector("input[placeholder='Rechercher']")
        searchbox.clear()
        searchbox.send_keys(self.page_name)
        time.sleep(3)
        searchbox.send_keys(Keys.ENTER)
        time.sleep(3)
        searchbox.send_keys(Keys.ENTER)
        time.sleep(3)

The code above logs in to our Instagram account and then drives the search box automatically through the searchbox variable.
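
The fixed time.sleep pauses work, but they are fragile on a slow connection. A more robust pattern, offered here as a suggested refinement rather than part of the original code, is an explicit wait that blocks until the element actually appears:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait up to 10 seconds for the username field instead of sleeping a fixed time
username = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "input[name='username']"))
)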

Scroll down the profile :

Now that we have the target user's profile page, we might think we can scrape it right away. However, we must first scroll down the page automatically so that the posts load. Here is the code:

#scroll down
    def scroll_down(self):
        start = time.time()
        initialScroll = 0             
        while True:
            driver.execute_script(f"window.scrollTo({initialScroll},{self.finalScroll})")
            # this command scrolls the window starting from the pixel value stored in the initialScroll variable to the pixel value stored at the finalScroll variable
            initialScroll = self.finalScroll
            self.finalScroll += 1  
            # we will stop the script for 3 seconds so that the data can load
            time.sleep(3)
            end = time.time()
            # we will scroll for 20 seconds; change this to suit your needs and internet speed
            if round(end - start) > 20:
                break

If you instead want to scroll down automatically until the very end of the page, here is the code:

#Scroll down to the end
scrolldown=driver.execute_script("window.scrollTo(0, document.body.scrollHeight);var scrolldown=document.body.scrollHeight;return scrolldown;")
match=False
while(match==False):
    last_count = scrolldown
    time.sleep(3)
    scrolldown = driver.execute_script("window.scrollTo(0, document.body.scrollHeight);var scrolldown=document.body.scrollHeight;return scrolldown;")
    if last_count==scrolldown:
        match=True
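
This variant works by comparing document.body.scrollHeight before and after each scroll: as long as new posts keep loading, the page keeps growing, and the loop exits once the height stops changing.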

Get the URL posts :

Now it is time to collect the URLs of the posts published on the Instagram page. First, we create an empty list named posts to hold all the post URLs. Then, the links variable gathers every element with the tag name "a", and a for loop keeps only the hrefs that are actual post URLs. Finally, we create a folder so the downloaded images are grouped together, following the code below.

    def fetch_links(self, posts=None):
        if posts is None:   # avoid the mutable-default-argument pitfall
            posts = []
        links = driver.find_elements_by_tag_name('a')
        for link in links:
            post = link.get_attribute('href')
            if '/p/' in post:   # only keep links that point to posts
                posts.append(post)
        # create the download folder, asking again until the name is free
        while True:
            try:
                os.mkdir(self.folder_name)
                break
            except FileExistsError:
                print("A folder with that name already exists!")
                self.folder_name = input("Enter another folder name: ")
        return posts
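
One caveat: the same post link can appear more than once in the DOM, so posts may contain duplicates. If that matters to you, a one-line, order-preserving deduplication (an optional improvement, not in the original code) is:

posts = list(dict.fromkeys(posts))   # drop duplicate URLs, keeping first-seen order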

Download all of the posts :

Lastly, we must download all of the posts and save them to our directory. The code is below:

    def download_images(self, posts):
        for post in posts:
            driver.get(post)
            # the shortcode uniquely identifies the post; use it as the file name
            shortcode = driver.current_url.split("/")[-2]
            time.sleep(7)
            download_url = driver.find_element_by_css_selector("img[style='object-fit: cover;']").get_attribute('src')
            urllib.request.urlretrieve(download_url, './'+self.folder_name+'/{}.jpg'.format(shortcode))
            time.sleep(5)
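
Since we installed requests earlier, you could equally download each image with it instead of urllib. A sketch under the same assumptions (same download_url, same folder layout):

# hypothetical alternative using requests instead of urllib.request
response = requests.get(download_url, timeout=30)
response.raise_for_status()   # fail loudly on HTTP errors
with open('./{}/{}.jpg'.format(self.folder_name, shortcode), 'wb') as f:
    f.write(response.content)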

The Main Function :

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Running...")
    parser.add_argument(
        "--user-email",
        "-U",
        help="Enter your username to login",
    )
    parser.add_argument(
        "--password",
        "-P",
        help="Enter your password to login",
        default="tracing",
    )
    parser.add_argument(
        "--instagram-page", 
        "-I",
        default="", 
        help="Enter the name of the page that you wanna scrap !!"
    )
    parser.add_argument("--scrolls-number",
        "-S",
        default=1, 
        type=int, 
        help="Enter the number of scroll down to the bottom of the page you want !!"
    )
    parser.add_argument("--export-folder",
        "-E",
        help="enter a enter a file name to create and store the scrapped images in this file."
    )
    args = parser.parse_args()
    images = MyScrapper(args.user_email, args.password, args.instagram_page, args.scrolls_number, args.export_folder)
    images.start_connection()
    images.scroll_down()
    posts = images.fetch_links()
    images.download_images(posts)
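
A small security note: passing a password on the command line leaves it in your shell history. A safer variant (a suggested change, using the standard-library getpass module, and assuming you drop the "tracing" default so the prompt can trigger) is:

import getpass

if args.password is None:
    args.password = getpass.getpass("Instagram password: ")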

Running :

Open a terminal in the directory of scrap.py and enter the command below. The arguments are:

  • -U or --user-email: your Instagram username
  • -P or --password: your password
  • -I or --instagram-page: the name of the page you want to scrape
  • -S or --scrolls-number: how far to scroll down the page (an integer)
  • -E or --export-folder: the name of the folder to create and store the scraped images in

python scrap.py -U "Your email" -P "Your password" -I "The page" -S 5 -E "file1"

Go grab a cup of coffee while waiting... oh wait, it's already done!

For more information, run this command:

python scrap.py --help

Conclusion :

Finally, here is the complete code:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time, urllib.request
import os 
import sys
import logging
import argparse

driver = webdriver.Chrome()
logging.basicConfig(level=logging.DEBUG)

class MyScrapper:
    def __init__(self, user, pwd, page_name, finalScroll, folder_name):
        self.folder_name = folder_name
        self.user = user
        self.pwd = pwd
        self.page_name = page_name
        self.finalScroll = int(finalScroll)
        self.driver = driver

    def start_connection(self):
        driver.get("https://www.instagram.com/")
        time.sleep(5)
        username=driver.find_element_by_css_selector("input[name='username']")
        password=driver.find_element_by_css_selector("input[name='password']")
        username.clear()
        password.clear()
        username.send_keys(self.user)
        password.send_keys(self.pwd)
        driver.find_element_by_css_selector("button[type='submit']").click()
        # dismiss the "save your login info?" pop-up
        # ('Plus tard' is the French UI label for the "Not Now" button)
        time.sleep(10)
        driver.find_element_by_xpath("//button[contains(text(), 'Plus tard')]").click()
        # dismiss the "turn on notifications" pop-up
        time.sleep(10)
        driver.find_element_by_xpath("//button[contains(text(), 'Plus tard')]").click()
        # locate the search box ('Rechercher' is French for "Search")
        time.sleep(5)
        searchbox=driver.find_element_by_css_selector("input[placeholder='Rechercher']")
        searchbox.clear()
        searchbox.send_keys(self.page_name)
        time.sleep(5)
        searchbox.send_keys(Keys.ENTER)
        time.sleep(5)
        searchbox.send_keys(Keys.ENTER)
        time.sleep(5)

    def scroll_down(self):
        start = time.time()
        initialScroll = 0             
        while True:
            driver.execute_script(f"window.scrollTo({initialScroll},{self.finalScroll})")
            # this command scrolls the window starting from the pixel value stored in the initialScroll variable to the pixel value stored at the finalScroll variable
            initialScroll = self.finalScroll
            self.finalScroll += 1  
            # we will stop the script for 3 seconds so that the data can load
            time.sleep(3)
            end = time.time()
            # we will scroll for 20 seconds; change this to suit your needs and internet speed
            if round(end - start) > 20:
                break

    def fetch_links(self, posts=None):
        if posts is None:   # avoid the mutable-default-argument pitfall
            posts = []
        links = driver.find_elements_by_tag_name('a')
        for link in links:
            post = link.get_attribute('href')
            if '/p/' in post:   # only keep links that point to posts
                posts.append(post)
        # create the download folder, asking again until the name is free
        while True:
            try:
                os.mkdir(self.folder_name)
                break
            except FileExistsError:
                print("A folder with that name already exists!")
                self.folder_name = input("Enter another folder name: ")
        return posts

    def download_images(self, posts):
        for post in posts:
            driver.get(post)
            # the shortcode uniquely identifies the post; use it as the file name
            shortcode = driver.current_url.split("/")[-2]
            time.sleep(7)
            download_url = driver.find_element_by_css_selector("img[style='object-fit: cover;']").get_attribute('src')
            urllib.request.urlretrieve(download_url, './'+self.folder_name+'/{}.jpg'.format(shortcode))
            time.sleep(5)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Running...")
    parser.add_argument(
        "--user-email",
        "-U",
        help="Enter your username to login",
    )
    parser.add_argument(
        "--password",
        "-P",
        help="Enter your password to login",
        default="tracing",
    )
    parser.add_argument(
        "--instagram-page", 
        "-I",
        default="", 
        help="Enter the name of the page that you wanna scrap !!"
    )
    parser.add_argument("--scrolls-number",
        "-S",
        default=1, 
        type=int, 
        help="Enter the number of scroll down to the bottom of the page you want !!"
    )
    parser.add_argument("--export-folder",
        "-E",
        help="enter a enter a file name to create and store the scrapped images in this file."
    )
    args = parser.parse_args()
    images = MyScrapper(args.user_email, args.password, args.instagram_page, args.scrolls_number, args.export_folder)
    images.start_connection()
    images.scroll_down()
    posts = images.fetch_links()
    images.download_images(posts)

The end! Thank you all. NOTE: scraping many websites violates their terms of service or may even be illegal. This project is just for learning and fun.
