Web Scraping images using Selenium and Python


About this document

This is a Markdown document about web scraping images and videos using Selenium and Python. It summarizes a presentation divided into two parts: a general presentation and a workshop (the workshop is the tutorial outlined in the table of contents). Author: Nafaa BOUGRAINE.

Markdown is a lightweight markup language based on the formatting conventions that people naturally use in email. As John Gruber writes on the Markdown site:

The overriding design goal for Markdown's formatting syntax is to make it as readable as possible. The idea is that a Markdown-formatted document should be publishable as-is, as plain text, without looking like it's been marked up with tags or formatting instructions.

Table of Contents

  • Introduction
  • Why use Selenium?
  • Installation
  • Importing the libraries
  • Setting the PATH
  • Start Connection and Login
  • Scroll down the profile
  • Get the post URLs
  • Download all of the posts
  • The Main Function
  • Running
  • Conclusion

Introduction

Selenium is a Python library and tool for automating web browsers to perform a variety of tasks. One such task is web scraping, extracting useful data and information that may otherwise be unavailable.

Why use Selenium?

Many tools can scrape data from a website; the three most popular are Scrapy, BeautifulSoup, and Selenium, and each has its own strengths. Selenium stands out because it can handle complex automation. For example, scraping Instagram requires logging in to an account first, and Selenium can perform that login automatically. Selenium is useful whenever you have to perform an action on a website, such as the following (a minimal sketch comes after the list):

  • Clicking on buttons
  • Filling forms
  • Scrolling
  • Taking a screenshot
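
As a minimal sketch of these actions (assuming a hypothetical page at https://example.com with a search input named q and a button#submit; adjust the selectors to your target site):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")

# Filling a form field
driver.find_element(By.NAME, "q").send_keys("selenium")
# Clicking a button
driver.find_element(By.CSS_SELECTOR, "button#submit").click()
# Scrolling to the bottom of the page
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
# Taking a screenshot
driver.save_screenshot("page.png")
driver.quit()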

Installation

We will use Chrome in our example, so make sure you have it installed on your local machine.

Download webdriver:

Besides the browser itself, Selenium needs a driver binary: chromedriver (for Chrome) or geckodriver (for Firefox). You can download it from here (for Chrome users).
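
Alternatively, if you would rather not manage the driver binary by hand, the third-party webdriver-manager package can download a matching driver for you. A sketch, assuming you first run pip install webdriver-manager:

from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager

# webdriver-manager fetches a chromedriver that matches your installed Chrome
driver = webdriver.Chrome(ChromeDriverManager().install())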

Installing the required libraries:

First, install the selenium library from the terminal, as below:

pip install selenium

The other modules this tutorial uses, such as time, os, urllib.request, and argparse, ship with Python's standard library and need no installation (pip install time would not work, since time is not a PyPI package). The only other third-party package we need is requests:

pip install requests
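
To verify the installation, you can print the installed Selenium version:

python -c "import selenium; print(selenium.__version__)"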

Great! Our scraping environment is ready, so let's code!

Importing the libraries

Here are the imports required for scraping with Selenium:

from selenium import webdriver                   # browser automation
from selenium.webdriver.common.keys import Keys  # keyboard keys such as ENTER
import time, urllib.request                      # pauses and image downloads
import requests                                  # HTTP requests
import os                                        # folder creation
import argparse                                  # command-line arguments

Setting the PATH

The PATH variable tells Selenium where to find the webdriver executable that connects our code to the browser:

# Configure environment variables path for chromedriver.exe
PATH = r"C:\Users\DELL\Desktop\Scrapping photos clothes\chromedriver.exe"
driver = webdriver.Chrome(PATH)

Alternatively, if you saved the webdriver in the project's root folder (or anywhere on your system PATH), you can simply write the following and skip the path:

driver = webdriver.Chrome()
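
Note that Selenium 4 deprecates passing the path positionally; the driver location goes through a Service object instead. A minimal sketch, assuming the same chromedriver location as above:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service

service = Service(r"C:\Users\DELL\Desktop\Scrapping photos clothes\chromedriver.exe")
driver = webdriver.Chrome(service=service)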

Start Connection and Login

Get the website

With the PATH configured, we can open Instagram's website, our scraping target:

# Navigate to Instagram page
driver.get("https://www.instagram.com/")

This launches Chrome in headful mode (a regular Chrome window, controlled by our Python code). You should see a banner stating that the browser is being controlled by automated software.
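
If you do not need to watch the browser work, you can run Chrome headless instead; a sketch using Chrome options (the scraping logic stays the same):

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")  # run Chrome without a visible window
driver = webdriver.Chrome(options=options)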

Login and Searchbox handling

Once we land on Instagram's home page, we must first log in with a username and password. After that, we navigate to the target account by typing its name into the search box at the top of the page, which means we need to locate the search box element so we can fill it automatically.

# Login and search-box handling
    def start_connection(self):
        driver.get("https://www.instagram.com/")
        time.sleep(3)
        username = driver.find_element_by_css_selector("input[name='username']")
        password = driver.find_element_by_css_selector("input[name='password']")
        username.clear()
        password.clear()
        username.send_keys(self.user)
        password.send_keys(self.pwd)
        driver.find_element_by_css_selector("button[type='submit']").click()
        # Dismiss the "Save your login info?" dialog
        # ("Plus tard" means "Not now" in Instagram's French UI)
        time.sleep(5)
        driver.find_element_by_xpath("//button[contains(text(), 'Plus tard')]").click()
        # Dismiss the "Turn on notifications" dialog
        time.sleep(5)
        driver.find_element_by_xpath("//button[contains(text(), 'Plus tard')]").click()
        # Locate the search box ("Rechercher" is "Search" in the French UI)
        time.sleep(5)
        searchbox = driver.find_element_by_css_selector("input[placeholder='Rechercher']")
        searchbox.clear()
        searchbox.send_keys(self.page_name)
        time.sleep(3)
        searchbox.send_keys(Keys.ENTER)
        time.sleep(3)
        # A second ENTER opens the selected result
        searchbox.send_keys(Keys.ENTER)
        time.sleep(3)

The code above first logs in to our Instagram account, then drives the search box automatically through the searchbox variable.
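
A note on the fixed time.sleep pauses: they waste time on fast connections and can still fail on slow ones. A more robust alternative, sketched here with Selenium's explicit waits, blocks only until the element actually appears (up to a timeout):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for the username field instead of sleeping blindly
username = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "input[name='username']"))
)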

Scroll down the profile

Now that we have the target user's profile page, it may seem we are ready to scrape, but we first have to scroll down the page automatically so the posts load. Here is the code:

# Scroll down
    def scroll_down(self):
        start = time.time()
        initialScroll = 0
        while True:
            # Scroll the window from the pixel offset stored in initialScroll
            # to the pixel offset stored in finalScroll
            driver.execute_script(f"window.scrollTo({initialScroll},{self.finalScroll})")
            initialScroll = self.finalScroll
            self.finalScroll += 1
            # Pause for 3 seconds so the data can load
            time.sleep(3)
            end = time.time()
            # Scroll for 20 seconds; change this to suit your needs and internet speed
            if round(end - start) > 20:
                break

If instead you want to scroll automatically all the way to the bottom of the page, use this:

# Scroll down to the end of the page
scrolldown = driver.execute_script("window.scrollTo(0, document.body.scrollHeight);var scrolldown=document.body.scrollHeight;return scrolldown;")
match = False
while not match:
    last_count = scrolldown
    time.sleep(3)
    scrolldown = driver.execute_script("window.scrollTo(0, document.body.scrollHeight);var scrolldown=document.body.scrollHeight;return scrolldown;")
    # The page height stops growing once we have reached the bottom
    if last_count == scrolldown:
        match = True

Get the post URLs

Now it is time to collect the URLs of the posts published on the page. First, we create an empty list named posts to hold them. Then the links variable gathers every element with the tag name "a", and a for loop keeps only the hrefs that belong to posts (their URLs contain '/p/'). Finally, we create a folder so the downloaded images are grouped together, as in the code below.

    def fetch_links(self, posts=None):
        if posts is None:
            posts = []
        # Collect every link currently loaded on the page
        links = driver.find_elements_by_tag_name('a')
        for link in links:
            post = link.get_attribute('href')
            # Post URLs contain '/p/'; skip links without an href
            if post and '/p/' in post:
                posts.append(post)
        try:
            os.mkdir(self.folder_name)
        except FileExistsError:
            print("A folder with that name already exists!")
            self.folder_name = input("Enter another folder name: ")
            os.mkdir(self.folder_name)
        return posts

Download all of the posts

Lastly, we download every post found and save it to our directory:

    def download_images(self, posts):
        for post in posts:
            driver.get(post)
            # The post's shortcode (its ID in the URL) becomes the file name
            shortcode = driver.current_url.split("/")[-2]
            time.sleep(7)
            # Grab the source URL of the post's main image
            download_url = driver.find_element_by_css_selector("img[style='object-fit: cover;']").get_attribute('src')
            urllib.request.urlretrieve(download_url, './{}/{}.jpg'.format(self.folder_name, shortcode))
            time.sleep(5)
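
Incidentally, we installed requests earlier, yet the code above downloads with urllib.request. An equivalent download using requests would look like the sketch below (with download_url, self.folder_name, and shortcode as in the loop above):

import requests

response = requests.get(download_url, timeout=30)
response.raise_for_status()  # fail loudly on HTTP errors
with open('./{}/{}.jpg'.format(self.folder_name, shortcode), 'wb') as f:
    f.write(response.content)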

The Main Function

The main block parses the command-line arguments and chains all the steps together:

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Running...")
    parser.add_argument(
        "--user-email",
        "-U",
        help="Enter your username to log in",
    )
    parser.add_argument(
        "--password",
        "-P",
        help="Enter your password to log in",
        default="tracing",
    )
    parser.add_argument(
        "--instagram-page",
        "-I",
        default="",
        help="Enter the name of the page you want to scrape"
    )
    parser.add_argument(
        "--scrolls-number",
        "-S",
        default=1,
        type=int,
        help="Enter how far down the page you want to scroll (an integer)"
    )
    parser.add_argument(
        "--export-folder",
        "-E",
        help="Enter a folder name to create and store the scraped images in"
    )
    args = parser.parse_args()
    images = MyScrapper(args.user_email, args.password, args.instagram_page, args.scrolls_number, args.export_folder)
    images.start_connection()
    images.scroll_down()
    posts = images.fetch_links()
    images.download_images(posts)

Running

Open a terminal in the directory of scrap.py and pass the following arguments:

  • -U or --user-email: your Instagram username
  • -P or --password: your password
  • -I or --instagram-page: the name of the page you want to scrape
  • -S or --scrolls-number: how far down the page to scroll (an integer)
  • -E or --export-folder: the name of the folder to create and store the scraped images in

python scrap.py -U "your_username" -P "your_password" -I "the_page" -S 5 -E "file1"

Go grab a cup of coffee while waiting... oh wait, it's already done!

For more information, run this command:

python scrap.py --help

Conclusion

Finally, here is the complete code:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time, urllib.request
import os 
import sys
import logging
import argparse

driver = webdriver.Chrome()
logging.basicConfig(level=logging.DEBUG)

class MyScrapper:
    def __init__(self, user, pwd, page_name, finalScroll, folder_name):
        self.folder_name = folder_name
        self.user = user
        self.pwd = pwd
        self.page_name = page_name
        self.finalScroll = int(finalScroll)
        self.driver = driver  # the module-level Chrome driver

    def start_connection(self):
        driver.get("https://www.instagram.com/")
        time.sleep(5)
        username=driver.find_element_by_css_selector("input[name='username']")
        password=driver.find_element_by_css_selector("input[name='password']")
        username.clear()
        password.clear()
        username.send_keys(self.user)
        password.send_keys(self.pwd)
        driver.find_element_by_css_selector("button[type='submit']").click()
        # Dismiss the "Save your login info?" dialog
        # ("Plus tard" means "Not now" in Instagram's French UI)
        time.sleep(10)
        driver.find_element_by_xpath("//button[contains(text(), 'Plus tard')]").click()
        # Dismiss the "Turn on notifications" dialog
        time.sleep(10)
        driver.find_element_by_xpath("//button[contains(text(), 'Plus tard')]").click()
        # Locate the search box ("Rechercher" is "Search" in the French UI)
        time.sleep(5)
        searchbox=driver.find_element_by_css_selector("input[placeholder='Rechercher']")
        searchbox.clear()
        searchbox.send_keys(self.page_name)
        time.sleep(5)
        searchbox.send_keys(Keys.ENTER)
        time.sleep(5)
        searchbox.send_keys(Keys.ENTER)
        time.sleep(5)

    def scroll_down(self):
        start = time.time()
        initialScroll = 0
        while True:
            # Scroll the window from the pixel offset stored in initialScroll
            # to the pixel offset stored in finalScroll
            driver.execute_script(f"window.scrollTo({initialScroll},{self.finalScroll})")
            initialScroll = self.finalScroll
            self.finalScroll += 1
            # Pause for 3 seconds so the data can load
            time.sleep(3)
            end = time.time()
            # Scroll for 20 seconds; change this to suit your needs and internet speed
            if round(end - start) > 20:
                break

    def fetch_links(self, posts=None):
        if posts is None:
            posts = []
        # Collect every link currently loaded on the page
        links = driver.find_elements_by_tag_name('a')
        for link in links:
            post = link.get_attribute('href')
            # Post URLs contain '/p/'; skip links without an href
            if post and '/p/' in post:
                posts.append(post)
        try:
            os.mkdir(self.folder_name)
        except FileExistsError:
            print("A folder with that name already exists!")
            self.folder_name = input("Enter another folder name: ")
            os.mkdir(self.folder_name)
        return posts

    def download_images(self, posts):
        for post in posts:
            driver.get(post)
            # The post's shortcode (its ID in the URL) becomes the file name
            shortcode = driver.current_url.split("/")[-2]
            time.sleep(7)
            # Grab the source URL of the post's main image
            download_url = driver.find_element_by_css_selector("img[style='object-fit: cover;']").get_attribute('src')
            urllib.request.urlretrieve(download_url, './{}/{}.jpg'.format(self.folder_name, shortcode))
            time.sleep(5)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Running...")
    parser.add_argument(
        "--user-email",
        "-U",
        help="Enter your username to log in",
    )
    parser.add_argument(
        "--password",
        "-P",
        help="Enter your password to log in",
        default="tracing",
    )
    parser.add_argument(
        "--instagram-page",
        "-I",
        default="",
        help="Enter the name of the page you want to scrape"
    )
    parser.add_argument(
        "--scrolls-number",
        "-S",
        default=1,
        type=int,
        help="Enter how far down the page you want to scroll (an integer)"
    )
    parser.add_argument(
        "--export-folder",
        "-E",
        help="Enter a folder name to create and store the scraped images in"
    )
    args = parser.parse_args()
    images = MyScrapper(args.user_email, args.password, args.instagram_page, args.scrolls_number, args.export_folder)
    images.start_connection()
    images.scroll_down()
    posts = images.fetch_links()
    images.download_images(posts)

The end! Thank you all. NOTE: scraping many websites violates their terms of service and may be illegal; this project is just for learning and fun.
