Overview

Comment Webpage Screenshot

Comment Webpage Screenshot is a GitHub Action that captures screenshots of web pages and HTML files located in the repository, uploads them to an image upload service, and comments the screenshots on the pull request that triggered the action.

Note: This action only works on pull requests.

Workflow inputs

These are the inputs that can be provided to the workflow.

  • upload_to (required: no, default: github_branch): Image upload service name. Options: github_branch, imgur. See "Available Image Upload Services" below for more details.

  • capture_changed_html_files (required: no, default: yes): Enable or disable screenshot capture for HTML files changed on the pull request. Options: yes, no.

  • capture_html_file_paths (required: no, default: null): Comma-separated paths to the HTML files to be captured (example: /pages/index.html, about.html).

  • capture_urls (required: no, default: null): Comma-separated URLs to be captured (example: https://dev.example.com, https://dev.example.com/about.html).

  • github_token (required: no, default: github.token): GITHUB_TOKEN provided by the workflow run, or a Personal Access Token (PAT).

Example Workflow

name: Comment Webpage Screenshot

on:
  pull_request:
    types: [opened, reopened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Comment Webpage Screenshot
        uses: saadmk11/comment-webpage-screenshot@main
        with:
          # Optional, the action will create a new branch and
          # upload the screenshots to that branch.
          upload_to: github_branch  # Or, imgur
          # Optional, the action will capture screenshots
          # of all the changed html files on the pull request.
          capture_changed_html_files: yes  # Or, no
          # Optional, the action will capture screenshots
          # of the html files provided in this input.
          # Comma-separated file paths are accepted
          capture_html_file_paths: "/pages/index.html, about.html"
          # Optional, the action will capture screenshots
          # of the URLs provided in this input.
          # You can add URLs of your development server or
          # run the server in the previous step
          # and add that URL here (For Example: http://172.17.0.1:8000/).
          # Comma-separated URLs are accepted.
          capture_urls: "https://dev.example.com, https://dev.example.com/about.html"
          # Optional
          github_token: ${{ secrets.MY_GITHUB_TOKEN }}
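
Note: when upload_to is github_branch, the action pushes the screenshots to a branch and then comments on the pull request, so the workflow token needs write access. Depending on your repository's default token permissions, you may have to grant this explicitly; a minimal sketch using the standard GitHub Actions permissions keys (these are workflow settings, not inputs of this action):

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: write        # push the screenshots branch
      pull-requests: write   # comment on the pull request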

Run Local Development Server Inside the Workflow and Capture Screenshots

If you want to run your application's development server inside the workflow and capture screenshots from it, you can structure the workflow YAML file like this:

Using Docker to Run The Application:

name: Comment App Screenshot

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    # Checkout your pull request code
    - uses: actions/checkout@v2
    
    # Build Development Docker Image
    - run: docker build -t local .
    # Run the Docker Image
    # You need to run this detached (-d)
    # so that the action is not blocked
    # and can move on to the next step
    # You need to publish the port on the host (-p 8000:8000)
    # so that it is reachable outside the container
    - run: docker run --name demo -d -p 8000:8000 local
    # Sleep for a few seconds and let the container start
    - run: sleep 10
    
    # Run Screenshot Comment Action
    - name: Run Screenshot Comment Action
      uses: saadmk11/comment-webpage-screenshot@main
      with:
        upload_to: github_branch
        capture_changed_html_files: no
        # You must use `172.17.0.1` if you are running
        # the application locally inside the workflow
        # Otherwise, the container which will run this action
        # will not be able to reach the application
        capture_urls: 'http://172.17.0.1:8000/, http://172.17.0.1:8000/admin/login/'
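
The fixed sleep above usually suffices, but a readiness check is more reliable when the container takes longer to boot. A hedged alternative to the sleep step, assuming the application answers plain HTTP GET requests on port 8000 once it is up:

    # Poll the server (up to ~30 seconds) instead of sleeping blindly
    - run: |
        for i in $(seq 1 30); do
          curl -fsS http://172.17.0.1:8000/ > /dev/null && break
          sleep 1
        done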

Directly Running The Application:

name: Comment App Screenshot

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Use Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '16.x'
      # Use `nohup` to run the node app
      # so that the execution of the next steps is not blocked
      - run: nohup node main.js &
      # Sleep for a few seconds and let the server start
      - run: sleep 5

      # Run Screenshot Comment Action
      - name: Run Screenshot Comment Action
        uses: saadmk11/comment-webpage-screenshot@main
        with:
          upload_to: imgur
          capture_changed_html_files: no
          # You must use `172.17.0.1` if you are running
          # the application locally inside the workflow
          # Otherwise, the container which will run this action 
          # will not be able to reach the application
          capture_urls: 'http://172.17.0.1:8081'
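
One assumption worth checking in this setup: for the action's container to reach the Node server via 172.17.0.1, the server must listen on all interfaces (0.0.0.0), not only on 127.0.0.1. How to configure this depends on your application; a hedged sketch, assuming a hypothetical HOST environment variable that main.js honors when binding:

      # Hypothetical: main.js is assumed to read HOST when calling listen()
      - run: nohup node main.js &
        env:
          HOST: 0.0.0.0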

Important Note:

If you run the application server inside the GitHub Actions Workflow:

  • You need to run it in the background or detached mode.

  • If you are using Docker to run your application server, you need to publish the port to the host (for example: -p 8000:8000).

  • You cannot use a localhost URL in capture_urls. Use 172.17.0.1 instead so that the comment-webpage-screenshot action can send requests to the server running locally. So, http://localhost:8081 becomes http://172.17.0.1:8081.

Examples including application code can be found here: Example Projects

Run External Development Server and Capture Screenshots

If your application has an external development server that deploys changes on every pull request, you can add the URLs of your development server to the capture_urls input. This lets the action capture screenshots from the external development server after deployment.

Example:

name: Comment App Screenshot

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    # Run Screenshot Comment Action
    - name: Run Screenshot Comment Action
      uses: saadmk11/comment-webpage-screenshot@main
      with:
        upload_to: github_branch
        capture_changed_html_files: no
        # Add your external development server URLs
        capture_urls: 'https://dev.example.com, https://dev.example.com/about.html'

Capture Screenshots for Static HTML Pages

If your repository contains only static files and does not require a server, you can simply provide the paths of the HTML files you want to capture screenshots of.

Example:

name: Comment Static Site Screenshot

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    # Run Screenshot Comment Action
    - name: Run Screenshot Comment Action
      uses: saadmk11/comment-webpage-screenshot@main
      with:
        upload_to: imgur
        # Capture Screenshots of Changed HTML Files
        capture_changed_html_files: yes
        # Comma-separated paths to any other HTML files
        capture_html_file_paths: "/pages/index.html, about.html"

Available Image Upload Services

As GitHub does not allow uploading images to a comment through its API, we need to rely on other services to host the screenshots.

These are the currently available image upload services.

Imgur

If the value of the upload_to input is imgur, the screenshots will be uploaded to Imgur. Keep in mind that the uploaded screenshots will be public and anyone can see them. Imgur also has a rate limit on how many images can be uploaded per hour; refer to Imgur's Rate Limits Docs for more details. This option is suitable for small open source repositories.

Please refer to Imgur's terms of service here

GitHub Branch (Default)

If the value of the upload_to input is github_branch, the screenshots will be pushed to a GitHub branch created by the action on your repository. The screenshots in the comments will reference the images pushed to this branch.

This is suitable for open source and private repositories.

If you want to add/use a different image upload service, feel free to create a new issue/pull request.

Examples

You can find some example use cases of this action here: Example Projects

Here are some comments created by this action on the Example Project:

[Screenshots: example comments created by the action on a pull request in the saadmk11/django-demo example project]

License

The code in this project is released under the GNU GENERAL PUBLIC LICENSE Version 3.
