Automated Daily Body Temperature Reporting (GitHub Actions)

Overview

Temperature Report Helper

Introduction

The temperature report is submitted automatically every day at 10:30 (GMT+8). To change the scheduled run time, edit the schedule attribute in .github/workflows/SduHealthReport.yml.
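For reference, a schedule entry for 10:30 GMT+8 (02:30 UTC) would look roughly like the sketch below; the exact structure of the existing workflow file may differ slightly:

on:
  schedule:
    # GitHub Actions cron expressions are evaluated in UTC; 02:30 UTC = 10:30 GMT+8
    - cron: '30 2 * * *'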

If there is anything abnormal to report on a given day, fill in the form manually via the mini-program or the PC portal!

GitHub Actions Setup Steps

1. Fork this project

Fork this project: zhangt2333/actions-SduHealthReport (a Star is of course even better)

2. Prepare the required parameters

{
    # fill these in:
    'username': 'fill-it',  # student ID
    'password': 'fill-it',  # password
    'ZXSJ': 'fill-it',  # mobile phone number
    'XM': 'fill-it',  # name
    'XSXB': 'fill-it',  # gender, fill in '男' or '女'
    'NL': 'fill-it',  # age
    'FDYXMX': 'fill-it',  # counselor's name
    'SZDW': 'fill-it',  # full name of your college, e.g. 计算机科学与技术学院
    'ZYMC': 'fill-it',  # full name of your major, e.g. 计算机科学与技术(菁英班)
    'JJLXRXM': 'fill-it',  # parent's name
    'JJLXRDH': 'fill-it',  # parent's mobile phone number
    'JJLXRYBRGX': 'fill-it',  # relationship to the parent, e.g. 父子 (father and son)
    'sheng': 'fill-it',  # province, e.g. 山东省
    'shi': 'fill-it',  # city, e.g. 青岛市
    'quxian': 'fill-it',  # district/county, e.g. 即墨区
    'DQJZDZ': 'fill-it',  # full address, e.g. 山东省青岛市即墨区鳌山卫街道滨海路72号
    'EXTEST': '否No',  # whether you have had a nucleic acid or serum antibody test, fill in '否No' or '是Yes'

    # choose one and comment out the others:
    'DQSFJJIA': '在家At home',
    # 'DQSFJJIA': '在外Away from home',

    # choose one and comment out the others:
    'LXZT': '非学校所在城市Not in the city of the university',
    # 'LXZT': '学校所在城市In the city of the university',
    # 'LXZT': '境外Abroad',

    'none': 'none'
}
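Before adding this block as a Secret, you can optionally sanity-check it on your own machine. The snippet below is only an illustrative sketch and is not part of this repository; the file name data.txt and the use of ast.literal_eval are assumptions:

# check_data.py -- hypothetical local check, not part of this repository
import ast

# data.txt is assumed to contain the filled-in block from step 2
with open('data.txt', encoding='utf-8') as f:
    data = ast.literal_eval(f.read())  # parses the dict literal; '#' comments are ignored

# list any fields still left as the 'fill-it' placeholder
unfilled = [key for key, value in data.items() if value == 'fill-it']
print('Still unfilled:', ', '.join(unfilled) if unfilled else 'none')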

3. Enable GitHub Actions

Open the Actions tab of your forked repository and enable workflows.

4. Add the parameters to Secrets

Add the filled-in parameters to the repository's Secrets: set the name to DATA and the value to the multi-line string from step 2.
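As an alternative to the web UI, the Secret can also be created with the GitHub CLI, assuming gh is installed and authenticated and the filled-in block from step 2 is saved in a local file (here hypothetically named data.txt):

# store the multi-line value from data.txt as the repository secret DATA
gh secret set DATA < data.txt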


Reporting for Multiple People

  1. Following steps 2 and 4 of the GitHub Actions Setup Steps, add a new Secret, say named DATA2.

  2. Duplicate the last line of .github/workflows/SduHealthReport.yml, python health_report_helper/main.py "${{ secrets.DATA }}", and change DATA in secrets.DATA to the name of the new Secret.

  3. The modified file should look like the following (assuming the new key is DATA2):

  # .....
  - name: Run Spider
    run: |
      python health_report_helper/main.py "${{ secrets.DATA }}"
      python health_report_helper/main.py "${{ secrets.DATA2 }}"