Automatic off-campus check-in for 河南工业大学 (Henan University of Technology) via 完美校园 (Wanmei Campus)

Overview

HAUT-checkin

Automatic off-campus health check-in for 河南工业大学.

Because GitHub Actions scheduled runs can be noticeably delayed, running this project directly on Tencent Cloud Functions is recommended.

Features

  • Check-in for multiple users
  • Simple to use: each member only needs an account, a password, and a uid for WeChat push notifications
  • Automatically fetches each member's previous check-in data and reuses it for the new check-in
  • Pushes the check-in status to every member individually over WeChat
  • If a check-in fails because the 完美校园 server is busy, it is retried automatically until every member has checked in successfully (see the sketch below)
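The retry behavior amounts to keeping failed members on an error queue and looping until it is empty (the issue comments below refer to exactly this queue). A minimal sketch of the idea, assuming each member is a (phone, password, device_seed, uid) tuple and check_in() is an illustrative stand-in for the real check-in call in index.py:

    import time

    def retry_until_done(members, check_in):
        # error holds the members whose check-in has not succeeded yet
        error = list(members)
        while error:
            failed = []
            for phone, password, device_seed, uid in error:
                if not check_in(phone, password, device_seed, uid):
                    failed.append((phone, password, device_seed, uid))
            error = failed
            if error:
                time.sleep(30)  # brief pause before retrying while the server is busy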

Changelog

2021.1.31: Fixed the problem of 完美校园 requiring an SMS verification code for new devices, refactored the project, and dropped GitHub Actions.

Usage

Click here to download the source code archive to your machine.

Log in to the Tencent Cloud Serverless console.

Click 函数服务 (Function Service) -> 新建 (Create) to create a new cloud function.

Select custom function.

Any region will do.

Set the runtime to Python 3.6.

Set the submission method to uploading a local zip package.

Click upload and select the zip file you just downloaded.

Expand the advanced configuration submenu.

Set the execution timeout to 900 seconds.

Then click this link to get the QR code.

[QR code]

Every user must scan this QR code and follow the 新消息服务 official account, which is used to push the check-in status.

After following it, tap 我的 (Me) -> 我的UID (My UID) inside the official account to get each user's UID.

Fill in the members' information as environment variables in the following format.

device_seed can be any number.

Avoid giving multiple users the same device_seed.

Key     Value
user1   account password device_seed uid
user2   account password device_seed uid
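For reference, variables in this format can be read back inside the function with a few lines of Python. This is only an illustrative sketch of the convention above (the actual index.py may differ); it assumes the four fields are space-separated:

    import os

    def load_members(prefix='user'):
        # collect user1, user2, ... until a consecutive number is missing
        members = []
        i = 1
        while True:
            raw = os.environ.get(f'{prefix}{i}')
            if raw is None:
                break
            phone, password, device_seed, uid = raw.split()
            members.append((phone, password, device_seed, uid))
            i += 1
        return members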

Expand the trigger configuration submenu.

Set the trigger period to a custom trigger period.

Enter the cron expression 0 10 0 * * * *

This means check-in at 00:10 a.m.; the second field is the minute and the third field is the hour.

You can change the check-in time as you like.
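Tencent Cloud cron expressions have seven fields (second, minute, hour, day of month, month, day of week, year). A few illustrative variants:

    0 10 0 * * * *    # 00:10 every day (the default above)
    0 30 6 * * * *    # 06:30 every day
    0 0 22 * * * *    # 22:00 every day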

Click finish.

After the steps above are done:

Open the function code page.

Click SMS.py on the left.

Then click the green triangle in the upper right corner to run this script.

It verifies the virtual new device.

At the prompt, enter the username and then the device_seed you just put into the environment variables.

Then enter the verification code you receive.
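Conceptually the verification flow looks like the following. The two helpers are placeholders for the real 完美校园 API calls inside SMS.py, shown only to clarify the order of the prompts:

    def request_sms_code(username, device_seed):
        # placeholder: ask the server to text a code to this account's phone
        pass

    def verify_device(username, device_seed, code):
        # placeholder: submit the code so the virtual device becomes trusted
        pass

    username = input('username: ')
    device_seed = input('device_seed: ')  # must match the environment variable
    request_sms_code(username, device_seed)
    verify_device(username, device_seed, input('verification code: '))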

That completes the setup.

You can click Test at the bottom of the page to check that nothing is wrong.

To add members after deployment, just edit the function configuration and add more environment variables.

The first time after a successful deployment, please confirm at check-in time that the script runs normally; by default, check-in starts at 00:10 every day.

Note: this project assumes 河南工业大学 as the school; for other schools, modify the code yourself.

Comments
  • Bug in index.py

    Line 31 of index.py is wrong; it should be uid = error[i][3].

    device_seed is also never read back, so each retry keeps using the device seed of the last member from the previous loop:

                phone = error[i][0]
                password = error[i][1]
                uid = error[i][2]

    opened by tyu-t 1
  • Fix the uid handling in the error queue

    Line 31 of index.py is wrong; it should be uid = error[i][3].

    device_seed is also never read back, so each retry keeps using the device seed of the last member from the previous loop:

                phone = error[i][0]
                password = error[i][1]
                uid = error[i][2]

    It should be:

                phone = error[i][0]
                password = error[i][1]
                device_seed = error[i][2]
                uid = error[i][3]

    opened by tyu-t 0
  • New field information

    The fields have changed. For now I can't be bothered to fork this or open a PR myself, so here is some field information for anyone who ends up here:

    {
    ...,
    "updatainfo": [
        {
            "propertyname": "temperature",
            "value": get_updatainfo(last_check_json['updatainfos'], "temperature")
        },
        {
            "propertyname": "symptom",
            "value": get_updatainfo(last_check_json['updatainfos'], "symptom")
        },
        {
            "propertyname": "isFFHasSymptom",
            # "value": get_updatainfo(last_check_json['updatainfos'], "isFFHasSymptom") # this field can no longer be fetched
            "value": isFFHasSymptomDict[phone]
        },
        {
            "propertyname": "isContactFriendIn14",
            "value": "否"
        },
        # { # field defunct as of 2022.03.24
        #     "propertyname": "xinqing",
        #     # "value": "是,已接种二针剂型(灭活疫苗,科兴、国药等)满6个月"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "xinqing")
        # },
        # { # field defunct as of 2022.03.24
        #     "propertyname": "xndkrqzj",          # added in the 2021/12/8 update: vaccination date
        #     # "value": "2023-06-30"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "xndkrqzj")
        # },
        # { # field defunct as of 2022.03.24
        #     "propertyname": "zdyqdq0511",        # added in the 2021/12/8 update: vaccine manufacturer
        #     # "value": "科兴"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "zdyqdq0511")
        # }
        { # field added on 2022.03.24 (really a rename): "Did you take a nucleic acid test yesterday?"
            "propertyname": "xinqing",
            "value": "否"
        }
    ],
    ...
    }

    This was copied directly from Python code, so it is not valid JSON syntax. The isFFHasSymptomDict[phone] above refers to a dict defined elsewhere, something like this:

    isFFHasSymptomDict = {
        '18666666666': '接种部分剂次',            # partially vaccinated
        '15555555555': '完成接种,待接种加强针',    # fully vaccinated, booster pending
        '17666666666': '未接种或不能接种',         # not vaccinated or unable to be vaccinated
        '15777777777': '已接种加强针'             # booster received
    }
    
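    The get_updatainfo helper used above is presumably a small lookup that pulls a property's previously submitted value out of the last check-in record. A guessed sketch, not code from the repo:

        def get_updatainfo(updatainfos, propertyname):
            # return the value this property had in the previous check-in
            for item in updatainfos:
                if item['propertyname'] == propertyname:
                    return item['value']
            return ''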

    The field names are as baffling as ever :(

    opened by CHxCOOH 1
Releases (v0.1.0)