CreamySoup - a helper script for automated SourceMod plugin update management.

Overview


CreamySoup/"Creamy SourceMod Updater" (or just soup for short), a helper script for automated SourceMod plugin updates management.

This project started as a custom utility for the Creamy Neotokyo servers (hence the name), but open sourcing and generalising it for any kind of SRCDS/SourceMod server seemed like a good idea, in case it's helpful for someone else too.


FAQ

What it be?

soup is a Python 3 script, a SRCDS SourceMod plugin update helper, intended to be invoked periodically by an external cronjob-like automation system.

It parses soup recipes, remote lists of resources to be kept up-to-date, compares those resources' contents to the target machine's local files, and re-downloads & re-compiles them if they differ. This automatically keeps such resources in sync with their remote repository. For a SourceMod plugin, this means any new updates get automatically applied upon the next mapchange after the completion of a soup update cycle.
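
To illustrate the core idea, here is a minimal sketch of the compare-and-update check (a hypothetical helper assuming a simple content-hash comparison; the real soup.py logic differs in its details):

    import hashlib
    import urllib.request
    from pathlib import Path

    def needs_update(local_path, remote_url):
        """Return True if the local file differs from the remote source."""
        # Illustrative only: compare content hashes of the remote resource
        # and the local copy; a missing local file always needs an update.
        with urllib.request.urlopen(remote_url) as response:
            remote_hash = hashlib.sha256(response.read()).hexdigest()
        local_file = Path(local_path)
        if not local_file.exists():
            return True
        return hashlib.sha256(local_file.read_bytes()).hexdigest() != remote_hash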

The purpose of soup is to reduce SRCDS sysop workload by making SourceMod plugin updates more automated, while also providing some granularity in terms of which plugins get updated when, with the introduction of maintained/curated recipes. For example, you can have some trusted recipes auto-update their target plugins without any admin intervention, but choose to manually update more fragile or experimental plugins as required (or not at all).

Which recipes to use?

You should always use the default self-updater recipe to keep the soup script itself updated.

If you are operating a Neotokyo SRCDS, this project offers some recommended recipes in the Neotokyo recipe repository. This resource is still a work in progress; more curated lists are to be added later!

You can also host your own custom recipes as you like for any SRCDS+SourceMod server setup.

A word of warning

While automation is nice, a malicious actor could use this updater to execute arbitrary code on the target machine. Be sure to only use updater source lists ("recipes") that you trust 100%, or maintain your own fork of such resources where you can review and control the updates.

Installation

It is recommended to install the Python dependencies with pip, using the requirements.txt file.

You should also consider using a virtual environment to isolate any Python dependencies from the rest of the system (although if you go this route, any cron job or similar automation should also run in that venv to have access to those deps).
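
For example, a typical venv setup might look like this (the venv path is illustrative):

    python3 -m venv venv
    . venv/bin/activate
    pip install -r requirements.txt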

Other requirements

  • Python 3

Config

Configuration can be edited in the config.yml file that lives in the same directory as the Python script itself. Please see the comments within the config file for more information on the options.
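
As a rough sketch only (the key names below are assumptions for illustration; the comments in the bundled config.yml are authoritative), a config might look something like this:

    # Illustrative sketch; see the real config.yml for authoritative options.
    game: NeotokyoSource    # the target game (assumed key name)
    recipes:                # 0 or more recipe URLs to follow
      - https://example.com/my_recipe.json    # hypothetical recipe URL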

Recipes

The most powerful config option is recipes, which is a list of 0 or more URLs pointing to soup.py "recipes".

A recipe is defined as a valid JSON document using the following structure:

}, <...> ], <...> } ">
{
  "section": [
    {
      "key": "value",
      <...>
    },
    <...>
  ],
  <...>
}

where

<...>

indicates 0 or more additional repeated elements of the same type as above.

Note that trailing commas are not allowed in the JSON syntax – it's a good idea to validate the file before pushing any recipe updates online.
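
For example, a recipe can be sanity-checked before publishing with a few lines of standard-library Python (a minimal sketch; pass the recipe filename as the first argument):

    import json
    import sys

    # Exit non-zero with a parse error if the recipe file is not valid JSON.
    try:
        with open(sys.argv[1], encoding="utf-8") as recipe_file:
            recipe = json.load(recipe_file)
    except json.JSONDecodeError as err:
        sys.exit(f"Invalid recipe JSON: {err}")
    print(f"OK: {len(recipe)} recipe section(s) found")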

Recipe sections

There are three valid recipe sections: updater, includes, and plugins. Examples follow:

  • updater – A self-updater section for the soup.py script contents. At most one section of this kind should exist across all of the recipes being used.
	"updater": [
		{
			"version": "1.0.0",
			"url": "https://raw.githubusercontent.com/CreamySoup/soup/main/soup.py"
		}
	]
  • includes – SourceMod include files that are required by some of the plugins in the recipes' plugins section. Required file extension: .inc
	"includes": [
		{
			"name": "neotokyo",
			"about": "sourcemod-nt-include - The de facto NT standard include.",
			"source_url": "https://raw.githubusercontent.com/CreamySoup/sourcemod-nt-include/master/scripting/include/neotokyo.inc"
		}
	]
  • plugins – SourceMod plugins that are to be kept up to date with their remote source code repositories. Required file extension: .sp
	"plugins": [
		{
			"name": "nt_srs_limiter",
			"about": "SRS rof limiter timed from time of shot, inspired by Rain's nt_quickswitchlimiter.",
			"source_url": "https://raw.githubusercontent.com/CreamySoup/nt-srs-limiter/master/scripting/nt_srs_limiter.sp"
		}
	]

For full examples of valid recipes, see the self-updater in this repo, and the Neotokyo recipe repository. By default, this repo is configured for the game "NeotokyoSource", and to use these Neotokyo default recipes.
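
Putting it all together, a complete recipe that combines the three example sections above would look like this:

    {
      "updater": [
        {
          "version": "1.0.0",
          "url": "https://raw.githubusercontent.com/CreamySoup/soup/main/soup.py"
        }
      ],
      "includes": [
        {
          "name": "neotokyo",
          "about": "sourcemod-nt-include - The de facto NT standard include.",
          "source_url": "https://raw.githubusercontent.com/CreamySoup/sourcemod-nt-include/master/scripting/include/neotokyo.inc"
        }
      ],
      "plugins": [
        {
          "name": "nt_srs_limiter",
          "about": "SRS rof limiter timed from time of shot, inspired by Rain's nt_quickswitchlimiter.",
          "source_url": "https://raw.githubusercontent.com/CreamySoup/nt-srs-limiter/master/scripting/nt_srs_limiter.sp"
        }
      ]
    }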

Usage

The script can be run manually with python soup.py, but it is recommended to automate it as a cron job or similar.
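
For example, a crontab entry that runs soup once an hour could look like this (the installation path is hypothetical):

    0 * * * * cd /opt/soup && python3 soup.py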

For developers

The soup.py Python script should be PEP 8 compliant (tested using pycodestyle).
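
Compliance can be checked locally with, for example, pycodestyle soup.py.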

You might also like...

A webdriver-based script for reserving Tsinghua badminton courts.

AutoReserve A webdriver-based script for reserving badminton courts. Usage: download the chromedriver matching the current Chrome version, install selenium (pip install selenium), and edit the session/payment info dat

A web crawler script that crawls the target website and lists its links.

Script for scraping user data like "id,username,fullname,followers,tweets .. etc" via Twitter's search engine.

TwitterScraper Script for scraping user data like "id,username,fullname,followers,tweets .. etc" via Twitter's search engine. Screenshot Data Users Only

This is a script that scrapes the longitude and latitude on food.grab.com.

grab This is a script that scrapes the longitude and latitude for any restaurant in Manila on food.grab.com; the location can be adjusted. Search Result p

Free-Game-Scraper is a useful script that allows you to track down free games and DLCs on many platforms.

Game Scraper Free-Game-Scraper is a useful script that allows you to track down free games and DLCs on many platforms. Join the discord About The Proj

Screenhook is a script that captures an image of a web page and sends it to a Discord webhook.

screenshot from the web for discord webhooks screenhook is a script that captures an image of a web page and sends it to a discord webhook.

Python script to check if there are any differences in the responses of an application when the request comes from a search engine's crawler.

crawlersuseragents This Python script can be used to check if there are any differences in the responses of an application when the request comes from a se

Scraping script for stats on the COVID-19 pandemic status in Chiba prefecture, Japan

About: Converts Chiba Prefecture's detailed per-region infection statistics (an Excel file) to CSV, and outputs daily per-region infection totals. Requirements: a POSIX-compatible shell, e.g. GNU Bash (1), curl (1), python >= 3.8, pandas >= 1.1.

A simple python script to fetch the latest covid info

covid-tracker-script A simple python script to fetch the latest covid info. How it works: First, get the current date in MM-DD-YYYY format. Check if the

Comments
  • Bump pipenv from 2021.11.23 to 2022.1.8

    Bumps pipenv from 2021.11.23 to 2022.1.8.

    Changelog

    Sourced from pipenv's changelog.

    2022.1.8 (2022-01-08)

    Bug Fixes

    • Remove the extra parentheses around the venv prompt. (#4877)
    • Fix a bug where installation fails when an extra index URL is given. (#4881)
    • Fix a regression where lockfiles would only include the hashes for releases for the platform generating the lockfile. (#4885)
    • Fix the index parsing to reject an illegal requirements.txt. (#4899)
    Commits
    • d378b9f Release v2022.1.8
    • 439782a Merge pull request from GHSA-qc9x-gjcv-465w
    • 1679098 fix TLS validation for requirements.txt
    • 9cb42e1 Merge pull request #4910 from jfly/update-run-tests-instructions
    • 08a7fcf Oops, set the CI environment variable even earlier.
    • f42fcaa Misc doc updates (mostly around running tests)
    • c8f34dd Merge pull request #4908 from jfly/issue-4885-custom-indices-lacking-hashes
    • 34652df Use a PackageFinder with ignore_compatibility when collecting hashes
    • b0ebaf0 Merge pull request #4907 from milo-minderbinder/bugfix/requirements-file-options
    • d535301 disallow abbreviated forms of full option names
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Improve self-updater

    Deprecate the "updater" recipe section in favour of using the GitHub Releases API. Fix a recursion bug in the recipe iteration logic that could cause multiple traversals of the same node.

    bug enhancement 
    opened by Rainyan 0
  • Fix updating Python requirements upon soup self-update

    Run the pip requirements installation command when self-updating so that script dependencies also stay in sync.

    This is a breaking change from 1.2.x, since:

    • 1.2 doesn't have the ability to self-update requirements.txt
    • the self-update recipe syntax has changed
    enhancement 
    opened by Rainyan 0
  • Generalise recipe inputs

    Currently we support source code (.sp) and include (.inc) files specifically, but there are also other kinds of SM dependencies, like localization/translations, configs, gamedata, etc. Ideally, this system should be completely agnostic of the kinds of files being updated. This would require some refactoring of the current concept of a "recipe".

    enhancement 
    opened by Rainyan 0
Releases (2.1.0)
  • 2.1.0 (Nov 24, 2022)

    What's Changed

    • Fix selfupdate on major updates by @Rainyan in https://github.com/CreamySoup/soup/pull/12

    Full Changelog: https://github.com/CreamySoup/soup/compare/2.0.0...2.1.0

  • 2.0.0 (Nov 24, 2022)

  • 1.6.3 (Jan 13, 2022)

    What's Changed

    • Bump pipenv from 2021.11.23 to 2022.1.8 by @dependabot in https://github.com/CreamySoup/soup/pull/5

    New Contributors

    • @dependabot made their first contribution in https://github.com/CreamySoup/soup/pull/5

    Full Changelog: https://github.com/CreamySoup/soup/compare/1.6.2...1.6.3

  • 1.6.2 (Dec 25, 2021)

  • 1.6.1 (Dec 25, 2021)

  • 1.6.0 (Dec 25, 2021)

  • 1.5.1 (Nov 29, 2021)

  • 1.4.5 (Nov 29, 2021)

    What's Changed

    Changes from 1.4.0

    • Fix some bugs that broke the new updater

    Changes from 1.3.x

    • Deprecate the updater recipe section in favour of using GitHub Releases API.
    • Fix recursion bug in recipe iteration logic that could cause multiple traversal of the same node.
    • Trigger soup self-restart upon receiving self-update

    This is a breaking change from 1.3.x, since:

    • self-update logic has been changed
  • 1.3.0 (Nov 29, 2021)

    What's Changed

    • Fix updating Python requirements upon soup self-update:

    Run the pip requirements installation command when self-updating so that script dependencies also stay in sync.

    This is a breaking change from 1.2.x, since:

    • 1.2.x doesn't have the ability to self-update requirements.txt
    • the self-update recipe syntax has changed
  • 1.2.1 (Nov 26, 2021)

  • 1.2.0 (Nov 16, 2021)

  • 1.1.0 (Nov 16, 2021)

    What's Changed

    • Use StrictYAML for config by @Rainyan in https://github.com/CreamySoup/soup/pull/2

    Full Changelog: https://github.com/CreamySoup/soup/compare/1.0.0...1.1.0

  • 1.0.0 (Nov 16, 2021)

Ebay Webscraper for Getting Average Product Price

Ebay-Webscraper-for-Getting-Average-Product-Price The code in this repo is used to determine the average price of an item on Ebay given a valid search

17 Jan 05, 2023
Scrapes Every Email Address of Every Society in Every University

society-email-scrape Site Live at https://kcsoc.github.io/society-email-scrape/ How to automatically generate new data Go to unis.yml Add your uni Cre

Krishna Consciousness Society 18 Dec 14, 2022
Google Maps crawler using Selenium

Google Maps Crawler using Selenium Built as part of the Antifragile Dev Project Selenium crawler that browses Google Maps as a regular user and stores

Guilherme Latrova 46 Dec 16, 2022
Web scraping tool written in python3, using regex, to get CVEs, Source and URLs.

searchcve Web scraping tool written in python3, using regex, to get CVEs, Source and URLs. Generates a CSV file in the current directory. Uses the NI

32 Oct 10, 2022
A list of Python Bots used to extract data from several websites

A list of Python Bots used to extract data from several websites. Data extraction is for products on e-commerce (ecommerce) websites. Data fetched i

Sahil Ladhani 1 Jan 14, 2022
An automated, headless YouTube Watcher and Scraper

Searches YouTube, queries recommended videos and watches them. All fully automated and anonymised through the Tor network. The project consists of two independently usable components, the YouTube aut

44 Oct 18, 2022
Rottentomatoes, Goodreads and IMDB sites crawler. Semantic Web final project.

Crawler Rottentomatoes, Goodreads and IMDB sites crawler. Crawler written by beautifulsoup, selenium and lxml to gather books and films information an

Faeze Ghorbanpour 1 Dec 30, 2021
A Pixiv web crawler module

Pixiv-spider A Pixiv spider module. WARNING: it's an unfinished work; browse the code carefully before using it. Features 0004 - Readme.md updated, co

Uzuki 1 Nov 14, 2021
Google Developer Profile Badge Scraper

Google Developer Profile Badge Scraper It is a Google Developer Profile Web Scraper which scrapes for specific badges in a user's Google Developer Pro

Hemant Sachdeva 2 Feb 22, 2022
Minimal set of tools to conduct stealthy scraping.

Stealthy Scraping Tools Do not use puppeteer and playwright for scraping. Explanation. We only use the CDP to obtain the page source and to get the ab

Nikolai Tschacher 88 Jan 04, 2023
Libextract: extract data from websites

Libextract is a statistics-enabled data extraction library that works on HTML and XML documents and written in Python

499 Dec 09, 2022
tweet random sand cat pictures

sandcatbot setup pip3 install --user -r requirements.txt cp sandcatbot.example.conf sandcatbot.conf vim sandcatbot.conf running the first parameter i

jess 8 Aug 07, 2022
Signature and CAPTCHA cracking related to web crawlers

cracking4crawling Signature and CAPTCHA cracking related to web crawlers. Scripts so far: Xiaohongshu app API signature (shield) (2020.12.02), Xiaohongshu slider (Shumei) CAPTCHA cracking (2020.12.02), Hainan Airlines app API signature (hnairSign) (2020.12.05). Note: the scripts are organised by target website/app

XNFA 90 Feb 09, 2021
for those who dont want to pay $10/month for high school game footage with ads

nfhs-scraper Disclaimer: I am in no way responsible for what you choose to do with this script and guide. I do not endorse avoiding paywalls or any il

Conrad Crawford 5 Apr 12, 2022
Use Flask API to wrap Facebook data. Grab the wrapper of Facebook public pages without an API key.

Facebook Scraper Use Flask API to wrap Facebook data. Grab the wrapper of Facebook public pages without an API key. (Currently working 2021) Setup Befo

Encore Shao 2 Dec 27, 2021
A scraper written in Python

Scrapper-en-python Scraping data means retrieving data in order to process or analyse it. In Python, there are two main ways to scrape

Lun4rIum 1 Dec 05, 2021
Create a crawler to get new products with the maximum discount on the banimode website

crawler-banimode Create a crawler and get new products with the maximum discount on the banimode website. This small project is for learning and working with the Selenium tool.

nourollah rezaei 2 Feb 17, 2022
Simple tool to scrape and download cross country ski timings and results from live.skidor.com

LiveSkidorDownload Simple tool to scrape and download cross country ski timings and results from live.skidor.com Usage: Put the python file in a dedic

0 Jan 07, 2022
Automatically complete daily body temperature reporting (GitHub Actions)

Temperature Reporting Helper. Overview: automatically completes the body temperature report every day at 10:30 GMT+8. To change the scheduled run time, edit the schedule property in .github/workflows/SduHealthReport.yml. If there is an abnormality on a given day, please fill in the report manually via the mini-program/PC client!

Teng Zhang 23 Sep 15, 2022
Docker containerized Python Flask API that uses selenium to scrape and interact with websites

Christian Gracia 0 Jan 22, 2022