Overview

s3-credentials

A tool for creating credentials for accessing S3 buckets

For project background, see s3-credentials: a tool for creating credentials for S3 buckets on my blog.

⚠️ Warning

I am not an AWS security expert. You should review how this tool works carefully before using it against your own AWS account.

If you are an AWS security expert I would love to get your feedback!

Installation

Install this tool using pip:

$ pip install s3-credentials

Configuration

This tool uses boto3 under the hood which supports a number of different ways of providing your AWS credentials. If you have an existing ~/.aws/config or ~/.aws/credentials file the tool will use that - otherwise you can set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables before calling this tool.
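
For example, you could set the environment variables before running a command - the values here are placeholders:

export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
s3-credentials whoami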

Usage

The s3-credentials create command is the core feature of this tool. Pass it one or more S3 bucket names and it will create a new user with permission to access just those specific buckets, then create access credentials for that user and output them to your console.

Make sure to record the SecretAccessKey because it will only be displayed once and cannot be recreated later on.

In this example I create credentials for reading and writing files in my static.niche-museums.com S3 bucket:

% s3-credentials create static.niche-museums.com

Created user: s3.read-write.static.niche-museums.com with permissions boundary: arn:aws:iam::aws:policy/AmazonS3FullAccess
Attached policy s3.read-write.static.niche-museums.com to user s3.read-write.static.niche-museums.com
Created access key for user: s3.read-write.static.niche-museums.com
{
    "UserName": "s3.read-write.static.niche-museums.com",
    "AccessKeyId": "AKIAWXFXAIOZOYLZAEW5",
    "Status": "Active",
    "SecretAccessKey": "...",
    "CreateDate": "2021-11-03 01:38:24+00:00"
}

The command has several additional options:

  • --username TEXT: The username to use for the user that is created by the command (or the username of an existing user if you do not want to create a new one). If omitted a default such as s3.read-write.static.niche-museums.com will be used.
  • -c, --create-bucket: Create the buckets if they do not exist. Without this any missing buckets will be treated as an error.
  • --read-only: The user should only be allowed to read files from the bucket.
  • --write-only: The user should only be allowed to write files to the bucket, but not read them. This is useful for logging use-cases.
  • --bucket-region: If creating buckets, the region in which they should be created.
  • --silent: Don't output details of what is happening, just output the JSON for the created access credentials at the end.
  • --user-permissions-boundary: Custom permissions boundary to use for users created by this tool. This will default to restricting those users to only interacting with S3, taking the --read-only option into account. Use none to create users without any permissions boundary at all.
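
As a sketch of how these options combine - the bucket name and username here are hypothetical:

s3-credentials create my-log-bucket \
  --create-bucket \
  --bucket-region us-east-1 \
  --write-only \
  --username my-log-writer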

Here's the full sequence of events that take place when you run this command:

  1. Confirm that each of the specified buckets exists. If they do not and --create-bucket was passed, create them - otherwise exit with an error.
  2. If a username was not specified, determine a username using the s3.$permission.$buckets format.
  3. If a user with that username does not exist, create one with an S3 permissions boundary that respects the --read-only option - unless --user-permissions-boundary=none was passed (or a custom permissions boundary string).
  4. For each specified bucket, add an inline IAM policy to the user that gives them permission to either read-only, write-only or read-write against that bucket.
  5. Create a new access key for that user and output the key and its secret to the console.
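
To preview this sequence without modifying your AWS account, the --dry-run option (added in 0.6, see the release notes below) outputs a summary of the changes that would be made:

s3-credentials create my-bucket --dry-run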

Other commands

whoami

To see which user you are authenticated as:

s3-credentials whoami

This will output JSON representing the currently authenticated user.
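
Since whoami calls sts.GetCallerIdentity under the hood (see the 0.6 release notes below), the output should look roughly like this - the values shown are placeholders:

{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-username"
}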

list-users

To see a list of all users that exist for your AWS account:

s3-credentials list-users

This will return pretty-printed JSON objects by default.

Add --nl to collapse these to single lines as valid newline-delimited JSON.

Add --array to output a valid JSON array of objects instead.
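
For example, the two alternative output modes are invoked like this:

s3-credentials list-users --nl
s3-credentials list-users --array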

list-buckets

Shows a list of all buckets in your AWS account.

s3-credentials list-buckets

Accepts the same --nl and --array options as list-users.

list-user-policies

To see a list of inline policies belonging to users:

% s3-credentials list-user-policies s3.read-write.static.niche-museums.com

User: s3.read-write.static.niche-museums.com
PolicyName: s3.read-write.static.niche-museums.com
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::static.niche-museums.com"
            ]
        },
        {
            "Sid": "AllObjectActions",
            "Effect": "Allow",
            "Action": "s3:*Object",
            "Resource": [
                "arn:aws:s3:::static.niche-museums.com/*"
            ]
        }
    ]
}

You can pass any number of usernames here. If you don't specify a username the tool will loop through every user belonging to your account:

s3-credentials list-user-policies

delete-user

In trying out this tool it's possible you will create several different user accounts that you later decide to clean up.

Deleting AWS users is a little fiddly: you first need to delete their access keys, then their inline policies and finally the user themselves.

The s3-credentials delete-user command handles this for you:

% s3-credentials delete-user s3.read-write.simonw-test-bucket-10
User: s3.read-write.simonw-test-bucket-10
  Deleted policy: s3.read-write.simonw-test-bucket-10
  Deleted access key: AKIAWXFXAIOZK3GPEIWR
  Deleted user

You can pass it multiple usernames to delete multiple users at a time.
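
For example, to clean up two hypothetical test users in one go:

s3-credentials delete-user s3.read-write.test-bucket-1 s3.read-write.test-bucket-2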

Development

To contribute to this tool, first check out the code. Then create a new virtual environment:

cd s3-credentials
python -m venv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
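
There is also an integration test suite that exercises a real AWS account (added in 0.5, see the release notes below). If you have suitable credentials configured you can run it with:

pytest --integration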

Comments
  • `s3-credentials create` command

    This is the command which creates a user and returns credentials for a specified bucket, optionally also creating the bucket.

    See initial design notes in #1.

    enhancement 
    opened by simonw 22
  • Standard default output should be a valid JSON array

    I just spotted list-buckets has the same not-quite-newline-delimited JSON output format, which is a bad default. I should fix that too.

    Originally posted by @simonw in https://github.com/simonw/s3-credentials/issues/28#issuecomment-1014838721

    enhancement 
    opened by simonw 15
  • Research creating expiring credentials using `sts.assume_role()`

    The initial reason for creating this tool was that I wanted to be able to create long-lived (never expiring) tokens for the kinds of use-cases described in this post: https://simonwillison.net/2021/Nov/3/s3-credentials/

    Expiring credentials are fantastic for all sorts of other use-cases. It would be great if this tool could optionally create those instead of creating long-lived credentials.

    This would mean the tool didn't have to create users at all (when used in that mode) - it could create a role and then create temporary access credentials for that role using sts.assume_role(): https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html#STS.Client.assume_role
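
    A minimal sketch of the underlying call, here expressed with the AWS CLI and a hypothetical role ARN:

    aws sts assume-role \
      --role-arn arn:aws:iam::123456789012:role/my-s3-role \
      --role-session-name temporary-s3-session \
      --duration-seconds 3600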

    enhancement research 
    opened by simonw 15
  • `s3-credentials put-objects` command

    It's frustrating when using s3-credentials put-object that you have to specify the key name each time, rather than deriving that from the filename:

    s3-credentials put-object simonwillison-cors-allowed-public \
      click_default_group-1.2.2-py3-none-any.whl \
      /tmp/click-default-group/dist/click_default_group-1.2.2-py3-none-any.whl
    

    One way to fix this would be with a s3-credentials put-objects which works like this:

    s3-credentials put-objects simonwillison-cors-allowed-public /tmp/click-default-group/dist/click_default_group-1.2.2-py3-none-any.whl
    

    It could accept multiple files (hence the plural name) and could also accept directories and recursively upload their contents.

    enhancement 
    opened by simonw 13
  • Make it easier to add extra policy statements

    The current --policy option lets you set a custom policy, but leaves it to you to define one.

    I find myself wanting to mix in the following to the policy that I use, for s3-ocr:

    https://docs.aws.amazon.com/textract/latest/dg/security_iam_id-based-policy-examples.html#security_iam_async-actions

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "textract:StartDocumentTextDetection",
                    "textract:StartDocumentAnalysis",
                    "textract:GetDocumentTextDetection",
                    "textract:GetDocumentAnalysis"
                ],
                "Resource": "*"
            }
        ]
    }
    

    Would be nice if there was a neat way to do this.
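
    The --statement option added in 0.12 (see the release notes below) covers this case. A hedged sketch, with the statement JSON abbreviated to a single action:

    s3-credentials create my-bucket \
      --statement '{"Effect": "Allow", "Action": ["textract:StartDocumentTextDetection"], "Resource": "*"}'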

    enhancement research 
    opened by simonw 10
  • Mechanism for running tests against a real AWS account

    The tests for this project currently run against mocks - which is good, because I don't like the idea of GitHub Actions tests hitting real APIs.

    But... this project is about building securely against AWS. As such, automated tests that genuinely exercise a live AWS account (and check that the resulting permissions behave as expected) would be incredibly valuable for growing my confidence that this tool works as advertised.

    These tests would need quite a high level of administrative access, because they need to be able to create users, roles etc.

    I don't like the idea of storing my own AWS administrator account credentials in a GitHub Actions secret though. I think I'll write these tests such that they can be run outside of GitHub Actions, maybe configured via environment variables that allow other project contributors to run tests against their own accounts.

    tests 
    opened by simonw 10
  • Stop using action wildcards and start explicitly listing permissions

    See https://github.com/simonw/s3-credentials/issues/11#issuecomment-959844042 for context.

    The read-write policy currently uses "Action": "s3:*Object" - and the read-only one uses "Action": "s3:GetObject*".

    This is pretty gross - surely explicitly listing the allowed actions is better practice?

    • [x] #23
    • [x] #24
    • [x] #25
    research 
    opened by simonw 10
  • Support configuring the bucket as a website

    It would be useful to have an opt-in option for saying "this bucket should be configured as a website" - because setting that up without a tool is quite fiddly.

    https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteAccessPermissionsReqd.html has the details:

    When you configure a bucket as a static website, if you want your website to be public, you can grant public read access. To make your bucket publicly readable, you must disable block public access settings for the bucket and write a bucket policy that grants public read access.

    See #20 for "block public access" setting, and #19 for bucket policies.

    enhancement 
    opened by simonw 9
  • Work-in-progress create command

    Refs #3. This is implemented... but it doesn't seem to work - when I copy and paste the credentials into Transmit it refuses to connect.

    • [x] Get it working
    • [x] Add tests
    • [x] Add documentation
    opened by simonw 8
  • Manually test --prefix against litestream.io

    Originally posted by @simonw in https://github.com/simonw/s3-credentials/pull/39#issuecomment-1014857276

    Splitting this into a separate issue mainly so I can clearly document how to use Litestream in the comments here.

    Goal is to confirm that S3 credentials created using s3-credentials create ... --prefix litestream-test/ can be used with Litestream to back up a SQLite database to that path within the bucket.

    research tests 
    opened by simonw 7
  • Apply jdub policy suggestions

    https://github.com/simonw/s3-credentials/blob/main/s3_credentials/policies.py

    My suggestions:

    • specify individual actions explicitly (no wildcards)
    • separate permissions by resource (Buckets vs. Objects)
    • Sid is unnecessary

    Your read/write policy is good, but instead of *Object, list GetObject and PutObject.

    Your read-only policy would be better written like your read/write policy, one section for the bucket permission (ListBucket), one for the object permission (which should be GetObject, no wildcard).

    Your write-only policy is great as is.

    You may want to add additional permissions to let clients set ACLs. But if it's all simple object-by-object stuff, these very simple policies are great.

    Originally posted by @jdub in https://github.com/simonw/s3-credentials/issues/7#issuecomment-958651592

    enhancement research 
    opened by simonw 7
  • Add s3:PutObjectAcl to write policies

    This came up here:

    • https://github.com/simonw/public-notes/issues/9#issuecomment-1328567164

    It turns out django-storages needs a write policy that includes s3:PutObjectAcl: https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#iam-policy

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObjectAcl",
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:DeleteObject",
                    "s3:PutObjectAcl"
                ],
                "Principal": {
                    "AWS": "arn:aws:iam::example-AWS-account-ID:user/example-user-name"
                },
                "Resource": [
                    "arn:aws:s3:::example-bucket-name/*",
                    "arn:aws:s3:::example-bucket-name"
                ]
            }
        ]
    }
    

    Looks like I should add s3:GetObjectAcl to the default read policies too.

    enhancement 
    opened by simonw 3
  • Add the options to add tags to the created resources

    Hello! I am looking into using s3-credentials for my projects. I use tags to identify resources in different ways, such as how a resource was created or the project it belongs to. I was wondering if there is planned support for adding tags to the created resources, or if you would be open to a contribution in that area.

    enhancement 
    opened by threkk 3
  • `get-objects/put-objects` `--skip` and `--skip-hash` options

    Idea:

    • --skip to skip downloading a file if it already exists with the same filename
    • --skip-hash to skip downloading a file if it already exists AND the MD5 hash has not changed (more expensive, as it needs to calculate the local hash)

    Originally posted by @simonw in https://github.com/simonw/s3-credentials/issues/78#issuecomment-1248398247

    enhancement 
    opened by simonw 1
  • Provide a `--profile` option to allow AWS profile selection

    Users with multiple AWS accounts can declare named profiles to manage the different sets of credentials/regions. It would be ideal if s3-credentials accepted a --profile argument, just like the aws command line tool.
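
    If implemented, usage might look like this (hypothetical - the option does not exist yet):

    s3-credentials list-buckets --profile staging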

    enhancement 
    opened by nk9 3
  • Bad session token error masked if not creating a new bucket

    If a bad or expired session token is set and --create-bucket isn't specified, the create command fails with a misleading error claiming the bucket doesn't exist. If --create-bucket is specified, a traceback with more information is shown instead:

    export AWS_ACCESS_KEY_ID="..."
    export AWS_SECRET_ACCESS_KEY="..."
    export AWS_SESSION_TOKEN="EXPIRED_TOKEN" 
    
    $ s3-credentials create --username USERNAME BUCKET
    Error: Bucket does not exist: BUCKET - try --create-bucket to create it
    
    $ s3-credentials create --create-bucket --username USERNAME BUCKET
    Traceback (most recent call last):
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/bin/s3-credentials", line 8, in <module>
        sys.exit(cli())
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
        return self.main(*args, **kwargs)
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/click/core.py", line 1053, in main
        rv = self.invoke(ctx)
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/click/core.py", line 1659, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/click/core.py", line 754, in invoke
        return __callback(*args, **kwargs)
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/s3_credentials/cli.py", line 314, in create
        s3.create_bucket(Bucket=bucket, **kwargs)
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/botocore/client.py", line 391, in _api_call
        return self._make_api_call(operation_name, kwargs)
      File "/home/kimv/work/data_engineering/sbsa_archive/.venv/lib/python3.9/site-packages/botocore/client.py", line 719, in _make_api_call
        raise error_class(parsed_response, operation_name)
    botocore.exceptions.ClientError: An error occurred (ExpiredToken) when calling the CreateBucket operation: The provided token has expired.
    
    bug 
    opened by kimvanwyk 3
Releases
  • 0.14(Sep 15, 2022)

    • s3-credentials put-objects command (docs) for uploading more than one file or directory to an S3 bucket at a time. #68
    • s3-credentials get-objects command (docs) for downloading multiple files from an S3 bucket. #78
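
    A hedged usage sketch of put-objects, following the invocation shown in the put-objects issue above (bucket and file names hypothetical):

    s3-credentials put-objects my-bucket one.txt two.txt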
  • 0.13(Aug 12, 2022)

    • Documentation now lives on a dedicated documentation website: https://s3-credentials.readthedocs.io/ #71
    • s3-credentials create ... --website --create-bucket now creates an S3 bucket that is configured to act as a website, with index.html as the index page and error.html as the page used for any errors. #21
    • s3-credentials list-buckets --details now returns the bucket region and the URL to the website, if it is configured to act as a website. #77
    • Fixed a bug where list-bucket would return an error if the bucket (or specified --prefix) was empty. #76
  • 0.12.1(Aug 1, 2022)

    • Using the --policy or --statement options now implies --user-permissions-boundary=none. Previously it was easy to use these options to accidentally create credentials that did not work as expected since they would have a default permissions boundary that locked them down to only being able to access S3. #74
    • The s3-credentials.AmazonS3FullAccess role created by this tool in order to issue temporary credentials previously used the default MaxSessionDuration value of 3600, preventing it from creating credentials that could last more than an hour. This has been increased to 12 hours. See this issue comment for instructions on fixing your existing role if this bug is affecting your account. #75
  • 0.12(Jun 30, 2022)

    • New --statement JSON option for both the s3-credentials create and s3-credentials policy commands, allowing one or more additional policy statements (provided as JSON strings) to be added to the generated IAM policy. #72
  • 0.11(May 1, 2022)

  • 0.10(Jan 25, 2022)

  • 0.9(Jan 18, 2022)

    See Weeknotes: s3-credentials prefix and Datasette 0.60 for extra background on these new features.

    • New --prefix myprefix/ option to s3-credentials create, which configures the credentials to only allow access to keys within the S3 bucket that start with the provided prefix. #12
    • s3-credentials policy --prefix myprefix/ command for generating and outputting a JSON policy that is restricted to the specified prefix. You can see examples in the README.
    • New list-bucket command for listing the contents of a specified bucket. #28
    • The list-users, list-buckets and list-bucket commands all default to outputting an indented JSON array - previously they output indented JSON objects separated by newlines. The --nl option can be used to return newline-delimited single line JSON objects. The new --csv and --tsv options can be used to return CSV or TSV output. #48
  • 0.8(Dec 7, 2021)

    • s3-credentials create my-bucket --public option for creating public buckets, which allow anyone with knowledge of a filename to download that file. This works by attaching this public bucket policy to the bucket after it is created. #42
    • s3-credentials put-object now sets the Content-Type header on the uploaded object. The type is detected based on the filename, or can be specified using the new --content-type option. #43
    • s3-credentials policy my-bucket --public-bucket outputs the public bucket policy that would be attached to a bucket of that name. #44
  • 0.7(Nov 30, 2021)

    • s3-credentials policy command, to output the JSON policy that would be used directly to the terminal. #37
    • README now includes examples of the three different policies. #36
    • s3-credentials put-object and s3-credentials get-object commands for uploading and downloading files from an S3 bucket. #38
  • 0.6(Nov 18, 2021)

    • create --dry-run option outputs a summary of changes that would be made to an AWS account without applying them. #35
    • s3-credentials whoami command now uses sts.GetCallerIdentity, which means it works with any kind of access key. #33
  • 0.5(Nov 11, 2021)

    • New s3-credentials create --duration 20m option. This creates temporary credentials that only last for the specified time, by creating a role and using STS.AssumeRole() to retrieve credentials. #27
    • Redesigned read-only and read-write policies to no longer use wildcards and instead explicitly list allowed actions. #15
    • Commands now accept an optional --auth file/path.json option to specify a JSON or INI file containing the credentials to use. #29
    • New s3-credentials list-buckets --details option to include ACLs, website configuration and bucket policies. #22
    • New s3-credentials create --format ini option for outputting INI format instead of JSON. #17
    • Now uses botocore.stub in some of the tests - thanks, Niko Abeler. #16
    • Added integration tests, run using pytest --integration, which exercise the tool against an AWS account and delete any created resources afterwards. #30
    • Added a tips section to the README, including how to access CloudTrail.
  • 0.4(Nov 4, 2021)

    • New options for authenticating with AWS: --access-key, --secret-key, --session-token, --endpoint-url. #2
    • Various improvements to JSON policies - thanks, @jdub! #11
    • --policy filename.json option for specifying a custom JSON policy. #14
  • 0.3(Nov 3, 2021)

  • 0.2(Nov 3, 2021)

  • 0.1(Nov 3, 2021)

    • Initial release
    • s3-credentials create name-of-bucket creates a new user with read-write access only to the specified S3 bucket, creates an access key for that user and outputs it to the console. #3
    • s3-credentials list-users lists all of the users for the current AWS account. #4
    • s3-credentials list-user-policies lists inline policies for the specified users, or all users. #5
    • s3-credentials whoami shows information about the currently authenticated user.