aiosql - Simple SQL in Python

Overview

SQL is code. Write it, version control it, comment it, and run it from files. Embedding your SQL in Python programs as strings makes it hard to reuse in SQL GUIs or CLI tools like psql. With aiosql you can organize your SQL statements in .sql files and load them into your Python application as methods to call, without losing the ability to use them as you would any other SQL file.

This project supports standard and asyncio-based drivers for SQLite and PostgreSQL out of the box (sqlite3, aiosqlite, psycopg2, asyncpg). Extensions to support other database drivers can be written by you! See: Database Driver Adapters
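
The driver adapter is selected by name when queries are loaded. A minimal sketch (queries.sql is a placeholder path, and the corresponding driver package must be installed):

import aiosql

# The second argument names the driver adapter; these four ship with aiosql.
queries = aiosql.from_path("queries.sql", "sqlite3")      # stdlib sqlite3
# queries = aiosql.from_path("queries.sql", "aiosqlite")  # async SQLite
# queries = aiosql.from_path("queries.sql", "psycopg2")   # PostgreSQL
# queries = aiosql.from_path("queries.sql", "asyncpg")    # async PostgreSQL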

Warning

This project supports asyncio-based drivers and requires Python 3.7 or newer.

Badges

https://github.com/nackjicholson/aiosql/actions/workflows/aiosql-package.yml/badge.svg?branch=master

Installation

pip install aiosql

Or if you use poetry:

poetry add aiosql

Usage

users.sql

-- name: get-all-users
-- Get all user records
select userid,
       username,
       firstname,
       lastname
  from users;


-- name: get-user-by-username^
-- Get user with the given username field.
select userid,
       username,
       firstname,
       lastname
  from users
 where username = :username;

You can use aiosql to load the queries in this file for use in your Python application:

import aiosql
import sqlite3

conn = sqlite3.connect("myapp.db")
queries = aiosql.from_path("users.sql", "sqlite3")

users = queries.get_all_users(conn)
# >>> [(1, "nackjicholson", "William", "Vaughn"), (2, "johndoe", "John", "Doe"), ...]

users = queries.get_user_by_username(conn, username="nackjicholson")
# >>> (1, "nackjicholson", "William", "Vaughn")

Writing SQL in a file and executing it from methods in Python!
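
The trailing ^ on get-user-by-username marks the query as returning a single row instead of a list. Other name suffixes change how a query runs, for example ! for an insert, update, or delete. A small sketch using from_str; the add-user! query is made up for illustration and assumes userid is auto-generated:

import aiosql
import sqlite3

sql = """-- name: add-user!
-- Add a user record.
insert into users (username, firstname, lastname)
values (:username, :firstname, :lastname);
"""

queries = aiosql.from_str(sql, "sqlite3")

conn = sqlite3.connect("myapp.db")
queries.add_user(conn, username="janedoe", firstname="Jane", lastname="Doe")
conn.commit()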

Async Usage

greetings.sql

-- name: get_all_greetings
-- Get all the greetings in the database
select greeting_id, greeting from greetings;

-- name: get_user_by_username^
-- Get a user from the database
select user_id,
       username,
       name
  from users
 where username = :username;

example.py

import asyncio
import aiosql
import aiosqlite


queries = aiosql.from_path("./greetings.sql", "aiosqlite")


async def main():
    # Parallel queries!!!
    async with aiosqlite.connect("greetings.db") as conn:
        greetings, user = await asyncio.gather(
            queries.get_all_greetings(conn),
            queries.get_user_by_username(conn, username="willvaughn")
        )
        # greetings = [(1, "Hi"), (2, "Aloha"), (3, "Hola")]
        # user = (1, "willvaughn", "William")

        for _, greeting in greetings:
            print(f"{greeting}, {user[2]}!")
        # Hi, William!
        # Aloha, William!
        # Hola, William!

asyncio.run(main())

This example has an imaginary SQLite database with greetings and users. It prints greetings in various languages to the user and showcases the basic feature of being able to load queries from a SQL file and call them by name in Python code. It also happens to run two SQL queries in parallel using aiosqlite and asyncio.
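
The same calling pattern applies to the asyncpg driver. A rough sketch, assuming a PostgreSQL database with equivalent greetings and users tables is reachable at the placeholder DSN:

import asyncio
import aiosql
import asyncpg

queries = aiosql.from_path("./greetings.sql", "asyncpg")


async def main():
    # The DSN below is a placeholder; point it at a real database.
    conn = await asyncpg.connect("postgresql://postgres@localhost/greetings")
    try:
        greetings = await queries.get_all_greetings(conn)
        user = await queries.get_user_by_username(conn, username="willvaughn")
        for _, greeting in greetings:
            print(f"{greeting}, {user[2]}!")
    finally:
        await conn.close()

asyncio.run(main())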

Why you might want to use this

  • You think SQL is pretty good, and writing SQL is an important part of your applications.
  • You don't want to write your SQL in strings intermixed with your python code.
  • You're not using an ORM like SQLAlchemy or Django, and you don't need to.
  • You want to be able to reuse your SQL in other contexts, such as loading it into psql or other database tools.

Why you might NOT want to use this

  • You're looking for an ORM.
  • You aren't comfortable writing SQL code.
  • You don't have anything in your application that requires complicated SQL beyond basic CRUD operations.
  • Dynamically loaded objects built at runtime really bother you.

Table of Contents

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   Getting Started
   Defining SQL Queries
   Advanced Topics
   Database Driver Adapters
   Contributing
   API

Comments
  • Single character parameters cause parsing issues

    Python 3.10, aiosql 3.4, asyncpg 0.25.0

    -- name: insert_into_symbol_arima_models*!
    INSERT INTO symbol_arima_models(symbol, "interval", p, d, q, "P", "D", "Q", s, k, n_exog) VALUES (:symbol, :interval, :p, :d, :q, :P, :D, :Q, :s, :k, :n_exog);
    
    sql = 'INSERT INTO symbol_arima_models(symbol, "interval", p, d, q, "P", "D", "Q", s, k, n_exog) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10$11og);'
    

    $10$11og ???
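
    One way to see the pitfall: rewriting :name placeholders into $1, $2, ... by plain string substitution is fragile when parameter names are single characters or prefixes of other names. A standalone sketch of a regex-based conversion, illustrative only and not aiosql's actual code:

    import re

    def to_numbered(sql):
        """Rewrite :name placeholders as $1, $2, ... in order of first use."""
        order = []

        def repl(match):
            name = match.group(1)
            if name not in order:
                order.append(name)
            return f"${order.index(name) + 1}"

        # The greedy \w+ consumes the whole name, so a short name like :s
        # can never clobber part of :symbol or :n_exog the way plain
        # substring replacement can.
        return re.sub(r":(\w+)", repl, sql), order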

    opened by singlecheeze 13
  • Could I drop support for python 3.6?

    #29 This came up, dataclasses aren't usable in this library until we support 3.7+

    Worse is the aioctxlib.py hack I wrote to get around not having contextlib.asynccontextmanager in < 3.7.

    I think this library being a target for users of asyncio and modern versions of python gives me a pretty good reason to not support 3.6 if it's becoming inconvenient. Asyncio itself went through major changes from 3.6 to 3.7 and people should update asap for that.

    opened by nackjicholson 13
  • Bug with % on PyFormatAdapter and PyMySQL

    Hello, some bug with %

    aiosql==5.0
    contextlib2==21.6.0
    PyMySQL==1.0.2
    

    queries.sql

    -- name: get_now
    SELECT NOW();
    
    
    -- name: get_now_date
    SELECT DATE_FORMAT(NOW(),'%Y-%m-%d %h:%i:%s');
    
    import os
    
    import aiosql
    import pymysql.cursors
    
    queries = aiosql.from_path('queries.sql', driver_adapter='pymysql')
    
    connection = pymysql.connect(
            host=os.environ.get('DB_HOST'),
            user=os.environ.get('DB_USER'),
            password=os.environ.get('DB_PASSWORD'),
            database=os.environ.get('DB_DATABASE'),
            cursorclass=pymysql.cursors.DictCursor
        )
    
    with connection:
        print(queries.get_now(connection))
        print(queries.get_now_date(connection))
    
    [{'NOW()': datetime.datetime(2022, 7, 28, 17, 44, 58)}]
    
      File "/opt/Project/env/lib/python3.10/site-packages/aiosql/queries.py", line 63, in fn
        return self.driver_adapter.select(
      File "/opt/Project/env/lib/python3.10/site-packages/aiosql/adapters/generic.py", line 23, in select
        cur.execute(sql, parameters)
      File "/opt/Project/env/lib/python3.10/site-packages/pymysql/cursors.py", line 146, in execute
        query = self.mogrify(query, args)
      File "/opt/Project/env/lib/python3.10/site-packages/pymysql/cursors.py", line 125, in mogrify
        query = query % self._escape_args(args, conn)
    TypeError: not enough arguments for format string
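
    PyMySQL uses pyformat (%-style) placeholders, so a bare % inside DATE_FORMAT gets treated as a format directive when the query is mogrified. A common workaround with pyformat drivers, sketched here with from_str and not an official aiosql recommendation, is to double the percent signs:

    import aiosql

    # %% is passed through as a literal % by pyformat drivers such as PyMySQL.
    queries = aiosql.from_str(
        "-- name: get_now_date\n"
        "SELECT DATE_FORMAT(NOW(),'%%Y-%%m-%%d %%h:%%i:%%s');\n",
        "pymysql",
    )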
    
    opened by romkazor 9
  • `ValueError: duplicate parameter name` when the same parameter name appears twice

    aiosql 3.4.0 cannot load SQL queries when the same parameter :name appears multiple times. It did work with aiosql 3.3.1.

    This can be reproduced with this example:

    # aiosql_test.py
    import aiosql
    
    sql_str = "-- name: get^\n" "select * from test where foo=:foo and foo=:foo"
    aiosql.from_str(sql_str, "aiosqlite")
    print("OK")
    

    Working with aiosql 3.3.1:

    $ pip install "aiosql<3.4.0"
    $ python3 aiosql_test.py
    OK
    

    Failing with aiosql 3.4.0:

    $ pip install "aiosql==3.4.0"
    $ python3 aiosql_test.py
    Traceback (most recent call last):
      File "aiosql_test.py", line 4, in <module>
        aiosql.from_str(sql_str, "aiosqlite")
      File "lib/python3.9/site-packages/aiosql/aiosql.py", line 84, in from_str
        query_data = query_loader.load_query_data_from_sql(sql)
      File "lib/python3.9/site-packages/aiosql/query_loader.py", line 117, in load_query_data_from_sql
        query_data.append(self._make_query_datum(query_sql_str, ns_parts))
      File "lib/python3.9/site-packages/aiosql/query_loader.py", line 27, in _make_query_datum
        signature = self._extract_signature(sql)
      File "lib/python3.9/site-packages/aiosql/query_loader.py", line 103, in _extract_signature
        return inspect.Signature(parameters=[self] + params) if params else None
      File "lib/python3.9/inspect.py", line 2826, in __init__
        raise ValueError(msg)
    ValueError: duplicate parameter name: 'foo'
    
    opened by vivienm 9
  • Generate a Signature from SQL Parameters

    This change adds another step to building a QueryFn which attempts to generate a function signature from the query parameters.

    This adds helpful introspection:

    >>> import inspect
    >>> import aiosql
    ... 
    ... sql_str = (
    ...     "-- name: get^\n"
    ...     "select * from test where foo=:foo and bar=:bar"
    ... )
    ... queries = aiosql.from_str(sql_str, "aiosqlite")
    >>> inspect.signature(queries.get)
    <Signature (*, foo, bar)>
    >>> help(queries.get)
    Help on method get in module aiosql.queries:
    
    async get(*, foo, bar) method of aiosql.queries.Queries instance
    

    Interactive REPLs are able to provide hints/tooltips based on the signature as well.

    opened by seandstewart 7
  • Add query filename info so that goto-definition works in editors

    Great project. The only thing that was missing for me currently was goto-definition.

    Editors can use fn.__code__.co_filename to implement goto-definition.

    With this change, goto-definition for a query function will go to that function's SQL file, in case the function was created with aiosql.from_path.
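
    A quick way to check the behavior from a REPL (a sketch assuming the users.sql example from the README and this change applied):

    import aiosql

    queries = aiosql.from_path("users.sql", "sqlite3")
    # With this change the code object reports the .sql file the query came from.
    print(queries.get_all_users.__code__.co_filename)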

    opened by abo-abo 6
  • Add SQLOperationType to QueryFn for improved introspection.

    I use this library in my day-to-day work and I'm looking to automate some additional bootstrapping I do for managing my query library. Having introspection into the operation type of each function would be extremely helpful!

    opened by seandstewart 6
  • Off-by-one on AsyncPGDriverAdapter.var_replacements

    aiosql version: 3.2.0, Python version: 3.8

    Description

    First of all, thanks for maintaining such an excellent library. I've been itching for a chance to use it for a while now and finally have the opportunity.

    I've come across an issue where if there are multiple child query clients with similar parameters, the AsyncPGDriverAdapter will not properly count them when processing the SQL file.

    The issue appears to be isolated to this block:

    https://github.com/nackjicholson/aiosql/blob/f2095151d6601eed49ae29e45e37e30e0f67ca0c/aiosql/adapters/asyncpg.py#L40-L46

    If there are multiple queries with the same name which share an argument with the same name, the count will not be incremented when that var name is replaced.

    There are two issues with this:

    1. All queries with the same name must implement matching arguments in exactly the same order. (Easily worked with)
    2. Subsequent queries' additional arguments will all be off-by-one. (Breaks this use-case)

    Proposed solution

    I think the simplest solution would be to namespace the keys in var_replacements with the query source as well as name. This would prevent argument collision from occurring at all.

    Current Workaround

    My current workaround is to load each query directory individually. This works for my use-case, but is still a bit of a gotcha.

    bug 
    opened by seandstewart 6
  • Templating Identifiers into SQL

    https://github.com/honza/anosql/issues/45 and https://github.com/honza/anosql/issues/47

    Are both asking for support for Identifiers in order to template other values into the SQL, like table names. I've never had need for this, and actually wasn't aware of it as a feature in psycopg2, but it's worth some thought to see if it's something we can do in this project.

    One potential problem in the way of doing this is that the other driver libraries like sqlite or asyncpg may not have safe features for doing this. I just really don't know since I've never had to do this kind of table name formatting.

    help wanted 
    opened by nackjicholson 6
  • Add support for writing from dataclasses.

    Currently, there is support for reading database rows into dataclasses which significantly reduces the amount of boilerplate required. But the inverse is not the case.

    e.g.

    @dataclass
    class Article:
      title: str
      body: str
      created: datetime
    
      def create(self, conn):
        return queries.insert_article(conn, title=self.title, body=self.body, created=self.created)
      
      def update(self, conn):
        return queries.update_article(conn, title=self.title, body=self.body, created=self.created)
    

    If the parameters supported dot notation, this could be simplified to

    -- name: update_article
    UPDATE article SET title=:article.title body=:article.body ...
    
    
    def update(self, conn):
      return queries.update_article(conn, article=self)
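
    Until something like dot notation exists, one workaround is to expand the dataclass into keyword arguments. A sketch reusing the Article idea from above; articles.sql and its update_article query are hypothetical:

    import dataclasses
    from datetime import datetime

    import aiosql

    queries = aiosql.from_path("articles.sql", "sqlite3")  # hypothetical file

    @dataclasses.dataclass
    class Article:
        title: str
        body: str
        created: datetime

        def update(self, conn):
            # asdict() expands the fields into the :title/:body/:created
            # parameters, so the names only have to line up with the SQL once.
            return queries.update_article(conn, **dataclasses.asdict(self))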
    
    opened by xlevus 6
  • Taking this library to its full potential isn't something I've been able to do. Anyone else interested?

    I still think this project is great, and I'm so pleased that there are so many people who use it. However, when I began maintaining anosql, and then forked this project, I was a full-time software engineer with the time and focus to spend driving improvements to it. In the last 2-3 years my personal and professional circumstances have changed a lot. I've only been able to make sporadic commits a few times a year. I think it's time to admit to myself that for this project to grow and evolve, I can't be the person with the keys.

    @seandstewart and @zx80 you two have been extremely helpful in keeping things moving somewhat. Are either of you in a position to drive this thing? I think we could probably keep it where it is for continuity, here on github. I would have to do a little bit of permissioning to make sure you have full access to change the CI stuff and to push new versions up to pypi. That's all possible to get done.

    I think there is still lots of exciting work here, and I'm sure any new maintainer would find additional avenues for bettering aiosql.

    Let me know your interest when you can.

    help wanted 
    opened by nackjicholson 5
  • Add support for multiline comments.

    I changed the _SQL_COMMENT regex in aiosql/query_loader.py to work on single and multiline /* */ style comments. I created a new test for it in tests/test_loading.py to show that it didn't break the previous behavior.

    opened by danofsteel32 2
  • Marshalling Data

    I've recently started playing with this library in a new project, and so far have found it very enjoyable to use :tada:

    I was searching for a way to be able to get dict output from the queries which led me to a bunch of similar questions around marshalling of data.

    • https://github.com/nackjicholson/aiosql/blob/7.0/aiosql/query_loader.py#L70
    • https://github.com/nackjicholson/aiosql/issues/33
    • https://github.com/nackjicholson/aiosql/issues/12
    • https://github.com/nackjicholson/aiosql/issues/77
    • https://github.com/nackjicholson/aiosql/issues/67

    After some thinking about it, I decided to smash out a proof-of-concept for how it might work. I'm opening up as an issue first as I suspect there might be some discussion needed before moving to any PR.

    Overview of changes

    • Adds a Marshal class that can be used to marshal data received from certain query types.
      • Included are some default/example marshallers: DictMarshal, NamedTupleMarshal, DataclassMarshal.
    • Removed record_classes

    Marshals can be provided in two locations:

    1. They can be passed as default_marshal in aiosql.from_str(..., default_marshal=Marshal), aiosql.from_path(..., default_marshal=Marshal).
    2. They can be passed into specific (supporting) queries q.some_func(conn, arg1=arg1, arg2=arg2, marshal=marshal). In this case they override the default_marshal.
    • Note: under the current implementation it's not possible to remove the default_marshal, as passing None to a query indicates that the default_marshal should be used. That said, it is possible to use the Marshal base class, which just passes through what it was provided.
    • Supported query types are select, select_one, insert_returning. Note: due to instability of APIs, insert_returning does not use the default_marshal.

    Sample Code

    From the tests:

    def run_parameterized_query_dict_marshal(conn, queries):
        actual = queries.users.get_by_lastname(conn, lastname="Doe", marshal=aiosql.marshallers.DictMarshal)
        expected = [
            {
                "userid": 3,
                "username": "janedoe",
                "firstname": "Jane",
                "lastname": "Doe",
            },
            {
                "userid": 2,
                "username": "johndoe",
                "firstname": "John",
                "lastname": "Doe",
            },
        ]
        assert actual == expected
    

    Reasoning / Considerations

    Summarising some of the discussions from the above sources.

    record_classes are undesirable from an implementation point of view, but are wanted by users.

    Allows removing of record_classes with a clear path to upgrade.

    This also prevents leaking python details into the SQL.

    This does make a breaking change, however to keep both features would be much harder to maintain.

    We want to avoid turning aiosql into an ORM, functionality should be kept simple. This allows any other tools to hook in with their own marshalling of data without needing to further modify aiosql. This "hook" style of library I think is much more powerful and useful to others than a library that needs to be wrapped.

    This is not the job of aiosql, the library should be wrapped.

    Even the various DB libraries allow some form of marshalling, mostly through the creation of DictCursors or similar. Being able to change the precise output format is a super common task.

    What about marshalling of data before calling the query?

    The Marshal classes could easily have a marshal_input(self, *args, **kwargs) -> tuple(args, kwargs) or something similar added. We'd likely want to be able to use different marshallers, either by adding different arguments (default_[input|output]_marshaller, etc.) or by providing a "meta" marshaller that can be given two marshallers, one for input and one for output, so that we don't have to change any function signatures. We'd have to modify aiosql.queries._make_sync_fn.

    (I included samples of these in my POC).
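
    As a purely hypothetical illustration of that shape (none of these names exist in aiosql today), an input marshaller might look like:

    import dataclasses

    class DataclassInputMarshal:
        """Hypothetical: expand dataclass arguments into plain query kwargs."""

        def marshal_input(self, *args, **kwargs):
            expanded = {}
            for key, value in kwargs.items():
                if dataclasses.is_dataclass(value):
                    expanded.update(dataclasses.asdict(value))
                else:
                    expanded[key] = value
            return args, expanded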

    Why marshal_one_row and marshal_many_rows

    Having two functions allows for implementation optimisations in the marshallers, whilst preventing typos when they are called.

    I did originally have one function but marshal_rows(column_names, [row])[0] is very gross to write for the single row cases.

    Keeping row[s] as a part of the name allows for disambiguation in the future if we add a function for marshalling the incoming data.

    Further work

    Before a PR can be accepted, more tests will need to be added, especially for handling the insert_returning marshalling. We'd potentially want to have some tests of the marshallers without being attached to any database.

    opened by nhairs 9
Releases (latest: 7.1)
  • 7.1(Nov 11, 2022)

  • 7.0(Oct 28, 2022)

    Features:

    • add MariaDB support
    • allow to change file extension when loading directories
    • switch mysql.connector dependency to mysql-connector-python because mysql-connector is obsolete
    • improved documentation

    Tests:

    • simplify github CI tests, no service needed
    • run from Python 3.7 to 3.11
    • extensive rework of tests to ignore missing modules, use marks…
    • add tests with docker images, needed for mariadb
    • depend on pytest 7.
  • 6.5(Oct 7, 2022)

    • improved documentation
    • add missing dependency on setuptools
    • improved Makefile
    • add untested mariadb support (will be tested and advertised in the next release)
    • some refactoring (for next release really)
  • 6.4(Sep 6, 2022)

    Version 6.4 brings:

    • improved CI checks with rstcheck
    • dynamic query functions point to the SQL query file
    • illegal query names such as 1st are filtered out
    • documentation has been improved
    • some code refactoring…
  • 6.3(Aug 29, 2022)

  • 6.2(Aug 8, 2022)

  • 6.1(Jul 31, 2022)

  • 6.0(Jul 29, 2022)

    The v6.0 release:

    • adds support for pygresql postgres driver.
    • works around pymysql and mysqldb driver issues.
    • adds a few more tests.
    • improves the documentation, including badges which look soooo cooool.
    • simplifies the development environment, CI configuration.
    • updates pyproject.toml.
    • does some cleanup.
  • 5.0(Jul 23, 2022)

    New in 5.0:

    • Add Postgres adapter for pg8000 driver.
    • Add register_adapter.
    • Use more recent CI image.
    • Add flake8 to CI.
    • Update documentation.
    • Update dependencies.
  • 4.0(Jul 10, 2022)

    This release:

    • add support for MySQL, on top of Postgres and SQLite
    • add support for more drivers: psycopg3, APSW
    • refactors the base code significantly to reduce the line count
    • CI covers Python 3.7 to 3.10
    • add coverage tests to about 100%, including error paths
    • updates dependencies and the doc accordingly
    • switch to 2-level version numbering
  • 3.4.1(Jan 31, 2022)

  • 3.4.0(Dec 24, 2021)

  • 3.3.0(Jul 23, 2021)

    • #62 Fixes an off-by-one bug that affected variable name translation for the asyncpg driver. Thank you @seandstewart for reporting the issue!
    • Fixes documentation css styling via .nojekyll plugin for sphinx
  • 3.2.1(Jul 19, 2021)

  • 3.2.0(Sep 27, 2020)

    • #35 Fix bug with SQL comments at top of files.
    • #45 Introduced QueryFn Protocol (by @dlax)
    • #46 execute_script returns strings. Technically a change in API, but I don't really think anyone was depending on this function returning None. We shall see if it causes anyone problems.
    • #47 Add a select_value operation (by @wagnerflo)

  • 3.1.3(Sep 26, 2020)

  • 3.1.2(Aug 11, 2020)

    Adding type checking and Protocols for database adapters. Refactor async and sync fn wrappers for better code reuse and performance.

    #28 #30 #32

  • 3.1.1(Aug 9, 2020)

  • 3.1.0(Jul 8, 2020)

  • 2.0.2(Dec 8, 2018)

  • v2.0.1(Dec 8, 2018)

Owner
Will Vaughn
https://git.sr.ht/~willvaughn/ VP of Software Eng at Carbon Lighthouse. Reach out to join us in our mission to stop climate change.