PostgreSQL database access simplified

Overview

Queries: PostgreSQL Simplified

Queries is a BSD-licensed, opinionated wrapper around the psycopg2 library for interacting with PostgreSQL.

The popular psycopg2 package is a full-featured Python client. Unfortunately, as a developer you often find yourself repeating the same setup steps in every application that uses it. Queries aims to reduce the complexity of psycopg2 while adding features that make writing PostgreSQL client applications both fast and easy. Check out the Usage section below to see how easy it can be.

Key features include:

  • Simplified API
  • Support for Python 2.7 and 3.4+
  • PyPy support via psycopg2cffi
  • Asynchronous support for Tornado
  • Connection information provided by URI
  • Query results delivered as generator-based iterators
  • Automatically registered data-type support for UUIDs, Unicode and Unicode arrays
  • Ability to directly access psycopg2 connection and cursor objects
  • Internal connection pooling


Documentation

Documentation is available at https://queries.readthedocs.org

Installation

Queries is available via PyPI and can be installed with easy_install or pip:

pip install queries

Usage

Queries provides a session based API for interacting with PostgreSQL. Simply pass in the URI of the PostgreSQL server to connect to when creating a session:

session = queries.Session("postgresql://[email protected]:5432/postgres")

Queries' built-in connection pooling will reuse connections when possible, lowering the overhead of connecting and reconnecting.

When specifying a URI, if you omit the username and database name, Queries will use the current OS username for both. You can also omit the URI entirely; Queries will then connect to localhost on port 5432 as the current OS user, using a database named after that user. For example, if your username is fred and you omit the URI when calling queries.query, the constructed URI would be postgresql://[email protected]:5432/fred.
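The default URI construction described above can be sketched as follows. This is a sketch of the documented behavior, not the library's actual implementation, and it assumes the stdlib getpass module reflects your OS username:

```python
import getpass

def default_uri(host="localhost", port=5432):
    """Build the URI described above when no URI is supplied:
    the current OS user is used for both the username and the
    database name."""
    user = getpass.getuser()
    return "postgresql://{0}@{1}:{2}/{3}".format(user, host, port, user)

# For an OS user named fred this yields:
# postgresql://[email protected]:5432/fred
```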

If you'd rather use individual values for the connection, the queries.uri() method provides a quick and easy way to create a URI to pass into the various methods.

>>> queries.uri("server-name", 5432, "dbname", "user", "pass")
'postgresql://user:[email protected]:5432/dbname'

Environment Variables

Queries currently uses the following environment variables to tweak its configuration:

  • QUERIES_MAX_POOL_SIZE - Modify the maximum size of the connection pool (default: 1)
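Since this value is read from the environment (typically when the module is first loaded), the safest approach is to set it before importing queries, either in the shell or at the very top of your program. A minimal sketch:

```python
import os

# Set before importing queries so the pool picks it up
os.environ["QUERIES_MAX_POOL_SIZE"] = "10"

# Illustration of how such a value is typically read,
# with the documented default of 1
pool_size = int(os.environ.get("QUERIES_MAX_POOL_SIZE", 1))
```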

Using the queries.Session class

To execute queries or call stored procedures, you start by creating an instance of the queries.Session class. It can act as a context manager, meaning you can use it with the with keyword and it will clean up after itself. For more information on the with keyword and context managers, see PEP 343.

In addition to both the queries.Session.query and queries.Session.callproc methods that are similar to the simple API methods, the queries.Session class provides access to the psycopg2 connection and cursor objects.

Using queries.Session.query

The following example shows how a queries.Session object can be used as a context manager to query the database table:

>>> import pprint
>>> import queries
>>>
>>> with queries.Session() as session:
...     for row in session.query('SELECT * FROM names'):
...         pprint.pprint(row)
...
{'id': 1, 'name': u'Jacob'}
{'id': 2, 'name': u'Mason'}
{'id': 3, 'name': u'Ethan'}

Using queries.Session.callproc

This example uses queries.Session.callproc to execute a stored procedure and then pretty-prints the single row results as a dictionary:

>>> import pprint
>>> import queries
>>> with queries.Session() as session:
...   results = session.callproc('chr', [65])
...   pprint.pprint(results.as_dict())
...
{'chr': u'A'}

Asynchronous Queries with Tornado

In addition to providing a Pythonic, synchronous client API for PostgreSQL, Queries provides a very similar asynchronous API for use with Tornado. The only major API difference between queries.TornadoSession and queries.Session is that the TornadoSession.query and TornadoSession.callproc methods return the entire result set instead of acting as an iterator over the results. The following example uses TornadoSession.query in an asynchronous Tornado web application to send a JSON payload with the query result set.

from tornado import gen, ioloop, web
import queries

class MainHandler(web.RequestHandler):

    def initialize(self):
        self.session = queries.TornadoSession()

    @gen.coroutine
    def get(self):
        results = yield self.session.query('SELECT * FROM names')
        self.finish({'data': results.items()})
        results.free()

application = web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    ioloop.IOLoop.instance().start()

Inspiration

Queries is inspired by Kenneth Reitz's awesome work on requests.

History

Queries is a fork and enhancement of pgsql_wrapper, which can be found in the main GitHub repository of Queries as tags prior to version 1.2.0.

Comments
  • Password with %5D login failed

    Hi,

    I found that my password includes %5D and login fails. The same password works fine when I use the psycopg2 library directly.

    Is there something wrong with my code, or something I've overlooked?

    Thanks.

    opened by lycheng 3
  • python script cannot run by windows task scheduler with a simple "import queries"

    The Python script runs fine, but not via Windows Task Scheduler; it exits with (0x1) in Task Scheduler. Try the following script:

    import logging
    import pandas as pd
    import sqlalchemy
    import pprint
    #import queries
    from urllib.request import Request, urlopen
    import datetime

    logging.basicConfig(filename='./logs/mylog.log', filemode='a',
                        format='%(asctime)s.%(msecs)03d %(name)s %(levelname)s %(message)s',
                        datefmt='%Y-%m-%d,%H:%M:%S', level=logging.INFO)

    logging.info("===== process started =====")

    opened by meilao3rd 2
  • psycopg2 as a dependency

    Installing queries via pip installs psycopg2 as a dependency. However, on many platforms the psycopg2-binary package must be used instead, because PostgreSQL development packages are not available (for instance, on AWS Lambda, Azure Functions, etc.).

    Therefore I believe that either psycopg2 should be removed as a direct dependency, letting the user manually add the appropriate package, or a queries-binary variant should be offered.

    opened by louis77 2
  • Handle resultless statements when rowcount > 0

    When, for example, an insert without a return is executed, attempting to access the queries.Result object's rows raises a psycopg2.ProgrammingError with the message 'no results to fetch'. The existing check in queries is based on cursor.rowcount and thus fails in this case, because rowcount contains the number of rows affected, not necessarily the number returned.

    This commit adds handling of said exception and matches the behavior as if the rowcount were 0.

    Can be replicated by accessing the queries.Result object for an insert that inserts one or more rows without returning any results.

    Example:

    >>> result = session.query('insert into some_table (id) values (123)')
    >>> list(result)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/mnt/storage1/queries/queries/results.py", line 48, in __iter__
        for row in self.cursor:
      File "/mnt/storage1/queries/env/lib/python3.6/site-packages/psycopg2-2.7.5-py3.6-linux-x86_64.egg/psycopg2/extras.py", line 117, in __iter__
        first = next(res)
    psycopg2.ProgrammingError: no results to fetch
    

    Handling of the error should probably be abstracted.

    opened by nvllsvm 2
  • Close the connection on a connecting error

    If an error message is returned from the server during the connecting phase, the socket will remain open after the error has been raised. The socket will continue to be polled forever, causing excessive CPU use.

    opened by cknave 2
  • Add easy way to retrieve info about session state

    I am using the TornadoSession with hundreds of queries, and if a result somehow isn't freed, tracking it down is a nightmare. I put this together and was wondering whether it would be useful to have in the core of the application (if there is a better way, please let me know):

    
    import queries
    
    
    class MySession(queries.TornadoSession):
    
        def query(self, sql, parameters=None):
            self.sql = sql
            self.parameters = parameters
    
            return super(MySession, self).query(sql=sql, parameters=parameters)
    
        def info(self):
            info = []
    
            for pid, pool in self._pool_manager._pools.items():
                i = pool.idle_connections
                b = [pool.connections[k] for k in pool.connections
                     if pool.connections[k].busy]
                u = [pool.connections[k].used_by().sql for k in
                     pool.connections if pool.connections[k].busy]
    
                info.append({
                    'pid': pid,
                    'idle': len(i),
                    'busy': len(b),
                    'running_queries': u
                })
    
            return info
    
    

    If you ever wanted info about the session, you'd just call session.info()

    enhancement 
    opened by emehrkay 2
  • Is there an executemany using the TornadoSession?

    I have seen the concurrent-queries example that wraps the SQL statements in a list, but that only works if you know beforehand how many queries you are going to send. Is there a method to executemany, as I would do below?

    cars = ((1, 'Audi', 52642), (2, 'Mercedes', 57127), (3, 'Skoda', 9000),
            (4, 'Volvo', 29000), (5, 'Bentley', 350000), (6, 'Citroen', 21000),
            (7, 'Hummer', 41400), (8, 'Volkswagen', 21600))

    cur.executemany(query, cars)

    Thanks in advance!

    opened by Knights22 2
  • Stacking tornado_session.Results.close calls doesn't close any connection

    Maybe I am doing something wrong, but if I were to simply wait until later in the execution stack to call close on a tornado result, it never closes.

    conn = queries.TornadoSession(pool_max_size=3)
    ....
    res1 = yield conn.result('some query')
    res2 = yield conn.result('some query')
    res3 = yield conn.result('some query')
    
    res1.free()
    res2.free()
    res3.free()
    
    res4 = yield conn.result('some query')
    # PoolFullError: Pool <queries.pool.Pool object at $ID> is at its maximum capacity
    res4.free()
    

    If I were to free a result before using the connection again, things work fine. (I even tested with a counter by inspecting the conn_pool_manager._pools dict)

    Am I missing something?

    opened by emehrkay 2
  • Allow iterating over empty query results

    Added the ability to iterate over empty query results, plus a couple of (probably incorrect) tests for this condition.

    Previously it caused psycopg2.ProgrammingError: scroll destination out of bounds. One could work around this by checking for a truthy result before iterating, but I'm not keen on that.

    opened by den-t 2
  • Requirement psycopg2<2.8 prevents Python 3.8 support

    See https://github.com/psycopg/psycopg2/issues/854: the current requirement of psycopg2>=2.5.1,<2.8 installs psycopg2 2.7.7, which is incompatible with Python 3.8 due to the above issue, fixed in 2.8.

    opened by dparker2 1
  • Broken link in doc

    in : https://queries.readthedocs.io/en/2.0.0/

    the link to "Module Index" (https://queries.readthedocs.io/en/2.0.0/py-modindex.html) is broken.

    opened by BFJonk 1
  • Support connection_factory?

    I'll start with the question, as I am not entirely certain it is even possible: is there a way to create a queries.Session using a connection_factory? And if not, would it be possible to add such support?

    Or.... perhaps there is another way to accomplish the following...?

    In cases where I wanted a cursor that supported both NamedTuple results and logging, I would combine the two cursor classes and also pass in a connection_factory that sets up the connection to use the logging cursor. Is there some way of doing this in queries?

    from psycopg2.extras import NamedTupleCursor, LoggingCursor, LoggingConnection
    
    
    class MyLoggingCursor(LoggingCursor):
        def execute(self, query, vars=None):
            self.timestamp = time.time()
            return super(MyLoggingCursor, self).execute(query, vars)
    
        def callproc(self, procname, vars=None):
            self.timestamp = time.time()
            return super(MyLoggingCursor, self).callproc(procname, vars)
    
    
    class MyLoggingConnection(LoggingConnection):
        def filter(self, msg, curs):
            duration = int((time.time() - curs.timestamp) * 1000)
            output = f"{msg}  ==> {curs.rowcount} rows, {duration:d} ms"
            return output
    
        def cursor(self, *args, **kwargs):
            kwargs.setdefault('cursor_factory', MixinLoggedNamedTupleCursor)
            return LoggingConnection.cursor(self, *args, **kwargs)
    
    
    class MixinLoggedNamedTupleCursor(MyLoggingCursor, NamedTupleCursor):
        pass
    
    
    db_conn = psycopg2.connect(host=db_host, port=db_port,
                               user=db_user, password=db_pass,
                               database=db_name, 
                               connect_timeout=timeout,
                               connection_factory=MyLoggingConnection
                               )
    db_conn.initialize(logger)
    
    opened by bandophahita 0
  • Simple query fails if no rows found

    Most simple example from docs:

    for row in session.query('SELECT * FROM foo'):
        print(row)
    

    Fails when there are no rows in the table:

    Traceback (most recent call last):
      File "venv/lib/python3.9/site-packages/queries/results.py", line 45, in __iter__
        raise StopIteration
    StopIteration
    

    Expected result: the for loop should simply complete without errors.

    opened by borislopezaraoz 0
  • TornadoSession as a context manager

    Hello ! First of all thank you for the work you put in your lib !

    We spotted a strange behavior when using queries.TornadoSession as a context manager: the connection pool is not freed when exiting the with block. (This isn't the case for queries.Session.)

    It is true that the documentation encourages using result.free() when working with asynchronous sessions, but IMHO it would be a nice feature if both Session classes behaved the same.

    Cheers !

    enhancement 
    opened by Morreski 3
Releases(2.0.0)
  • 2.0.0(Jan 29, 2018)

    • REMOVED support for Python 2.6
    • FIXED CPU Pegging bug: Cleanup IOLoop and internal stack in TornadoSession on connection error. In the case of a connection error, the failure to do this caused CPU to peg @ 100% utilization looping on a non-existent file descriptor. Thanks to cknave for his work on identifying the issue, proposing a fix, and writing a working test case.
    • Move the integration tests to use a local docker development environment
    • Added new methods queries.pool.Pool.report and queries.pool.PoolManager.report for reporting pool status.
    • Added new methods to queries.pool.Pool for returning a list of busy, closed, executing, and locked connections.
Owner
Gavin M. Roy
CTO @aweber. Technology-minded geek who enjoys Python & Erlang, music and gaming. Author of RabbitMQ in Depth.