Async ODM (Object Document Mapper) for MongoDB based on python type hints

Overview

ODMantic


Documentation: https://art049.github.io/odmantic/


Asynchronous ODM (Object Document Mapper) for MongoDB based on standard Python type hints. It's built on top of pydantic for model definition and validation.

Core features:

  • Simple: define your model by typing your fields using python types, build queries using python comparison operators

  • Developer experience: field/method autocompletion, type hints, data validation, perform database operations with a functional API

  • Fully typed: leverage static analysis to reduce runtime issues

  • AsyncIO: works well with ASGI frameworks (FastAPI, quart, sanic, Starlette, ...)

  • Serialization: built in JSON serialization and JSON schema generation

Requirements

Python: 3.6 and later (tested against 3.6, 3.7, 3.8 and 3.9)

MongoDB: 4.0 and later

Two direct dependencies:

  • pydantic: makes data validation and schema definition both handy and elegant.

  • motor: an asyncio MongoDB driver officially developed by the MongoDB team.

Installation

pip install odmantic

Example

To run the following steps in an async context without any boilerplate, you can use the AsyncIO REPL (Python 3.8+).

python3.8 -m asyncio

If you are using an earlier version of Python, you can use IPython, which provides an autoawait feature (starting from Python 3.6).

Define your first model

from typing import Optional

from odmantic import Field, Model


class Publisher(Model):
    name: str
    founded: int = Field(ge=1440)
    location: Optional[str] = None

By defining the Publisher class, we've just created an ODMantic model 🎉 . In this example, the model will represent book publishers.

This model contains three fields:

  • name: The name of the Publisher. This is a simple string field without any specific validation, but it is required to build a new Publisher.

  • founded: The year the Publisher was founded. Since the printing press was invented in 1440, it makes sense to allow only values from 1440 onwards. The ge keyword argument passed to Field does exactly this: the model will require a founded value greater than or equal to 1440.

  • location: This field will contain the country code of the Publisher. Defining this field as Optional with a None default value makes it a non-required field that is set to None automatically when not specified.

The collection name has been defined by ODMantic as well. In this case it will be publisher.
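
If you need the collection to have a different name, older ODMantic releases document a Config class on the model for that purpose (newer releases use a slightly different configuration mechanism), so treat this as a sketch rather than the exact API of every version:

class Publisher(Model):
    class Config:
        collection = "publishers"

    name: str
    founded: int = Field(ge=1440)
    location: Optional[str] = None

For the rest of this example we keep the default publisher collection.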

Create some instances

instances = [
    Publisher(name="HarperCollins", founded=1989, location="US"),
    Publisher(name="Hachette Livre", founded=1826, location="FR"),
    Publisher(name="Lulu", founded=2002)
]

We defined three instances of the Publisher model. They all have a name, since that field is required. All the foundation years are later than 1440. The last publisher has no location specified, so this field defaults to None (it will be stored as null in the database).

For now, those instances only exist locally. We will persist them in a database in the next step.

Populate the database with your instances

For the next steps, you'll need to start a local MongoDB server. The easiest way is to use docker. Simply run the following command in a terminal (closing the terminal will terminate the MongoDB instance and remove the container).

docker run --rm --net=host mongo

First, let's connect to the database using the engine. In ODMantic, every database operation is performed using the engine object.

from odmantic import AIOEngine

engine = AIOEngine()

By default, the AIOEngine (stands for AsyncIOEngine) automatically tries to connect to a MongoDB instance running locally (on port 27017). Since we didn't provide any database name, it will use the database named test by default.
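
If your MongoDB server doesn't run locally, or if you want to use another database name, you can pass a Motor client and a database name explicitly. A minimal sketch (the connection string and database name are placeholders to adapt to your setup):

from motor.motor_asyncio import AsyncIOMotorClient
from odmantic import AIOEngine

client = AsyncIOMotorClient("mongodb://localhost:27017/")
engine = AIOEngine(motor_client=client, database="books")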

The next step is to persist the instances we created before. We can perform this operation using the AIOEngine.save_all method.

await engine.save_all(instances)

Most of the engine I/O methods are asynchronous, hence the await keyword used here. Once the operation is complete, we should be able to see our created documents in the database. You can use Compass or RoboMongo if you'd like to have a graphical interface.

Another possibility is to use the mongo CLI directly:

mongo --eval "db.publisher.find({})"

Output:

connecting to: mongodb://127.0.0.1:27017
{
  "_id": ObjectId("5f67b331514d6855bc5c54c9"),
  "founded": 1989,
  "location": "US",
  "name": "HarperCollins"
},
{
  "_id": ObjectId("5f67b331514d6855bc5c54ca"),
  "founded":1826,
  "location": "FR",
  "name": "Hachette Livre"
},
{
  "_id": ObjectId("5f67b331514d6855bc5c54cb"),
  "founded": 2002,
  "location": null,
  "name": "Lulu"
}

The created instances are stored in the test database under the publisher collection.

We can see that an _id field has been added to each document. MongoDB needs this field to act as a primary key. This field is actually added by ODMantic, and you can access it under the name id.

print(instances[0].id)
#> ObjectId("5f67b331514d6855bc5c54c9")
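
Assuming the id field can be used in queries like any other field (which is standard ODMantic behavior), it also lets us fetch a specific document back by its primary key:

first_publisher = await engine.find_one(Publisher, Publisher.id == instances[0].id)
print(first_publisher)
#> Publisher(name="HarperCollins", founded=1989, location="US")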

Find instances matching a criteria

Since we now have some documents in the database, we can start building some queries.

First, let's find the publishers founded in or before the year 2000:

early_publishers = await engine.find(Publisher, Publisher.founded <= 2000)
print(early_publishers)
#> [Publisher(name="HarperCollins", founded=1989, location="US"),
#>  Publisher(name="Hachette Livre", founded=1826, location="FR")]

Here, we called the engine.find method. The first argument is the Model class we want to query (in our case, Publisher). The second argument is the actual query. Similarly to SQLAlchemy, you can build ODMantic queries using regular Python comparison operators.

When awaited, the engine.find method will return the list of matching instances stored in the database.
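
You can also combine several criteria in a single call. Assuming the usual ODMantic behavior where multiple filters passed to find are implicitly combined with a logical AND, a sketch looks like this:

us_early_publishers = await engine.find(
    Publisher,
    Publisher.founded <= 2000,
    Publisher.location == "US",
)
print(us_early_publishers)
#> [Publisher(name="HarperCollins", founded=1989, location="US")]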

Another possibility is to query for at most one instance. For example, if we want to retrieve a publisher from Canada (CA):

ca_publisher = await engine.find_one(Publisher, Publisher.location == "CA")
print(ca_publisher)
#> None

Here, the result is None because no matching instance was found in the database. The engine.find_one method returns an instance if one matches the query; otherwise, it returns None.
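
This makes it straightforward to implement a get-or-create pattern. A small sketch (the Canadian publisher used here is only an illustration):

ca_publisher = await engine.find_one(Publisher, Publisher.location == "CA")
if ca_publisher is None:
    ca_publisher = Publisher(name="House of Anansi Press", founded=1967, location="CA")
    await engine.save(ca_publisher)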

Modify an instance

Finally, let's edit some instances. For example, we can set the location for the publisher named Lulu. First, we need to gather the instance from the database:

lulu = await engine.find_one(Publisher, Publisher.name == "Lulu")
print(lulu)
#> Publisher(name="Lulu", founded=2002, location=None)

We still have the same instance, with no location set. We can change this field:

lulu.location = "US"
print(lulu)
#> Publisher(name="Lulu", founded=2002, location="US")

The location has been changed locally but the last step to persist this change is to save the document:

await engine.save(lulu)

We can now check the database state:

mongo --eval "db.publisher.find({name: 'Lulu'})"

Output:

connecting to: mongodb://127.0.0.1:27017
{
  "_id": ObjectId("5f67b331514d6855bc5c54cb"),
  "founded": 2002,
  "location": "US",
  "name": "Lulu"
}

The document has been successfully updated!

Now, what if we try to change the foundation year to an invalid value (before 1440)?

lulu.founded = 1000
#> ValidationError: 1 validation error for Publisher
#> founded
#>   ensure this value is greater than or equal to 1440
#>   (type=value_error.number.not_ge; limit_value=1440)

This raises an exception because the new value doesn't match the model definition. The raised exception is a ValidationError created by pydantic.
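
Since this is a regular pydantic ValidationError, you can catch it explicitly when handling untrusted input. A minimal sketch:

from pydantic import ValidationError

try:
    lulu.founded = 1000
except ValidationError as e:
    print(e.errors())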

Next steps

If you already have experience with Pydantic and FastAPI, the Usage with FastAPI example might be interesting for you.

Otherwise, to get started on more advanced practices like relations and building more advanced queries, you can directly check the other sections of the documentation.

If you wish to contribute to the project (Thank you! 😃 ), you can have a look at the Contributing section of the documentation.

License

This project is licensed under the terms of the ISC license.

Comments
  • Implement `remove` with tests and doc example

    Closes Issue #121

    Notes

    • I read the contribution guidelines.
    • I created DocumentsNotFoundError to aggregate multiple documents not being found.
    • I don't believe delete_many should typically raise DocumentsNotFoundError, unless another thread deletes one of the instances returned from find before it is deleted from within delete_many. That should be a rare scenario.
    opened by joeriddles 8
  • 🔴 Transactions are not used now!

    Bug

    In the _save() method, the transaction's session is not passed to the update_one() request. https://github.com/art049/odmantic/blob/541d4119fe5e558491ec21b223eb9242ccde4b91/odmantic/engine.py#L315-L320

    This means that transactions are NOT being used!

    How to fix?

    It's not enough to just pass the session to the request. In MongoDB, transactions are not available on standalone deployments, so with that simple fix, standalone users would get an exception on every save() call :cry:

    So we need to detect standalone deployments and not start a transaction for them. Example:

    from pymongo.server_type import SERVER_TYPE

    # Transactions are not supported on standalone servers, so only wrap
    # the save in a transaction for replica set / mongos deployments.
    if self.client._server_property('server_type') == SERVER_TYPE.Standalone:
        await self._save(instance, None, **kwargs)
    else:
        async with await self.client.start_session() as s:
            async with s.start_transaction():
                await self._save(instance, s, **kwargs)
    

    P.S.: I don't know how to properly check the server type without the protected _server_property method :(

    bug 
    opened by Olegt0rr 8
  • ✨ Add new SyncEngine, support async and sync code - with fixed session

    This is the same as https://github.com/art049/odmantic/pull/225, but it includes the fixes to share the same session in https://github.com/art049/odmantic/pull/227

    This would supersede https://github.com/art049/odmantic/pull/225 if the session fix in https://github.com/art049/odmantic/pull/227 is accepted.

    I'm doing it here as an additional PR because there are a couple of changes and fixes needed in the new SyncEngine to handle that same issue with the session.

    opened by tiangolo 6
  • If the field is a reference, it can't be defined as Optional

    Bug

    class A(Model):
        a: int = 1
    
    class B(Model):
        b: Optional[A] = Reference()
    

    Current Behavior

    it raises an error

    TypeError: Unhandled field definition B: typing.Union[__main__.A, NoneType] = <odmantic.reference.ODMReferenceInfo object at 0x7fdb8e628798>
    

    Expected behavior

    it should work

    Environment

    • ODMantic version: 0.3.4
    • MongoDB version: ...
    • Pydantic infos (output of python -c "import pydantic.utils; print(pydantic.utils.version_info())"):
                 pydantic version: 1.8.1
                pydantic compiled: True
                     install path: /usr/local/lib/python3.7/dist-packages/pydantic
                   python version: 3.7.3 (default, Apr  3 2019, 19:16:38)  [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
                         platform: Linux-4.15.0-43-generic-x86_64-with-Ubuntu-18.04-bionic
         optional deps. installed: ['typing-extensions']
    

    bug 
    opened by fugang 6
  • Update flake8 requirement from ==3.8.3 to ==3.8.4

    Updates the requirements on flake8 to permit the latest version.

    Commits
    • fb91b99 Release 3.8.4
    • b5b4792 Merge branch 'py34_virtualenv_fix' into 'master'
    • d3c55c1 limit the virtualenv version in python3.4
    • d0a0cb4 Merge branch 'video' into 'master'
    • eb9ca33 add video tutorial to plugin development page
    • a42d8cb Merge branch 'spelling' into 'master'
    • 2fe70dd fix a few small spelling issues
    • 93209ff Merge branch 'fix_linting_end_of_file' into 'master'
    • ee9c287 fix skipping of physical checks when file does not end in newline
    • 2c64d3e Merge branch 'docs-options-defaults' into 'master'
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot-preview[bot] 6
  • Update importlib-metadata requirement from ^1.0 to >=1,<3

    Updates the requirements on importlib-metadata to permit the latest version.

    dependencies 
    opened by dependabot-preview[bot] 6
  • 🐛 Fix using the shared session when updating a document

    Currently, the code creates a session and a transaction and passes the session down through all the function calls, but it is not passed at the last point, which is what would actually make motor use the session.

    The original intention of this PR was just to pass that session, a one-line/argument change, but I found a couple of bugs and caveats along the way and the PR got bigger.

    ~I'm not sure I should add tests for this, I was trying to add them but I saw that I would have to add a bunch of mocks for the internals, or add strange tricks to break the save in the middle and see that the session didn't finish... but then any of those things seemed like a very complex trick to enable a test that would mostly test the trick, and not really the session. Not sure if I should do anything else. Let me know!~

    Edit 2022-06-23 Transactions in Standalone

    I see that transactions are only supported in MongoDB clusters with shards or replicas. And there's no simple way to detect the type of cluster in code. When using a transaction on a standalone MongoDB I get this error:

    pymongo.errors.OperationFailure: Transaction numbers are only allowed on a replica set member or mongos, full error: {'ok': 0.0, 'errmsg': 'Transaction numbers are only allowed on a replica set member or mongos', 'code': 20, 'codeName': 'IllegalOperation'}
    

    That error is currently not being thrown because sessions are currently not used (which is what this intends to fix).

    Detecting if the current cluster is replicated, sharded, or standalone (unsupported) would require a lot of complex and fragile logic, as even though transactions are not supported in standalone, there's no way to ask the cluster what's the current type of deployment.

    To solve that, I moved the transactions away from the internal code (they were not used anyway) and added support for passing a session to engine.save() and engine.save_all(). This way, a user can create a session outside, start a transaction (after confirming they have a supported deployment), and then pass the session to these methods.
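
    A rough sketch of that usage pattern, assuming the session keyword argument this PR adds to engine.save() (and a replica set or mongos deployment where transactions are available):

    # client is the AsyncIOMotorClient backing the engine
    async with await client.start_session() as session:
        async with session.start_transaction():
            await engine.save(instance, session=session)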

    Edit 2022-06-23 B Same session concurrently

    I'm seeing that the same session is not expected to be used concurrently 😔

    https://motor.readthedocs.io/en/3.0.0/api-asyncio/asyncio_motor_client.html#motor.motor_asyncio.AsyncIOMotorClient.start_session

    Do not use the same session for multiple operations concurrently.

    So asyncio.gather() should not be used when the document saves are expected to share the same session.

    Edit 2022-06-24 Tests for transactions

    I added a couple of tests to confirm that external transactions work with engine.save() and engine.save_all(). Because transactions require a cluster with replicas, those tests are only run on the replica version.

    opened by tiangolo 5
  • Not authorized on `database` to execute command ...

    Description

    It seems that inserting via the engine requires some additional permissions. Please describe this in the docs or (better) reduce the required permission level, similar to what plain motor needs.

    My code

    from motor.motor_asyncio import AsyncIOMotorClient
    from odmantic import AIOEngine, Model

    async def save_via_motor(tree: Model):
        client = AsyncIOMotorClient(MONGO_URI)
        db = client.get_database('forest')
        collection = db.get_collection('trees')
        await collection.insert_one(tree.dict())

    async def save_via_odmantic(tree: Model):
        client = AsyncIOMotorClient(MONGO_URI)
        engine = AIOEngine(motor_client=client, database='forest')
        await engine.save(tree)
    

    Expected behavior

    Both functions work the same way.

    Actual behavior

    Saving via the engine raises an exception:

     pymongo.errors.OperationFailure: not authorized on forest to execute command { update: "trees", ordered: true, writeConcern: { w: "majority" }, bypassDocumentValidation: true, lsid: { id: UUID("ea4686c2-7ae0-47da-a484-09b9d9c51ae9") }, txnNumber: 1, $clusterTime: { clusterTime: Timestamp(1607689696, 16), signature: { hash: BinData(0, A637C05EC63A4FB3EF374716F3FB175FDA5CDA30), keyId: 6866034364808007110 } }, $db: "trees", $readPreference: { mode: "primary" } }, full error: {'operationTime': Timestamp(1607689695, 32), 'ok': 0.0, 'errmsg': 'not authorized on trees to execute command { update: "trees", ordered: true, writeConcern: { w: "majority" }, bypassDocumentValidation: true, lsid: { id: UUID("ea4686c2-7ae0-47da-a484-09b9d9c51ae9") }, txnNumber: 1, $clusterTime: { clusterTime: Timestamp(1607689695, 16), signature: { hash: BinData(0, A637C05EC63A4FB3EF374716F3FB175FDA5CDA40), keyId: 6866037364809007110 } }, $db: "trees", $readPreference: { mode: "primary" } }', 'code': 13, 'codeName': 'Unauthorized', '$clusterTime': {'clusterTime': Timestamp(1607689695, 32), 'signature': {'hash': b'\xa67\xc0^\xc6:O\xb3\xef7G\x16\xf3\xfb\x17_\xda\\\xda@', 'keyId': 6866034364808007110}}}
    
    bug 
    opened by Olegt0rr 5
  • Update mypy requirement from ^0.782 to ^0.790

    Updates the requirements on mypy to permit the latest version.

    Commits
    • 69a055a Bump version to 0.790
    • 54712a7 Update wheel download script to use the correct file names for macOS (#9559)
    • 58cbfb4 Add test case for disable_error_code config file option (#9538)
    • 9503938 Document --disable-error-code config option (#9539)
    • 1bed354 Fix partial type crash during protocol checking (#9495)
    • bbfad46 Store the type for assignment expr (walrus) targets (#9479)
    • 498769b Always type check arguments when using --disallow-untyped-calls (#9510)
    • ab114f3 [mypyc] Try to fix errors about int32_t on Python 3.5 and Appveyor (#9502)
    • a2bdf04 Empty dummy commit to re-trigger builds
    • 9d560ce Fix bad test case description
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot-preview[bot] 5
  • ✨ Add new SyncEngine, support async and sync code

    The same models can even be shared: a model can be retrieved by one engine and saved by the other, etc.

    This allows non-async code to use ODMantic and, more importantly, makes it possible to gradually migrate an existing sync codebase to ODMantic, moving to async first/only for the parts that need it while keeping the rest of the code unchanged.

    It would also be, as far as I know, the first sync ODM for Mongo based on Pydantic and type annotations.

    This is all backwards compatible.

    Note :rotating_light:

    It's probably better to merge https://github.com/art049/odmantic/pull/227 first, as that fixes a bug with sessions and transactions.

    Also, if that one with the session fix is accepted, then there's this PR: https://github.com/art049/odmantic/pull/231 that would supersede this one, as that one includes those session fixes for the new SyncEngine. In that case, that would be the one that should be merged and not this PR.

    Implementation Details

    There's a new SyncEngine besides the AIOEngine. Internally it uses pymongo, the official sync driver. Its API is pretty much the same as motor's, so the code changes are minimal: most of the code is copied from the existing implementation and adapted to not use async and await.

    I also refactored the Engine into a BaseEngine with all the logic that can be shared (query preparation, etc) and the two engines that inherit from it, AIOEngine and SyncEngine.
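
    For context, a rough sketch of what using the proposed SyncEngine could look like, assuming its API mirrors AIOEngine as described here (the model and data are only illustrative):

    from odmantic import Model, SyncEngine

    class Tree(Model):
        name: str
        size: float

    engine = SyncEngine()  # like AIOEngine, connects to localhost:27017 by default
    engine.save(Tree(name="pine", size=12.3))
    tall_trees = list(engine.find(Tree, Tree.size > 2))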

    Tests

    For the tests, I copied each one of the tests that interact with the async engine and updated the copy to use the sync engine, making it non-async, updating mock checks, wrapping the iterator of sync_engine.find() into list for the tests, etc.

    I considered having a separate file for the sync tests, but I thought it would be better to have each test very close to the original, that way it will be easier to maintain them, apply refactors, make sure they are in sync, and whenever a new test is added it will be easier to remember to add the async and sync version.

    All the tests are currently passing. ~I'm still figuring out tox, I have never used it~. Tox includes pymongo versions. ✅

    Dependencies

    Note: see the edit note in this section below.


    Previous / old comment from 2022-06-09

    It would make sense to not require motor always, as people not using async would still be able to use all of ODMantic in a sync way. I think it would make sense to have optional extra requirements to install motor or pymongo.

    That way developers can install with:

    $ pip install "odmantic[motor]"
    

    or:

    $ pip install "odmantic[pymongo]"
    

    Or even both:

    $ pip install "odmantic[motor,pymongo]"
    

    Those names could also be changed, for example odmantic[async] and odmantic[sync] if that feels better, it would still install motor and pymongo respectively. For now, I put them as just odmantic[motor] and odmantic[pymongo].

    2022-06-09T16:00:00Z Edit

    While tweaking tox, I realized that motor depends on pymongo, so pymongo would always be required, whether for sync only or also for async, and motor would be needed only for async.

    In that case, it could make sense to make pymongo always required and make the optional extras just for odmantic[motor].

    ~The other option is to keep both optional extras, I think the only possible advantage of keeping odmantic[pymongo] as an independent option would be if for any chance Motor could end up not depending on PyMongo, but I wouldn't think that's really feasible.~

    ~What do you think?~

    2022-06-22T13:38:00Z Edit

    I realized Motor deeply depends on PyMongo, for example for bulk_write(), so I think it makes sense to make PyMongo always required and Motor optional for async support. But let me know if that doesn't make sense to you!

    Installing with Motor support looks like:

    $ pip install "odmantic[motor]"
    

    Next steps

    • [x] ~Figure out tox and update it to include pymongo~.
      • ~I'll work on that now~. Done! ✅
    • [ ] Update the docs
      • I'll do it next. Maybe on this same PR or maybe on a new one, not sure what's best.
    • [ ] Make a release
      • I can't do this part 😅🙈

    I'll update the docs next, but I want to wait for feedback on this PR first.

    Future work

    I also want to propose a new feature, which I'm needing, to allow changing the collection used during an operation. It would be similar to MongoEngine's switch_collection()... although that implementation is neither thread- nor async-safe, so it isn't really usable in all cases and might be dangerous. I'll propose something equivalent but safe for threads and contextvars (e.g. safe with FastAPI and any other web frameworks).

    After that, I want to propose other features to use MongoDB's bulk updates for saving multiple documents instead of one atomic operation per document.

    But all this would be after this initial step with the sync engine.

    opened by tiangolo 4
  • Replace (or remove) `id` field description

    Feature request

    Context

    Currently, the id field is shown in the generated documentation with a description that reveals the underlying database: MongoDB ObjectId string

    I'm not sure if showing this information is a good idea. What kind of database and what kind of identifier generation are used to store the data is not a client's business.

    Solution

    I suggest removing the description of the id field. This will hide unnecessary information from the user.

    Alternative solutions

    It would also be great if ODMantic allowed specifying a custom description and parameters for the id field
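
    A hypothetical sketch of what that could look like, assuming the primary id field can be overridden with Field(primary_field=True) and that Field forwards pydantic's description and default_factory arguments:

    from odmantic import Field, Model
    from odmantic.bson import ObjectId

    class Publisher(Model):
        # Hypothetical override of the automatic id field with a custom description
        id: ObjectId = Field(primary_field=True, default_factory=ObjectId, description="Unique publisher identifier")
        name: str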

    Additional context

    It would also be great to make the example value look more realistic, i.e. replace ffffffffffffffffffffffff with something like 5f85f36d6dfecacc68428a46

    enhancement 
    opened by Olegt0rr 4
  • ⬆️ Update black requirement from ~=22.3.0 to >=22.3,<22.13

    Updates the requirements on black to permit the latest version.

    Release notes

    Sourced from black's releases.

    22.12.0

    Preview style

    • Enforce empty lines before classes and functions with sticky leading comments (#3302)
    • Reformat empty and whitespace-only files as either an empty file (if no newline is present) or as a single newline character (if a newline is present) (#3348)
    • Implicitly concatenated strings used as function args are now wrapped inside parentheses (#3307)
    • Correctly handle trailing commas that are inside a line's leading non-nested parens (#3370)

    Configuration

    • Fix incorrectly applied .gitignore rules by considering the .gitignore location and the relative path to the target file (#3338)
    • Fix incorrectly ignoring .gitignore presence when more than one source directory is specified (#3336)

    Parser

    • Parsing support has been added for walruses inside generator expression that are passed as function args (for example, any(match := my_re.match(text) for text in texts)) (#3327).

    Integrations

    • Vim plugin: Optionally allow using the system installation of Black via let g:black_use_virtualenv = 0(#3309)
    Changelog

    Sourced from black's changelog.

    22.12.0

    Preview style

    • Enforce empty lines before classes and functions with sticky leading comments (#3302)
    • Reformat empty and whitespace-only files as either an empty file (if no newline is present) or as a single newline character (if a newline is present) (#3348)
    • Implicitly concatenated strings used as function args are now wrapped inside parentheses (#3307)
    • For assignment statements, prefer splitting the right hand side if the left hand side fits on a single line (#3368)
    • Correctly handle trailing commas that are inside a line's leading non-nested parens (#3370)

    Configuration

    • Fix incorrectly applied .gitignore rules by considering the .gitignore location and the relative path to the target file (#3338)
    • Fix incorrectly ignoring .gitignore presence when more than one source directory is specified (#3336)

    Parser

    • Parsing support has been added for walruses inside generator expression that are passed as function args (for example, any(match := my_re.match(text) for text in texts)) (#3327).

    Integrations

    • Vim plugin: Optionally allow using the system installation of Black via let g:black_use_virtualenv = 0(#3309)

    22.10.0

    Highlights

    • Runtime support for Python 3.6 has been removed. Formatting 3.6 code will still be supported until further notice.

    Stable style

    • Fix a crash when # fmt: on is used on a different block level than # fmt: off

    ... (truncated)

    Commits
    • 2ddea29 Prepare release 22.12.0 (#3413)
    • 5b1443a release: skip bad macos wheels for now (#3411)
    • 9ace064 Bump peter-evans/find-comment from 2.0.1 to 2.1.0 (#3404)
    • 19c5fe4 Fix CI with latest flake8-bugbear (#3412)
    • d4a8564 Bump sphinx-copybutton from 0.5.0 to 0.5.1 in /docs (#3390)
    • 2793249 Wordsmith current_style.md (#3383)
    • d97b789 Remove whitespaces of whitespace-only files (#3348)
    • c23a5c1 Clarify that Black runs with --safe by default (#3378)
    • 8091b25 Correctly handle trailing commas that are inside a line's leading non-nested ...
    • ffaaf48 Compare each .gitignore found with an appropiate relative path (#3338)
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 1
  • ⬆️ Update ipython requirement from ~=7.16.1 to >=7.16.1,<8.9.0

    Updates the requirements on ipython to permit the latest version.

    dependencies 
    opened by dependabot[bot] 1
  • ⬆️ Update coverage[toml] requirement from ~=6.2 to >=6.2,<8.0

    Updates the requirements on coverage[toml] to permit the latest version.

    Release notes

    Sourced from coverage[toml]'s releases.

    7.0.3

    • Fix: when using pytest-cov or pytest-xdist, or perhaps both, the combining step could fail with assert row is not None using 7.0.2. This was due to a race condition that has always been possible and is still possible. In 7.0.1 and before, the error was silently swallowed by the combining code. Now it will produce a message “Couldn’t combine data file” and ignore the data file as it used to do before 7.0.2. Closes issue 1522.

    :arrow_right:  PyPI page: coverage 7.0.3. :arrow_right:  To install: python3 -m pip install coverage==7.0.3

    Changelog

    Sourced from coverage[toml]'s changelog.

    Version 7.0.3 — 2023-01-03

    • Fix: when using pytest-cov or pytest-xdist, or perhaps both, the combining step could fail with assert row is not None using 7.0.2. This was due to a race condition that has always been possible and is still possible. In 7.0.1 and before, the error was silently swallowed by the combining code. Now it will produce a message "Couldn't combine data file" and ignore the data file as it used to do before 7.0.2. Closes issue 1522_.

    .. _issue 1522: nedbat/coveragepy#1522

    .. _changes_7-0-2:

    Version 7.0.2 — 2023-01-02

    • Fix: when using the [run] relative_files = True setting, a relative [paths] pattern was still being made absolute. This is now fixed, closing issue 1519_.

    • Fix: if Python doesn't provide tomllib, then TOML configuration files can only be read if coverage.py is installed with the [toml] extra. Coverage.py will raise an error if TOML support is not installed when it sees your settings are in a .toml file. But it didn't understand that [tools.coverage] was a valid section header, so the error wasn't reported if you used that header, and settings were silently ignored. This is now fixed, closing issue 1516_.

    • Fix: adjusted how decorators are traced on PyPy 7.3.10, fixing issue 1515_.

    • Fix: the coverage lcov report did not properly implement the --fail-under=MIN option. This has been fixed.

    • Refactor: added many type annotations, including a number of refactorings. This should not affect outward behavior, but they were a bit invasive in some places, so keep your eyes peeled for oddities.

    • Refactor: removed the vestigial and long untested support for Jython and IronPython.

    .. _issue 1515: nedbat/coveragepy#1515 .. _issue 1516: nedbat/coveragepy#1516 .. _issue 1519: nedbat/coveragepy#1519

    .. _changes_7-0-1:

    Version 7.0.1 — 2022-12-23

    ... (truncated)

    Commits
    • 2ff9098 docs: prep for 7.0.3
    • 1f34d8b fix: race condition on data file shouldn't break combining. #1522
    • 85170bf build: two-step combines for speed
    • 1605f07 mypy: misc.py, test_misc.py
    • 4f3ccf2 refactor: a better way to have maybe-importable third-party modules
    • 98301ed mypy: test_config.py, test_context.py
    • 9d2e1b0 mypy: test_concurrency.py, test_python.py
    • c3ee30c refactor(test): use tmp_path instead of tmpdir
    • 0b05b45 mypy: test_annotate.py test_arcs.py test_collector.py
    • 2090f79 style: better
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 1
  • ⬆️ Update requests requirement from ~=2.24.0 to >=2.24,<2.29

    Updates the requirements on requests to permit the latest version.

    Release notes

    Sourced from requests's releases.

    v2.28.1

    2.28.1 (2022-06-29)

    Improvements

    • Speed optimization in iter_content with transition to yield from. (#6170)

    Dependencies

    • Added support for chardet 5.0.0 (#6179)
    • Added support for charset-normalizer 2.1.0 (#6169)

    New Contributors

    Full Changelog: https://github.com/psf/requests/blob/main/HISTORY.md#2281-2022-06-29

    Changelog

    Sourced from requests's changelog.

    2.28.1 (2022-06-29)

    Improvements

    • Speed optimization in iter_content with transition to yield from. (#6170)

    Dependencies

    • Added support for chardet 5.0.0 (#6179)
    • Added support for charset-normalizer 2.1.0 (#6169)

    2.28.0 (2022-06-09)

    Deprecations

    • ⚠️ Requests has officially dropped support for Python 2.7. ⚠️ (#6091)
    • Requests has officially dropped support for Python 3.6 (including pypy3.6). (#6091)

    Improvements

    • Wrap JSON parsing issues in Request's JSONDecodeError for payloads without an encoding to make json() API consistent. (#6097)
    • Parse header components consistently, raising an InvalidHeader error in all invalid cases. (#6154)
    • Added provisional 3.11 support with current beta build. (#6155)
    • Requests got a makeover and we decided to paint it black. (#6095)

    Bugfixes

    • Fixed bug where setting CURL_CA_BUNDLE to an empty string would disable cert verification. All Requests 2.x versions before 2.28.0 are affected. (#6074)
    • Fixed urllib3 exception leak, wrapping urllib3.exceptions.SSLError with requests.exceptions.SSLError for content and iter_content. (#6057)
    • Fixed issue where invalid Windows registry entires caused proxy resolution to raise an exception rather than ignoring the entry. (#6149)
    • Fixed issue where entire payload could be included in the error message for JSONDecodeError. (#6036)

    2.27.1 (2022-01-05)

    Bugfixes

    • Fixed parsing issue that resulted in the auth component being dropped from proxy URLs. (#6028)

    2.27.0 (2022-01-03)

    ... (truncated)

    dependencies 
    opened by dependabot[bot] 1
  • ⬆️ Update pydocstyle[toml] requirement from ~=6.1.1 to >=6.1.1,<6.3.0

    Updates the requirements on pydocstyle[toml] to permit the latest version.

    Release notes

    Sourced from pydocstyle[toml]'s releases.

    6.2.2

    What's Changed

    New Contributors

    Full Changelog: https://github.com/PyCQA/pydocstyle/compare/6.2.1...6.2.2

    Changelog

    Sourced from pydocstyle[toml]'s changelog.

    6.2.2 - January 3rd, 2023

    Bug Fixes

    • Fix false positives of D417 in google convention docstrings (#619).

    6.2.1 - January 3rd, 2023

    Bug Fixes

    • Use tomllib/tomli to correctly read .toml files (#599, #600).

    6.2.0 - January 2nd, 2023

    New Features

    • Allow for hanging indent when documenting args in Google style. (#449)
    • Add support for property_decorators config to ignore D401.
    • Add support for Python 3.10 (#554).
    • Replace D10X errors with D419 if docstring exists but is empty (#559).

    Bug Fixes

    • Fix --match option to only consider filename when matching full paths (#550).

    6.1.1 - May 17th, 2021

    Bug Fixes

    • Split --source by lines instead of by characters (#536).

    6.1.0 - May 17th, 2021

    New Features

    • Enable full toml configuration and pyproject.toml (#534).

    6.0.0 - March 18th, 2021

    Major Updates

    • Support for Python 3.5 has been dropped (#510).

    New Features

    ... (truncated)

    Commits
    • bd294bb Cut 6.2.2
    • 0107fe6 Fix false positive of google convention missing args descriptions (#619)
    • 45fbcc1 Cut a 6.2.1 release
    • 671329e Docs: pydocstyle supports Python 3.7 through 3.11. (#616)
    • 3bc3b87 Use tomllib/tomli for reading .toml configs (#608)
    • 5c55802 requirements/docs.txt: Use current versions of Jinja2 and sphinx (#615)
    • f4db095 Add Python 3.11 to the testing matrix (#612)
    • 447af8f Add https protocol on websites at the README.rst (#611)
    • 05b92ba Add testpypi to poetry repositories
    • 7007961 Move to poetry and automated relases via Github UI (#614)
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 1
  • Update tox.ini to support Tox 4

    Following the guidance in https://tox.wiki/en/latest/upgrading.html#removed-tox-ini-keys:

    • Changed whitelist_externals to allowlist_externals
    • Removed isolated_build since that's the new default behaviour
    opened by edgarrmondragon 0