Schema validation just got Pythonic

Overview

schema is a library for validating Python data structures, such as those obtained from config files, forms, external services, or command-line parsing, and converted from JSON/YAML (or something else) to Python data types.

Example

Here is a quick example to get a feeling for schema, validating a list of entries with personal information:

>>> from schema import Schema, And, Use, Optional, SchemaError

>>> schema = Schema([{'name': And(str, len),
...                   'age':  And(Use(int), lambda n: 18 <= n <= 99),
...                   Optional('gender'): And(str, Use(str.lower),
...                                           lambda s: s in ('squid', 'kid'))}])

>>> data = [{'name': 'Sue', 'age': '28', 'gender': 'Squid'},
...         {'name': 'Sam', 'age': '42'},
...         {'name': 'Sacha', 'age': '20', 'gender': 'KID'}]

>>> validated = schema.validate(data)

>>> assert validated == [{'name': 'Sue', 'age': 28, 'gender': 'squid'},
...                      {'name': 'Sam', 'age': 42},
...                      {'name': 'Sacha', 'age' : 20, 'gender': 'kid'}]

If the data is valid, Schema.validate will return the validated data (optionally converted with Use calls; see below).

If the data is invalid, Schema will raise a SchemaError exception. If you just want to check whether the data is valid, schema.is_valid(data) will return True or False.
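
For instance, a quick sketch using the schema and data from the example above:

>>> schema.is_valid(data)
True

>>> schema.is_valid([{'name': 'Sue'}])
False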

Installation

Use pip or easy_install:

pip install schema

Alternatively, you can just drop the schema.py file into your project; it is self-contained.

  • schema is tested with Python 2.6, 2.7, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7 and PyPy.
  • schema follows semantic versioning.

How Schema validates data

Types

If Schema(...) encounters a type (such as int, str, object, etc.), it will check whether the corresponding piece of data is an instance of that type; otherwise it will raise SchemaError.

>>> from schema import Schema

>>> Schema(int).validate(123)
123

>>> Schema(int).validate('123')
Traceback (most recent call last):
...
schema.SchemaUnexpectedTypeError: '123' should be instance of 'int'

>>> Schema(object).validate('hai')
'hai'

Callables

If Schema(...) encounters a callable (a function, a class, or an object with a __call__ method), it will call it; if the return value evaluates to True, it will continue validating, otherwise it will raise SchemaError.

>>> import os

>>> Schema(os.path.exists).validate('./')
'./'

>>> Schema(os.path.exists).validate('./non-existent/')
Traceback (most recent call last):
...
schema.SchemaError: exists('./non-existent/') should evaluate to True

>>> Schema(lambda n: n > 0).validate(123)
123

>>> Schema(lambda n: n > 0).validate(-12)
Traceback (most recent call last):
...
schema.SchemaError: <lambda>(-12) should evaluate to True

"Validatables"

If Schema(...) encounters an object with a validate method, it will run this method on the corresponding data as data = obj.validate(data). This method may raise a SchemaError exception to signal that the piece of data is invalid; otherwise validation continues.

An example of a "validatable" is Regex, which tries to match a string or a buffer against the given regular expression (itself given as a string, a buffer, or a compiled regex SRE_Pattern):

>>> from schema import Regex
>>> import re

>>> Regex(r'^foo').validate('foobar')
'foobar'

>>> Regex(r'^[A-Z]+$', flags=re.I).validate('those-dashes-dont-match')
Traceback (most recent call last):
...
schema.SchemaError: Regex('^[A-Z]+$', flags=re.IGNORECASE) does not match 'those-dashes-dont-match'

For a more general case, you can use Use to create such objects. Use applies a function or type to convert a value while validating it:

>>> from schema import Use

>>> Schema(Use(int)).validate('123')
123

>>> Schema(Use(lambda f: open(f, 'a'))).validate('LICENSE-MIT')
<_io.TextIOWrapper name='LICENSE-MIT' mode='a' encoding='UTF-8'>

Dropping the details, Use is basically:

class Use(object):

    def __init__(self, callable_):
        self._callable = callable_

    def validate(self, data):
        try:
            return self._callable(data)
        except Exception as e:
            raise SchemaError('%r raised %r' % (self._callable.__name__, e))

Sometimes you need to transform and validate part of the data, but keep the original data unchanged. Const helps to keep your data safe:

>>> from schema import Use, Const, And, Schema

>>> from datetime import datetime

>>> is_future = lambda date: datetime.now() > date

>>> to_json = lambda v: {"timestamp": v}

>>> Schema(And(Const(And(Use(datetime.fromtimestamp), is_future)), Use(to_json))).validate(1234567890)
{'timestamp': 1234567890}

Now you can write your own validation-aware classes and data types.
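
For instance, here is a minimal sketch of such a class, following the validate contract described above (the class name and error message are illustrative):

from schema import Schema, SchemaError

class StrippedName(object):
    """Accept a non-empty string and return it stripped of surrounding whitespace."""

    def validate(self, data):
        if isinstance(data, str) and data.strip():
            return data.strip()
        raise SchemaError('%r is not a non-empty string' % (data,))

# Schema({'name': StrippedName()}).validate({'name': '  Sue  '}) returns {'name': 'Sue'}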

Lists, similar containers

If Schema(...) encounters an instance of list, tuple, set, or frozenset, it will validate the contents of the corresponding data container against all schemas listed inside that container and aggregate all errors:

>>> Schema([1, 0]).validate([1, 1, 0, 1])
[1, 1, 0, 1]

>>> Schema((int, float)).validate((5, 7, 8, 'not int or float here'))
Traceback (most recent call last):
...
schema.SchemaError: Or(<class 'int'>, <class 'float'>) did not validate 'not int or float here'
'not int or float here' should be instance of 'int'
'not int or float here' should be instance of 'float'

Dictionaries

If Schema(...) encounters an instance of dict, it will validate the data's key-value pairs:

>>> d = Schema({'name': str,
...             'age': lambda n: 18 <= n <= 99}).validate({'name': 'Sue', 'age': 28})

>>> assert d == {'name': 'Sue', 'age': 28}

You can specify keys as schemas too:

>>> schema = Schema({str: int,  # string keys should have integer values
...                  int: None})  # int keys should always be None

>>> data = schema.validate({'key1': 1, 'key2': 2,
...                         10: None, 20: None})

>>> schema.validate({'key1': 1,
...                   10: 'not None here'})
Traceback (most recent call last):
...
schema.SchemaError: Key '10' error:
None does not match 'not None here'

This is useful if you want to check certain key-value pairs but don't care about others:

>>> schema = Schema({'<id>': int,
...                  '<file>': Use(open),
...                  str: object})  # don't care about other str keys

>>> data = schema.validate({'<id>': 10,
...                         '<file>': 'README.rst',
...                         '--verbose': True})

You can mark a key as optional as follows:

>>> from schema import Optional
>>> Schema({'name': str,
...         Optional('occupation'): str}).validate({'name': 'Sam'})
{'name': 'Sam'}

Optional keys can also carry a default, to be used when no key in the data matches:

>>> from schema import Optional
>>> Schema({Optional('color', default='blue'): str,
...         str: str}).validate({'texture': 'furry'}
...       ) == {'color': 'blue', 'texture': 'furry'}
True

Defaults are used verbatim, not passed through any validators specified in the value.
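
For example, a small sketch of that behaviour (the 'count' key is illustrative; per the statement above, the default bypasses the Use(int) conversion):

>>> Schema({Optional('count', default='5'): Use(int)}).validate({}) == {'count': '5'}
True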

default can also be a callable:

>>> from schema import Schema, Optional
>>> Schema({Optional('data', default=dict): {}}).validate({}) == {'data': {}}
True

Also, a caveat: If you specify types, schema won't validate the empty dict:

>>> Schema({int:int}).is_valid({})
False

To do that, you need Schema(Or({int:int}, {})). This is unlike what happens with lists, where Schema([int]).is_valid([]) will return True.
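
A short sketch of both cases:

>>> from schema import Or
>>> Schema(Or({int: int}, {})).is_valid({})
True

>>> Schema([int]).is_valid([])
True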

schema has And and Or classes that help validate several schemas against the same data:

>>> from schema import And, Or

>>> Schema({'age': And(int, lambda n: 0 < n < 99)}).validate({'age': 7})
{'age': 7}

>>> Schema({'password': And(str, lambda s: len(s) > 6)}).validate({'password': 'hai'})
Traceback (most recent call last):
...
schema.SchemaError: Key 'password' error:
<lambda>('hai') should evaluate to True

>>> Schema(And(Or(int, float), lambda x: x > 0)).validate(3.1415)
3.1415

In a dictionary, you can also combine two keys in a "one or the other" manner. To do so, use the Or class as a key:

>>> from schema import Or, Schema
>>> schema = Schema({
...    Or("key1", "key2", only_one=True): str
... })

>>> schema.validate({"key1": "test"}) # Ok
{'key1': 'test'}

>>> schema.validate({"key1": "test", "key2": "test"}) # SchemaError
Traceback (most recent call last):
...
schema.SchemaOnlyOneAllowedError: There are multiple keys present from the Or('key1', 'key2') condition

Hooks

You can define hooks, which are functions that are executed whenever a valid key-value pair is found. The Forbidden class is an example of this.

You can mark a key as forbidden as follows:

>>> from schema import Forbidden
>>> Schema({Forbidden('age'): object}).validate({'age': 50})
Traceback (most recent call last):
...
schema.SchemaForbiddenKeyError: Forbidden key encountered: 'age' in {'age': 50}

A few things are worth noting. First, the value paired with the forbidden key determines whether it will be rejected:

>>> Schema({Forbidden('age'): str, 'age': int}).validate({'age': 50})
{'age': 50}

Note: if we hadn't supplied the 'age' key here, the call would have failed too, but with SchemaWrongKeyError, not SchemaForbiddenKeyError.
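
A sketch of that case (the exact error message may vary between versions):

>>> Schema({Forbidden('age'): str}).validate({'age': 50})
Traceback (most recent call last):
...
schema.SchemaWrongKeyError: ...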

Second, Forbidden has a higher priority than standard keys, and consequently than Optional. This means we can do the following:

>>> Schema({Forbidden('age'): object, Optional(str): object}).validate({'age': 50})
Traceback (most recent call last):
...
schema.SchemaForbiddenKeyError: Forbidden key encountered: 'age' in {'age': 50}

You can also define your own hooks. The following hook will call _my_function if the key is encountered:

from schema import Hook
def _my_function(key, scope, error):
    print(key, scope, error)

Hook("key", handler=_my_function)

Here's an example where a Deprecated class is added to log warnings whenever a key is encountered:

import logging
from schema import Hook, Schema
class Deprecated(Hook):
    def __init__(self, *args, **kwargs):
        kwargs["handler"] = lambda key, *args: logging.warn(f"`{key}` is deprecated. " + (self._error or ""))
        super(Deprecated, self).__init__(*args, **kwargs)

Schema({Deprecated("test", "custom error message."): object}, ignore_extra_keys=True).validate({"test": "value"})
...
WARNING: `test` is deprecated. custom error message.

Extra Keys

The Schema(...) parameter ignore_extra_keys causes validation to ignore extra keys in a dictionary, and also to not return them after validating.

>>> schema = Schema({'name': str}, ignore_extra_keys=True)
>>> schema.validate({'name': 'Sam', 'age': '42'})
{'name': 'Sam'}

If you would like any extra keys returned, use object: object as one of the key/value pairs, which will match any key and any value. Otherwise, extra keys will raise a SchemaError.
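
For instance, a sketch of the object: object approach:

>>> schema = Schema({'name': str, object: object})
>>> schema.validate({'name': 'Sam', 'age': '42'}) == {'name': 'Sam', 'age': '42'}
True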

User-friendly error reporting

You can pass a keyword argument error to any of the validatable classes (such as Schema, And, Or, Regex, Use) to report this error instead of a built-in one.

>>> Schema(Use(int, error='Invalid year')).validate('XVII')
Traceback (most recent call last):
...
schema.SchemaError: Invalid year

You can see all errors that occurred by accessing the exception's autos attribute for auto-generated error messages, and its errors attribute for errors which had error text passed to them.

You can exit with sys.exit(exc.code) if you want to show the messages to the user without a traceback. Error messages are given precedence in that case.
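
A brief sketch of inspecting these attributes (the exact contents of autos and errors depend on the schema):

import sys
from schema import Schema, Use, SchemaError

try:
    Schema(Use(int, error='Invalid year')).validate('XVII')
except SchemaError as exc:
    print(exc.autos)    # auto-generated messages
    print(exc.errors)   # messages supplied via the error= keyword
    sys.exit(exc.code)  # show the messages to the user without a traceback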

A JSON API example

Here is a quick example: validation of a create-a-gist request from the GitHub API.

>>> gist = '''{"description": "the description for this gist",
...            "public": true,
...            "files": {
...                "file1.txt": {"content": "String file contents"},
...                "other.txt": {"content": "Another file contents"}}}'''

>>> from schema import Schema, And, Use, Optional

>>> import json

>>> gist_schema = Schema(And(Use(json.loads),  # first convert from JSON
...                          # use str since json returns unicode
...                          {Optional('description'): str,
...                           'public': bool,
...                           'files': {str: {'content': str}}}))

>>> gist = gist_schema.validate(gist)

# gist:
{u'description': u'the description for this gist',
 u'files': {u'file1.txt': {u'content': u'String file contents'},
            u'other.txt': {u'content': u'Another file contents'}},
 u'public': True}

Using schema with docopt

Assume you are using docopt with the following usage-pattern:

Usage: my_program.py [--count=N] <path> <files>...

and you would like to validate that <files> are readable, that <path> exists, and that --count is either an integer n with 0 < n < 5, or None.

Assuming docopt returns the following dict:

>>> args = {'<files>': ['LICENSE-MIT', 'setup.py'],
...         '<path>': '../',
...         '--count': '3'}

this is how you validate it using schema:

>>> from schema import Schema, And, Or, Use
>>> import os

>>> s = Schema({'<files>': [Use(open)],
...             '<path>': os.path.exists,
...             '--count': Or(None, And(Use(int), lambda n: 0 < n < 5))})

>>> args = s.validate(args)

>>> args['<files>']
[<_io.TextIOWrapper name='LICENSE-MIT' ...>, <_io.TextIOWrapper name='setup.py' ...]

>>> args['<path>']
'../'

>>> args['--count']
3

As you can see, schema validated the data successfully, opened the files, and converted '3' to an int.

JSON schema

You can also generate a standard draft-07 JSON schema from a dict Schema. This can be used to add word completion, validation, and documentation directly in code editors. The output schema can also be used with JSON-schema-compatible libraries.

JSON: Generating

Just define your schema normally and call .json_schema() on it. The output is a Python dict; you need to dump it to JSON yourself.

>>> from schema import Optional, Schema
>>> import json
>>> s = Schema({"test": str,
...             "nested": {Optional("other"): str}
...             })
>>> json_schema = json.dumps(s.json_schema("https://example.com/my-schema.json"))

# json_schema
{
    "type":"object",
    "properties": {
        "test": {"type": "string"},
        "nested": {
            "type":"object",
            "properties": {
                "other": {"type": "string"}
            },
            "required": [],
            "additionalProperties": false
        }
    },
    "required":[
        "test",
        "nested"
    ],
    "additionalProperties":false,
    "$id":"https://example.com/my-schema.json",
    "$schema":"http://json-schema.org/draft-07/schema#"
}

You can add descriptions for the schema elements using the Literal object instead of a string. The main schema can also have a description.

These will appear in IDEs to help your users write a configuration.

>>> from schema import Literal, Schema
>>> import json
>>> s = Schema({Literal("project_name", description="Names must be unique"): str}, description="Project schema")
>>> json_schema = json.dumps(s.json_schema("https://example.com/my-schema.json"), indent=4)

# json_schema
{
    "type": "object",
    "properties": {
        "project_name": {
            "description": "Names must be unique",
            "type": "string"
        }
    },
    "required": [
        "project_name"
    ],
    "additionalProperties": false,
    "$id": "https://example.com/my-schema.json",
    "$schema": "http://json-schema.org/draft-07/schema#",
    "description": "Project schema"
}

JSON: Supported validations

The resulting JSON schema is not guaranteed to accept the same objects as the library would accept, since some validations are not implemented or have no JSON schema equivalent. This is the case for the Use and Hook objects, for example.

Implemented

Object properties

Use a dict literal. The dict keys are the JSON schema properties.

Example:

Schema({"test": str})

becomes

{'type': 'object', 'properties': {'test': {'type': 'string'}}, 'required': ['test'], 'additionalProperties': False}.

Please note that attributes are required by default. To create optional attributes use Optional, like so:

Schema({Optional("test"): str})

becomes

{'type': 'object', 'properties': {'test': {'type': 'string'}}, 'required': [], 'additionalProperties': False}

additionalProperties is set to true when at least one of the conditions is met:
  • ignore_extra_keys is True
  • at least one key is str or object

For example:

Schema({str: str}) and Schema({}, ignore_extra_keys=True)

both become

{'type': 'object', 'properties' : {}, 'required': [], 'additionalProperties': True}

and

Schema({})

becomes

{'type': 'object', 'properties' : {}, 'required': [], 'additionalProperties': False}

Types

Use the Python type name directly. It will be converted to the JSON name:

Example:

Schema(float)

becomes

{"type": "number"}

Array items

Surround a schema with [].

Example:

Schema([str]) means an array of strings and becomes:

{'type': 'array', 'items': {'type': 'string'}}

Enumerated values

Use Or.

Example:

Schema(Or(1, 2, 3)) becomes

{"enum": [1, 2, 3]}

Constant values

Use the value itself.

Example:

Schema("name") becomes

{"const": "name"}

Regular expressions

Use Regex.

Example:

Schema(Regex(r"^v\d+")) becomes

{'type': 'string', 'pattern': '^v\\d+'}

Annotations (title and description)

You can use the name and description parameters of the Schema object init method.

To add a description to keys, replace a str with a Literal object.

Example:

Schema({Literal("test", description="A description"): str})

is equivalent to

Schema({"test": str})

with the description added to the resulting JSON schema.

Combining schemas with allOf

Use And

Example:

Schema(And(str, "value"))

becomes

{"allOf": [{"type": "string"}, {"const": "value"}]}

Note that this example is not really useful in the real world, since const already implies the type.

Combining schemas with anyOf

Use Or

Example:

Schema(Or(str, int))

becomes

{"anyOf": [{"type": "string"}, {"type": "integer"}]}

Not implemented

The following JSON schema validations cannot be generated from this library.

JSON: Minimizing output size

Explicit Reuse

If your JSON schema is big and has a lot of repetition, it can be made simpler and smaller by defining Schema objects as references. These references will be placed in a "definitions" section in the main schema.

You can look at the JSON schema documentation for more information.

>>> from schema import Optional, Schema
>>> import json
>>> s = Schema({"test": str,
...             "nested": Schema({Optional("other"): str}, name="nested", as_reference=True)
...             })
>>> json_schema = json.dumps(s.json_schema("https://example.com/my-schema.json"), indent=4)

# json_schema
{
    "type": "object",
    "properties": {
        "test": {
            "type": "string"
        },
        "nested": {
            "$ref": "#/definitions/nested"
        }
    },
    "required": [
        "test",
        "nested"
    ],
    "additionalProperties": false,
    "$id": "https://example.com/my-schema.json",
    "$schema": "http://json-schema.org/draft-07/schema#",
    "definitions": {
        "nested": {
            "type": "object",
            "properties": {
                "other": {
                    "type": "string"
                }
            },
            "required": [],
            "additionalProperties": false
        }
    }
}

This becomes really useful when using the same object several times:

>>> from schema import Optional, Or, Schema
>>> import json
>>> language_configuration = Schema({"autocomplete": bool, "stop_words": [str]}, name="language", as_reference=True)
>>> s = Schema({Or("ar", "cs", "de", "el", "eu", "en", "es", "fr"): language_configuration})
>>> json_schema = json.dumps(s.json_schema("https://example.com/my-schema.json"), indent=4)

# json_schema
{
    "type": "object",
    "properties": {
        "ar": {
            "$ref": "#/definitions/language"
        },
        "cs": {
            "$ref": "#/definitions/language"
        },
        "de": {
            "$ref": "#/definitions/language"
        },
        "el": {
            "$ref": "#/definitions/language"
        },
        "eu": {
            "$ref": "#/definitions/language"
        },
        "en": {
            "$ref": "#/definitions/language"
        },
        "es": {
            "$ref": "#/definitions/language"
        },
        "fr": {
            "$ref": "#/definitions/language"
        }
    },
    "required": [],
    "additionalProperties": false,
    "$id": "https://example.com/my-schema.json",
    "$schema": "http://json-schema.org/draft-07/schema#",
    "definitions": {
        "language": {
            "type": "object",
            "properties": {
                "autocomplete": {
                    "type": "boolean"
                },
                "stop_words": {
                    "type": "array",
                    "items": {
                        "type": "string"
                    }
                }
            },
            "required": [
                "autocomplete",
                "stop_words"
            ],
            "additionalProperties": false
        }
    }
}

Automatic reuse

If you want to minimize the output size without using names explicitly, you can have the library generate hashes of parts of the output JSON schema and use them as references throughout.

Enable this behaviour by providing the parameter use_refs to the json_schema method.

Be aware that this method is less compatible with IDEs and JSON schema libraries, and the resulting JSON schema is harder for humans to read.

>>> from schema import Optional, Or, Schema
>>> import json
>>> language_configuration = Schema({"autocomplete": bool, "stop_words": [str]})
>>> s = Schema({Or("ar", "cs", "de", "el", "eu", "en", "es", "fr"): language_configuration})
>>> json_schema = json.dumps(s.json_schema("https://example.com/my-schema.json", use_refs=True), indent=4)

# json_schema
{
    "type": "object",
    "properties": {
        "ar": {
            "type": "object",
            "properties": {
                "autocomplete": {
                    "type": "boolean",
                    "$id": "#6456104181059880193"
                },
                "stop_words": {
                    "type": "array",
                    "items": {
                        "type": "string",
                        "$id": "#1856069563381977338"
                    }
                }
            },
            "required": [
                "autocomplete",
                "stop_words"
            ],
            "additionalProperties": false
        },
        "cs": {
            "type": "object",
            "properties": {
                "autocomplete": {
                    "$ref": "#6456104181059880193"
                },
                "stop_words": {
                    "type": "array",
                    "items": {
                        "$ref": "#1856069563381977338"
                    },
                    "$id": "#-5377945144312515805"
                }
            },
            "required": [
                "autocomplete",
                "stop_words"
            ],
            "additionalProperties": false
        },
        "de": {
            "type": "object",
            "properties": {
                "autocomplete": {
                    "$ref": "#6456104181059880193"
                },
                "stop_words": {
                    "$ref": "#-5377945144312515805"
                }
            },
            "required": [
                "autocomplete",
                "stop_words"
            ],
            "additionalProperties": false,
            "$id": "#-8142886105174600858"
        },
        "el": {
            "$ref": "#-8142886105174600858"
        },
        "eu": {
            "$ref": "#-8142886105174600858"
        },
        "en": {
            "$ref": "#-8142886105174600858"
        },
        "es": {
            "$ref": "#-8142886105174600858"
        },
        "fr": {
            "$ref": "#-8142886105174600858"
        }
    },
    "required": [],
    "additionalProperties": false,
    "$id": "https://example.com/my-schema.json",
    "$schema": "http://json-schema.org/draft-07/schema#"
}
Comments
  • Fix error formatting for validation with callable

    Fix error formatting for validation with callable

    There is a feature (although not documented and not tested) that allows passing format strings as error messages to the validators; they are formatted with the validated data if a SchemaError is thrown.

    I find this feature very useful, but it does not work when validating using a callable. For example, with the current behavior, schema.Schema(lambda d: False, error='{}').validate('This should be the error message') raises SchemaError: {}. After the fix, the error will be SchemaError: This should be the error message.

    opened by kmaork 24
  • Custom Schema Names

    Custom Schema Names

    In Schema.py

    Added parameter "name" to the Schema class that defaults to an empty string.

    Added the function set_schema_name() to the Schema class that formats and returns the Schema name if it isn't empty. set_schema_name() also takes in one string argument so the way it formats the name can be altered based on the schema error type.

    The formatted name it returns is then substituted into the error message that gets printed when a schema error is raised.

    opened by SnapperGee 21
  • Adding regular expression (regex) support

    Adding regular expression (regex) support

    Hi,

    Here is a proposal to add regex support to the library, using the search method of compiled patterns from Python's core re library (support was tested on Python 2.6.9, 2.7.12, 3.3.0, 3.4.3, 3.5.2, pypy-5.3 and pypy3-2.4.0).

    Four simple test cases were added as well.

    Note that support for python 3.2 was not tested because of an issue with tox and virtualenv: https://github.com/travis-ci/travis-ci/issues/5517

    opened by gusmonod 20
  • When ignoring extra keys,  Or's only_one should still be handled

    When ignoring extra keys, Or's only_one should still be handled

    Sorry for another PR about this. I noticed that Or's only_one condition didn't work when mixed with ignore_extra_keys, as it was relying on the WrongKey exception. I instead implemented a type of error that stops execution immediately.

    Let me know if you see anything I might have missed. Thanks

    opened by julienduchesne 18
  • Do not drop previous errors within an Or criterion.

    Do not drop previous errors within an Or criterion.

    When raising a SchemaError with a user-readable error message, this message would be dropped if there was more than one validator in an Or() clause.

    BTW: Thanks for the library, we use it extensively to keep our model correct.

    opened by blaa 17
  • add strict flag in Schema class to skip wrong key validation without wildcard

    add strict flag in Schema class to skip wrong key validation without wildcard

    To avoid a SchemaError when the data contains more keys than the schema:

    >>> Schema({'key': 'value'}, strict=False).validate({'key': 'value', 'foo': 'bar'})
    {'key': 'value'}
    
    opened by drgarcia1986 15
  • Handle wrong keys better

    Handle wrong keys better

    fixes #3 and #15.

    On top of the original pull request https://github.com/halst/schema/pull/18 to improve messages for values, I made the error messages clearer when the input contains unexpected keys, e.g.:

    >>> Schema({'a':int}).validate({'a': 1, 'bad': 5, 'bad2':None})
    
    Traceback (most recent call last):
    ...
    SchemaError: wrong keys 'bad', 'bad2' in {'a': 1, 'bad': 5, 'bad2': None}
    

    P.S. I tried merging with halst:master, but master currently seems partially broken, as some tests are failing there even before my merge (probably connected to the Optional() fix), so I'll leave the merging out for now...

    opened by vidma 15
  • Made schema more extendable with simple trick. Issue #63 #64

    Made schema more extendable with simple trick. Issue #63 #64

    I made feature request #64, then read issue #63, and after some thought and a look at the code found out that it's very easy to achieve. I don't think #64 and #63 need to be implemented, but it's nice to have that possibility without modifying the base schema code. The Doc class from #63 is implemented as a test, test_validate_kwargs_doc_example, to show how easy it is.

    opened by kosz85 14
  • added coverage and pep8 checks

    added coverage and pep8 checks

    • integration with coveralls
    • check pep8 with flake8
    • fixed pep8 so it passes; for now we use max-line 90, and two issues are ignored until decided otherwise:
      • disabled E701 (multiple statements on one line) and E126 (continuation line over-indented for hanging indent)
    • as pep8 now passes (and is quite loose), it can be activated to affect the Travis build status

    Note: based on https://github.com/petrblaho/python-tuskarclient/blob/master/

    opened by vidma 13
  • Types inside `And` are ignored when creating JSON schema

    Types inside `And` are ignored when creating JSON schema

    Currently, it is as follows:

    >>> Schema({'name': str}).json_schema("example_schema")
    {
        'type': 'object',
        'properties': {'name': {'type': 'string'}},
        'required': ['name'],
        'additionalProperties': False,
        'id': 'example_schema',
        '$schema': 'http://json-schema.org/draft-07/schema#'
    }
    
    >>> Schema({'name': And(str, Use(str.lower))}).json_schema("example_schema")
    {
        'type': 'object',
        'properties': {'name': {}},
        'required': ['name'],
        'additionalProperties': False,
        'id': 'example_schema',
        '$schema': 'http://json-schema.org/draft-07/schema#'
    }
    

    Converting str to And(str, Use(str.lower)) results in dropping the {'type': 'string'} in JSON schema. However, if there is a single type in And, it should be preserved in JSON schema.

    opened by berkanteber 9
  • Feature/json schema descriptions

    Feature/json schema descriptions

    Another feature for #180

    • Added the Literal type for adding a description to JSON schemas
    • Fixed new bugs introduced by the Literal type
    • Added tests to ensure the Literal type was working as expected

    This is a direct continuation to PR #206

    opened by jcbedard 9
  • Update "User-friendly error reporting" example

    Update "User-friendly error reporting" example

    Updating the "User-friendly error reporting" example to reflect the feature implemented in the following PR https://github.com/keleshev/schema/pull/107/files.

    opened by garrettprimm 0
  • Unexpected behavior: Can not double validate keys

    Unexpected behavior: Can not double validate keys

    from schema import Schema, And, Or, Use, Optional, SchemaError, Forbidden
    
    
    x = Schema({
        Or('request', 'requests', only_one=True) : dict,
        Optional('requests'): {Use(int): dict}
    })
    
    x.validate({
        'requests': {1:{}}
    })
    

    The expected result here is that a user can supply a dict using the key 'request' or 'requests'. If 'requests' is used, then the nested dict keys should be numbered. I assumed that the dict would validate Or('request', 'requests', only_one=True) : dict, and then see that the optional key 'requests' was supplied. It should also validate Optional('requests'): {Use(int): dict}. But the output seems to only allow one validation per key.

    See output:

    ---------------------------------------------------------------------------
    SchemaMissingKeyError                     Traceback (most recent call last)
    Input In [298], in <cell line: 1>()
    ----> 1 x.validate({
          2     'requests': {1:{}}
          3 })
    
    File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\schema.py:420, in Schema.validate(self, data, **kwargs)
        418     message = "Missing key%s: %s" % (_plural_s(missing_keys), s_missing_keys)
        419     message = self._prepend_schema_name(message)
    --> 420     raise SchemaMissingKeyError(message, e.format(data) if e else None)
        421 if not self._ignore_extra_keys and (len(new) != len(data)):
        422     wrong_keys = set(data.keys()) - set(new.keys())
    
    SchemaMissingKeyError: Missing key: Or('request', 'requests')
    
    opened by MrChadMWood 3
  • Potential Bug; Unexpected Behavior (Can't find reason)

    Potential Bug; Unexpected Behavior (Can't find reason)

    I am attempting to validate filters passed via JSON like so:

    schema_numericValue = Schema(Or({'int64_value':int}, {'double_value':float}))
    
    schema_stringFilter = Schema({
        'match_type' : Or(
            'EXACT',
            'BEGINS_WITH',
            'ENDS_WITH',
            'CONTAINS',
            'FULL_REGEXP',
            'PARTIAL_REGEXP',
            only_one=True
        ),
        'value' : str,
        Optional('case_sensitive') : bool
    })
    
    schema_inListFilter = Schema({
        'values' : [str],
        Optional('case_sensitive') : bool
    })
    
    schema_betweenFilter = Schema({
        'from_value' : schema_numericValue,
        'to_value' : schema_numericValue
    })
    
    schema_numericFilter = Schema({
        'operation' : Or(
            'EQUAL',
            'GREATER_THAN',
            'GREATER_THAN_OR_EQUAL',
            'LESS_THAN',
            'LESS_THAN_OR_EQUAL',
            only_one=True
        ),
        'value' : schema_numericValue
    })
    
    schema_filter = Schema({
        'field_name' : str,
        Or(
            'string_filter', 
            'in_list_filter', 
            'numeric_filter', 
            'between_filter',
            only_one=True
        ) : Or(schema_inListFilter,
               schema_stringFilter,
               schema_numericFilter,
               schema_betweenFilter,
               only_one=True
              )
    })
    
    schema_basicFilterExpression = Schema({
        Or(
            'filter', 
            'not_expression', 
            only_one=True
        ) : schema_filter
    })
    
    schema_basicFilterExpressionList = Schema({
        'expressions' : [schema_basicFilterExpression]
    })
    
    schema_intermediateFilterExpression = Schema({
        Or(
            'and_group',
            'or_group',
            'not_expression',
            'filter',
            only_one=True
        ) : Or(schema_basicFilterExpressionList,
               schema_basicFilterExpression,
               schema_filter,
               only_one=True)
    })
    
    schema_intermediateFilterExpressionList = Schema({
        'expressions' : [schema_intermediateFilterExpression]
    })
    
    schema_FilterExpression = Schema({
        Or(
            'and_group',
            'or_group',
            'not_expression',
            'filter',
            only_one=True
        ) : Or(schema_intermediateFilterExpressionList,
               schema_intermediateFilterExpression,
               only_one=True)
    })
    

    Attempting to validate at a sub-level like so

    schema_filter.validate({'field_name': 'p', 'in_list_filter': {'values': ['p', 'p']}})
    

    This produces the following error:

    Key 'in_list_filter' error:
    Or(Schema({'values': [<class 'str'>], Optional('case_sensitive'): <class 'bool'>}), Schema({'match_type': Or('EXACT', 'BEGINS_WITH', 'ENDS_WITH', 'CONTAINS', 'FULL_REGEXP', 'PARTIAL_REGEXP'), 'value': <class 'str'>, Optional('case_sensitive'): <class 'bool'>}), Schema({'operation': Or('EQUAL', 'GREATER_THAN', 'GREATER_THAN_OR_EQUAL', 'LESS_THAN', 'LESS_THAN_OR_EQUAL'), 'value': Schema(Or({'int64_value': <class 'int'>}, {'double_value': <class 'float'>}))}), Schema({'from_value': Schema(Or({'int64_value': <class 'int'>}, {'double_value': <class 'float'>})), 'to_value': Schema(Or({'int64_value': <class 'int'>}, {'double_value': <class 'float'>}))})) did not validate {'values': ['p', 'p']}
    

    I tried removing the Or() statement from the value section, only checking for the schema I'm passing at the moment. When I do this, it works.

    opened by MrChadMWood 1
  • Question - How do I define conditional rules?

    Question - How do I define conditional rules?

    Say I have a dictionary where the keys are conditional. How do I write a schema to check it?

    For example, the following 2 are ok.

    {
        'name': 'x',
        'x_val': 1
    }
    
    {
        'name': 'y',
        'y_val': 1
    }
    

    But the following 2 are not

    {
        'name': 'x',
        'y_val': 1
    }
    
    {
        'name': 'y',
        'x_val': 1
    }
    

    Which keys need to be in the dict is conditional on the value of name. I can't just say this is a dict with these keys: name x has a certain set of keys (in this case, x_val), and name y has a different set of keys (y_val).

    Of course, I can write my own lambda function that takes in such a dictionary and performs the checks. But I was wondering if there's some kind of out-of-the-box solution for this type of validation.

    thanks

    opened by fopguy41 1
  • ignore_extra_keys is ignored if flavor == VALIDATOR

    ignore_extra_keys is ignored if flavor == VALIDATOR

    When a validator like Or, And, etc. is used, ignore_extra_keys is taken as False and any extra fields cause a validation error. For example, the object {"str": "str", "extra": 125} is valid for the schema {"str": str} with ignore_extra_keys=True, but it is not valid for the schema Or({"str": str}, None) with ignore_extra_keys=True; however, it should be valid as well.

    Finally, I understood I needed to set it as Or({"str": str}, None, ignore_extra_keys=True); however, this is not evident enough, so it would be more convenient if Or used the ignore_extra_keys parameter from the Schema object.

    opened by ukrsms 1