Gabbi

Declarative HTTP Testing for Python and anything else

Gabbi is a tool for running HTTP tests where requests and responses are represented in a declarative YAML-based form. The simplest test looks like this:

tests:
- name: A test
  GET: /api/resources/id

See the docs for more details on the many features and formats for setting request headers and bodies and evaluating responses.
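As an illustrative sketch (the endpoint and field names here are hypothetical), a slightly fuller file can set request headers and a body, then evaluate the response; request_headers, data, status, and response_json_paths are all part of gabbi's documented format, and $LOCATION refers to the location header of the previous response:

```yaml
tests:
- name: create a resource
  POST: /api/resources
  request_headers:
    content-type: application/json
  data:
    name: example
  status: 201

- name: check the created resource
  GET: $LOCATION
  response_json_paths:
    $.name: example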

Gabbi is tested with Python 3.6, 3.7, 3.8, 3.9, and PyPy3.

Tests can be run using unittest-style test runners, pytest, or from the command line with the gabbi-run script.
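For example, assuming a hypothetical test file example.yaml and a service listening on localhost:8001, gabbi-run reads the YAML on stdin:

```
$ gabbi-run localhost:8001 < example.yaml
```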

There is a gabbi-demo repository which provides a tutorial via its commit history. The demo builds a simple API using gabbi to facilitate test driven development.

Purpose

Gabbi works to bridge the gap between human readable YAML files that represent HTTP requests and expected responses and the obscured realm of Python-based, object-oriented unit tests in the style of the unittest module and its derivatives.

Each YAML file represents an ordered list of HTTP requests along with the expected responses. This allows a single file to represent a process in the API being tested. For example:

  • Create a resource.
  • Retrieve a resource.
  • Delete a resource.
  • Retrieve a resource again to confirm it is gone.

At the same time it is still possible to ask gabbi to run just one request. If it is in a sequence of tests, those tests prior to it in the YAML file will be run (in order). In any single process any test will only be run once. Concurrency is handled such that one file runs in one process.

These features mean that it is possible to create tests that are useful for both humans (as tools for improving and developing APIs) and automated CI systems.

Testing and Developing Gabbi

To get started, after cloning the repository, you should install the development dependencies:

$ pip install -r requirements-dev.txt

If you prefer to keep things isolated you can create a virtual environment:

$ virtualenv gabbi-venv
$ . gabbi-venv/bin/activate
$ pip install -r requirements-dev.txt

Gabbi is set up to be developed and tested using tox (installed via requirements-dev.txt). To run the built-in tests (the YAML files are in the directories gabbi/tests/gabbits_* and loaded by the file gabbi/test_*.py), you call tox:

tox -epep8,py37

If you have the dependencies installed (or a warmed up virtualenv) you can run the tests by hand and exit on the first failure:

python -m subunit.run discover -f gabbi | subunit2pyunit

Testing can be limited to individual modules by specifying them after the tox invocation:

tox -epep8,py37 -- test_driver test_handlers

If you wish to avoid running tests that connect to internet hosts, set GABBI_SKIP_NETWORK to True.

Comments
  • Coerce JSON types into correct values for later $RESPONSE replacements

    Coerce JSON types into correct values for later $RESPONSE replacements

    Resolves #147: $RESPONSE replacements that contained integer or decimal values would wind up as quoted strings after substitution, e.g. {"id": 825} would later become {"id": "825"}.

    This passes tox -epep8, but running tox -epy27 seems to have the first few tests fail, unfortunately. This patch works for my use case, however; I could not POST data to particular endpoints of an API as the number values were wrapped in quotes (and the API was doing type checks on the values).

    This may not be an ideal (or eloquent) solution, but I've also tried to keep performance in mind in view of large JSON response bodies; namely, I expect the exception cases to be more common than exceptional, so I've added some additional checking to see if it's really worth parsing particular values as strings (line no. 359) and/or if it even looks like a number in the first place (line no. 376). That said, if this performance hit is not an issue, it certainly is a lot more readable without the checks.

    I also chose to do two try/excepts instead of simply using float. First we try parsing as int and then as float, as I prefer the resulting JSON to be correct, i.e. I would rather not have an id field that was initially an int be cast into a float. For example, consider a response of {"id": 825} and a single try/except that used float. The value would parse, but the resulting JSON (from json.dumps) would be {"id": 825.0}. This pragmatically doesn't matter, as I'm sure most endpoints will accept a decimal value with an appended .0 as a valid integer, but I felt the semantics would be a surprise to other users of the lib, and it's still possible that certain APIs might have an issue with it.
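    The int-then-float ordering described above can be sketched (independently of gabbi's actual implementation) as:

    ```python
    import json

    def coerce(value):
        """Try int first, then float, so "825" stays integral;
        leave non-numeric strings untouched."""
        for cast in (int, float):
            try:
                return cast(value)
            except ValueError:
                pass
        return value

    # "825" substitutes back in as a number, not a quoted string.
    print(json.dumps({"id": coerce("825")}))   # {"id": 825}
    ```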

    And thanks for all the effort you've put into the lib!

    opened by justanotherdot 20
  • Unable to make a relative Content Handler import from the command-line

    Unable to make a relative Content Handler import from the command-line

    On the command line, importing a custom Response Handler using a relative path requires manipulation of the PYTHONPATH environment variable to add . to the list of paths.

    Should Gabbi allow relative imports to work out-of-the-box?

    e.g.

    gabbi-run -r foo.bar:ExampleHandler < example.yaml
    

    ... fails with, ModuleNotFoundError: No module named 'foo'.

    Updating PYTHONPATH...

    PYTHONPATH=${PYTHONPATH}:. gabbi-run -r foo.bar:ExampleHandler < example.yaml
    

    ... works.

    opened by scottwallacesh 17
  • Allow to load python object from yaml

    Allow to load python object from yaml

    It can be interesting to write custom objects to compare values.

    For example, I need to ensure an output is equal to .NAN

    Because .NAN == .NAN always returns false, we currently can't compare it with assert_equals().

    With the unsafe yaml loader we can register a custom method to check NAN, for example:

    import yaml
    import numpy

    class IsNAN(object):
        @classmethod
        def constructor(cls, loader, node):
            return cls()

        def __eq__(self, other):
            return numpy.isnan(other)

    yaml.add_constructor(u'!ISNAN', IsNAN.constructor)
    
    opened by sileht 17
  • extra verbosity to include request/response bodies

    extra verbosity to include request/response bodies

    Currently it can be somewhat tricky to debug unexpected outcomes, as verbose: true only prints headers.

    In my case, I wanted to verify that a CSRF token was included in a form submission. The simplest way to check the request body was to start netcat and change my test's URL to http://localhost:9999.

    It would be useful if gabbi provided a way to inspect the entire data being sent over the wire.
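    A stdlib stand-in for the netcat trick described above can be sketched as follows: point the test's URL at this server's port and it captures each full request body (the port, path, and form field are arbitrary):

    ```python
    import threading
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    captured = []

    class EchoHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read exactly the advertised body length and record it.
            length = int(self.headers.get('content-length', 0))
            captured.append(self.rfile.read(length).decode())
            self.send_response(200)
            self.end_headers()

        def log_message(self, *args):
            pass  # keep the console quiet

    server = HTTPServer(('localhost', 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Simulate the test's form submission against the debug server.
    port = server.server_address[1]
    req = urllib.request.Request(
        f'http://localhost:{port}/form', data=b'csrf=abc123', method='POST')
    urllib.request.urlopen(req).close()
    server.shutdown()

    print(captured[0])   # csrf=abc123
    ```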

    opened by FND 15
  • Ability to run gabbi test cases individually

    Ability to run gabbi test cases individually

    The ability to run an individual gabbi test without any of the tests preceding it in the yaml file could be useful. I created a project where I drive gabbi test cases using Robot Framework (https://github.com/dkt26111/robotframework-gabbilibrary). In order for that to work I explicitly set the prior field of the gabbi test case being run to None.

    opened by dkt26111 14
  • Verbose misses response body

    Verbose misses response body

    I have got the following test spec:

    tests:
    -   name: auth
        verbose: all
        url: /api/sessions
        method: POST
        data: "asdsad"
        status: 200
    

    in which data is deliberately not proper JSON. The response results in a 400 instead of a 200. Verbose is set to all, but it still does not print the response body, although it detects non-empty content:

    ... #### auth ####
    > POST http://localhost:7000/api/sessions
    > user-agent: gabbi/1.40.0 (Python urllib3)
    
    < 400 Bad Request
    < content-length: 48
    < date: Wed, 19 Aug 2020 06:11:24 GMT
    
    ✗ gabbi-runner.input_auth
    
    FAIL: gabbi-runner.input_auth
            Traceback (most recent call last):
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 94, in wrapper
                func(self)
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 143, in test_request
                self._run_test()
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 550, in _run_test
                self._assert_response()
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 188, in _assert_response
                self._test_status(self.test_data['status'], self.response['status'])
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 591, in _test_status
                self.assert_in_or_print_output(observed_status, statii)
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 654, in assert_in_or_print_output
                self.assertIn(expected, iterable)
              File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 417, in assertIn
                self.assertThat(haystack, Contains(needle), message)
              File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in assertThat
                raise mismatch_error
            testtools.matchers._impl.MismatchError: '400' not in ['200']
    ----------------------------------------------------------------------
    

    The expected behavior:

    • response body is printed to the stdout alongside the response headers.
    opened by avkonst 11
  • Q: Persist (across > 1 tests) value to variable based on JSON response?

    Q: Persist (across > 1 tests) value to variable based on JSON response?

    Example algorithm to be expressed in YAML:

    • Create an object given an object name.
    • Store in "$FOO" (or similar) the UUID given to object per JSON response.
    • Do unrelated tests.
    • Perform a GET using "$FOO" (the UUID)

    Thanks as always!
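    As a sketch of one way to express this with gabbi's existing substitutions (the resource names here are hypothetical; the $HISTORY['test name'].$RESPONSE[...] form is the one gabbi provides for referring to any previous test):

    ```yaml
    tests:
    - name: create an object
      POST: /objects
      data:
        name: example
      status: 201

    - name: an unrelated test
      GET: /health
      status: 200

    - name: get the object by its stored uuid
      GET: /objects/$HISTORY['create an object'].$RESPONSE['$.uuid']
      status: 200
    ```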

    enhancement 
    opened by josdotso 11
  • Variable is not replaced with the previous result in the request body

    Variable is not replaced with the previous result in the request body

    Seen from https://gabbi.readthedocs.io/en/latest/format.html#any-previous-test

    There are 2 requests:

    1. post a task, will return a taskId
    2. query the task with the taskId
    • previous test return: {"dataSet": {"header": {"serverIp": "xxx.xxx.xxx.xxx", "version": "1.0", "errorKeys": [{"error_key" : "2-0-0"}], "errorInfo": "", "returnCode": 0}, "data": {"taskId": "3008929"}}}

    • yaml define data: taskId: $HISTORY['start live migrate a vm'].$RESPONSE['$.dataSet.data.taskId']

    • actual result: "data": { "taskId": "$.dataSet.data.taskId" }

    opened by taget 9
  • jsonhandler: allow reading yaml data from disk

    jsonhandler: allow reading yaml data from disk

    This commit aims to change the jsonhandler to be able to read data from disk if it is a yaml file.

    Note:

    • Simply replacing the loads call with yaml.safe_load is not enough due to the nature of the NaN checker requiring an unsafe load[1].

    Closes #253.

    [1] https://github.com/cdent/gabbi/commit/98adca65e05b7de4f1ab2bf90ab0521af5030f35

    opened by trevormccasland 9
  • pytest not working correctly!

    pytest not working correctly!

    Hi, I have been trying gabbi to write some simple tests and had luck using gabbi-run, but I need a Jenkins report so I tried the py.test version, with the loader code looking like this:

    import os
    
    from gabbi import driver
    
    # By convention the YAML files are put in a directory named
    # "gabbits" that is in the same directory as the Python test file. 
    TESTS_DIR = 'gabbits'
    
    def test_gabbits():
        test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
        test_generator = driver.py_test_generator(
            test_dir, host="http://www.nfdsfdsfdsf.se", port=80)
    
        for test in test_generator:
            yield test
    

    The yaml-file looks very simple:

    tests:
      - name: Do get to a faulty site
        url: /sdsdsad
        method: GET
        status: 200
    

    The problem is that the test passes even though the URL does not exist, so it should fail with connection refused. I have also tried a site returning 404, but the test still passes. Am I doing something wrong here?

    opened by keyhan 9
  • Add yaml-based tests for host header and sni checking

    Add yaml-based tests for host header and sni checking

    The addition of server_hostname to the httpclient PoolManager, without sufficient testing, has revealed some issues:

    • The minimum urllib3 required is too low. server_hostname was introduced in 1.24.x
    • However, there is a bug [1] in PoolManager when mixing schemes in the same pool manager. This is being fixed so the minimum urllib3 will need to be higher still.

    Tests are added here, and the minimum value for urllib3 will be set when a release is made.

    Some of the tests are "live" meaning they require network, and can be skipped via the live test fixture if the GABBI_SKIP_NETWORK env variable is set to "true".

    [1] https://github.com/urllib3/urllib3/issues/2534

    Fixes #307 Fixes #309

    opened by cdent 8
  • gabbi doesn't support client cert yet

    gabbi doesn't support client cert yet

    Gabbi doesn't support client cert yet

    It would help if gabbi could support: gabbi-run ... --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/client.crt --key /etc/kubernetes/pki/client.key ...

    opened by wu-wenxiang 4
  • Socket leak with large YAML test files

    Socket leak with large YAML test files

    I have a YAML file with nearly 2000 tests in it. When invoked from the command line, I run out of open file handles due to large amounts of sockets left open:

    ERROR: gabbi-runner.input_/foo/bar/__test_l__
    	[Errno 24] Too many open files
    

    By default a Linux user has 1024 file handles:

    $ ulimit -n
    1024
    

    Inspecting the open file handles:

    $ ls -l /proc/$(pgrep gabbi-run)/fd | awk '{print $NF}' | cut -f1 -d: | sort | uniq -c
          1 0
          2 /path/to/a/file.txt
          1 /path/to/another/file.yaml
       1021 socket
    
    opened by scottwallacesh 3
  • Consider per-suite pre & post executables

    Consider per-suite pre & post executables

    Like fixtures, but a call to an external executable, for when gabbi-run is being used.

    This could be explicit, by putting something in the yaml file, or implicit off the name of the yaml file. That is:

    • if gabbit is foo.yaml
    • if foo-start and foo-end exist in the same dir and are executable

    Either way, when the start is called, gabbi should save, as a list, the line-separated stdout (if any) it produced, and provide that as args (or stdin?) to foo-end.

    This would allow passing things like the pids of started processes.

    /cc @FND for sanity check

    enhancement 
    opened by cdent 7
  • some fixtures that "capture" no longer work with the removal of testtools

    some fixtures that "capture" no longer work with the removal of testtools

    In https://github.com/cdent/gabbi/pull/279 testtools was removed.

    Fixtures in the openstack community that do output capturing rely on some "end of test" handling in testtools to dump the accumulated data. You can see this by trying a LOG.critical("hi") somewhere in the (e.g.) placement code and causing a test to fail. Dropping back to gabbi <2 makes it work again.

    We're definitely not going to add testtools back in, but the test case subclass in gabbi itself may be able to handle the data gathering that's required. Some investigation required.

    /cc @lbragstad for awareness

    opened by cdent 0
  • Faster tests development with gold files

    Faster tests development with gold files

    There is a cool method to speed up development of tests. It would be great if gabbi supported it too.

    Here is the idea:

    1. a test defines that a response should be compared with a gold file (reference to gold file can be custom configurable per every test)

    2. gabbi runs tests with a new flag 'generate-gold-files', which forces gabbi to capture response bodies and headers and (re-)write gold files containing the captured response data

    3. developer reviews the gold files (usually happens one by one as tests are added one by one during development)

    4. gabbi runs tests as usually

      a) if a test has got a reference to a gold file, it captures the actual response output and compares it with the gold file

      b) if the content of the actual output matches the gold file content, verification is considered to have passed

      c) otherwise the test is failed

    This would allow me to reduce size of my test files by half at least.
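    The regenerate-then-compare flow proposed above can be sketched with a small stdlib helper (the function name and generate flag are hypothetical, not gabbi features):

    ```python
    import pathlib

    def check_gold(body, gold_path, generate=False):
        """Compare a captured response body against a gold file.

        With generate=True the gold file is (re-)written from the
        captured body, mirroring the proposed 'generate-gold-files'
        flag; otherwise the captured body must match the file exactly.
        """
        gold = pathlib.Path(gold_path)
        if generate or not gold.exists():
            gold.write_text(body)
            return True  # a freshly recorded gold file always "passes"
        return gold.read_text() == body
    ```

    A test runner would call this once per test that names a gold file, failing the test when it returns False.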

    opened by avkonst 3
  • test files with - in the name can lead to failing tests when looking for content-type

    test files with - in the name can lead to failing tests when looking for content-type

    Bear with me, this is hard to explain

    Python v 3.6.9

    gabbi: 1.49.0

    A test file named device-types.yaml with a test of:

    tests:                                                                          
    - name: get only 405                                                            
      POST: /device-types                                                           
      status: 405    
    

    errors with the following when run in a unittest-style harness:

        b'Traceback (most recent call last):'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/core.py", line 68, in action'
        b'    response_value = str(response[header])'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/urllib3/_collections.py", line 156, in __getitem__'
        b'    val = self._container[key.lower()]'
        b"KeyError: 'content-type'"
        b''
        b'During handling of the above exception, another exception occurred:'
        b''
        b'Traceback (most recent call last):'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/suitemaker.py", line 96, in do_test'
        b'    return test_method(*args, **kwargs)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 95, in wrapper'
        b'    func(self)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 149, in test_request'
        b'    self._run_test()'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 556, in _run_test'
        b'    self._assert_response()'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 196, in _assert_response'
        b'    handler(self)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/base.py", line 54, in __call__'
        b'    self.action(test, item, value=value)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/core.py", line 72, in action'
        b'    header, response.keys()))'
        b"AssertionError: 'content-type' header not present in response: KeysView(HTTPHeaderDict({'Vary': 'Origin', 'Date': 'Tue, 24 Mar 2020 14:17:33 GMT', 'Content-Length': '0', 'status': '405', 'reason': 'Method Not Allowed'}))"
        b''
    

    However, rename the file to foo.yaml and the test works, or run the device-types.yaml file with gabbi-run and the tests work. Presumably it is something to do with test naming.

    So the short term workaround is to rename the file, but this needs to be fixed because using - in filenames is idiomatic for gabbi.

    opened by cdent 1
Releases
  • 2.3.0 (Sep 3, 2021)

    • For the $ENVIRON and $RESPONSE substitutions it is now possible to cast the value to a type of int, float, str, or bool.
    • The JSONHandler is now more strict about how it detects that a body content is JSON, avoiding some errors where the content-type header suggests JSON but the content cannot be decoded as such.
    • Better error message when content cannot be decoded.
    • Addition of the disable_response_handler test setting for those cases when the test author has no control over the content-type header and it is wrong.