Local continuous test runner with pytest and watchdog.

pytest-watch -- Continuous pytest runner

pytest-watch is a zero-config CLI tool that runs pytest and re-runs it whenever a file in your project changes. It beeps on failures and can run arbitrary commands on each passing and failing test run.

Motivation

Whether or not you practice test-driven development, running tests continuously is far more productive than waiting until you're finished programming to test your code. Manually re-running pytest each time you want to check whether anything broke adds wait time and cognitive overhead compared with simply listening for a notification. That can make a crucial difference when debugging a complex problem or working on a tight deadline.

Installation

$ pip install pytest-watch

Usage

$ cd myproject
$ ptw
 * Watching /path/to/myproject

Note: It can also be run using its full name pytest-watch.

Now develop normally and check the terminal every now and then to see if any tests are broken. Alternatively, pytest-watch can notify you when tests pass or fail:

  • OSX

    $ ptw --onpass "say passed" --onfail "say failed"

    $ ptw --onpass "growlnotify -m \"All tests passed!\"" \
          --onfail "growlnotify -m \"Tests failed\""

    using GrowlNotify.

  • Windows

    > ptw --onfail flash

    using Console Flash
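
  • Linux

    $ ptw --onpass "notify-send -u low \"All tests passed!\"" \
          --onfail "notify-send -u critical \"Tests failed\""

    using notify-send (an illustrative command, assuming libnotify's
    notify-send is installed; adjust to your desktop notifier of choice).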

You can also run a command before the tests run, e.g. seeding your test database:

$ ptw --beforerun init_db.py

Or run a command after they finish, e.g. deleting a sqlite file. Note that this command receives the exit code of pytest as an argument.

$ ptw --afterrun cleanup_db.py
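
For illustration, here's a minimal sketch of what such an afterrun script could look like. The filename cleanup_db.py, the test.sqlite3 path, and reading the exit code from the first positional argument are assumptions for this example, not part of pytest-watch itself:

# cleanup_db.py

import os
import sys

# pytest's exit code is passed as an argument (assumed here to arrive as
# the first positional argument)
exit_code = int(sys.argv[1]) if len(sys.argv) > 1 else 0
print('pytest finished with exit code:', exit_code)

# remove the sqlite file used by the test run, if it exists
try:
    os.remove('test.sqlite3')
except FileNotFoundError:
    pass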

You can also use a custom runner script for full pytest control:

$ ptw --runner "python custom_pytest_runner.py"

Here's a minimal runner script that runs pytest and prints its exit code:

# custom_pytest_runner.py

import sys
import pytest

print('pytest exited with code:', pytest.main(sys.argv[1:]))

Need to exclude directories from being observed or collected for tests?

$ ptw --ignore ./deep-directory --ignore ./integration_tests

See the full list of options:

$ ptw --help
Usage: ptw [options] [--ignore <dir>...] [<directory>...] [-- <pytest-args>...]

Options:
  --ignore <dir>        Ignore directory from being watched and during
                        collection (multi-allowed).
  --ext <exts>          Comma-separated list of file extensions that can
                        trigger a new test run when changed (default: .py).
                        Use --ext=* to allow any file (including .pyc).
  --config <file>       Load configuration from `file` instead of trying to
                        locate one of the implicit configuration files.
  -c --clear            Clear the screen before each run.
  -n --nobeep           Do not beep on failure.
  -w --wait             Waits for all tests to complete before re-running.
                        Otherwise, tests are interrupted on filesystem events.
  --beforerun <cmd>     Run arbitrary command before tests are run.
  --afterrun <cmd>      Run arbitrary command on completion or interruption.
                        The exit code of "pytest" is passed as an argument.
  --onpass <cmd>        Run arbitrary command on pass.
  --onfail <cmd>        Run arbitrary command on failure.
  --onexit <cmd>        Run arbitrary command when exiting pytest-watch.
  --runner <cmd>        Run a custom command instead of "pytest".
  --pdb                 Start the interactive Python debugger on errors.
                        This also enables --wait to prevent pdb interruption.
  --spool <delay>       Re-run after a delay (in milliseconds), allowing for
                        more file system events to queue up (default: 200 ms).
  -p --poll             Use polling instead of OS events (useful in VMs).
  -v --verbose          Increase verbosity of the output.
  -q --quiet            Decrease verbosity of the output (precedence over -v).
  -V --version          Print version and exit.
  -h --help             Print help and exit.
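
As the usage line shows, anything after -- is passed through to pytest itself. For example, using standard pytest flags to stop at the first failure and run previously failed tests first (shown for illustration):

$ ptw -- -x --ff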

Configuration

CLI options can be added to a [pytest-watch] section in your pytest.ini file to persist them in your project. For example:

# pytest.ini

[pytest]
addopts = --maxfail=2


[pytest-watch]
ignore = ./integration-tests
nobeep = True

Alternatives

  • xdist offers the --looponfail (-f) option (along with its distributed testing options). It re-runs only the tests that failed until you make them pass. This can be a speed advantage when trying to get all tests passing, but it leaves new failures undiscovered until then. It also drops pytest's colored output, whereas pytest-watch preserves it.
  • Nosey is the original codebase this was forked from. Nosey runs nose instead of pytest.

Contributing

  1. Check the open issues or open a new issue to start a discussion around your feature idea or the bug you found
  2. Fork the repository, make your changes, and add yourself to Authors.md
  3. Send a pull request

If you want to edit the README, be sure to make your changes to README.md and run the following to regenerate the README.rst file:

$ pandoc -t rst -o README.rst README.md

If your PR has been waiting a while, feel free to ping me on Twitter.

Use this software often? Say Thanks! 😃
