pytest plugin for a better developer experience when working with the PyTorch test suite

Overview

pytest-pytorch


What is it?

pytest-pytorch is a lightweight pytest-plugin that enhances the developer experience when working with the PyTorch test suite if you come from a pytest background.

Why do I need it?

Some test cases in the PyTorch test suite are automatically generated when a module is loaded in order to parametrize them. Trying to collect them with their names as written, e.g. pytest test_foo.py::TestFoo or pytest test_foo.py::TestFoo::test_bar, is unfortunately not possible. If you are used to this syntax or your IDE relies on it (PyCharm, VSCode), you can install pytest-pytorch to make it work.

How do I install it?

You can install pytest-pytorch with pip

$ pip install pytest-pytorch

or with conda:

$ conda install -c conda-forge pytest-pytorch

How do I use it?

With pytest-pytorch installed you can select test cases and tests as if the instantiation for different devices was performed by @pytest.mark.parametrize:

Use case                               Command
Run a test case against all devices    pytest test_foo.py::TestBar
Run a test case against one device     pytest test_foo.py::TestBar -k "$DEVICE"
Run a test against all devices         pytest test_foo.py::TestBar::test_baz
Run a test against one device          pytest test_foo.py::TestBar::test_baz -k "$DEVICE"

Can I have a little more background?

PyTorch uses its own method for generating tests that is for the most part compatible with unittest and pytest. Its custom test generation allows test templates to be written and instantiated for different device types, data types, and operators. Consider the following module test_foo.py:

from torch.testing._internal.common_utils import TestCase
from torch.testing._internal.common_device_type import instantiate_device_type_tests

class TestFoo(TestCase):
    def test_bar(self, device):
        pass
    
    def test_baz(self, device):
        pass

instantiate_device_type_tests(TestFoo, globals())

Assuming "cpu" and "cuda" are available as devices, we can collect four tests:

  1. test_foo.py::TestFooCPU::test_bar_cpu,
  2. test_foo.py::TestFooCPU::test_baz_cpu,
  3. test_foo.py::TestFooCUDA::test_bar_cuda, and
  4. test_foo.py::TestFooCUDA::test_baz_cuda.

From a pytest perspective this is similar to decorating TestFoo with @pytest.mark.parametrize("device", ("cpu", "cuda")), which would result in

  1. test_foo.py::TestFoo::test_bar[cpu],
  2. test_foo.py::TestFoo::test_bar[cuda],
  3. test_foo.py::TestFoo::test_baz[cpu], and
  4. test_foo.py::TestFoo::test_baz[cuda].
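
For illustration, the plain-pytest equivalent of the template above would look roughly like this (only a sketch of the analogy, not how PyTorch actually instantiates its tests):

import pytest


class TestFoo:
    @pytest.mark.parametrize("device", ("cpu", "cuda"))
    def test_bar(self, device):
        pass

    @pytest.mark.parametrize("device", ("cpu", "cuda"))
    def test_baz(self, device):
        pass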

Since the PyTorch test framework renames test cases and tests, naively running pytest test_foo.py::TestFoo or pytest test_foo.py::TestFoo::test_bar fails, because pytest can't find anything matching these names. Of course you can get around this with the keyword matching that pytest offers (the -k command line flag), for example something like pytest test_foo.py -k "test_bar and cpu".

pytest-pytorch performs this matching so you can keep your familiar workflow and your IDE is happy out of the box.

How do I contribute?

First and foremost: Thank you for your interest in pytest-pytorch's development! We appreciate all contributions, be it code or something else. Check out our contribution guidelines for details.

Comments
  • Fix broken link in readme

    The blog link provided in the readme is broken. Please fix it.

    https://deploy-preview-211--quansight-labs.netlify.app/blog/2021/06/pytest-pytorch/

    I would suggest to replace it with:

    https://labs.quansight.org/blog/2021/06/pytest-pytorch/

    :bulb: Note: I am pushing a PR to fix this.

    opened by sugatoray 1
  • Change test workflows from PyTorch nightly to stable

    Previously, we used PyTorch nightly to have a second device available in CI. As of torch==1.9 the "meta" device is included in the stable binaries. Thus, there is no need for less safe nightly testing anymore.

    opened by pmeier 0
  • remove duplicate tests

    In some cases new_cmds == legacy_cmds. This made it more verbose to write and additionally resulted in duplicate tests.

    After this PR, if no legacy_cmds is passed to Config, the value of new_cmds is used. Plus, duplicate configs are filtered out.
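
    A minimal sketch of the described behaviour (the real Config in the test suite is not shown here, so the dataclass structure and the deduplicate helper below are assumptions for illustration):

    from dataclasses import dataclass
    from typing import Optional, Tuple


    @dataclass(frozen=True)
    class Config:
        new_cmds: Tuple[str, ...]
        legacy_cmds: Optional[Tuple[str, ...]] = None

        def __post_init__(self):
            # fall back to new_cmds if no legacy_cmds were passed
            if self.legacy_cmds is None:
                object.__setattr__(self, "legacy_cmds", self.new_cmds)


    def deduplicate(configs):
        # filter out duplicate configs while preserving their order
        return list(dict.fromkeys(configs))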

    opened by pmeier 0
  • trim the test matrix

    We don't need to test this for every os / python combination. It should be sufficient to test every os with the minimum python requirement as well as one (linux) with every supported python version.

    This should speed up the CI runs and save some resources.

    opened by pmeier 0
  • refactor test suite to test the actual collection

    Before this PR, we tested the collection by giving each test a different outcome based on the respective parameter and then checking the pytest result against that. This has two downsides:

    1. It takes more mental effort to parse not only which tests will run, but also what the outcome of all the tests that ran should be.
    2. Since we could only check the aggregated result of multiple tests, we can't be sure the test is actually right.

    With this PR we actually test the collection by parsing the pytest output. Additionally, you can now add this code block

    # ======================================================================================
    # This block is necessary to autogenerate the parametrization for
    # tests/test_plugin.py::test_standard_collection.
    # It needs to be placed **after** the import of 'instantiate_device_type_tests' and
    # **before** its first usage.
    # ======================================================================================
    try:
        from _spy import Spy
    
        __spy__ = Spy()
        del Spy
        instantiate_device_type_tests = __spy__(instantiate_device_type_tests)
    except ModuleNotFoundError:
        pass
    # ======================================================================================
    

    to a test file to automatically test the selection of

    • everything in the file,
    • every test case, and
    • every test case function.

    Anything beyond that still needs to be configured manually.

    opened by pmeier 0
  • support selection of tests that use op infos

    Fixes #16.

    Currently we rely on the device identifier coming directly after the test case function name. That is no longer true when using OpInfo's. The name of an instantiated test follows the scheme (template_name)_(op_name)_(device)_(dtype).

    This is a complete rewrite of the internal matching logic:

    • test case: Test cases are only parametrized by the device. Since every TestCase has a device_type attribute, we can simply strip the device identifier from the instantiated name.
    • test case function: Since both template_name and op_name in the pattern above might contain underscores and they are also separated by a single underscore, it is impossible to extract the two parts without further knowledge. To overcome this, we can inspect the source of the function and extract the template_name (which is the function name) directly.
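
    As a rough illustration of the first point (a sketch of the idea, not the plugin's actual code; the helper name is hypothetical), stripping the device identifier from an instantiated test case name could look like this:

    def template_name_from_instantiated_case(instantiated_name, device_type):
        # the instantiated test case name is the template name with the
        # upper-cased device type appended, e.g. "TestFoo" + "CPU" -> "TestFooCPU"
        suffix = device_type.upper()
        if instantiated_name.endswith(suffix):
            return instantiated_name[: -len(suffix)]
        return instantiated_name


    assert template_name_from_instantiated_case("TestFooCPU", "cpu") == "TestFoo"
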
    opened by pmeier 0
  • Re-add support for OpInfo's

    Consider the following test setup:

    from torch.testing._internal.common_device_type import (
        instantiate_device_type_tests,
        ops,
    )
    from torch.testing._internal.common_utils import TestCase
    from torch.testing._internal.common_methods_invocations import OpInfo
    
    BazOpInfo = OpInfo("add")
    
    
    class TestFoo(TestCase):
        @ops([BazOpInfo])
        def test_bar(self, device, dtype, op):
            pass
    
    
    instantiate_device_type_tests(TestFoo, globals(), only_for="cpu")
    

    Running pytest test_foo.py --collect-only on that results in:

    <PyTorchTestCase TestFooCPU>
      <PyTorchTestCaseFunction test_bar_add_cpu_float32>
      <PyTorchTestCaseFunction test_bar_add_cpu_float64>
    <PyTorchTestCase TestFooMETA>
      <PyTorchTestCaseFunction test_bar_add_meta_float32>
      <PyTorchTestCaseFunction test_bar_add_meta_float64>
    

    Naming schemes:

    • test cases: (template_name)(device)
    • test case functions: (template_name)_(op_name)_(device)_(dtype)

    After #12, test case functions require the device identifier to follow right after the template name. Thus, it is no longer possible to select an individual test by name, e.g. pytest test_foo.py::TestFoo::test_bar.

    opened by pmeier 0
  • make dtype testing more concise

    Instead of instantiating the tests for all devices and using the @onlyCPU decorator everywhere, we now use the only_for keyword when instantiating. With that, the meta tests are not generated in the first place and do not need to be accounted for in the number of skipped tests.

    opened by pmeier 0
  • Do not require torch at installation

    Right now we have torch as a dependency:

    https://github.com/Quansight/pytest-pytorch/blob/bd98f6b23214460605e4b8c0ee2bd4956e846291/setup.cfg#L32-L34

    If you set up a development environment to work on PyTorch, you probably do not have the torch binaries installed already. Thus, installing pytest-pytorch would also install torch, which is usually not desired.

    opened by pmeier 0
  • Support for nested test case names

    Currently we match the test case (function) names based on this:

    https://github.com/Quansight/pytest-pytorch/blob/ff8f2d86906486a2d437b2617ef9973394f5e216/pytest_pytorch/plugin.py#L9-L10

    This works well until you have a setup similar to this:

    class TestFoo(TestCase):
        pass
    
    class TestFooBar(TestCase):
        pass
    

    If you run pytest test_foo.py::TestFoo on this, both test cases are collected instead of just TestFoo.
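
    To illustrate the ambiguity (a sketch of the problem, not the plugin's actual matching code): a naive prefix match on the template name also picks up the nested test case, while requiring the device identifier to follow directly after the template name does not.

    import re

    instantiated_names = ["TestFooCPU", "TestFooBarCPU"]

    # naive prefix matching collects both test cases
    naive = [name for name in instantiated_names if name.startswith("TestFoo")]
    assert naive == ["TestFooCPU", "TestFooBarCPU"]

    # anchoring the device identifier right after the template name only
    # collects the intended test case
    pattern = re.compile(r"TestFoo(CPU|CUDA|META)$")
    exact = [name for name in instantiated_names if pattern.match(name)]
    assert exact == ["TestFooCPU"]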

    opened by pmeier 0
  • CI tests are failing

    The CI tests are failing, because we need torch>=1.9, but it is only available through the nightlies. Unfortunately, tox-ltt is not able to handle the nightly channel yet.

    opened by pmeier 0
Releases(v0.2.1)
  • v0.2.1(May 25, 2021)

    This adds the --disable-pytest-pytorch command line option (#25), which makes it easier to debug incompatibilities with the vanilla pytest collection.

  • v0.2.0(Apr 21, 2021)

    This minor release adds support for OpInfo's which are used more and more throughout the PyTorch test suite (#17).

    Furthermore, @xmnlab helped us get pytest-pytorch into conda-forge. Installation instructions can be found in the README (#18).

  • v0.1.1(Apr 20, 2021)

    This release includes two minor improvements:

    1. Support for selecting individual test cases if their names are nested, e.g. TestFoo and TestFooBar (#12)
    2. Removal of PyTorch as installation requirement (#14)
  • v0.1.0(Apr 14, 2021)

Owner
Quansight
We grow talent, build technology, and discover products by helping companies grow OSS communities to organize and analyze their data.