pytest plugin for a better developer experience when working with the PyTorch test suite

Overview

pytest-pytorch


What is it?

pytest-pytorch is a lightweight pytest plugin that enhances the developer experience when working with the PyTorch test suite if you come from a pytest background.

Why do I need it?

Some test cases in the PyTorch test suite are automatically generated when a module is loaded in order to parametrize them. Trying to collect them by their names as written, e.g. pytest test_foo.py::TestFoo or pytest test_foo.py::TestFoo::test_bar, is unfortunately not possible. If you are used to this syntax or your IDE relies on it (PyCharm, VSCode), you can install pytest-pytorch to make it work.

How do I install it?

You can install pytest-pytorch with pip

$ pip install pytest-pytorch

or with conda:

$ conda install -c conda-forge pytest-pytorch

How do I use it?

With pytest-pytorch installed, you can select test cases and tests as if the instantiation for different devices had been performed by @pytest.mark.parametrize:

Use case                             | Command
Run a test case against all devices  | pytest test_foo.py::TestBar
Run a test case against one device   | pytest test_foo.py::TestBar -k "$DEVICE"
Run a test against all devices       | pytest test_foo.py::TestBar::test_baz
Run a test against one device        | pytest test_foo.py::TestBar::test_baz -k "$DEVICE"
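For example, assuming "cpu" is one of the available devices on your machine, selecting a single test on it looks like this:

$ pytest test_foo.py::TestBar::test_baz -k "cpu"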

Can I have a little more background?

PyTorch uses its own method for generating tests that is for the most part compatible with unittest and pytest. Its custom test generation allows test templates to be written and instantiated for different device types, data types, and operators. Consider the following module test_foo.py:

from torch.testing._internal.common_utils import TestCase
from torch.testing._internal.common_device_type import instantiate_device_type_tests

class TestFoo(TestCase):
    def test_bar(self, device):
        pass
    
    def test_baz(self, device):
        pass

instantiate_device_type_tests(TestFoo, globals())

Assuming "cpu" and "cuda" are available as devices, we can collect four tests:

  1. test_foo.py::TestFooCPU::test_bar_cpu,
  2. test_foo.py::TestFooCPU::test_baz_cpu,
  3. test_foo.py::TestFooCUDA::test_bar_cuda, and
  4. test_foo.py::TestFooCUDA::test_baz_cuda.

From a pytest perspective this is similar to decorating TestFoo with @pytest.mark.parametrize("device", ("cpu", "cuda")), which would result in

  1. test_foo.py::TestFoo::test_bar[cpu],
  2. test_foo.py::TestFoo::test_bar[cuda],
  3. test_foo.py::TestFoo::test_baz[cpu], and
  4. test_foo.py::TestFoo::test_baz[cuda].
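
For comparison, a plain-pytest sketch of this parametrization could look like the following. Note that this is only an approximation; PyTorch's instantiation additionally handles device availability, dtypes, skips, and more:

import pytest

class TestFoo:
    @pytest.mark.parametrize("device", ("cpu", "cuda"))
    def test_bar(self, device):
        pass

    @pytest.mark.parametrize("device", ("cpu", "cuda"))
    def test_baz(self, device):
        pass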

Since the PyTorch test framework renames test cases and tests, naively running pytest test_foo.py::TestFoo or pytest test_foo.py::TestFoo::test_bar fails, because pytest cannot find anything matching these names. Of course you can work around this with the name-based matching that pytest offers via the -k command line flag.
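
For example, without the plugin a workaround looks like this, matching on substrings of the generated names:

$ pytest test_foo.py -k "TestFoo and test_bar"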

pytest-pytorch performs this matching for you, so you can keep your familiar workflow and your IDE works out of the box.

How do I contribute?

First and foremost: Thank you for your interest in the development of pytest-pytorch! We appreciate all contributions, be it code or something else. Check out our contribution guidelines for details.

Comments
  • Fix broken link in readme

    The blog link provided in the readme is broken. Please fix it.

    https://deploy-preview-211--quansight-labs.netlify.app/blog/2021/06/pytest-pytorch/

    I would suggest to replace it with:

    https://labs.quansight.org/blog/2021/06/pytest-pytorch/

    :bulb: Note: I am pushing a PR to fix this.

    opened by sugatoray 1
  • Change test workflows from PyTorch nightly to stable

    Previously, we used PyTorch nightly to have a second device available in CI. As of torch==1.9 the "meta" device is included in the stable binaries. Thus, there is no need for less safe nightly testing anymore.

    opened by pmeier 0
  • remove duplicate tests

    In some cases new_cmds == legacy_cmds. This made the configuration more verbose to write and additionally resulted in duplicate tests.

    After this PR, if no legacy_cmds is passed to Config, the value of new_cmds is used. Plus, duplicate configs are filtered out.

    opened by pmeier 0
  • trim the test matrix

    We don't need to test this for every OS / Python combination. It should be sufficient to test every OS with the minimum Python requirement as well as one OS (Linux) with every supported Python version.

    This should speed up the CI runs and save some resources.

    opened by pmeier 0
  • refactor test suite to test the actual collection

    Before, we tested the collection by giving each test a different outcome based on its parameters and then checking the pytest result against that. This has two downsides:

    1. It takes more mental effort, since you have to parse not only which tests will run, but also what the outcome of each of them will be.
    2. Since we could only check against the aggregated result of multiple tests, we can't be sure the test is actually right.

    With this PR we are actually testing the collection by parsing the pytest output. Additionally, you can now add this code block

    # ======================================================================================
    # This block is necessary to autogenerate the parametrization for
    # tests/test_plugin.py::test_standard_collection.
    # It needs to be placed **after** the import of 'instantiate_device_type_tests' and
    # **before** its first usage.
    # ======================================================================================
    try:
        from _spy import Spy
    
        __spy__ = Spy()
        del Spy
        instantiate_device_type_tests = __spy__(instantiate_device_type_tests)
    except ModuleNotFoundError:
        pass
    # ======================================================================================
    

    to a test file to automatically test the selection of

    • everything in the file,
    • every test case, and
    • every test case function.

    Anything beyond that still needs to be configured manually.
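    To give an idea of the mechanism, here is a purely hypothetical sketch of what such a spy wrapper could look like (the actual _spy module may differ): it wraps instantiate_device_type_tests and records which names are added to the module namespace, so the expected collection can be derived from them.

    class Spy:
        # Hypothetical sketch, not the actual implementation: record the names
        # that instantiate_device_type_tests adds to the module namespace.
        def __init__(self):
            self.instantiated = []

        def __call__(self, instantiate_fn):
            def wrapper(template_cls, scope, *args, **kwargs):
                before = set(scope)
                result = instantiate_fn(template_cls, scope, *args, **kwargs)
                self.instantiated.extend(name for name in scope if name not in before)
                return result

            return wrapper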

    opened by pmeier 0
  • support selection of tests that use op infos

    Fixes #16.

    Currently we rely on the device identifier coming directly after the test case function name. That is no longer true when using OpInfo's. The name of the instantiated test follows the scheme (template_name)_(op_name)_(device)_(dtype).

    This is a complete rewrite of the internal matching logic:

    • test case: Test cases are only parametrized by the device. Since every TestCase has a device_type attribute, we can simply strip the device identifier from the instantiated name (see the sketch below).
    • test case function: Since both template_name and op_name in the pattern above might contain underscores and they are also separated by a single underscore, it is impossible to extract the two parts without further knowledge. To overcome this, we can inspect the source of the function and extract the template_name (which is the function name) directly.
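
    As an illustration of the test case part (not the plugin's actual code), stripping the device identifier could look like this:

    def template_name_from_instantiated(name, device_type):
        # e.g. "TestFooCPU" with device_type "cpu" -> "TestFoo"
        suffix = device_type.upper()
        return name[: -len(suffix)] if name.endswith(suffix) else name

    assert template_name_from_instantiated("TestFooCPU", "cpu") == "TestFoo"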
    opened by pmeier 0
  • Re-add support for OpInfo's

    Consider the following test setup:

    from torch.testing._internal.common_device_type import (
        instantiate_device_type_tests,
        ops,
    )
    from torch.testing._internal.common_utils import TestCase
    from torch.testing._internal.common_methods_invocations import OpInfo
    
    BazOpInfo = OpInfo("add")
    
    
    class TestFoo(TestCase):
        @ops([BazOpInfo])
        def test_bar(self, device, dtype, op):
            pass
    
    
    instantiate_device_type_tests(TestFoo, globals(), only_for="cpu")
    

    Running pytest test_foo.py --collect-only on that results in:

    <PyTorchTestCase TestFooCPU>
      <PyTorchTestCaseFunction test_bar_add_cpu_float32>
      <PyTorchTestCaseFunction test_bar_add_cpu_float64>
    <PyTorchTestCase TestFooMETA>
      <PyTorchTestCaseFunction test_bar_add_meta_float32>
      <PyTorchTestCaseFunction test_bar_add_meta_float64>
    

    Naming schemes:

    • test cases: (template_name)(device)
    • test case functions: (template_name)_(op_name)_(device)_(dtype)

    After #12, the matching of test case functions requires the device identifier to follow directly after the template name. Thus, it is no longer possible to select an individual test by name with pytest test_foo.py::TestFoo::test_bar.

    opened by pmeier 0
  • make dtype testing more concise

    Instead of instantiating the tests for all devices and using the @onlyCPU decorator everywhere, we now use the only_for keyword when instantiating. With that, the meta tests are not generated in the first place and do not need to be accounted for in the number of skipped tests.

    opened by pmeier 0
  • Do not require torch at installation

    Right now we have torch as a dependency:

    https://github.com/Quansight/pytest-pytorch/blob/bd98f6b23214460605e4b8c0ee2bd4956e846291/setup.cfg#L32-L34

    If you set up a development environment to work on PyTorch itself, you probably don't have torch installed already. Thus, installing pytest-pytorch would also install torch, which is usually not desired.

    opened by pmeier 0
  • Support for nested test case names

    Currently we match the test case (function) names based on this:

    https://github.com/Quansight/pytest-pytorch/blob/ff8f2d86906486a2d437b2617ef9973394f5e216/pytest_pytorch/plugin.py#L9-L10

    This works well until you have a setup similar to this:

    class TestFoo(TestCase):
        pass
    
    class TestFooBar(TestCase):
        pass
    

    If you run pytest test_foo.py::TestFoo on this, both test cases are collected instead of just TestFoo.
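
    As a quick illustration of why this happens (not the plugin's actual code), a plain prefix match on the queried name matches both instantiated test cases:

    import re

    pattern = re.compile("TestFoo")  # naive prefix match on the queried name
    assert pattern.match("TestFooCPU")     # intended match
    assert pattern.match("TestFooBarCPU")  # unintentionally matches as well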

    opened by pmeier 0
  • CI tests are failing

    The CI tests are failing because we need torch>=1.9, which is only available through the nightlies. Unfortunately, tox-ltt is not able to handle the nightly channel yet.

    opened by pmeier 0
Releases(v0.2.1)
  • v0.2.1(May 25, 2021)

    This adds the --disable-pytest-pytorch command line option (#25), which makes it easier to debug incompatibilities with the vanilla pytest collection.
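
    For example, to temporarily fall back to the vanilla collection:

    $ pytest test_foo.py --disable-pytest-pytorch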

  • v0.2.0(Apr 21, 2021)

    This minor release adds support for OpInfo's, which are used more and more throughout the PyTorch test suite (#17).

    Furthermore, @xmnlab helped us get pytest-pytorch into conda-forge. Installation instructions can be found in the README (#18).

  • v0.1.1(Apr 20, 2021)

    This release includes two minor improvements:

    1. Support for selecting individual test cases if their names are nested, e.g. TestFoo and TestFooBar (#12)
    2. Removal of PyTorch as an installation requirement (#14)
  • v0.1.0(Apr 14, 2021)

Owner
Quansight
We grow talent, build technology, and discover products by helping companies grow OSS communities to organize and analyze their data.