Selects tests affected by changed files. Works as a continuous test runner when used with pytest-watch.

Related tags

Testing, pytest-testmon

Overview

This is a pytest plugin which automatically selects and re-executes only the tests affected by recent changes. How is this possible in a dynamic language like Python, and how reliable is it? Read here: Determining affected tests

Quickstart

pip install pytest-testmon

# build the dependency database and save it to .testmondata
pytest --testmon

# change some of your code (with test coverage)

# only run tests affected by recent changes
pytest --testmon

To learn more about specifying multiple project directories and troubleshooting, please head to testmon.org
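
For the continuous-runner workflow mentioned at the top, a minimal sketch with pytest-watch could look like this (assuming pytest-watch's ptw command, which passes everything after -- through to pytest):

pip install pytest-watch

# watch the project and re-run only affected tests on every change
ptw -- --testmon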

Comments
  • pytest-xdist support

    When combining testmon and xdist, more tests are rerun than necessary. I suspect the xdist runners don't update .testmondata.

    Is testmon compatible with xdist or is this due to misconfiguration?
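
    For reference, the combination in question is just running both plugins in one invocation, e.g. (a sketch; -n auto is pytest-xdist's automatic worker count):

    pytest --testmon -n auto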

    enhancement 
    opened by timdiels 22
  • Rerun tests should not fail.

    There is a test (A) that fails when running pytest --testmon. After fixing test A, running pytest --testmon still shows the results from when test A was failing. If we then make a no-op change to code that affects test A and run pytest --testmon, test A passes once. But running pytest --testmon immediately after that brings back what I believe is a cached result: the old failure.

    I believe the cached result in this case should be updated so the test doesn't keep failing.
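
    As a sketch, the sequence described above is (commands only; "A" is the test from the report):

    pytest --testmon    # test A fails
    # fix test A
    pytest --testmon    # still reports the old, cached failure of A
    # make a no-op change to code that affects A
    pytest --testmon    # A re-runs and passes
    pytest --testmon    # the stale cached failure shows up again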

    opened by henningphan 17
  • Refactor

    This addresses issues #53, #52, #51, #50, #32, and partially #42 (poor man's solution).

    With the limited test cases it worked and the performance was OK. If @blueyed and @boxed tell me it works for them, I'll merge and release.

    opened by tarpas 14
  • performance with large source base

    There is a lot of repetition and JSON arrays in the .testmondata SQLite database; let's make it more efficient. @boxed, you mentioned you have a 20 MB per-commit DB, so this might interest you.

    opened by tarpas 10
  • Override pytest's exit code 5 for "no tests were run"

    py.test will exit with code 5 in case no tests were run (https://github.com/pytest-dev/pytest/issues/812, https://github.com/pytest-dev/pytest/issues/500#issuecomment-112204804).

    I can see that this is useful in general, but with pytest-testmon this should not be an error.

    Would it be possible and is it sensible to override pytest's exit code to be 0 in that case then?

    My use case is using pytest-watch and its --onfail feature, which should not be triggered just because testmon made py.test skip all tests.
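
    A possible local workaround, sketched below, is to remap that exit status in conftest.py; this is not a pytest-testmon feature, and pytest.ExitCode assumes a reasonably recent pytest:

    # conftest.py (hypothetical workaround)
    import pytest

    def pytest_sessionfinish(session, exitstatus):
        # Exit status 5 means "no tests were collected/run"; when testmon
        # deselects everything, treat the session as successful instead.
        if exitstatus == pytest.ExitCode.NO_TESTS_COLLECTED:
            session.exitstatus = pytest.ExitCode.OK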

    opened by blueyed 10
  • Using the debugger corrupts the testmon database

    From @max-imlian (https://github.com/tarpas/pytest-testmon/issues/90#issuecomment-408984214): FYI, I'm still having real issues with testmon, where it doesn't run tests despite both changes in the code and existing failures, even when I pass --tlf.

    I love the goals of testmon, and it performs so well in 90% of cases that it's become an essential part of my workflow. As a result, it's frustrating when it ignores tests that have changed. I've found that --tlf often doesn't work, which is a shame, as it was often a 'last resort' against testmon ignoring too many tests.

    Is there any info I can supply that would help debug this? I'm happy to post anything.

    Would there be any use for a 'conservative' mode, where testmon would lean towards testing too much? A Type I error is far less costly than a Type II.

    opened by tarpas 9
  • testmon_data.fail_reports might contain both failed and skipped

    testmon_data.fail_reports might contain both failed and skipped:

    (Pdb++) nodeid
    'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output'
    (Pdb++) pp self.testmon_data.fail_reports[nodeid]
    [{'duration': 0.0020258426666259766,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'setup'},
     {'duration': 0.003198385238647461,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': 'tests/test_renderers.py:753: in test_schemajs_output\n'
                  '    output = renderer.render(\'data\', renderer_context={"request": request})\n'
                  'rest_framework/renderers.py:862: in render\n'
                  '    codec = coreapi.codecs.CoreJSONCodec()\n'
                  "E   AttributeError: 'NoneType' object has no attribute 'codecs'",
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'failed',
      'sections': [],
      'user_properties': [],
      'when': 'call'},
     {'duration': 0.008923768997192383,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'teardown'},
     {'duration': 0.0012934207916259766,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'skipif': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 743, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': ['tests/test_renderers.py', 743, 'Skipped: coreapi is not installed'],
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'skipped',
      'sections': [],
      'user_properties': [],
      'when': 'setup'},
     {'duration': 0.026836156845092773,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'skipif': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 743, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'teardown'}]
    (Pdb++) pp [unserialize_report('testreport', report) for report in self.testmon_data.fail_reports[nodeid]]
    [<TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='setup' outcome='passed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='call' outcome='failed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='teardown' outcome='passed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='setup' outcome='skipped'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='teardown' outcome='passed'>]
    

    I might have messed up some internals while debugging #101 / #102, but I think it should be ensured that this would never happen, e.g. on the DB level.

    opened by blueyed 9
  • combination of --testmon and --tlf will execute failing tests multiple times in some circumstances

    I have a situation where executing pytest --testmon --tlf module1/module2 after deleting .testmondata will yield 3 failures.

    The next run of pytest --testmon --tlf module1/module2 yields 6 failures (3 duplicates of the first 3) and each subsequent run yields 3 more duplicates.

    If I run pytest --testmon --tlf module1/module2/tests I get back to 3 tests; the duplication starts as soon as I remove the final tests component from the path (or execute with the root path of the repository).

    I've tried to find a simple way to reproduce this but failed so far. I've also tried looking at what changes in .testmondata but didn't spot anything.

    I'd be grateful if you could give me a hint on how I might reproduce or analyze this, as I unfortunately can't share the code at the moment.

    opened by TauPan 9
  • deleting code causes an internal exception

    I see this with pytest-testmon > 0.8.3 (downgrading to this version fixes it for me).

    How to reproduce:

    • delete .testmondata
    • run py.test --testmon once
    • Now delete some code (in my case, it was decorated with # pragma: no cover, not sure if that's relevant)
    • call py.test --testmon again and look at a traceback like the following:
    ===================================================================================== test session starts =====================================================================================
    platform linux2 -- Python 2.7.12, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
    Django settings: sis.settings_devel (from ini file)
    testmon=True, changed files: sis/lib/python/sis/rest.py, skipping collection of 529 items, run variant: default
    rootdir: /home/delgado/nobackup/git/sis/software, inifile: pytest.ini
    plugins: testmon-0.9.4, repeat-0.4.1, env-0.6.0, django-3.1.2, cov-2.4.0
    collected 122 items 
    INTERNALERROR> Traceback (most recent call last):
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 98, in wrap_session
    INTERNALERROR>     session.exitstatus = doit(config, session) or 0
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 132, in _main
    INTERNALERROR>     config.hook.pytest_collection(session=session)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
    INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
    INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 141, in pytest_collection
    INTERNALERROR>     return session.perform_collect()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 602, in perform_collect
    INTERNALERROR>     config=self.config, items=items)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
    INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
    INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/testmon/pytest_testmon.py", line 165, in pytest_collection_modifyitems
    INTERNALERROR>     assert item.nodeid not in self.collection_ignored
    INTERNALERROR> AssertionError: assert 'sis/lib/python/sis/modules/marvin/tests.py::TestGetEventIDsForAWID::test_should_fail_for_expired_events' not in set(['da-exchanged/lib/python/exchanged/tests/test_config.py::TestConfigOptions::test_should_have_default_from_addr', ...'da-exchanged/lib/python/exchanged/tests/test_config.py::TestGetExchanges::test_should_return_sensible_defaults', ...])
    INTERNALERROR>  +  where 'sis/lib/python/sis/modules/marvin/tests.py::TestGetEventIDsForAWID::test_should_fail_for_expired_events' = <TestCaseFunction 'test_should_fail_for_expired_events'>.nodeid
    INTERNALERROR>  +  and   set(['da-exchanged/lib/python/exchanged/tests/test_config.py::TestConfigOptions::test_should_have_default_from_addr', ...'da-exchanged/lib/python/exchanged/tests/test_config.py::TestGetExchanges::test_should_return_sensible_defaults', ...]) = <testmon.pytest_testmon.TestmonDeselect object at 0x7f85679b82d0>.collection_ignored
    
    ==================================================================================== 185 tests deselected =====================================================================================
    =============================================================================== 185 deselected in 0.34 seconds ================================================================================
    
    opened by TauPan 9
  • Performance issue on big projects

    We have a big project with a big test suite. When starting pytest with testmon enabled, it takes something like 8 minutes just to start, even when running (almost) no tests. A profile dump reveals this:

    Wed Dec  7 14:37:13 2016    testmon-startup-profile
    
             353228817 function calls (349177685 primitive calls) in 648.684 seconds
    
       Ordered by: cumulative time
       List reduced from 15183 to 100 due to restriction <100>
    
       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
            1    0.001    0.001  648.707  648.707 env/bin/py.test:3(<module>)
     10796/51    0.006    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:335(_hookexec)
     10796/51    0.017    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:332(<lambda>)
     11637/51    0.063    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:586(execute)
            1    0.000    0.000  648.612  648.612 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/config.py:29(main)
      10596/2    0.016    0.000  648.612  324.306 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:722(__call__)
            1    0.000    0.000  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/pytest_testmon.py:80(pytest_cmdline_main)
            1    0.000    0.000  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/pytest_testmon.py:70(init_testmon_data)
            1    0.004    0.004  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:258(read_fs)
         4310    1.385    0.000  545.292    0.127 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:224(test_should_run)
         4310    3.995    0.001  542.647    0.126 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:229(<dictcomp>)
      4331550   54.292    0.000  538.652    0.000 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/process_code.py:104(checksums)
            1    0.039    0.039  537.138  537.138 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:273(compute_unaffected)
     73396811   67.475    0.000  484.571    0.000 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/process_code.py:14(checksum)
     73396871  360.852    0.000  360.852    0.000 {method 'encode' of 'str' objects}
            1    0.000    0.000   83.370   83.370 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/main.py:118(pytest_cmdline_main)
            1    0.000    0.000   83.370   83.370 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/main.py:118(pytest_cmdline_main)
    
    

    As you can see, the last line is about 83 seconds cumulative, but the two lines above it are 360 and 484 seconds respectively.
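
    A dump like the one above can be inspected with the standard library's pstats module; a minimal sketch, assuming the profile file name from the report:

    import pstats

    # load the saved cProfile output and print the top cumulative-time entries
    stats = pstats.Stats("testmon-startup-profile")
    stats.sort_stats("cumulative").print_stats(20)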

    This hurts our use case a LOT, and since we use a reference .testmondata file that has been produced by a CI job, it seems excessive (and useless) to recalculate this on each machine when it could be calculated once up front.

    So, what do you guys think about caching this data in .testmondata?

    opened by boxed 9
  • Failure still reported when whole module gets re-run

    I have just seen this, after marking TestSchemaJSRenderer (which only contains test_schemajs_output) with @pytest.mark.skipif(not coreapi, reason='coreapi is not installed'):

    % .tox/venvs/py36-django20/bin/pytest --testmon
    ==================================================================================== test session starts =====================================================================================
    platform linux -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
    testmon=True, changed files: tests/test_renderers.py, skipping collection of 118 files, run variant: default
    rootdir: /home/daniel/Vcs/django-rest-framework, inifile: setup.cfg
    plugins: testmon-0.9.11, django-3.2.1, cov-2.5.1
    collected 75 items / 1219 deselected                                                                                                                                                         
    
    tests/test_renderers.py F                                                                                                                                                              [  0%]
    tests/test_filters.py F                                                                                                                                                                [  0%]
    tests/test_renderers.py ..............................................Fs                                                                                                               [100%]
    
    ========================================================================================== FAILURES ==========================================================================================
    _________________________________________________________________________ TestSchemaJSRenderer.test_schemajs_output __________________________________________________________________________
    tests/test_renderers.py:753: in test_schemajs_output
        output = renderer.render('data', renderer_context={"request": request})
    rest_framework/renderers.py:862: in render
        codec = coreapi.codecs.CoreJSONCodec()
    E   AttributeError: 'NoneType' object has no attribute 'codecs'
    _________________________________________________________________ BaseFilterTests.test_get_schema_fields_checks_for_coreapi __________________________________________________________________
    tests/test_filters.py:36: in test_get_schema_fields_checks_for_coreapi
        assert self.filter_backend.get_schema_fields({}) == []
    rest_framework/filters.py:36: in get_schema_fields
        assert coreschema is not None, 'coreschema must be installed to use `get_schema_fields()`'
    E   AssertionError: coreschema must be installed to use `get_schema_fields()`
    ________________________________________________________________ TestDocumentationRenderer.test_document_with_link_named_data ________________________________________________________________
    tests/test_renderers.py:719: in test_document_with_link_named_data
        document = coreapi.Document(
    E   AttributeError: 'NoneType' object has no attribute 'Document'
    ====================================================================================== warnings summary ======================================================================================
    None
      [pytest] section in setup.cfg files is deprecated, use [tool:pytest] instead.
    
    -- Docs: http://doc.pytest.org/en/latest/warnings.html
    ======================================================== 3 failed, 46 passed, 1 skipped, 1219 deselected, 1 warnings in 3.15 seconds =========================================================
    

    TestSchemaJSRenderer.test_schemajs_output should not show up in FAILURES.

    Note also that tests/test_renderers is listed twice, where the failure appears to come from the first entry.

    (running without --testmon shows the same number of tests (tests/test_renderers.py ..............................................Fs)).
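
    For context, the marker described at the top of this report looks roughly like this (a sketch only; class and test names are taken from the report, the real code lives in django-rest-framework):

    import pytest

    try:
        import coreapi
    except ImportError:
        coreapi = None

    @pytest.mark.skipif(not coreapi, reason='coreapi is not installed')
    class TestSchemaJSRenderer:
        def test_schemajs_output(self):
            ...  # renders the schema via coreapi, as in the failure above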

    opened by blueyed 8
  • feat: merge DB

    This is a very rough draft PR to address this need.

    I had a hard time understanding how testmon works internally, and also how to test this.

    I'll happily get any feedback, so we can agree and merge this in testmon.

    It would be very helpful for us, as we have multiple workers in our CI that run tests, and we need to merge the testmon results so they can be reused later.

    opened by ElPicador 0
  • multiprocessing does not seem to be supported

    Suppose I have the following code and tests

    # mymodule.py
    class MyClass:
        def foo(self):
            return "bar"
    
    # test_mymodule.py
    import multiprocessing
    
    def __run_foo():
        from mymodule import MyClass
        c = MyClass()
        assert c.foo() == "bar"
    
    def test_foo():
        process = multiprocessing.Process(target=__run_foo)
        process.start()
        process.join()
        assert process.exitcode == 0
    

    After a first successful run of pytest --testmon test_mymodule.py, one can change the implementation of foo() without testmon noticing and no new runs are triggered.

    When running pytest --cov manually, we do get a trace (trace image omitted).
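
    For comparison, coverage.py itself can measure code running in multiprocessing children when configured roughly like this (a sketch of coverage.py's documented subprocess support, not a testmon option; whether testmon consumes such traces is the open question here):

    # .coveragerc
    [run]
    concurrency = multiprocessing
    parallel = True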

    opened by janbernloehr 1
  • Changes to global / class variables are ignored (if no method of their module is executed)

    Suppose you have the following files

    # mymodule.py
    foo_bar = "value"
    
    class MyClass:
        FOO = "bar"
    
    # test_mymodule.py
    from mymodule import MyClass, foo_bar
    
    def test_module():
        assert foo_bar == "value"
        assert MyClass.FOO == "bar"
    

    Now running pytest --testmon test_mymodule.py does not rerun test_module() when the value of MyClass.FOO or foo_bar is changed. Even worse, FOO can be completely removed, e.g.

    class MyClass:
        pass
    

    without triggering a re-run.

    When running pytest --cov manually, there does seem to be a trace (trace image omitted).

    opened by janbernloehr 2
  • Fix/improve linting of the code when using pytest-pylint

    This also includes changed source code files when pytest-pylint is enabled. By default, source code files are ignored, so pylint is not able to process those files.

    opened by msemanicky 1
  • Testmon very sensitive towards library changes

    Background

    I have found pytest-testmon to be very sensitive to the slightest changes in the packages of the environment it is installed in. This makes it very difficult to use pytest-testmon in practice when sharing a .testmondata file between our developers.

    Use Case:

    A developer is trying out a new library foo and has written a wrapper foo_bar.py for it. The developer only wants to run test_foo_bar.py for foo_bar.py; however, the entire test suite is run because foo was installed.

    Example Solution

    Flag for disregarding library changes

    • I am happy to contribute a solution, if it is considered feasible.
    • If the library needs funding for a solution, that is an option as well.
    opened by dgot 1
  • How does testmon handle data files?

    If I have an open('path/to/file.json').read() call in my code, would testmon be able to flag that file?

    I saw there is something called file tracers in coverage, but I'm not sure if it is meant for this use case.

    opened by uriva 1
Releases (v1.4.3b1)