Python syntax highlighted Markdown doctest.

Overview

phmdoctest 1.3.0

Introduction

Command line program and Python library to test Python syntax highlighted code examples in Markdown.

  • Creates a pytest Python module that tests Python examples in README and other Markdown files.
  • Reads these from Markdown fenced code blocks:
    • Python interactive sessions described by doctest.
    • Python source code and expected terminal output.
  • The test cases get run later by running pytest.
  • A simple use case is possible with no Markdown edits at all.
  • More features are selected by adding HTML comment directives to the Markdown.
    • Set test case name.
    • Add a pytest.mark.skip decorator.
    • Promote names defined in a test case to module level globals.
    • Label any fenced code block for later retrieval (API).
  • Add inline annotations to comment out sections of code.
  • Get code coverage by running pytest with coverage.
  • Select Python source code blocks as setup and teardown code.
  • Setup applies to code blocks and optionally to session blocks.
  • An included Python library (see the Latest Development tools API):
    • Python function returns test file in a string. (testfile() in main.py)
    • Two pytest fixtures. (tester.py)
      1. testfile_creator runs testfile(). Use with testfile_tester.
      2. testfile_tester runs a pytest file with pytest's pytester in its isolated environment.
    • Runs phmdoctest and can run pytest too. (simulator.py)
    • Functions to read fenced code blocks from Markdown. (tool.py)
    • Extract testsuite tree and list of failing trees from JUnit XML. (tool.py)
  • Available soon as a pytest plugin.


Website | Docs | Repos | pytest | Codecov | License

Introduction | Installation | Sample usage | Sample Usage with HTML comment directives | CI usage | --report | Identifying blocks | Directives | skip | label on code and sessions | label on any fenced code block | pytest skip | pytest skipif | setup | teardown | share-names | clear-names | label skip and mark example | setup and teardown example | share-names clear-names example | Inline annotations | skipping blocks with --skip | --skip | short form of --skip | --fail-nocode | --setup | --teardown | Setup example | Setup for sessions | Execution context | Send outfile to stdout | Usage | Run as a Python module | Python API | pytest fixtures | Simulate command line | Hints | Directive hints | Related projects

Changes | Contributions | About

Installation

It is advisable to install in a virtual environment.

python -m pip install phmdoctest

Sample usage

Given the Markdown file example1.md shown in raw form here...

# This is Markdown file example1.md

## Interactive Python session (doctest)

```py
>>> print("Hello World!")
Hello World!
```

## Source Code and terminal output

Code:
```python
from enum import Enum

class Floats(Enum):
    APPLES = 1
    CIDER = 2
    CHERRIES = 3
    ADUCK = 4

for floater in Floats:
    print(floater)
```

sample output:
```
Floats.APPLES
Floats.CIDER
Floats.CHERRIES
Floats.ADUCK
```

the command...

phmdoctest doc/example1.md --outfile test_example1.py

creates the python source code file test_example1.py shown here...

"""pytest file built from doc/example1.md"""
from phmdoctest.functions import _phm_compare_exact


def session_00001_line_6():
    r"""
    >>> print("Hello World!")
    Hello World!
    """


def test_code_14_output_28(capsys):
    from enum import Enum

    class Floats(Enum):
        APPLES = 1
        CIDER = 2
        CHERRIES = 3
        ADUCK = 4

    for floater in Floats:
        print(floater)

    _phm_expected_str = """\
Floats.APPLES
Floats.CIDER
Floats.CHERRIES
Floats.ADUCK
"""
    _phm_compare_exact(a=_phm_expected_str, b=capsys.readouterr().out)

Then run a pytest command something like this in your terminal to test the Markdown session, code, and expected output blocks.

pytest --doctest-modules

Or these two commands:

pytest
python -m doctest test_example1.py

The line_6 in the function name session_00001_line_6 is the line number in example1.md of the first line of the interactive session. 00001 is a sequence number to order the doctests.

The 14 in the function name test_code_14_output_28 is the line number of the first line of python code. 28 shows the line number of the expected terminal output.

One test case function gets generated for each:

  • Markdown fenced code block interactive session
  • Python-code/expected-output Markdown fenced code block pair

The --report option below shows the blocks discovered and how they are tested.

Sample Usage with HTML comment directives

Given the Markdown file shown in raw form here...

<!--phmdoctest-label test_example-->
<!--phmdoctest-mark.skip-->
```python
print("Hello World!")
```
```
incorrect expected output
```

the command...

phmdoctest tests/one_mark_skip.md --outfile test_one_mark_skip.py

creates the python source code file shown here...

"""pytest file built from tests/one_mark_skip.md"""
import pytest

from phmdoctest.functions import _phm_compare_exact


@pytest.mark.skip()
def test_example(capsys):
    print("Hello World!")

    _phm_expected_str = """\
incorrect expected output
"""
    _phm_compare_exact(a=_phm_expected_str, b=capsys.readouterr().out)

Run the --outfile with pytest...

$ pytest -vv test_one_mark_skip.py

test_one_mark_skip.py::test_example SKIPPED
  • The HTML comments in the Markdown are phmdoctest directives.
  • The mark.skip directive adds the @pytest.mark.skip() line.
  • The label directive names the test case function.
  • List of Directives
  • Directives are optional.
  • Markdown edits are optional.

CI usage

Test Python examples in README.md in Continuous Integration scripts. In this snippet for Linux the pytest test suite is in the tests folder.

mkdir tests/tmp
phmdoctest README.md --report --outfile tests/tmp/test_readme.py
pytest --doctest-modules -vv tests

This console shows testing Python examples in project.md. Look for the tmp tests at the bottom. Windows Usage on Appveyor.

See this excerpt from ci.yml Actions usage example. It runs on Windows, Linux, and macOS. Please find the phmdoctest command at the bottom.

No changes to README.md are needed here, look in the last job log.

--report

To see the GFM fenced code blocks in the MARKDOWN_FILE use the --report option like this:

phmdoctest doc/example2.md --report

which lists the fenced code blocks it found in the file example2.md. The test role column shows how each fenced code block gets tested.

         doc/example2.md fenced blocks
------------------------------------------------
block     line  test     TEXT or directive
type    number  role     quoted and one per line
------------------------------------------------
python       9  code
            14  output
python      20  code
            26  output
            31  --
python      37  code
python      44  code
            51  output
yaml        59  --
text        67  --
py          75  session
python      87  code
            94  output
py         102  session
------------------------------------------------
7 test cases.
1 code blocks with no output block.

Identifying blocks

The PYPI commonmark project provides the code used to extract fenced code blocks from Markdown. See the CommonMark Spec and the CommonMark website.

Python code, expected output, and Python interactive sessions get extracted.

Only GFM fenced code blocks are considered.

A block is a session block if the info_string starts with py and the first line of the block starts with the session prompt: '>>> '.

To be treated as Python code the opening fence should start with one of these:

```python
```python3
```py3

plus the block contents can't start with '>>> '.

The examples use the info_strings python for code and py for sessions since they render with coloring on GitHub, readthedocs, GitHub Pages, and Python package index.

project.md has more examples of code and session blocks.

It is OK if the info string contains additional text; the extra text is ignored. The entire info string is shown in the block type column of the report.

An output block is a fenced code block that immediately follows a Python block and starts with an opening fence like this, which has an empty info string:

```

A Python code block has no output if it is followed by any of:

  • Python code block
  • Python session block
  • a fenced code block with a non-empty info string

Test code gets generated for it, but there will be no assertion statement.
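
For programmatic access to the fenced code blocks themselves, the tool.py functions mentioned in the overview can read them from a Markdown file. A brief sketch, assuming the fenced_code_blocks(markdown_filename) helper:

import phmdoctest.tool

# Returns a list with the contents of each fenced code block in the file.
blocks = phmdoctest.tool.fenced_code_blocks("doc/example1.md")
print(len(blocks))
print(blocks[0])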

Directives

Directives are HTML comments containing test generation commands. They are edited into the Markdown file immediately before a fenced code block. It is OK if other HTML comments are present. See the directive in the raw Markdown below. With the skip directive no test code will be generated from the fenced code block.

<!--phmdoctest-skip-->
```python
print("Hello World!")
```
Expected Output
```
Hello World!
```

List of Directives

Directive HTML comment              | Use on blocks
----------------------------------- | ----------------------
<!--phmdoctest-skip-->              | code, session, output
<!--phmdoctest-label TEXT-->        | code, session
<!--phmdoctest-label TEXT-->        | any
<!--phmdoctest-mark.skip-->         | code
<!--phmdoctest-mark.skipif<3.N-->   | code
<!--phmdoctest-setup-->             | code
<!--phmdoctest-teardown-->          | code
<!--phmdoctest-share-names-->       | code
<!--phmdoctest-clear-names-->       | code

Directive hints

skip

The skip directive or --skip TEXT command line option prevents code generation for the code or session block. The skip directive can be placed on an expected output block. There it prevents checking expected against actual output. Example.

label on code and sessions

When used on a Python code block or session the label directive changes the name of the generated test function. Example. Two generated tests, the first without a label, shown in pytest -v terminal output:

test_readme.py::test_code_93 FAILED
test_readme.py::test_beta_feature FAILED

label on any fenced code block

On any fenced code block, the label directive identifies the block for later retrieval by the class phmdoctest.tool.FCBChooser(). The FCBChooser is used separately from phmdoctest in a different pytest file. This allows the test developer to write additional test cases for fenced code blocks that are not handled by phmdoctest. The directive value can be any string.

# This is file doc/my_markdown_file.md

<!--phmdoctest-label my-fenced-code-block-->
```
The label directive can be placed on any fenced code block.
```

Here is Python code to fetch it:

import phmdoctest.tool

chooser = phmdoctest.tool.FCBChooser("doc/my_markdown_file.md")
text = chooser.contents(label="my-fenced-code-block")
print(text)

Output:

The label directive can be placed on any fenced code block.

pytest skip

The mark.skip directive generates a test case with a @pytest.mark.skip() decorator. Example.

pytest skipif

The mark.skipif<3.N directive generates a test case with the pytest decorator @pytest.mark.skipif(sys.version_info < (3, N), reason="requires >=py3.N"). N is a Python minor version number. Example.
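
For illustration, here is a sketch of what a generated test might look like for a block carrying a mark.skipif<3.8 directive; the function name and body are hypothetical:

import sys

import pytest


@pytest.mark.skipif(sys.version_info < (3, 8), reason="requires >=py3.8")
def test_py38_only_example():
    # Code from the fenced code block would run here.
    print("runs only on Python 3.8 or later")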

setup

A single Python code block can assign names visible to other code blocks by adding a setup directive or using the --setup command line option.

Names assigned by the setup block get copied to the test module's global namespace after the setup block runs.

Here is an example setup block from setup.md:

import math

mylist = [1, 2, 3]
a, b = 10, 11

def doubler(x):
    return x * 2

Using setup modifies the execution context of the Python code blocks in the Markdown file. The names math, mylist, a, b, and doubler are visible to the other Python code blocks. The objects can be modified. Example.
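
For example, a later Python code block in the same Markdown file could use those names. This is only a sketch; the actual blocks in setup.md may differ:

print(doubler(a) + b)
mylist.append(math.floor(4.5))
print(mylist)

Its expected output block would then contain:

31
[1, 2, 3, 4]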

teardown

Selects a single Python code block that runs at test module teardown time. A teardown block can also be designated using the --teardown command line option. Example.

share-names

Names assigned by the Python code block get copied to the test module as globals after the test code runs. This happens at run time. These names are now visible to subsequent test cases generated for Python code blocks in the Markdown file. share-names modifies the execution context as described for the setup directive above. The share-names directive can be used on more than one code block. Example.

This directive effectively joins its Python code block to the following Python code blocks in the Markdown file.

clear-names

After the test case generated for the Python code block with the clear-names directive runs, all names that were created by one or more preceding share-names directives get deleted. The names that were shared are no longer visible. This directive also deletes the names assigned by setup. Example.
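
At run time the sharing works through the managenamespace fixture imported from phmdoctest.fixture in the generated file. Here is a rough sketch of the generated tests for a share-names block and a block that follows it; the function names and the fruit variable are hypothetical:

from phmdoctest.fixture import managenamespace


def test_code_10(managenamespace):
    fruit = "apples"
    # share-names: copy the names assigned above into the module globals.
    managenamespace(operation="update", additions=locals())


def test_code_20():
    # A later test case sees the shared name as a module global.
    # A block with the clear-names directive would delete it again.
    print(fruit)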

label skip and mark example

The file directive1.md contains example usage of label, skip, and mark directives. The command below generates test_directive1.py. phmdoctest doc/directive1.md --report produces this report.

phmdoctest doc/directive1.md --outfile test_directive1.py

setup and teardown example

The file directive2.md contains example usage of label, skip, and mark directives. The command below generates test_directive2.py. phmdoctest doc/directive2.md --report produces this report.

phmdoctest doc/directive2.md --outfile test_directive2.py

share-names clear-names example

The file directive3.md contains example usage of share-names and clear-names directives. The command below generates test_directive3.py. phmdoctest doc/directive3.md --report produces this report.

phmdoctest doc/directive3.md --outfile test_directive3.py

Inline annotations

Inline annotations comment out sections of code. They can be added to the end of lines in Python code blocks. They should be in a comment.

  • phmdoctest:omit comments out a section of code. The line it is on, plus the following lines at greater indentation, get commented out.
  • phmdoctest:pass comments out one line of code and prepends the pass statement.

Here is a snippet showing how to place phmdoctest:pass in the code. The second block shows the code that is generated. Note there is no # immediately before phmdoctest:pass. It is not required.

import time
def takes_too_long():
    time.sleep(100)    # delay for awhile. phmdoctest:pass
takes_too_long()

import time
def takes_too_long():
    pass  # time.sleep(100)    # delay for awhile. phmdoctest:pass
takes_too_long()

Use phmdoctest:omit on single or multi-line statements. Note the two commented out time.sleep(99) calls. They follow, and are indented more than, the if condition: line with phmdoctest:omit.

import time                      # phmdoctest:omit

condition = True
if condition:       # phmdoctest:omit
    time.sleep(99)
    time.sleep(99)

# import time                      # phmdoctest:omit

condition = True
# if condition:       # phmdoctest:omit
#     time.sleep(99)
#     time.sleep(99)

Inline annotation processing counts the number of commented out sections and adds the count as the suffix _N to the name of the pytest function in the generated test file.

Inline annotations are similar to, but less powerful than, the Python standard library doctest directive # doctest: +SKIP. Improper use of phmdoctest:omit can cause Python syntax errors.

The examples above are snippets that illustrate how to use inline annotations. Here is an example that produces a pytest file from Markdown. The command below takes inline_example.md and generates test_inline_example.py.

phmdoctest doc/inline_example.md --outfile test_inline_example.py

skipping blocks with --skip

If you don't want to generate test cases for Python blocks, precede the block with a skip directive or use the --skip TEXT option. More than one skip directive or --skip TEXT is allowed.

The following describes using --skip TEXT. The code in each Python block gets searched for the substring TEXT. Zero, one or more blocks will contain the substring. These blocks will not generate test cases in the output file.

  • The Python code in the fenced code block gets searched.
  • The info string is not searched.
  • Output blocks are not searched.
  • Both Python code and session blocks get searched.
  • Case is significant.

The report shows which Python blocks get skipped in the test role column, and the Python blocks that matched each --skip TEXT in the skips section.

This option makes it very easy to inadvertently exclude Python blocks from the test cases. In the event no test cases get generated, the option --fail-nocode described below is useful.

Three special --skip TEXT strings work a little differently. They select one of the first, second, or last of the Python blocks. Only Python blocks get counted.

  • --skip FIRST skips the first Python block.
  • --skip SECOND skips the second Python block.
  • --skip LAST skips the final Python block.

--skip

This command using --skip:

phmdoctest doc/example2.md --skip "Python 3.7" --skip LAST --report --outfile test_example2.py

Produces the report

            doc/example2.md fenced blocks
-----------------------------------------------------
block     line  test          TEXT or directive
type    number  role          quoted and one per line
-----------------------------------------------------
python       9  code
            14  output
python      20  skip-code     "Python 3.7"
            26  skip-output
            31  --
python      37  code
python      44  code
            51  output
yaml        59  --
text        67  --
py          75  session
python      87  code
            94  output
py         102  skip-session  "LAST"
-----------------------------------------------------
5 test cases.
1 skipped code blocks.
1 skipped interactive session blocks.
1 code blocks with no output block.

  skip pattern matches (blank means no match)
------------------------------------------------
skip pattern  matching code block line number(s)
------------------------------------------------
Python 3.7    20
LAST          102
------------------------------------------------

and creates the output file test_example2.py.

short form of --skip

This is the same command as above using the short -s form of the --skip option in two places. It produces the same report and outfile.

phmdoctest doc/example2.md -s "Python 3.7" -sLAST --report --outfile test_example2.py

--fail-nocode

The --fail-nocode option produces a pytest file that will always fail when no Python code or session blocks get found.

Even if no Python code or session blocks exist in the Markdown file, a pytest file gets generated. This also happens when --skip eliminates all the Python code blocks. The generated pytest file will have the function def test_nothing_passes().

If the option --fail-nocode is passed the function is def test_nothing_fails() which raises an assertion.
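
These two fallback functions are trivial. They look roughly like this sketch; the exact generated bodies may differ:

def test_nothing_passes():
    # Generated when no blocks are found and --fail-nocode is absent.
    pass


def test_nothing_fails():
    # Generated when no blocks are found and --fail-nocode is given.
    assert False, "nothing to test"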

--setup

A single Python code block can assign names visible to other code blocks by giving the --setup TEXT option. Please see the setup directive above. The rules for TEXT are the same as for --skip TEXT plus...

  • Only one block can match TEXT.
  • The block cannot match a block that is skipped.
  • The block cannot be a session block even though session blocks get searched for TEXT.
  • It is ok if the block has an output block. It will be ignored.

--teardown

A single Python code block can supply code run by the pytest teardown_module() fixture. Use the --teardown TEXT option. Please see the teardown directive above. The rules for TEXT are the same as for --setup above except TEXT won't match a setup block.

Setup example

For the Markdown file setup.md run this command to see how the blocks get tested.

phmdoctest doc/setup.md --setup FIRST --teardown LAST --report
            doc/setup.md fenced blocks
-------------------------------------------------
block     line  test      TEXT or directive
type    number  role      quoted and one per line
-------------------------------------------------
python       9  setup     "FIRST"
python      20  code
            27  output
python      37  code
            42  output
python      47  code
            51  output
python      58  teardown  "LAST"
-------------------------------------------------
3 test cases.

This command

phmdoctest doc/setup.md --setup FIRST --teardown LAST --outfile test_setup.py

creates the test file test_setup.py

Setup for sessions

The pytest option --doctest-modules is required to run doctest on sessions. pytest runs doctests in a separate context. For more on this see Execution context below.

To allow sessions to see the variables assigned by the --setup code block, add the option --setup-doctest.

Here is an example with setup code and sessions setup_doctest.md. The first part of this file is a copy of setup.md.
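
A session block in such a file can use the setup names directly. Here is a sketch that reuses names from the setup block above; the actual sessions in setup_doctest.md may differ:

>>> doubler(b)
22
>>> mylist
[1, 2, 3]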

This command uses the short forms of --setup and --teardown: -u for setup and -d for teardown.

phmdoctest doc/setup_doctest.md -u FIRST -d LAST --setup-doctest --outfile test_setup_doctest.py

It creates the test file test_setup_doctest.py

Execution context

When run without --setup

  • pytest and doctest determine the order of test case execution.
  • phmdoctest assumes test code and session execution is in file order.
  • Test case order is not significant.
  • Code and expected output run within a function body of a pytest test case.
  • If pytest is invoked with --doctest-modules:
    • Sessions are run in a separate doctest execution context.
    • Otherwise, sessions do not run.

With --setup

  • Names assigned by setup code are visible to code blocks.
  • Code blocks can modify the objects created by the setup code.
  • Code block test case order is significant.
  • Session order is not significant.
  • If pytest is run with --doctest-modules:
    • pytest runs two separate contexts: one for sessions, one for code blocks.
    • setup and teardown code gets run twice, once by each context.
    • the names assigned by the setup code block are not visible to the sessions.

With share-names

  • Only code blocks that follow can modify the shared objects.
  • Shared objects will not be visible to sessions if pytest is run with --doctest-modules.
  • After running a code block with clear-names
    • Shared objects will no longer be visible.
    • Names assigned by setup code will no longer be visible.

With --setup and --setup-doctest

Same as the setup section plus:

  • names assigned by the setup code block are visible to the sessions.
  • Sessions can modify the objects created by the setup code.
  • Session order is significant.
  • Sessions and code blocks are still running in separate contexts isolated from each other.
  • A session can't affect a code block, and a code block can't affect a session.
  • Names assigned by the setup code block are globally visible to the entire test suite via the pytest doctest_namespace fixture. See the hint near the end of the Hints section.

pytest live logging demo

The live logging demos reveal pytest execution contexts. pytest Live Logs show the execution order of setup_module(), test cases, sessions, and teardown_module(). There are 2 demo invocations in the workflow action called pytest Live Log Demo. GitHub login required.

Send outfile to stdout

To redirect the above outfile to the standard output stream use one of these two commands.

Be sure to leave out --report when sending --outfile to standard output.

phmdoctest doc/example2.md -s "Python 3.7" -sLAST --outfile -

or

phmdoctest doc/example2.md -s "Python 3.7" -sLAST --outfile=-

Usage

phmdoctest --help

Usage: phmdoctest [OPTIONS] MARKDOWN_FILE

Options:
  --outfile TEXT       Write generated test case file to path TEXT. "-" writes
                       to stdout.

  -s, --skip TEXT      Any Python code or interactive session block that
                       contains the substring TEXT is not tested. More than
                       one --skip TEXT is ok. Double quote if TEXT contains
                       spaces. For example --skip="python 3.7" will skip every
                       Python block that contains the substring "python 3.7".
                       If TEXT is one of the 3 capitalized strings FIRST
                       SECOND LAST the first, second, or last Python code or
                       session block in the Markdown file is skipped.

  --report             Show how the Markdown fenced code blocks are used.

  --fail-nocode        This option sets behavior when the Markdown file has no
                       Python fenced code blocks or interactive session blocks
                       or if all such blocks are skipped. When this option is
                       present the generated pytest file has a test function
                       called test_nothing_fails() that will raise an
                       assertion. If this option is not present the generated
                       pytest file has test_nothing_passes() which will never
                       fail.

  -u, --setup TEXT     The Python code block that contains the substring TEXT
                       is run at test module setup time. Variables assigned at
                       the outer level are visible as globals to the other
                       Python code blocks. TEXT should match exactly one code
                       block. If TEXT is one of the 3 capitalized strings
                       FIRST SECOND LAST the first, second, or last Python
                       code or session block in the Markdown file is matched.
                       A block will not match --setup if it matches --skip, or
                       if it is a session block. Use --setup-doctest below to
                       grant Python sessions access to the globals.

  -d, --teardown TEXT  The Python code block that contains the substring TEXT
                       is run at test module teardown time. TEXT should match
                       exactly one code block. If TEXT is one of the 3
                       capitalized strings FIRST SECOND LAST the first,
                       second, or last Python code or session block in the
                       Markdown file is matched. A block will not match
                       --teardown if it matches either --skip or --setup, or
                       if it is a session block.

  --setup-doctest      Make globals created by the --setup Python code block
                       or setup directive visible to session blocks and only
                       when they are tested with the pytest --doctest-modules
                       option.  Please note that pytest runs doctests in a
                       separate context that only runs doctests. This option
                       is ignored if there is no --setup option.

  --version            Show the version and exit.
  --help               Show this message and exit.

Run as a Python module

To run phmdoctest from the command line:

python -m phmdoctest doc/example2.md --report

Python API

Call main.testfile() to generate a pytest file in memory. Please see the Python API here. The example generates a pytest file from doc/setup.md and compares the result to doc/test_setup.py.

from pathlib import Path
import phmdoctest.main

generated_testfile = phmdoctest.main.testfile(
    "doc/setup.md",
    setup="FIRST",
    teardown="LAST",
)
expected = Path("doc/test_setup.py").read_text(encoding="utf-8")
assert expected == generated_testfile

pytest fixtures

Use fixture testfile_creator to generate a test file in memory. Pass the test file to fixture testfile_tester to run the test file in the pytester environment. Fixture API | Example. See more uses in tests/test_examples.py and tests/test_details.py. The fixtures run pytest much faster than run_and_pytest() below since there is no subprocess call. In the readthedocs documentation see the section Development tools API 1.3.0.
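
Here is a minimal sketch of a test that uses the two fixtures. It assumes the import path and calling conventions shown in the Fixture API; the Markdown filename and the expected outcome count are illustrative:

# Enable pytest's pytester plugin, e.g. pytest_plugins = ["pytester"] in conftest.py.
from phmdoctest.tester import testfile_creator, testfile_tester


def test_readme_code_blocks(testfile_creator, testfile_tester):
    # Generate the pytest file in memory from the Markdown file.
    testfile = testfile_creator("README.md")
    # Run it with pytest's pytester in an isolated environment.
    result = testfile_tester(contents=testfile)
    result.assert_outcomes(passed=2)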

Simulate command line

To simulate a command line call to phmdoctest from within a Python script phmdoctest.simulator offers the function run_and_pytest().

  • it creates the --outfile in a temporary directory
  • optionally runs pytest on the outfile
  • pytest can return a JUnit XML report
  • useful during development to validate the command line and prevent use of a stale --outfile

Please see the Latest Development tools API section or the docstring of the function run_and_pytest() in the file simulator.py. Pass pytest_options as a list of strings as shown below.

import phmdoctest.simulator

command = "phmdoctest doc/example1.md --report --outfile temporary.py"
simulator_status = phmdoctest.simulator.run_and_pytest(
    well_formed_command=command, pytest_options=["--doctest-modules", "-v"]
)
assert simulator_status.runner_status.exit_code == 0
assert simulator_status.pytest_exit_code == 0

Hints

  • To read the Markdown file from the standard input stream, use - for MARKDOWN_FILE.

  • Write the test file to a temporary directory so that it is always up to date.

  • In CI scripts the following shell command will create the temporary directory tmp in the tests folder on Windows, Linux, and macOS.

    python -c "from pathlib import Path; d = Path('tests') / 'tmp'; d.mkdir(mode=0o700)"
  • It is easy to use --output by mistake instead of --outfile.

  • If a Python code block has no output, put assert statements in the code.

  • Use pytest option --doctest-modules to test the sessions.

  • Markdown indented code blocks (Spec section 4.4) are ignored.

  • simulator_status.runner_status.exit_code == 2 is the click command line usage error.

  • Since phmdoctest generates code, the input file should be from a trusted source.

  • An empty code block is given the role del-code. It is not tested.

  • Use special TEXT values FIRST, SECOND, LAST for the command line options --setup and --teardown since they only match one block.

  • The variable names managenamespace, doctest_namespace, capsys, and _phm_expected_str should not be used in Markdown Python code blocks since they may be used in generated code.

  • Setup and teardown code blocks cannot have expected output.

  • To have pytest collect a code block with the label directive start the value with test_.

  • With the --setup-doctest option, names assigned by the setup code block are globally visible to the entire test suite. This is due to the scope of the pytest doctest_namespace fixture. Try using a separate pytest command to test just the phmdoctest test.

  • The module phmdoctest.fixture is imported at pytest time to support setup, teardown, share-names, and clear-names features.

  • The phmdoctest Markdown parser finds fenced code blocks enclosed by HTML <details> and </details> tags. The tags may require a preceding and trailing blank line to render correctly. See the example in tests/test_details.py.

  • Try redirecting phmdoctest standard output into PYPI Pygments to colorize the generated test file.

    python -m phmdoctest project.md --outfile - | pygmentize
  • If the --outfile is written into a folder that pre-exists in the repository, consider adding the outfile name to .gitignore. If the outfile name later changes, the change will be needed in .gitignore too.

    # Reserved for generated test file.
    tests/test_readme.py
    

Directive hints

  • Only put one of setup, teardown, share-names, or clear-names on a code block.
  • Only one block can be setup. Only one block can be teardown.
  • The setup or teardown block can't have an expected output block.
  • Label directive does not generate a test case name on setup and teardown blocks.
  • Directives displayed in the --report start with a dash like this: -label test_i_ratio.
  • Code generated by Python blocks with setup and teardown directives runs at the pytest fixture scope="module" level.
  • Code generated by Python blocks with share-names and clear-names directives are collected and run by pytest like any other test case.
  • A malformed HTML comment ending is bad. Make sure it ends with both dashes like -->. Running with --report will expose that problem.
  • The setup, teardown, share-names, and clear-names directives have logging. To see the log messages, run pytest with the option: --log-cli-level=DEBUG --color=yes
  • There is no limit to the number of blank lines after the directive HTML comment but before the fenced code block.

Related projects

  • rundoc
  • byexample
  • sphinx.ext.doctest
  • sybil
  • doxec
  • egtest
  • pytest-phmdoctest
  • pytest-codeblocks
Comments
  • fix adding pytest import

    fixing https://github.com/tmarktaylor/phmdoctest/issues/5#issuecomment-840379311

    it turned out that the check for existing markers was not robust enough :rabbit:

    opened by Borda 9
  • Question: How do I share code between doctest blocks?

    Thanks for writing this library and releasing it for free to the world! :)

    I think I misunderstand something fundamental, but I've tried various combinations for an hour now, and would appreciate some simple pointers if you have the time. I'd like to generate a test setup for a python library called conllu. The readme examples are incrementally adding variables as the tutorial in the readme goes on. So pretty much all doctests are thought of as being in the global scope.

    Problem is, I can't get things to be shared between the code blocks. Even with this simple example:

    In test.md:

    ```python
    from conllu import parse
    ```
    
    ```python
    >>> sentences = parse()
    ```
    

    Generated test.py file:

    """pytest file built from test.md"""
    
    def test_code_2():
        from conllu import parse
    
    def session_00001_line_6():
        r"""
        >>> sentences = parse()
        """
    
    $ pytest --doctest-modules test.py
    ================================================= test session starts ==================================================
    platform linux -- Python 3.9.1, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
    rootdir: /home/emilstenstrom/Projects/conllu
    collected 2 items
    
    test.py F.                                                                                                       [100%]
    
    ======================================================= FAILURES =======================================================
    _________________________________________ [doctest] test.session_00001_line_6 __________________________________________
    011
    012     >>> sentences = parse()
    UNEXPECTED EXCEPTION: NameError("name 'parse' is not defined")
    Traceback (most recent call last):
      File "/home/emilstenstrom/.pyenv/versions/3.9.1/lib/python3.9/doctest.py", line 1336, in __run
        exec(compile(example.source, filename, "single",
      File "<doctest test.session_00001_line_6[0]>", line 1, in <module>
    NameError: name 'parse' is not defined
    /home/emilstenstrom/Projects/conllu/test.py:12: UnexpectedException
    =============================================== short test summary info ================================================
    FAILED test.py::test.session_00001_line_6
    ============================================= 1 failed, 1 passed in 0.02s ==============================================
    

    I'd found the share-names directive, but it doesn't seem to work for my case at all:

    In test.md:

    <!--phmdoctest-share-names-->
    ```python
    from conllu import parse
    ```
    
    ```python
    >>> sentences = parse()
    ```
    

    Generated test.py file:

    """pytest file built from test.md"""
    import pytest
    
    from phmdoctest.fixture import managenamespace
    
    def test_code_3(managenamespace):
        from conllu import parse
        managenamespace(operation="update", additions=locals())
    
    def session_00001_line_7():
        r"""
        >>> sentences = parse()
        """
    
    $ pytest --doctest-modules test.py
    ================================================= test session starts ==================================================
    platform linux -- Python 3.9.1, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
    rootdir: /home/emilstenstrom/Projects/conllu
    collected 2 items
    
    test.py F.                                                                                                       [100%]
    
    ======================================================= FAILURES =======================================================
    _________________________________________ [doctest] test.session_00001_line_7 __________________________________________
    015
    016     >>> sentences = parse()
    UNEXPECTED EXCEPTION: NameError("name 'parse' is not defined")
    Traceback (most recent call last):
      File "/home/emilstenstrom/.pyenv/versions/3.9.1/lib/python3.9/doctest.py", line 1336, in __run
        exec(compile(example.source, filename, "single",
      File "<doctest test.session_00001_line_7[0]>", line 1, in <module>
    NameError: name 'parse' is not defined
    /home/emilstenstrom/Projects/conllu/test.py:16: UnexpectedException
    =============================================== short test summary info ================================================
    FAILED test.py::test.session_00001_line_7
    ============================================= 1 failed, 1 passed in 0.02s ==============================================
    

    Tried the setup directive instead, with same result.

    Could you help me get this working? Thanks in advance!

    enhancement 
    opened by EmilStenstrom 7
  • Feature: mark skip in md file

    I found the skipping by counting a bit confusing, as each edit in the MD files requires re-counting blocks in the test generation command, which would already be included in some CI/CD... Also, having the indexes be numerical would be more robust against typos. A better alternative I would see is marking the skip block in the MD file itself, similar to what https://github.com/nschloe/exdown does with the custom label <!--exdown-skip-->. See their example:

    Lorem ipsum
    <!--exdown-skip-->
    ```python
    foo + bar  # not working
    ```
    dolor sit amet.
    
    enhancement 
    opened by Borda 7
  • Inject fixtures from conftest.py

    Hello! I might be doing something wrong, but it seems that all the fixtures I define in conftest.py for my generated tests, including those marked as autouse=True, are being ignored when I run the tests. I assume this is because of the namespace manipulation or something similar. Is there any way to keep these fixtures? Is there any way to pass the fixtures I want to keep, maybe as an option to the generation command?

    My use case is the following, please skip if irrelevant: there is a "basics" readme that helps users create some key entities, but the main readmes only reference it. Using the setup option does not work, since the code block is not in the same file. I thought about using some classic conftest fixtures but I see they are simply lost.

    Right now, for example I have the following code generated by phmdoctest: def test_code_66(managenamespace):

    If I manually edit it to def test_code_66(managenamespace, myfixture_from_conftest): then it works just as expected.

    I do not want to setup a preliminary block or something similar, I just want access to my "regular" fixtures on top of the ones phmdoctest introduces. Can you provide some help? Are these intentional design specs or is it a bug?

    Thank you in advance!

    enhancement 
    opened by calina-c 4
  • directory usage

    Hi, I like your project! My repository contains a docs/ directory with nested subdirectories of markdown files (documentation managed with Portray). It would be useful if there were a command to generate tests for an entire directory so that I don't have to call phmdoctest for each one individually. The simplest thing would probably be to output to a directory which directly mirrors the structure of the input directory.

    opened by garfieldnate 3
  • Consider using pytest dynamic test discovery

    Pytest plugins can easily generate tests dynamically during test discovery. This means that there should be no need to compile markdown files into pytests. When the plugin is installed it should only discover tests inside markdown files and report them back to pytest.

    I used a similar approach for exposing molecule tests as pytest tests and it worked quite nicely.

    The reason I am proposing this is because I do not want to add extra compilation steps before running pytest; it makes the development experience harder and more prone to mistakes (like forgetting to recompile them).

    enhancement 
    opened by ssbarnea 3
  • how to add pytest markers

    We mark our tests that use a network so that we can choose not to run them sometimes (because they are slow and require network connectivity). We use a custom pytest attribute @pytest.mark.network.

    Many of the code blocks in the documentation I'm writing also require a network connection to run, so I'd like to mark the output test methods with @pytest.mark.network. Is there a way to do this?

    enhancement 
    opened by garfieldnate 2
  • directive for merging cells

    In some of my documentation, I break up cells with explanation. Something like this:


    First, we import our methods and create the object we're going to work with:

    import Blah from foo
    b = Blah()
    

    Then we call calculate() to get our final answer:

    b.calculate(123)
    

    Kind of a weak example, but I hope you get it. I need a way to combine multiple cells into one; currently you have a way to make the code from a cell available to all other cells, but I don't see anything for just combining specific cells together.

    In the codeblocks project, this is the <!--pytest-codeblocks:cont--> directive.

    opened by garfieldnate 2
  • Simulator fails in Windows venv.

    • The phmdoctest test suite has failures when run in a Python venv on Windows.
    • When simulator.run_and_pytest() runs a subprocess to execute pytests the pytest module is not found.
    • This occurs on Windows OS where pytest is only installed in the venv and not in the system (user) Python installation.
    • The CI tests are not able to demonstrate the failure.

    The problem is fixed by changing the subprocess() arg list: "python" is replaced with sys.executable.

    bug 
    opened by tmarktaylor 2
  • Simulator runs subprocess when nothing to do.

    • The function simulator.run_and_pytest() launches a subprocess to run pytest even when the parameter pytest_options=None.
    • The subprocess should not be run.
    • This bug was introduced by version 1.2.0.
    • run_and_pytest() is part of the phmdoctest's documented Python API.
    • The bug does not affect the command line program phmdoctest.
    • It wastes some time and CPU cycles during phmdoctest continuous integration workflow runs.
    bug 
    opened by tmarktaylor 2
  • Handle inline output

    Hello, thanks for this project.

    Would it be possible to handle inline output? For example:

    ```python
    print("Hello world !")  #> Hello world !
    ```

    The expected output would be "Hello world !"

    I saw that phmdoctest doesn't do Python console >>>, ... Would you consider it anyway? BTW, I'd be happy to help

    enhancement 
    opened by art049 2
  • Poor advice on placing section in setup.cfg.

    In the documentation chapter "Using a configuration file" please disregard the example placing a [tool.phmdoctest] section in setup.cfg. Pytest 7 advises a [tool:*] (colon) and discourages use in setup.cfg. I suspect some other tools may use the colon to discover tool configuration sections in setup.cfg.

    For the next version of phmdoctest >1.4.0:

    • Add advice to prefer the .toml format, followed by .ini, and finally .cfg.
    • Allow [tool:phmdoctest] in *.cfg.
    • Allow [tool.phmdoctest] in *.cfg to remain backward compatible with phmdoctest 1.4.0.

    Note that the name of the configuration file is explicitly passed to phmdoctest in place of the Markdown file. There is no requirement to place the configuration section in an existing configuration file. It can be placed in a new file.

    documentation 
    opened by tmarktaylor 0