An analysis tool for Python that blurs the line between testing and type systems.

Overview

CrossHair

Join the chat at https://gitter.im/Cross_Hair/Lobby


THE LATEST NEWS:
Check out the new crosshair cover command, which finds inputs to get you code coverage.

If you have a function with type annotations and add a contract in a supported syntax, CrossHair will attempt to find counterexamples for you:

(Animated GIF demonstrating the verification of a Python function.)
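
Since the animation can't be embedded here, here is a minimal sketch of the idea using CrossHair's PEP 316 docstring syntax (the function and contract are illustrative, not taken from the GIF):

    def make_bigger(n: int) -> int:
        """post: __return__ > n"""
        return 2 * n

Running crosshair check on the file containing this function would report a counterexample such as n = 0, for which the postcondition fails.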

CrossHair works by repeatedly calling your functions with symbolic inputs. It uses an SMT solver (a kind of theorem prover) to explore viable execution paths and find counterexamples for you. This is not a new idea; a Python approach was first described in this paper. However, to my knowledge, CrossHair is the most complete implementation: it supports symbolic lists, dictionaries, sets, and custom classes.

Try CrossHair right now, in your browser, at crosshair-web.org!

CrossHair has IDE integrations for VS Code, PyCharm, and more.

Want to do me a favor? Sign up for email or RSS updates. There are other ways to help too.

Documentation

See https://crosshair.readthedocs.io/

Comments
  • Issue/119


    Working on #119 to improve the workflow while pre-commit is considered.

    I added isort as a check with optional overwriting, like black, and a few flake8 checks: E7 for statement checking, motivated by seeing lots of if X == None (not in CrossHair, but in general), and F60 because F62 doesn't look particularly relevant, but there are plenty of dict objects floating around, so I figured adding a safety check for keys might make some sense.

    I'm not sure what else ought to be added for your workflow, or perhaps just running defaults with some specific checks ignored might make more sense? I'm honestly not sure, looking forward to your thoughts!

    opened by pypeaday 25
  • Discover more branches by inspecting the AST


    What's the idea?

    Let's consider the simple function f, and imagine executing it with a symbolic value x=0:

    from typing import Optional

    def f(x: int) -> Optional[bool]:
        if x > 0:
            if x % 2:
                return True
            return False
        return None
    

    Because we execute if x > 0:, we discover both the False (taken) branch and its negation, but we don't discover the unexecuted inner branch if x % 2:. Execution isn't the only way to discover what code does, though - we could also read it, or write a program to analyse the abstract syntax tree of f:

    body=[
      If(
        test=Compare(left=Name(id='x', ctx=Load()), ops=[Gt()], comparators=[Num(n=0)]),  # We already found this one
        body=[
          If(
            test=BinOp(left=Name(id='x', ctx=Load()), op=Mod(), right=Num(n=2)),  # but *didn't* see this branch
            body=[Return(value=NameConstant(value=True))],
            orelse=[]),
          Return(value=NameConstant(value=False))
        ],
        orelse=[]),
      Return(value=NameConstant(value=None))
    ]
    

    The Fuzzing Book has a chapter on symbolic fuzzing for Python, which is both useful background reading and a good source of starter code!

    Why bother?

    A trivial answer is that finding more branches per executed example is faster, and who doesn't like improved performance? More seriously, I think this would allow CrossHair to support more programs, as well as improve how it tests those which are already supported.

    The current "concolic" analysis is fantastic, and handles cases such as dict-based control flow which cannot be inferred from a simple syntactic analysis (but doesn't see inner branches). On the other hand, this AST-based approach can handle cases where it's very difficult to patch in symbolic objects at runtime (but doesn't handle complicated scoping or control-flow issues).

    I think it's possible, and feasible, to combine these approaches at runtime using e.g. ast.parse(inspect.getsource(f)) and therefore to keep the strengths and work around the weaknesses of each 🙂
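
    A minimal sketch of the syntactic side (illustrative starter code, not CrossHair's implementation; it assumes Python 3.9+ for ast.unparse):

    import ast
    import inspect

    def branch_conditions(func):
        """Collect the test expression of every `if` statement in func's source."""
        tree = ast.parse(inspect.getsource(func))
        return [ast.unparse(node.test) for node in ast.walk(tree) if isinstance(node, ast.If)]

    # For the f above, this reports both 'x > 0' and 'x % 2' - including the
    # inner branch that an execution with x=0 never reaches.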

    enhancement big ideas 
    opened by Zac-HD 13
  • Switch to use pre-commit config YAML file


    Resolves #121 by swapping precommit.py for the standard .pre-commit-config.yaml file. I tried my best to replicate the existing actions performed by the old file, and also tried to fix any errors that popped out. There are a few remaining from some of the hooks, so please feel free to push any fixes for them directly to this PR.

    Setting this as a draft for now, if everything looks good I can undraft it.

    Let me know if you have any questions, or if there's anything I can explain!

    hacktoberfest-accepted 
    opened by tekktrik 12
  • Add support for TypedDict


    Is your feature request related to a problem? Please describe. CrossHair dies with an exception when encountering a PEP 589 TypedDict. See this crosshair-web example.

    Describe the solution you'd like Ideally, CrossHair would produce a symbolic value that conforms to the type specification. Note that this is complicated by the fact that TypedDict classes can inherit from each other, and by the total option. I don't think we need to care about the "alternative syntax," as CrossHair requires Python 3.7+ already.
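
    For reference, a hypothetical example of the kind of code this feature would enable CrossHair to analyze (the class, function, and contract below are illustrative, not from the issue; typing.TypedDict needs Python 3.8+, or typing_extensions on 3.7):

    from typing import TypedDict

    class Movie(TypedDict, total=False):
        title: str
        year: int

    def describe(movie: Movie) -> str:
        """post: __return__ != ''"""
        return movie.get("title", "unknown")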

    enhancement 
    opened by pschanely 11
  • Group examples by kind and result


    This patch groups the examples by kind (PEP 316, icontract) and outcome (expected success, expected failure).

    Additionally, the patch introduces a script to run the functional tests and verify that the captured output and the exit code of the check command do not deviate from the expected output and exit code.

    opened by mristin 10
  • Importing spacy fails


    Expected vs actual behavior In a similar vein to #159, I found another package which does not import properly: spacy.

    To Reproduce Just including import spacy produces this error for crosshair watch:

    Could not import your code:
    
    Traceback (most recent call last):
      File "/path/to/my/conda/env/lib/python3.9/site-packages/crosshair/util.py", line 375, in load_file
        return import_module(module_name)
      File "/path/to/my/conda/env/lib/python3.9/site-packages/crosshair/util.py", line 343, in import_module
        result_module = importlib.import_module(module_name)
      File "/path/to/my/conda/env/lib/python3.9/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
      File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
      File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 850, in exec_module
      File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
      File "/mnt/macx/data/datasets/plan_extraction.py", line 4, in <module>
        import spacy
      File "/path/to/my/conda/env/lib/python3.9/site-packages/spacy/__init__.py", line 11, in <module>
        from thinc.api import prefer_gpu, require_gpu, require_cpu  # noqa: F401
      File "/path/to/my/conda/env/lib/python3.9/site-packages/thinc/api.py", line 2, in <module>
        from .initializers import normal_init, uniform_init, glorot_uniform_init, zero_init
      File "/path/to/my/conda/env/lib/python3.9/site-packages/thinc/initializers.py", line 4, in <module>
        from .backends import Ops
      File "/path/to/my/conda/env/lib/python3.9/site-packages/thinc/backends/__init__.py", line 7, in <module>
        from .ops import Ops
      File "/path/to/my/conda/env/lib/python3.9/site-packages/thinc/backends/ops.py", line 13, in <module>
        from ..util import get_array_module, is_xp_array, to_numpy
      File "/path/to/my/conda/env/lib/python3.9/site-packages/thinc/util.py", line 48, in <module>
        import tensorflow.experimental.dlpack
      File "/path/to/my/conda/env/lib/python3.9/site-packages/tensorflow/__init__.py", line 37, in <module>
        from tensorflow.python.tools import module_util as _module_util
      File "/path/to/my/conda/env/lib/python3.9/site-packages/tensorflow/python/__init__.py", line 104, in <module>
        from tensorflow.python.platform import test
      File "/path/to/my/conda/env/lib/python3.9/site-packages/tensorflow/python/platform/test.py", line 20, in <module>
        from tensorflow.python.framework import test_util as _test_util
      File "/path/to/my/conda/env/lib/python3.9/site-packages/tensorflow/python/framework/test_util.py", line 33, in <module>
        from absl.testing import parameterized
      File "/path/to/my/conda/env/lib/python3.9/site-packages/absl/testing/parameterized.py", line 218, in <module>
        from absl.testing import absltest
      File "/path/to/my/conda/env/lib/python3.9/site-packages/absl/testing/absltest.py", line 242, in <module>
        get_default_test_tmpdir(),
      File "/path/to/my/conda/env/lib/python3.9/site-packages/absl/testing/absltest.py", line 180, in get_default_test_tmpdir
        tmpdir = os.path.join(tempfile.gettempdir(), 'absl_testing')
      File "/path/to/my/conda/env/lib/python3.9/tempfile.py", line 287, in gettempdir
        tempdir = _get_default_tempdir()
      File "/path/to/my/conda/env/lib/python3.9/tempfile.py", line 198, in _get_default_tempdir
        fd = _os.open(filename, _bin_openflags, 0o600)
      File "/path/to/my/conda/env/lib/python3.9/site-packages/crosshair/auditwall.py", line 145, in audithook
        handler(event, args)
      File "/path/to/my/conda/env/lib/python3.9/site-packages/crosshair/auditwall.py", line 44, in check_open
        raise SideEffectDetected(
    crosshair.auditwall.SideEffectDetected: We've blocked a file writing operation on "/tmp/znr8t36b". CrossHair should not be run on code with side effects
    
    opened by rasenmaeher92 7
  • Consider other precommit approaches


    Hello, I briefly looked over your repo and I'm unfamiliar with using a .py file for precommit... is there any reason you don't configure pre-commit with a .pre-commit-config.yaml and then run pre-commit run all or have a hook that runs pre-commit for you? A custom .py file feels like a lot of unnecessary customization at first glance... I'm not an expert here by the way, so if there's good reason I'm more than happy to be wrong. If though you'd like to transition to a pre-commit config I'd be more than happy to help close this issue with one. Cheers.

    Originally posted by @nicpayne713 in https://github.com/pschanely/CrossHair/issues/119#issuecomment-939202603

    opened by pschanely 7
  • Constructor compatibility for CrossHair symbolics


    Expected vs actual behavior

    CrossHair symbolic types cannot be constructed the way their native counterparts are (they take arguments like the z3 identifier for a symbolic, or a z3 expression). Mostly that's fine, but it causes trouble when the code under analysis assumes it can take the type of a value and construct a new value of the same type, e.g. type(listarg)([1, 2, 3]).

    This will create spurious errors, like in this crosshair playground example.
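
    A hypothetical function in the same shape as the playground example (the name and contract are mine, not from the issue):

    from typing import List

    def rebuild(listarg: List[int]) -> List[int]:
        """post: __return__ == [1, 2, 3]"""
        # Under analysis, type(listarg) is a CrossHair symbolic class whose
        # constructor doesn't accept a plain list, so this reports spurious
        # errors instead of behaving like list([1, 2, 3]).
        return type(listarg)([1, 2, 3])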

    There are a couple of broad approaches for solving this problem, but I'm not ready to pull the trigger yet on those. One promising avenue is to patch type(), but this requires us to be very strict about what parts of the codebase should and shouldn't be patched - it's a bit of work.

    opened by pschanely 7
  • (option to) Treat leading assertions as preconditions


    Minimising crosshair-specific configuration maximises ease of experimentation and adoption. To that end, I'd like to propose that leading assertions in a function could be treated as preconditions rather than invariants. An example:

    def fib(n: int) -> int:
        assert n >= 1       # treated like `pre: n >= 1`
        ...                 # calculate the result here
        assert result >= 1  # Checked as usual
        return result
    

    Such leading assertions currently must hold - because no work is done before they are checked, they are logically equivalent to preconditions. Treating them as such would allow preconditions to be checked at runtime without duplicating them, and synergises well with #22 to support precise checks without any crosshair-specific configuration.
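
    For comparison, the explicit form that the leading assertion would stand in for, written in PEP 316 docstring syntax (a hypothetical rewrite of the fib sketch above, with the body elided as in the original):

    def fib(n: int) -> int:
        """
        pre: n >= 1
        post: __return__ >= 1
        """
        ...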

    enhancement 
    opened by Zac-HD 7
  • Add pre-commit


    For review... this is an initial stab at migrating your precommit.py file to the pre-commit git hook. It can be run by first installing pre-commit with pip install pre-commit, then running pre-commit install; after that you can run it with pre-commit run, and since it's a git hook it will also run on commit (which can be bypassed with --no-verify).

    The pre-commit docs are pretty solid, and in the spirit of Hacktoberfest I just thought I'd see if you might like to migrate.

    Note: doctest and pytest aren't included here yet, but I think we could work them in if desired.

    Reasons: the config is much more extensible and more easily worked with than a custom .py file, and there's wide support for numerous hooks.

    Let me know what you think!

    opened by pypeaday 6
  • Symbolic string method support


    Today, many string methods cause symbolic strings to be "materialized" (the symbolic string becomes a concrete string and then gets handed to the native string implementation).

    For some of these operations, this is likely the best we can do, but others may be implementable with the SMT solver directly. This is a list of those methods, and we'll check them off when we've decided that we have the implementation we want. For each method, we'll decide to:

    1. Leave as-is, with materialization in place.
    2. Implement the method symbolically, using Z3.
    3. Implement the method in terms of other string methods that have symbolic implementations.

    Note further that direct unicode support in Z3 isn't yet implemented. We might want to defer spending time on this until that's resolved or until we implement a workaround (as a sequence of bitvectors ... or a massive enum?)
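
    As an illustration of option 2, here is a minimal sketch of how a method like startswith could map onto a native Z3 constraint (this assumes the z3-solver package and is not CrossHair's actual implementation):

    import z3

    s = z3.String("s")
    solver = z3.Solver()
    # str.startswith("ab") becomes a PrefixOf constraint over the symbolic string.
    solver.add(z3.PrefixOf(z3.StringVal("ab"), s), z3.Length(s) > 5)
    print(solver.check())     # sat
    print(solver.model()[s])  # some string that starts with "ab" and is longer than 5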

    • [x] def capitalize(self)
    • [x] def casefold(self)
    • [x] def center(self, width, *args)
    • [x] def count(self, sub, start=0, end=sys.maxsize)
    • [x] def encode(self, encoding=_MISSING, errors=_MISSING)
    • [x] def endswith(self, suffix, start=0, end=sys.maxsize)
    • [x] def expandtabs(self, tabsize=8)
    • [x] def find(self, sub, start=0, end=sys.maxsize)
    • [x] def format(self, *args, **kwds)
    • [x] def format_map(self, mapping)
    • [x] def index(self, sub, start=0, end=sys.maxsize)
    • [x] def isalpha(self)
    • [x] def isalnum(self)
    • [x] def isascii(self)
    • [x] def isdecimal(self)
    • [x] def isdigit(self)
    • [x] def isidentifier(self)
    • [x] def islower(self)
    • [x] def isnumeric(self)
    • [x] def isprintable(self)
    • [x] def isspace(self)
    • [x] def istitle(self)
    • [x] def isupper(self)
    • [x] def join(self, seq)
    • [x] def ljust(self, width, *args)
    • [x] def lower(self)
    • [x] def lstrip(self, chars=None)
    • [x] def partition(self, sep)
    • [x] def replace(self, old, new, maxsplit=-1)
    • [x] def rfind(self, sub, start=0, end=sys.maxsize)
    • [x] def rindex(self, sub, start=0, end=sys.maxsize)
    • [x] def rjust(self, width, *args)
    • [x] def rpartition(self, sep)
    • [x] def rsplit(self, sep=None, maxsplit=-1)
    • [x] def rstrip(self, chars=None)
    • [x] def split(self, sep=None, maxsplit=-1)
    • [x] def splitlines(self, keepends=False)
    • [x] def startswith(self, prefix, start=0, end=sys.maxsize)
    • [x] def strip(self, chars=None)
    • [x] def swapcase(self)
    • [x] def title(self)
    • [x] def translate(self, *args)
    • [x] def upper(self)
    • [x] def zfill(self, width)
    • [x] def removeprefix(self, prefix: str)
    • [x] def removesuffix(self, suffix: str)
    enhancement help wanted good first issue 
    opened by pschanely 6
  • `None` in `max` and `min` goes undetected


    Consider the following program (a solution to https://www.hackerrank.com/challenges/mini-max-sum/problem):

    from typing import List, Tuple, Optional
    
    from icontract import require, ensure
    
    
    @require(lambda numbers: all(1 <= number <= 10**9 for number in numbers))
    @require(lambda numbers: 2 <= len(numbers) < 1000)
    @ensure(lambda numbers, result: 1 <= result[0] < sum(numbers))
    @ensure(lambda numbers, result: 1 <= result[1] < sum(numbers))
    @ensure(lambda result: result[0] <= result[1])
    def slow_and_simple_find_min_max(numbers: List[int]) -> Tuple[int, int]:
        """
        >>> slow_and_simple_find_min_max([1, 2, 3, 4, 5])
        (10, 14)
        """
        min_sum = None  # type: Optional[int]
        max_sum = None  # type: Optional[int]
        for i in range(len(numbers)):
            a_sum = 0
            for j in range(len(numbers)):
                if i == j:
                    continue
    
                a_sum += numbers[j]
    
            min_sum = min(a_sum, min_sum)
            max_sum = max(a_sum, max_sum)
    
        return min_sum, max_sum
    

    CrossHair 0.0.34 fails to figure out that putting a None in min or max is going to crash the program. The doctest fails with an exception, as expected.

    The correct program looks like this (mind the lines min_sum = ... and max_sum = ...):

    from typing import List, Tuple, Optional
    
    from icontract import require, ensure
    
    
    @require(lambda numbers: all(1 <= number <= 10**9 for number in numbers))
    @require(lambda numbers: 2 <= len(numbers) < 1000)
    @ensure(lambda numbers, result: 1 <= result[0] < sum(numbers))
    @ensure(lambda numbers, result: 1 <= result[1] < sum(numbers))
    @ensure(lambda result: result[0] <= result[1])
    def slow_and_simple_find_min_max(numbers: List[int]) -> Tuple[int, int]:
        """
        >>> slow_and_simple_find_min_max([1, 2, 3, 4, 5])
        (10, 14)
        """
        min_sum = None  # type: Optional[int]
        max_sum = None  # type: Optional[int]
        for i in range(len(numbers)):
            a_sum = 0
            for j in range(len(numbers)):
                if i == j:
                    continue
    
                a_sum += numbers[j]
    
            min_sum = min(a_sum, min_sum) if min_sum is not None else a_sum
            max_sum = max(a_sum, max_sum) if max_sum is not None else a_sum
    
        return min_sum, max_sum
    

    I ran CrossHair 0.0.34 with:

    crosshair  check --analysis_kind icontract .\playground\exercise_02.py
    

    Letting CrossHair in watch mode run longer did not help either:

    crosshair  watch --analysis_kind icontract .\playground\exercise_02.py
    
    opened by mristin 1
  • Do not attempt to short-circuit calls with concrete arguments


    Noted by @petrusboniatus in https://github.com/pschanely/CrossHair/discussions/187#discussioncomment-4209603

    One thing that came as a shock to me is that without precomputing the hash outside the function it does not end ...

    If you make many calls (in this case ~30) to short-circuit-able functions like hash(), it's likely that at least one of them will short-circuit. But we want at least some executions that avoid them all (and probably some that use them all). In this case it's especially sad, because all the arguments are concrete. At the very least, we should never short-circuit when given concrete arguments.

    opened by pschanely 0
  • Re-investigate string and sequence solvers


    Noted in #187:

    Hello, I was digging into this, trying to isolate the problem, and I think part of the problem might be how the 'in' operator is mapped to z3.

    Crosshair watch returns something on this function after 55 seconds, meaning (if I understand this part correctly) that it takes that long to generate an example satisfying the preconditions. Because the postcondition is never met, it should fail on the first example. This also happens with dictionaries and sets.

    from typing import List

    def keys_in_an_array(arr: List[str]) -> bool:
        """
        pre: "a" in arr
        pre: "b" in arr
        pre: "c" in arr
        pre: "d" in arr
        pre: "e" in arr
        post: __return__ == True
        """
        return False if "a" in arr else True
    

    But solving the same constraints in z3 takes 0.1s

    In [15]: import z3
        ...: import time
        ...: start_time = time.time()
        ...: s = z3.Solver()
        ...: sseq = z3.Const('sseq', z3.SeqSort(z3.StringSort()))
        ...: s.add(z3.Contains(sseq, z3.Unit(z3.StringVal("a"))))
        ...: s.add(z3.Contains(sseq, z3.Unit(z3.StringVal("b"))))
        ...: s.add(z3.Contains(sseq, z3.Unit(z3.StringVal("c"))))
        ...: s.add(z3.Contains(sseq, z3.Unit(z3.StringVal("d"))))
        ...: s.add(z3.Contains(sseq, z3.Unit(z3.StringVal("e"))))
        ...: s.add(z3.Contains(sseq, z3.Unit(z3.StringVal("f"))))
        ...: s.check()
        ...: print(s.model())
        ...: print("--- %s seconds ---" % (time.time() - start_time))
    [sseq = Concat(Unit("c"),
                   Concat(Unit(""),
                          Concat(Unit("f"),
                                 Concat(Unit(""),
                                        Concat(Unit("d"),
                                            Concat(Unit(""),
                                            Concat(Unit("e"),
                                            Concat(Unit(""),
                                            Concat(Unit("a"),
                                            Concat(Unit(""),
                                            Unit("b")))))))))))]
    --- 0.01843857765197754 seconds ---
    

    Sorry if I misunderstood something, I am still trying to get my head around the internals of it.

    Originally posted by @petrusboniatus in https://github.com/pschanely/CrossHair/discussions/187#discussioncomment-4189852

    opened by pschanely 7
  • Generated test suite with cover option contains failing tests


    Expected vs actual behavior

    For programs of a certain complexity, the cover option can be made to generate unit tests (it may require more time, but it generates them). However, once these tests are run, I get assertion failures, as if the input parameters calculated for an execution path did not actually lead to the assertion passing.

    To Reproduce I created the git repo https://github.com/azewiusz/for-crosshair, where I describe how to reproduce this problem. I'm using version 0.0.32 of crosshair-tool.

    opened by azewiusz 5
  • Extending native types doesn't work as expected


    As shown in this example, extending native types (that CrossHair cares about) doesn't work as expected.

    I suspect there may be multiple layers of problems, the first of which is that a correct constructor signature is harder to deduce for these types.
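
    Since the linked example isn't reproduced here, a hypothetical illustration of the pattern (the class and contract are mine, not from the issue):

    class Stack(list):
        def peek(self) -> int:
            return self[-1]

    def top_is_positive(s: Stack) -> bool:
        """
        pre: len(s) > 0
        post: __return__ == (s[-1] > 0)
        """
        return s.peek() > 0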

    cc @mristin

    opened by pschanely 2
  • Independent blog posts about CrossHair


    Are you a Hacktoberfest contributor? Or want to be?

    A variety of no-code and low-code contributions count, like blog posts! Perhaps while you're working on some Python (maybe for another Hacktoberfest project!), you'd try out CrossHair and do a little write-up about the experience: what you tried, what worked, what didn't, etc.

    Reach out to me by email or on gitter and I'll ensure you get credit, and link to your blog from the CrossHair docs!

    help wanted good first issue Hacktoberfest 
    opened by pschanely 3
Releases: 0.0.36

Owner
Phillip Schanely
Mostly working on "CrossHair": easy SMT, fuzzing, & verification for Python.