Yappi

Yet Another Python Profiler, but this time thread&coroutine&greenlet aware.

Highlights

  • Fast: Yappi is fast. It is completely written in C and lots of love&care went into making it fast.
  • Unique: Yappi supports multithreaded, asyncio and gevent profiling. Tagging/filtering multiple profiler results has interesting use cases.
  • Intuitive: The profiler can be started/stopped and results can be obtained at any time and from any thread.
  • Standards Compliant: Profiler results can be saved in callgrind or pstat formats.
  • Rich feature set: Profiler results can show either Wall Time or actual CPU Time and can be aggregated from different sessions. Various flags are defined for filtering and sorting profiler results.
  • Robust: Yappi has seen years of production usage.

Motivation

CPython's standard distribution comes with three deterministic profilers: cProfile, Profile and hotshot. cProfile is implemented as a C module based on lsprof, Profile is in pure Python and hotshot can be seen as a small subset of cProfile. The major issue is that all of these profilers lack support for multi-threaded programs and CPU time.

If you want to profile a multi-threaded application, you must give an entry point to these profilers and then maybe merge the outputs. None of these profilers are designed to work on long-running multi-threaded applications. It is also not possible to start/stop the profiler and retrieve traces on the fly with these profilers.

Now fast forward to 2019: with the latest improvements in the asyncio library and asynchronous frameworks, most current profilers lack the ability to show correct wall/CPU time or even call count information per coroutine. Thus we need a different kind of approach to profile asynchronous code. Yappi, with v1.2, introduces the concept of coroutine profiling. With coroutine profiling, you can profile the correct wall/CPU time and call count of your coroutines (including the time spent in context switches). You can see details here.

Installation

Yappi can be installed via PyPI:

$ pip install yappi

or directly from source:

$ pip install git+https://github.com/sumerc/yappi#egg=yappi

Examples

A simple example:

import yappi

def a():
    for _ in range(10000000):  # do something CPU heavy
        pass

yappi.set_clock_type("cpu") # Use set_clock_type("wall") for wall time
yappi.start()
a()

yappi.get_func_stats().print_all()
yappi.get_thread_stats().print_all()
'''

Clock type: CPU
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg      
doc.py:5 a                            1      0.117907  0.117907  0.117907

name           id     tid              ttot      scnt        
_MainThread    0      139867147315008  0.118297  1
'''
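
The collected stats can also be exported for external tools, per the Standards Compliant highlight above. A minimal sketch, assuming yappi's YFuncStats.save method and its "callgrind"/"pstat" output types (the file names are arbitrary):

stats = yappi.get_func_stats()
stats.save("yappi.callgrind", type="callgrind")  # open with KCachegrind / QCacheGrind
stats.save("yappi.pstat", type="pstat")          # load later with pstats.Stats("yappi.pstat")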

Profile a multithreaded application:

You can profile a multithreaded application with Yappi and easily retrieve per-thread profile information by filtering on ctx_id with the get_func_stats API.

import yappi
import time
import threading

_NTHREAD = 3


def _work(n):
    time.sleep(n * 0.1)


yappi.start()

threads = []
# generate _NTHREAD threads
for i in range(_NTHREAD):
    t = threading.Thread(target=_work, args=(i + 1, ))
    t.start()
    threads.append(t)
# wait all threads to finish
for t in threads:
    t.join()

yappi.stop()

# retrieve thread stats by their thread id (given by yappi)
threads = yappi.get_thread_stats()
for thread in threads:
    print(
        "Function stats for (%s) (%d)" % (thread.name, thread.id)
    )  # it is the Thread.__class__.__name__
    yappi.get_func_stats(ctx_id=thread.id).print_all()
'''
Function stats for (Thread) (3)

name                                  ncall  tsub      ttot      tavg
..hon3.7/threading.py:859 Thread.run  1      0.000017  0.000062  0.000062
doc3.py:8 _work                       1      0.000012  0.000045  0.000045

Function stats for (Thread) (2)

name                                  ncall  tsub      ttot      tavg
..hon3.7/threading.py:859 Thread.run  1      0.000017  0.000065  0.000065
doc3.py:8 _work                       1      0.000010  0.000048  0.000048


Function stats for (Thread) (1)

name                                  ncall  tsub      ttot      tavg
..hon3.7/threading.py:859 Thread.run  1      0.000010  0.000043  0.000043
doc3.py:8 _work                       1      0.000006  0.000033  0.000033
'''

Different ways to filter/sort stats:

You can use the filter_callback argument of the get_func_stats API to filter on functions, modules or anything else available in the YFuncStat object.

import package_a
import yappi
import sys

def a():
    pass

def b():
    pass

yappi.start()
a()
b()
package_a.a()
yappi.stop()

# filter by module object
current_module = sys.modules[__name__]
stats = yappi.get_func_stats(
    filter_callback=lambda x: yappi.module_matches(x, [current_module])
)  # x is a yappi.YFuncStat object
stats.sort("name", "desc").print_all()
'''
Clock type: CPU
Ordered by: name, desc

name                                  ncall  tsub      ttot      tavg
doc2.py:10 b                          1      0.000001  0.000001  0.000001
doc2.py:6 a                           1      0.000001  0.000001  0.000001
'''

# filter by function object
stats = yappi.get_func_stats(
    filter_callback=lambda x: yappi.func_matches(x, [a, b])
).print_all()
'''
name                                  ncall  tsub      ttot      tavg
doc2.py:6 a                           1      0.000001  0.000001  0.000001
doc2.py:10 b                          1      0.000001  0.000001  0.000001
'''

# filter by module name
stats = yappi.get_func_stats(filter_callback=lambda x: 'package_a' in x.module
                             ).print_all()
'''
name                                  ncall  tsub      ttot      tavg
package_a/__init__.py:1 a             1      0.000001  0.000001  0.000001
'''

# filter by function name
stats = yappi.get_func_stats(filter_callback=lambda x: 'a' in x.name
                             ).print_all()
'''
name                                  ncall  tsub      ttot      tavg
doc2.py:6 a                           1      0.000001  0.000001  0.000001
package_a/__init__.py:1 a             1      0.000001  0.000001  0.000001
'''
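
As the Highlights mention, results can also be tagged and then filtered per tag. A minimal sketch, assuming yappi's set_tag_callback API and the tag argument of get_func_stats (the tag value used here is arbitrary):

import yappi

def tag_callback():
    # called by yappi to obtain an integer tag for the currently running code;
    # a real application might derive this from a request id or a contextvar
    return 1

yappi.set_tag_callback(tag_callback)
yappi.start()
# ... run the code you want to profile ...
yappi.stop()

# retrieve only the stats that were recorded under tag 1
yappi.get_func_stats(tag=1).print_all()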

Profile an asyncio application:

You can see that coroutine wall times are correctly profiled.

import asyncio
import yappi

async def foo():
    await asyncio.sleep(1.0)
    await baz()
    await asyncio.sleep(0.5)

async def bar():
    await asyncio.sleep(2.0)

async def baz():
    await asyncio.sleep(1.0)

yappi.set_clock_type("WALL")
with yappi.run():
    asyncio.run(foo())
    asyncio.run(bar())
yappi.get_func_stats().print_all()
'''
Clock type: WALL
Ordered by: totaltime, desc

name                                  ncall  tsub      ttot      tavg      
doc4.py:5 foo                         1      0.000030  2.503808  2.503808
doc4.py:11 bar                        1      0.000012  2.002492  2.002492
doc4.py:15 baz                        1      0.000013  1.001397  1.001397
'''

Profile a gevent application:

You can use yappi to profile greenlet applications now!

import yappi
from greenlet import greenlet
import time

class GreenletA(greenlet):
    def run(self):
        time.sleep(1)

yappi.set_context_backend("greenlet")
yappi.set_clock_type("wall")

yappi.start(builtins=True)
a = GreenletA()
a.switch()
yappi.stop()

yappi.get_func_stats().print_all()
'''
name                                  ncall  tsub      ttot      tavg
tests/test_random.py:6 GreenletA.run  1      0.000007  1.000494  1.000494
time.sleep                            1      1.000487  1.000487  1.000487
'''

Documentation

Related Talks

Special thanks to A.Jesse Jiryu Davis:

PyCharm Integration

Yappi is the default profiler in PyCharm. If you have Yappi installed, PyCharm will use it. See the official documentation for more details.

Comments
  • Coroutine-aware wall-time profiling

    For the sake of profiling ASGI apps, it would be really helpful if there was a way to make use of python 3.7's contextvars while filtering via yappi.get_func_stats. If yappi supported this, you could get per-request-response-cycle profiling through an ASGI middleware.

    I'm not sure if the necessary functionality already exists or not, but wanted to open an issue in case anyone else has thought about this, or in case someone is aware of an existing way to accomplish this.

    See https://github.com/tiangolo/fastapi/issues/701 for more context on how we'd like to use this.
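
    One possible shape for this, sketched with yappi's tag-callback API (set_tag_callback) and a hypothetical request_id contextvar set by an ASGI middleware; this is not necessarily the solution that was eventually adopted:

    import contextvars
    import yappi

    request_id = contextvars.ContextVar("request_id", default=0)

    def tag_from_context():
        # called by yappi on each profile event; groups stats per request
        return request_id.get()

    yappi.set_tag_callback(tag_from_context)
    yappi.start()
    # ... inside the middleware: request_id.set(some_unique_int) per request ...
    # afterwards, fetch the stats for one request/response cycle, e.g. tag 42:
    yappi.get_func_stats(tag=42).print_all()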

    1.2 
    opened by dmontagu 46
  • Make yappi grok greenlets

    Proposal to make yappi grok greenlets

    This PR is a proposal to make yappi understand greenlets. The change allows yappi to map each greenlet to a unique execution context. Further, it also adjusts CPU stats measured to account for interleaving greenlet execution on one thread.

    Changes required

    The following changes are required by the fix:

    • ensure _profile_thread is called only from the target thread to be profiled.

      _profile_thread invokes the _current_context_id operation which retrieves the execution context of a thread.

      Using _current_context_id to retrieve the context information from another thread is not guaranteed to work and is hard to get right when yappi is configured with set_context_id_callback because the callback is not passed the information about what thread it has to fetch information for.

      It is also too hard to achieve the above when using greenlets because there is no obvious way to retrieve the greenlet currently executing on another thread, i.e. there is no simple way to retrieve the greenlet currently executing on threadA from threadB.

      To get around this, this change guarantees that _profile_thread and _ensure_thread_profiled is only called from the target thread by removing their invocations from the 'start' sequence when 'multithreading' is enabled and delaying it. This is done by attaching a separate profiler func callback (profile_event and profile_event_c) to the threads. These functions first run _ensure_thread_profiled in the target thread context and change the profiler func callback to _yapp_callback. This way the work done by the start sequence is reduced and _ensure_thread_profiled is delayed to the first time a callback is invoked on the target thread.

    • add set_ctx_backend and allow specifying threading backend as greenlets as suggested by @sumerc

    • modify _current_context_id to identify the greenlet currently being executed when backend is greenlets. This is done the same way as it is for threads, except here we use the dictionary of the greenlet rather than that of the thread.

    • account for interleaving greenlet executions on the same native thread

      greenlet is a co-operative multitasking framework that introduces user-controlled threads called greenlets. Each native Python thread can run a number of greenlets. Greenlets on the same thread co-operatively switch between one another using greenlet.switch. It is important to note that a greenlet can only execute on the thread it was spawned on. As a result, greenlets spawned on one thread can only switch between each other.

      Greenlets do not play well with yappi. Yappi uses thread-local stats to track the CPU usage of each frame. It tracks the usage at the entry and exit of each function and subtracts the values to get the total clock cycles spent in that frame. This does not work well with greenlets. Let's take an example of two greenlets A and B and consider the following sequence of events.

      • greenletA is currently running
      • greenletA enters function foo
      • greenletA co-operatively switches to greenletB
      • greenletB consumes 1 second of CPU
      • greenletB switches over back to greenletA
      • greenletA exits function foo

      The CPU usage of function foo will be 1 second more than it should be because greenletB's execution is interleaved (see the standalone sketch after this list).

      To account for this we have to track the context switches and adjust stats. The solution implemented in this change is a derivative of https://github.com/ajdavis/GreenletProfiler with some modifications to account for multiple native threads

      To track context switches:

      • declare a new thread local variable tl_prev_ctx. This needs to be thread local since multiple threads can exist. Greenlet context switches should be tracked at a thread level rather than at a global level.
      • tl_prev_ctx is initialised in _ensure_thread_profiled. It is initialised to the greenlet currently running on that thread
      • use tl_prev_ctx in _yapp_callback to determine if the context changed since the last time it was set.

      To adjust stats, if a switch is noticed from prev->current, then:

      • pause prev. Store the time it was paused under the paused_at field of the context associated with prev
      • resume the current greenlet. use the paused_at field and the current value of tickcount to determine how long the greenlet was paused for. Since a greenlet can only execute on one thread, subtracting these values is an accurate measure.

    NOTE: Another way to track context switches is by using greenlet.settrace (see https://greenlet.readthedocs.io/en/latest/). However installing this trace function for each native thread and uninstalling it did not seem easy to implement.
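
    For illustration, a standalone sketch of the interleaving problem described above, using raw greenlets and naive entry/exit sampling of process CPU time (approximate timings; this is plain Python, not yappi internals):

    import time
    from greenlet import greenlet

    def foo():
        t0 = time.process_time()     # naive accounting: sample CPU time on entry
        gr_b.switch()                # co-operatively switch to greenletB
        t1 = time.process_time()     # ...and sample again on exit
        print("foo apparently used %.2fs of CPU" % (t1 - t0))  # ~1s, all of it greenletB's work

    def bar():
        t0 = time.process_time()
        while time.process_time() - t0 < 1.0:  # burn ~1s of CPU on the same thread
            pass
        gr_a.switch()                # switch back to greenletA

    gr_a = greenlet(foo)
    gr_b = greenlet(bar)
    gr_a.switch()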

    What is remaining?

    This is not a complete fix, here's what is remaining:

    • make this work for wall clock time or document limitations of wall clock time measurement
    • make declaration of tl_prev_ctx platform independent. __thread is not portable to Windows AFAIK
    • fix build system to install greenlet before compilation of the yappi C extension. greenlet.h is required by yappi.
    • enumerate and add more test cases
    • there are a few TODOs added within the code. They are to be completed before closing. Some of them require discussion with @sumerc for a better understanding.

    Tests performed

    • The tests provided under https://github.com/sumerc/yappi/pull/53 have been modified to test correct CPU usage measurement
    • The script provided by @sumerc under https://github.com/sumerc/yappi/pull/53 was run. Here are the results:
    import gevent
    import yappi
    
    def burn_cpu(sec):
        t0 = yappi.get_clock_time()
        elapsed = 0
        while (elapsed < sec):
            for _ in range(1000):
                pass
            elapsed = yappi.get_clock_time() - t0
    
    
    def foo():
        burn_cpu(0.1)
        gevent.sleep(1.0)
    
    
    def bar():
        burn_cpu(0.1)
        gevent.sleep(1.0)
    
    
    #yappi.set_clock_type("wall")
    yappi.set_ctx_backend("greenlet")
    yappi.start()
    g1 = gevent.spawn(foo)
    g2 = gevent.spawn(bar)
    gevent.wait([g1, g2])
    
    yappi.stop()
    yappi.get_func_stats(
        filter_callback=lambda x: yappi.func_matches(x, [foo, bar])
    ).print_all()
    
    Clock type: CPU
    Ordered by: totaltime, desc
    
    name                                  ncall  tsub      ttot      tavg
    rand_test.py:19 bar                   1      0.000032  0.100202  0.100202
    rand_test.py:14 foo                   1      0.000040  0.100171  0.100171
    
    • automated tests were executed with both python2.7 and python3.8
    opened by Suhail-MOHD 40
  • Feature request: Filter stats by function descriptor

    As part of an application profiler plugin project for the ASGI protocol, where users can just pass a list of functions to profile, I was using a simple yappi.get_func_stats({"name": somefunction.__qualname__, "tag": ctx_tag}) filter, thinking it would be enough to unambiguously select those functions. I later found out that __qualname__ is only unambiguous within the scope of a module, so I should in all likelihood specify somefunction.__module__ as part of the filter as well, but that got me to look at the code yappi uses to identify the functions being called:

    https://github.com/sumerc/yappi/blob/6c97f55ae69a0979bdea8e557510a5d349da340c/yappi/_yappi.c#L546-L567

    It uses ml_name as the function name, which apparently corresponds to __name__, not __qualname__ like I initially figured yappi would use, and has a few edge-cases for both function and module name that could make it difficult/unreliable to build the correct filter automatically.

    Shouldn't yappi have some kind of filter_from_function_descriptor function to generate a detailed filter from a function descriptor automatically, instead of leaving users guessing as to whether they're accidentally profiling other functions with the same name as the one they want?
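
    For reference, the func_matches helper shown in the README examples above compares stats against actual function objects rather than names, which sidesteps the __qualname__ ambiguity. A minimal sketch (somefunction here is a stand-in for the function being profiled):

    import yappi

    def somefunction():  # stand-in for the function you actually want to profile
        pass

    yappi.start()
    somefunction()
    yappi.stop()

    stats = yappi.get_func_stats(
        filter_callback=lambda x: yappi.func_matches(x, [somefunction])
    )
    stats.print_all()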

    opened by sm-Fifteen 26
  • How to get a callgraph including `asyncio.gather`?

    Coroutines being run via asyncio.gather do not show up in the callgraph for the calling function.

    asyncio.gather returns a Future gathering the results from the provided coroutines. Timings are correct (up to the caveats in #21) but the callgraph only shows the creation of the gathering future. The caller for the coroutines run via gather is the event loop.

    Is there any way to provide hints to yappi to change the caller for the coroutines?

    Example:

    from asyncio import gather, run, sleep
    import yappi
    
    async def aio_worker(id):
        await sleep(1.)
        return id
    
    async def doit():
        task1 = aio_worker(1)
        task2 = aio_worker(2)
    
        return await gather(task1, task2)
    
    if __name__ == '__main__':
        yappi.set_clock_type('wall')
        with yappi.run(builtins=True):
            run(doit())
    
        stats = yappi.get_func_stats()
    
        print("\n\nCallers of 'aio_worker'")
        ps = yappi.convert2pstats(stats)
        ps.print_callees('doit')
        ps.print_callees('gather')  # <- this schedules a future collecting results, only see its __init__ in callees
    
        ps.print_callers('aio_worker')  # <- only caller is the event loop: "__builtin__:0(<method 'run' of 'Context' objects>)"
    
    

    For me it would be OK if gather did not show up in the callgraph at all and aio_worker looked like a direct callee of doit.

    opened by asodeur 13
  • Add async profiling support for gevent

    First of all, thanks a lot for this project, yappi is really the best Python profiler!

    In my projects I use gevent extensively. I was full of hope when I found this blog article:

    https://emptysqua.re/blog/greenletprofiler/

    Someone made a greenlet profiler a few years ago on top of yappi...

    But the project is Python 2, and it was coming with a modified, bundled version of yappi.

    Now that yappi supports coroutines (with asyncio, unfortunately), could you please give me some guidance on how to redo what was done with GreenletProfiler?

    I would be happy to contribute to yappi with gevent support, but I need some help - my C++ skills are rusty and I don't know the code.

    I saw in this discussion https://github.com/sumerc/yappi/issues/21 that it was something you thought about once... But I am afraid set_ctx_backend() was not implemented in the end?

    Thanks a lot

    opened by mguijarr 11
  • Add basic contextvar-based test of asyncio

    I'm not really sure how you want to handle the fact that this requires python 3.7.

    Figured I'd open this to share; feel free to scavenge it into the coroutine-profiling branch in your own way, or to provide me with some guidance about how you'd like it refactored. (No pressure to merge this PR now or ever, but I'm happy to add more if desired.)

    Also, let me know if there are additional tests you'd like me to add.

    opened by dmontagu 11
  • BufferError when using memoryview

    I noticed this when trying to use yappi with tornado. Here is a simple repro:

    import yappi
    yappi.start()
    
    
    def test_mem():
        buf = bytearray()
        buf += b't' * 200
        view = memoryview(buf)[10:].tobytes()
        del buf[:10]
        return view
    
    
    if __name__ == "__main__":
        test_mem()
    

    Without yappi, the code completes successfully. With yappi I get the following error:

    Traceback (most recent call last):
      File "yappi_repro.py", line 14, in <module>
        test_mem()
      File "yappi_repro.py", line 9, in test_mem
        del buf[:10]
    BufferError: Existing exports of data: object cannot be re-sized
    

    Reproducible on Python 3.6, 3.7 and 3.8. I have not tried Python 2.7.

    bug 
    opened by coldeasy 10
  • pip install fails on Windows 10, despite having latest C++ build tools installed

    Any ideas how to fix this?

    I first got an error saying to install Microsoft Visual Studio C++ 14.0 or later build tools. I installed them from https://visualstudio.microsoft.com/visual-cpp-build-tools/

    Here is the checkbox I ticked for the build tools. Which version of MSVC does yappi require?

    (screenshot of the selected build-tools components omitted)

    ❯ pip install yappi --no-cache-dir
    Collecting yappi
      Downloading yappi-1.3.2.tar.gz (58 kB)
         |████████████████████████████████| 58 kB 2.0 MB/s
    Building wheels for collected packages: yappi
      Building wheel for yappi (setup.py) ... error
      ERROR: Command errored out with exit status 1:
       command: 'c:\tools\miniconda3\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Use
    rs\\jaan\\AppData\\Local\\Temp\\pip-install-930ozymw\\yappi_e32707e233e74c018e9d5f8a6d367185\\setup.py'"'"'; __file__='"
    '"'C:\\Users\\jaan\\AppData\\Local\\Temp\\pip-install-930ozymw\\yappi_e32707e233e74c018e9d5f8a6d367185\\setup.py'"'"';f 
    = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools im
    port setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '
    "'"'exec'"'"'))' bdist_wheel -d 'C:\Users\jaan\AppData\Local\Temp\pip-wheel-6lwb2tps'
           cwd: C:\Users\jaan\AppData\Local\Temp\pip-install-930ozymw\yappi_e32707e233e74c018e9d5f8a6d367185 
      Complete output (15 lines):
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build\lib.win32-3.8
      copying yappi\yappi.py -> build\lib.win32-3.8
      running build_ext
      building '_yappi' extension
      creating build\temp.win32-3.8
      creating build\temp.win32-3.8\Release
      creating build\temp.win32-3.8\Release\yappi
      C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29910\bin\HostX86\x86\cl.exe /c /no
    logo /Ox /W3 /GL /DNDEBUG /MD -Ic:\tools\miniconda3\include -Ic:\tools\miniconda3\include "-IC:\Program Files (x86)\Micr
    osoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29910\include" /Tcyappi/_yappi.c /Fobuild\temp.win32-3.8\Release
    \yappi/_yappi.obj
      _yappi.c
      c:\tools\miniconda3\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or direc
    tory
      error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.28.29910\\bin\
    \HostX86\\x86\\cl.exe' failed with exit status 2
      ----------------------------------------
      ERROR: Failed building wheel for yappi
      Running setup.py clean for yappi
    Failed to build yappi
    Installing collected packages: yappi
        Running setup.py install for yappi ... error
        ERROR: Command errored out with exit status 1:
         command: 'c:\tools\miniconda3\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\U
    sers\\jaan\\AppData\\Local\\Temp\\pip-install-930ozymw\\yappi_e32707e233e74c018e9d5f8a6d367185\\setup.py'"'"'; __file__=
    '"'"'C:\\Users\\jaan\\AppData\\Local\\Temp\\pip-install-930ozymw\\yappi_e32707e233e74c018e9d5f8a6d367185\\setup.py'"'"';
    f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools
    import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__,
     '"'"'exec'"'"'))' install --record 'C:\Users\jaan\AppData\Local\Temp\pip-record-wf2xdwwh\install-record.txt' --single-v
    ersion-externally-managed --compile --install-headers 'c:\tools\miniconda3\Include\yappi'
             cwd: C:\Users\jaan\AppData\Local\Temp\pip-install-930ozymw\yappi_e32707e233e74c018e9d5f8a6d367185\
        Complete output (15 lines):
        running install
        running build
        running build_py
        creating build
        creating build\lib.win32-3.8
        copying yappi\yappi.py -> build\lib.win32-3.8
        running build_ext
        building '_yappi' extension
        creating build\temp.win32-3.8
        creating build\temp.win32-3.8\Release
        creating build\temp.win32-3.8\Release\yappi
        C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29910\bin\HostX86\x86\cl.exe /c /
    nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\tools\miniconda3\include -Ic:\tools\miniconda3\include "-IC:\Program Files (x86)\Mi
    crosoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29910\include" /Tcyappi/_yappi.c /Fobuild\temp.win32-3.8\Relea
    se\yappi/_yappi.obj
        _yappi.c
        c:\tools\miniconda3\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or dir
    ectory
        error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.28.29910\\bi
    n\\HostX86\\x86\\cl.exe' failed with exit status 2
        ----------------------------------------
    ERROR: Command errored out with exit status 1: 'c:\tools\miniconda3\python.exe' -u -c 'import io, os, sys, setuptools, t
    okenize; sys.argv[0] = '"'"'C:\\Users\\jaan\\AppData\\Local\\Temp\\pip-install-930ozymw\\yappi_e32707e233e74c018e9d5f8a6
    d367185\\setup.py'"'"'; __file__='"'"'C:\\Users\\jaan\\AppData\\Local\\Temp\\pip-install-930ozymw\\yappi_e32707e233e74c0
    18e9d5f8a6d367185\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else
    io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.clo
    se();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\jaan\AppData\Local\Temp\pip-record-wf2xdw
    wh\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\tools\miniconda3\Include\yapp
    i' Check the logs for full command output.
    
    opened by altosaar 7
  • Generate Wheels for Windows

    > pip install yappi Collecting yappi Using cached https://files.pythonhosted.org/packages/37/dc/86bbe1822cdc6dbf46c644061bd24217f6a0f056f00162a3697c9bea7575/yappi-1.2.3.tar.gz Installing collected packages: yappi Running setup.py install for yappi ... error ERROR: Command errored out with exit status 1: command: 'c:\users\rabbott\appdata\local\programs\python\python38-32\python.exe\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\rabbott\AppData\Local\Temp\pip-install-q4kk7o84\yappi\setup.py'"'"'; file='"'"'C:\Users\rabbott\AppData\Local\Temp\pip-install-q4kk7o84\yappi\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\rabbott\AppData\Local\Temp\pip-record-yukv2gvu\install-record.txt' --single-version-externally-managed --compile cwd: C:\Users\rabbott\AppData\Local\Temp\pip-install-q4kk7o84\yappi
    Complete output (9 lines): running install running build running build_py creating build creating build\lib.win32-3.8 copying yappi\yappi.py -> build\lib.win32-3.8 running build_ext building '_yappi' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/ ---------------------------------------- ERROR: Command errored out with exit status 1: 'c:\users\rabbott\appdata\local\programs\python\python38-32\python.exe\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\rabbott\AppData\Local\Temp\pip-install-q4kk7o84\yappi\setup.py'"'"'; file='"'"'C:\Users\rabbott\AppData\Local\Temp\pip-install-q4kk7o84\yappi\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\rabbott\AppData\Local\Temp\pip-record-yukv2gvu\install-record.txt' --single-version-externally-managed --compile Check the logs for full command output.

    1.3 
    opened by RussAbbott 7
  • Internal Error 15

    Good day,

    I'm the author of pptop, which has a plugin for the yappi profiler. After updating to 1.2.1, Yappi prints a lot of "Internal Error 15" messages while working. I believe the issue is that the pptop plugin starts yappi from its own thread; however, everything seems to work fine apart from this message.

    p.s. it would also be nice to be able to obtain data fields from get_func_stats() by keyword, as I see some of their indexes change from version to version.

    1.2 
    opened by divi255 7
  • Feature request: Filter stats by package

    Hi there,

    It would be lovely to filter stats by package. Because you're using PyObject_RichCompareBool as part of the filter it's already possible to achieve this with a helper object. Would you be interested in integrating this as a feature with a stable API?

    My current implementation is:

    import dataclasses
    import importlib
    import os
    
    @dataclasses.dataclass
    class PackageModule:
        package: str
        
        def __post_init__(self):
            mod = importlib.import_module(self.package)
            self.fn = mod.__file__
            if self.fn.endswith("__init__.py"):
                self.fn = os.path.dirname(self.fn)
        
        def __eq__(self, other):
            return other.startswith(self.fn)
            
    
    yappi.get_func_stats(filter={"modname": PackageModule("apd.aggregation")}).print_all()
    

    There are caveats to this, mainly that it requires the module to be importable and that importing it is not undesirable (e.g. because of import-time side effects), but I think it improves the usability a fair bit.

    What do you think? If you're interested I'm happy to put together a PR.

    Matt

    opened by MatthewWilkes 6
  • Bump pypa/cibuildwheel from 2.9.0 to 2.11.4

    Bumps pypa/cibuildwheel from 2.9.0 to 2.11.4.

    Release notes

    Sourced from pypa/cibuildwheel's releases.

    v2.11.4

    • Fix a bug that caused missing wheels on Windows when a test was skipped using CIBW_TEST_SKIP (#1377)
    • Updates CPython 3.11 to 3.11.1 (#1371)
    • Updates PyPy 3.7 to 3.7.10, except on macOS which remains on 7.3.9 due to a bug. (#1371)
    • Added a reference to abi3audit to the docs (#1347)

    v2.11.3

    • Improves the 'build options' log output that's printed at the start of each run (#1352)
    • Added a friendly error message to a common misconfiguration of the CIBW_TEST_COMMAND option - not specifying path using the {project} placeholder (#1336)
    • The GitHub Action now uses PowerShell on Windows to avoid occasional incompatibilities with bash (#1346)

    v2.11.2

    • Updates CPython 3.11 to 3.11.0 - final release (#1327)
    • Simplify the default macOS repair command (#1322)
    • Fix the default MACOSX_DEPLOYMENT_TARGET on arm64 (#1312)
    • Hide irrelevant pip warnings on linux (#1311)
    • Fix a bug that caused the stdout and stderr of commands in containers to be in the wrong order. Previously, stdout could appear after stderr. (#1324)
    • Added a FAQ entry describing how to perform native builds of CPython 3.8 wheels on Apple Silicon. (#1323)
    • Other docs improvements

    v2.11.1

    • Updates to the latest manylinux images, and updates CPython 3.10 to 3.10.8.

    v2.11.0

    • Adds support for cross-compiling Windows ARM64 wheels. To use this feature, add ARM64 to the CIBW_ARCHS option on a Windows Intel runner. (#1144)
    • Adds support for building Linux aarch64 wheels on Circle CI. (#1307)
    • Adds support for building Windows wheels on Gitlab CI. (#1295)
    • Adds support for building Linux aarch64 wheels under emulation on Gitlab CI. (#1295)
    • Adds the ability to test cp38-macosx_arm64 wheels on a native arm64 runner. To do this, you'll need to preinstall the (experimental) universal2 version of CPython 3.8 on your arm64 runner before invoking cibuildwheel. Note: it is not recommended to build x86_64 wheels with this setup, your wheels will have limited compatibility wrt macOS versions. (#1283)
    • Improved error messages when using custom Docker images and Python cannot be found at the correct path. (#1298)
    • Sample configs for Azure Pipelines and Travis CI updated (#1296)
    • Other docs improvements - including more information about using Homebrew for build dependencies (#1290)

    v2.10.2

    • Fix a bug that caused win32 identifiers to fail when used with --only. (#1282)
    • Fix computation of auto/auto64/auto32 archs when targeting a different platform to the one that you're running cibuildwheel on. (#1266)
    • Fix a mistake in the 'how it works' diagram. (#1274)

    v2.10.1

    • Fix a bug that stopped environment variables specified in TOML from being expanded. (#1273)

    v2.10.0

    • Adds support for building wheels on Cirrus CI. This is exciting for us, as it's the first public CI platform that natively supports macOS Apple Silicon (aka. M1, arm64) runners. As such, it's the first platform that you can natively build and test macOS arm64 wheels. It also has native Linux ARM (aarch64) runners, for fast, native builds there. (#1191)
    • Adds support for running cibuildwheel on Apple Silicon machines. For a while, we've supported cross-compilation of Apple Silicon wheels on x86_64, but now that we have Cirrus CI we can run our test suite and officially support running cibuildwheel on arm64. (#1191)
    • Adds the --only command line option, to specify a single build to run. Previously, it could be cumbersome to set all the build selection options to target a specific build - for example, you might have to run something like CIBW_BUILD=cp39-manylinux_x86_64 cibuildwheel --platform linux --archs x86_64. The new --only option overrides all the build selection options to simplify running a single build, which now looks like cibuildwheel --only cp39-manylinux_x86_64. (#1098)
    • Adds the CIBW_CONFIG_SETTINGS option, so you can pass arguments to your package's build backend (#1244)
    • Updates the CPython 3.11 version to the latest release candidate - v3.11.0rc2. (#1265)
    • Fix a bug that can cause a RecursionError on Windows when building from an sdist. (#1253)
    • Add support for the s390x architecture on manylinux_2_28 (#1255)
    Changelog

    Sourced from pypa/cibuildwheel's changelog.

    v2.11.4

    24 Dec 2022

    • Fix a bug that caused missing wheels on Windows when a test was skipped using CIBW_TEST_SKIP (#1377)
    • Updates CPython 3.11 to 3.11.1 (#1371)
    • Updates PyPy to 7.3.10, except on macOS which remains on 7.3.9 due to a bug on that platform. (#1371)
    • Added a reference to abi3audit to the docs (#1347)

    v2.11.3

    5 Dec 2022

    • Improves the 'build options' log output that's printed at the start of each run (#1352)
    • Added a friendly error message to a common misconfiguration of the CIBW_TEST_COMMAND option - not specifying path using the {project} placeholder (#1336)
    • The GitHub Action now uses PowerShell on Windows to avoid occasional incompatibilities with bash (#1346)

    v2.11.2

    26 October 2022

    • Updates CPython 3.11 to 3.11.0 - final release (#1327)
    • Simplify the default macOS repair command (#1322)
    • Fix the default MACOSX_DEPLOYMENT_TARGET on arm64 (#1312)
    • Hide irrelevant pip warnings on linux (#1311)
    • Fix a bug that caused the stdout and stderr of commands in containers to be in the wrong order. Previously, stdout could appear after stderr. (#1324)
    • Added a FAQ entry describing how to perform native builds of CPython 3.8 wheels on Apple Silicon. (#1323)
    • Other docs improvements

    v2.11.1

    13 October 2022

    • Updates to the latest manylinux images, and updates CPython 3.10 to 3.10.8.

    v2.11.0

    13 October 2022

    • Adds support for cross-compiling Windows ARM64 wheels. To use this feature, add ARM64 to the CIBW_ARCHS option on a Windows Intel runner. (#1144)
    • Adds support for building Linux aarch64 wheels on Circle CI. (#1307)
    • Adds support for building Windows wheels on Gitlab CI. (#1295)
    • Adds support for building Linux aarch64 wheels under emulation on Gitlab CI. (#1295)
    • Adds the ability to test cp38-macosx_arm64 wheels on a native arm64 runner. To do this, you'll need to preinstall the (experimental) universal2 version of CPython 3.8 on your arm64 runner before invoking cibuildwheel. Note: it is not recommended to build x86_64 wheels with this setup, your wheels will have limited compatibility wrt macOS versions. (#1283)
    • Improved error messages when using custom Docker images and Python cannot be found at the correct path. (#1298)
    • Sample configs for Azure Pipelines and Travis CI updated (#1296)
    • Other docs improvements - including more information about using Homebrew for build dependencies (#1290)

    v2.10.2

    ... (truncated)

    Commits
    • 27fc88e Bump version: v2.11.4
    • a7e9ece Merge pull request #1371 from pypa/update-dependencies-pr
    • b9a3ed8 Update cibuildwheel/resources/build-platforms.toml
    • 3dcc2ff fix: not skipping the tests stops the copy (Windows ARM) (#1377)
    • 1c9ec76 Merge pull request #1378 from pypa/henryiii-patch-3
    • 22b433d Merge pull request #1379 from pypa/pre-commit-ci-update-config
    • 98fdf8c [pre-commit.ci] pre-commit autoupdate
    • cefc5a5 Update dependencies
    • e53253d ci: move to ubuntu 20
    • e9ecc65 [pre-commit.ci] pre-commit autoupdate (#1374)
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 0
  • Bump pypa/gh-action-pypi-publish from 1.5.1 to 1.6.4

    Bumps pypa/gh-action-pypi-publish from 1.5.1 to 1.6.4.

    Release notes

    Sourced from pypa/gh-action-pypi-publish's releases.

    v1.6.4

    oh, boi! again?

    This is the last one tonight, promise! It fixes this embarrassing bug that was actually caught by the CI but got overlooked due to the lack of sleep. TL;DR GH passed $HOME from the external env into the container and that tricked the Python's site module to think that the home directory is elsewhere, adding non-existent paths to the env vars. See #115.

    Full Diff: https://github.com/pypa/gh-action-pypi-publish/compare/v1.6.3...v1.6.4

    v1.6.3

    Another Release!? Why?

    In pypa/gh-action-pypi-publish#112, it was discovered that passing a $PATH variable even breaks the shebang. So this version adds more safeguards to make sure it keeps working with a fully broken $PATH.

    Full Diff: https://github.com/pypa/gh-action-pypi-publish/compare/v1.6.2...v1.6.3

    v1.6.2

    What's Fixed

    • Made the $PATH and $PYTHONPATH environment variables resilient to broken values passed from the host runner environment, which previously allowed the users to accidentally break the container's internal runtime as reported in pypa/gh-action-pypi-publish#112

    Internal Maintenance Improvements

    New Contributors

    Full Diff: https://github.com/pypa/gh-action-pypi-publish/compare/v1.6.1...v1.6.2

    v1.6.1

    What's happened?!

    There was a sneaky bug in v1.6.0 which caused Twine to be outside the import path in the Python runtime. It is fixed in v1.6.1 by updating $PYTHONPATH to point to a correct location of the user-global site-packages/ directory.

    Full Diff: https://github.com/pypa/gh-action-pypi-publish/compare/v1.6.0...v1.6.1

    v1.6.0

    Anything's changed?

    The only update is that the Python runtime has been upgraded from 3.9 to 3.11. There are no functional changes in this release.

    Full Changelog: https://github.com/pypa/gh-action-pypi-publish/compare/v1.5.2...v1.6.0

    v1.5.2

    What's Improved

    Full Diff: https://github.com/pypa/gh-action-pypi-publish/compare/v1.5.1...v1.5.2

    Commits
    • c7f29f7 Override $HOME in the container with /root
    • 644926c Always run smoke testing in debug mode
    • e71a4a4 Add support for verbose bash execusion w/ $DEBUG
    • e56e821 Make id always available in twine-upload
    • c879b84 Use full path to bash in shebang
    • 57e7d53 Ensure the default $PATH value is pre-loaded
    • ce291dc Fix the branch @ pre-commit.ci badge links
    • 102d8ab Rehardcode devpi port for GHA srv container
    • 3a9eaef Use different ports in/out of GHA containers
    • a01fa74 Use localhost @ GHA outside the containers
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 0
  • base class names incorrect

    Dear Yappies, thanks for this fantastic tool - I wish I had found it earlier :-)

    This ticket is about an issue with the names derived for base classes in a simple class hierarchy. It seems that base class methods are given the name of the first child class which calls them, and that subsequent usage by other child classes is then accounted incorrectly.

    Here is a small reproducer:

    `test_yappi.py`
    #!/usr/bin/env python3
    
    from test_yappi import Base
    
    class A(Base):
        def __init__(self):
            self.sleep(1)
    
    class B(Base):
        def __init__(self):
            self.sleep(1)
    
    def main():
        _ = A()
        for _ in range(10):
            _ = B()
    
    if __name__ == '__main__':
    
        import yappi
        yappi.start(builtins=False)
    
        main()
    
        yappi.get_thread_stats().print_all()
        stats = yappi.convert2pstats(yappi.get_func_stats())
        stats.print_stats()
        stats.dump_stats('pstats.prof')
    
    `test_yappi/base.py`
    
    import time
    
    class Base(object):
    
        def sleep(self, tout):
            time.sleep(tout)
    

    test_yappi/__init__.py exists but is empty.

    With this code I would expect to find A.sleep and B.sleep, or to find Base.sleep as profiled methods, but in fact I find only A.sleep:

    $ strings pstats.prof | grep sleep
    A.sleep)
    

    and the stats result then reflect that:

    $ ./test_yappi.py 
    
    name           id     tid              ttot      scnt        
    _MainThread    0      140335337744192  0.001044  1         
             23 function calls in 0.000 seconds
    
       Random listing order was used
    
       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
            1    0.000    0.000    0.001    0.001 ./test_yappi.py:13(main)
           11    0.000    0.000    0.001    0.000 [...]/test_yappi/base.py:6(A.sleep)
           10    0.000    0.000    0.001    0.000 ./test_yappi.py:10(B.__init__)
            1    0.000    0.000    0.000    0.000 ./test_yappi.py:6(A.__init__)
    

    (BTW: I would have expected different values for cumtime, about 11 seconds for main - what happened?)

    When attempting to evaluate results it is basically impossible to separate what contributions the sleep calls for A and B have, respectively. In this specific case it helps a bit to plot the results, but in any non-trivial use case that becomes unwieldy very fast:

    (plot of the profiler output omitted)

    Is this known behavior? Is there a way to avoid the mis-labeling?

    Thanks, Andre.

    opened by andre-merzky 2
  • Bump pypa/cibuildwheel from 2.9.0 to 2.10.1

    Bumps pypa/cibuildwheel from 2.9.0 to 2.10.1.

    Release notes

    Sourced from pypa/cibuildwheel's releases.

    v2.10.1

    • Fix a bug that stopped environment variables specified in TOML from being expanded. (#1273)

    v2.10.0

    • Adds support for building wheels on Cirrus CI. This is exciting for us, as it's the first public CI platform that natively supports macOS Apple Silicon (aka. M1, arm64) runners. As such, it's the first platform that you can natively build and test macOS arm64 wheels. It also has native Linux ARM (aarch64) runners, for fast, native builds there. (#1191)
    • Adds support for running cibuildwheel on Apple Silicon machines. For a while, we've supported cross-compilation of Apple Silicon wheels on x86_64, but now that we have Cirrus CI we can run our test suite and officially support running cibuildwheel on arm64. (#1191)
    • Adds the --only command line option, to specify a single build to run. Previously, it could be cumbersome to set all the build selection options to target a specific build - for example, you might have to run something like CIBW_BUILD=cp39-manylinux_x86_64 cibuildwheel --platform linux --archs x86_64. The new --only option overrides all the build selection options to simplify running a single build, which now looks like cibuildwheel --only cp39-manylinux_x86_64. (#1098)
    • Adds the CIBW_CONFIG_SETTINGS option, so you can pass arguments to your package's build backend (#1244)
    • Updates the CPython 3.11 version to the latest release candidate - v3.11.0rc2. (#1265)
    • Fix a bug that can cause a RecursionError on Windows when building from an sdist. (#1253)
    • Add support for the s390x architecture on manylinux_2_28 (#1255)
    Changelog

    Sourced from pypa/cibuildwheel's changelog.

    v2.10.1

    18 September 2022

    • Fix a bug that stopped environment variables specified in TOML from being expanded. (#1273)

    v2.10.0

    13 September 2022

    • Adds support for building wheels on Cirrus CI. This is exciting for us, as it's the first public CI platform that natively supports macOS Apple Silicon (aka. M1, arm64) runners. As such, it's the first platform that you can natively build and test macOS arm64 wheels. It also has native Linux ARM (aarch64) runners, for fast, native builds there. (#1191)
    • Adds support for running cibuildwheel on Apple Silicon machines. For a while, we've supported cross-compilation of Apple Silicon wheels on x86_64, but now that we have Cirrus CI we can run our test suite and officially support running cibuildwheel on arm64. (#1191)
    • Adds the --only command line option, to specify a single build to run. Previously, it could be cumbersome to set all the build selection options to target a specific build - for example, you might have to run something like CIBW_BUILD=cp39-manylinux_x86_64 cibuildwheel --platform linux --archs x86_64. The new --only option overrides all the build selection options to simplify running a single build, which now looks like cibuildwheel --only cp39-manylinux_x86_64. (#1098)
    • Adds the CIBW_CONFIG_SETTINGS option, so you can pass arguments to your package's build backend (#1244)
    • Updates the CPython 3.11 version to the latest release candidate - v3.11.0rc2. (#1265)
    • Fix a bug that can cause a RecursionError on Windows when building from an sdist. (#1253)
    • Add support for the s390x architecture on manylinux_2_28 (#1255)
    Commits

    opened by dependabot[bot] 0
  • Python run `gevent` tests on CI for `3.11`

    Currently gevent tests are excluded for 3.11 as gevent does not compile on 3.11 yet.

    from https://github.com/sumerc/yappi/pull/107

    Include them once gevent is released for 3.11.

    opened by sumerc 0
  • OpenSSL related error installing Yappi with pip on Python 3.11, OS: Windows

    Hi, I'm having some problems trying to install yappi on Windows 10 (x32); my Python version is 3.11. I believe the problem is version-related.

    Here is the call stack:

      │ exit code: 1
      ╰─> [24 lines of output]
          running bdist_wheel
          running build
          running build_py
          creating build
          creating build\lib.win32-cpython-311
          copying yappi\yappi.py -> build\lib.win32-cpython-311
          running build_ext
          building '_yappi' extension
          creating build\temp.win32-cpython-311
          creating build\temp.win32-cpython-311\Release
          creating build\temp.win32-cpython-311\Release\yappi
          "C:\Program Files\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.33.31629\bin\HostX86\x86\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\x32\AppData\Local\Programs\Python\Python311-32\include -IC:\Users\x32\AppData\Local\Programs\Python\Python311-32\Include "-IC:\Program Files\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.33.31629\include" "-IC:\Program Files\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files\Windows Kits\10\\include\10.0.19041.0\\um" "-IC:\Program Files\Windows Kits\10\\include\10.0.19041.0\\shared" "-IC:\Program Files\Windows Kits\10\\include\10.0.19041.0\\winrt" "-IC:\Program Files\Windows Kits\10\\include\10.0.19041.0\\cppwinrt" /Tcyappi/_yappi.c /Fobuild\temp.win32-cpython-311\Release\yappi/_yappi.obj
          _yappi.c
          yappi/_yappi.c(220): error C2037: left of 'f_state' specifies undefined struct/union '_frame'
          yappi/_yappi.c(220): error C2065: 'FRAME_SUSPENDED': undeclared identifier
          yappi/_yappi.c(220): warning C4033: 'IS_SUSPENDED' must return a value
          yappi/_yappi.c(232): error C2037: left of 'f_code' specifies undefined struct/union '_frame'
          yappi/_yappi.c(233): error C2037: left of 'f_code' specifies undefined struct/union '_frame'
          yappi/_yappi.c(236): error C2037: left of 'f_code' specifies undefined struct/union '_frame'
          yappi/_yappi.c(653): error C2037: left of 'f_code' specifies undefined struct/union '_frame'
          yappi/_yappi.c(673): error C2039: 'co_varnames': is not a member of 'PyCodeObject'
          C:\Users\x32\AppData\Local\Programs\Python\Python311-32\include\cpython/code.h(103): note: see declaration of 'PyCodeObject'
          yappi/_yappi.c(673): error C2198: 'PyUnicode_AsUTF8': too few arguments for call
          error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.33.31629\\bin\\HostX86\\x86\\cl.exe' failed with exit code 2
    
    Thanks
    opened by mrcnpp 0
Releases: 1.4.0
Owner: Sümer Cip, Senior Software Engineer @blackfireio