📊 A simple command-line utility for querying and monitoring GPU status

Overview

gpustat


Just less than nvidia-smi?

Screenshot: gpustat -cp

NOTE: This works with NVIDIA graphics devices only; there is no AMD support as of now. Contributions are welcome!

Self-Promotion: A web interface of gpustat is available (in alpha)! Check out gpustat-web.

Usage

$ gpustat

Options:

  • --color : Force colored output (even when stdout is not a tty)
  • --no-color : Suppress colored output
  • -u, --show-user : Display username of the process owner
  • -c, --show-cmd : Display the process name
  • -f, --show-full-cmd : Display the full command and CPU stats of running processes
  • -p, --show-pid : Display PID of the process
  • -F, --show-fan : Display GPU fan speed
  • -e, --show-codec : Display encoder and/or decoder utilization
  • -P, --show-power : Display GPU power usage and/or limit (draw or draw,limit)
  • -a, --show-all : Display all GPU properties listed above
  • --watch, -i, --interval : Run in watch mode (equivalent to watch gpustat) if given; the value denotes the interval between updates. (#41)
  • --json : JSON Output (Experimental, #10)
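
To consume the JSON output programmatically, here is a minimal sketch (it assumes gpustat is on PATH; the top-level "gpus" key is assumed here, and the exact schema may vary across versions):

    import json
    import subprocess

    # Run gpustat in JSON mode and parse the result.
    result = subprocess.run(["gpustat", "--json"], capture_output=True, text=True, check=True)
    data = json.loads(result.stdout)

    # "gpus" is assumed to be the key holding the per-GPU entries.
    for gpu in data.get("gpus", []):
        print(gpu)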

Tips

  • To periodically watch, try gpustat --watch or gpustat -i (#41).
    • For older versions, one may use watch --color -n1.0 gpustat --color.
  • Running the nvidia-smi daemon (root privilege required) will make queries much faster and use less CPU (#54).
  • The GPU ID (index) shown by gpustat (and nvidia-smi) is the PCI BUS ID, while CUDA by default assigns the lowest ID to the fastest GPU. Therefore, to make CUDA and gpustat use the same GPU index, set the CUDA_DEVICE_ORDER environment variable to PCI_BUS_ID (before setting CUDA_VISIBLE_DEVICES for your CUDA program): export CUDA_DEVICE_ORDER=PCI_BUS_ID. A Python sketch of the same setup follows this list.
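
A minimal Python sketch of the same tip, assuming the environment variables are set before any CUDA library initializes the driver (the torch import shown in the comment is only an example):

    import os

    # Match CUDA device numbering to the PCI bus order used by gpustat / nvidia-smi.
    # These must be set before any CUDA framework initializes the driver.
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # example: expose GPU index 0 as reported by gpustat

    # Import your CUDA framework only after the variables above are set, e.g.:
    # import torch
    # print(torch.cuda.device_count())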

Quick Installation

Install from PyPI:

pip install gpustat

If you don't have root privileges, try installing into the user site-packages: pip install --user gpustat.

To install the latest version (master branch) via pip:

pip install git+https://github.com/wookayin/gpustat.git@master

Note that starting from v1.0, gpustat supports only Python 3.4+. For older Python versions (2.7, <3.4), you can continue using gpustat v0.x.

Default display

[0] GeForce GTX Titan X | 77'C, 96 % | 11848 / 12287 MB | python/52046(11821M)

  • [0]: GPU index (starts from 0) as PCI_BUS_ID
  • GeForce GTX Titan X: GPU name
  • 77'C: Temperature
  • 96 %: Utilization
  • 11848 / 12287 MB: GPU Memory Usage
  • python/...: Running processes on GPU (and their memory usage)
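
gpustat can also be used as a Python library. A minimal sketch, assuming the GPUStatCollection API referenced elsewhere on this page (new_query() is the entry point; iterating the collection and the key names below are illustrative and may differ between versions):

    from gpustat import GPUStatCollection

    # One-shot query of all GPUs; raises if NVML / the NVIDIA driver is unavailable.
    stats = GPUStatCollection.new_query()

    # GPUStat entries support dict-style access (see the v0.3.1 release notes);
    # the key names here are assumptions for illustration.
    for gpu in stats:
        print(gpu["name"], gpu["memory.used"], gpu["memory.total"])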

Changelog

See CHANGELOG.md

License

MIT License

Comments
  • Error on calling nvidia-smi: Command 'ps ...' returned non-zero exit status 1

    I got the above error message when I run gpustat, but nvidia-smi works on my machine. Here are some details: OS: Ubuntu 14.04.5 LTS, Python version: Anaconda Python 3.6.

    Error on calling nvidia-smi. Use --debug flag for details
    Traceback (most recent call last):
      File "/usr/local/bin/gpustat", line 417, in print_gpustat                                                      gpu_stats = GPUStatCollection.new_query()
      File "/usr/local/bin/gpustat", line 245, in new_query
        return GPUStatCollection(gpu_list)
      File "/usr/local/bin/gpustat", line 218, in __init__
        self.update_process_information()
      File "/usr/local/bin/gpustat", line 316, in update_process_information
        processes = self.running_processes()
      File "/usr/local/bin/gpustat", line 275, in running_processes
        ','.join(map(str, pid_map.keys()))
      File "/usr/local/bin/gpustat", line 46, in execute_process
        stdout = check_output(command_shell, shell=True).strip()
      File "/home/xiyun/apps/anaconda3/lib/python3.6/subprocess.py", line 336, in check_output
        **kwargs).stdout
      File "/home/xiyun/apps/anaconda3/lib/python3.6/subprocess.py", line 418, in run
        output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command 'ps -o pid,user:16,comm -p1 -p 14471' returned non-zero exit status 1.
    
    

    How can I fix this?

    bug 
    opened by feiwofeifeixiaowo 20
  • Failed to run "gpustat --debug": pynvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found

    Hi,

    On Ubuntu 20.04 with Python 3.8.3, I failed to run gpustat --debug, as shown below:

    $ gpustat --debug
    Error on querying NVIDIA devices. Use --debug flag for details
    Traceback (most recent call last):
      File "/home/werner/.pyenv/versions/3.8.3/envs/socks5-haproxy/lib/python3.8/site-packages/pynvml.py", line 644, in _LoadNvmlLibrary
        nvmlLib = CDLL("libnvidia-ml.so.1")
      File "/home/werner/.pyenv/versions/3.8.3/lib/python3.8/ctypes/__init__.py", line 373, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: libnvidia-ml.so.1: cannot open shared object file: No such file or directory
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/werner/.pyenv/versions/3.8.3/envs/socks5-haproxy/lib/python3.8/site-packages/gpustat/__main__.py", line 19, in print_gpustat
        gpu_stats = GPUStatCollection.new_query()
      File "/home/werner/.pyenv/versions/3.8.3/envs/socks5-haproxy/lib/python3.8/site-packages/gpustat/core.py", line 281, in new_query
        N.nvmlInit()
      File "/home/werner/.pyenv/versions/3.8.3/envs/socks5-haproxy/lib/python3.8/site-packages/pynvml.py", line 608, in nvmlInit
        _LoadNvmlLibrary()
      File "/home/werner/.pyenv/versions/3.8.3/envs/socks5-haproxy/lib/python3.8/site-packages/pynvml.py", line 646, in _LoadNvmlLibrary
        _nvmlCheckReturn(NVML_ERROR_LIBRARY_NOT_FOUND)
      File "/home/werner/.pyenv/versions/3.8.3/envs/socks5-haproxy/lib/python3.8/site-packages/pynvml.py", line 310, in _nvmlCheckReturn
        raise NVMLError(ret)
    pynvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found
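
    If libnvidia-ml.so.1 cannot be loaded, the NVIDIA driver (which ships the NVML library) is usually missing or not on the dynamic loader's search path. A small diagnostic sketch, independent of gpustat:

      import ctypes.util

      # NVML ships with the NVIDIA driver as libnvidia-ml.so.1.
      # find_library returns the library name if the loader can locate it, else None.
      print("libnvidia-ml found:", ctypes.util.find_library("nvidia-ml"))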
    
    
    question documentation 
    opened by hongyi-zhao 18
  • Add full process info.


    Fixes #50

    I added -f, --show-full-cmd that shows the full process info as discussed in #50.

    Right now it shows the percent of CPU usage and the percent of system memory in use, but that can be changed.

    Let me know what you think.

    Example:

    server1  Wed Jun 26 15:24:33 2019  418.67
    [0] Tesla V100-SXM2-16GB | 34'C,  23 % |  1097 / 16130 MB | user1(1087M)
     └─ 72041 ( 80%,  0.24%): python /mnt/home/user1/git/horovod/examples/keras_mnist.py
    [1] Tesla V100-SXM2-16GB | 36'C,  23 % |  1097 / 16130 MB | user1(1087M)
     └─ 72042 ( 80%,  0.24%): python /mnt/home/user1/git/horovod/examples/keras_mnist.py
    [2] Tesla V100-SXM2-16GB | 35'C, 100 % |  2130 / 16130 MB | user2(777M) user1(1343M)
     ├─ 95638 (100%,  0.16%): /mnt/home/user2/anaconda3/envs/env/bin/python test_c10d.py
     └─ 72043 (100%,  0.24%): python /mnt/home/user1/git/horovod/examples/keras_mnist.py
    [3] Tesla V100-SXM2-16GB | 34'C,  22 % |  1097 / 16130 MB | user1(1087M)
     └─ 72044 ( 40%,  0.24%): python /mnt/home/user1/git/horovod/examples/keras_mnist.py
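
    A rough sketch of how the per-process CPU and memory percentages above can be obtained with psutil (which gpustat already depends on); the PR's exact sampling logic may differ:

      import psutil

      # Per-process CPU% and system-memory% via psutil; a short sampling
      # interval is needed for cpu_percent to report a meaningful value.
      p = psutil.Process(72041)  # PID taken from the example output above
      print(p.cpu_percent(interval=0.1), p.memory_percent())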
    
    new feature 
    opened by bethune-bryant 17
  • Extra character in watch colour mode on Ubuntu 17.10


    When I use command watch --color -n1.0 gpustat --color I get a lot of extra ^: https://imgur.com/a/A9Fxc

    This problem doesn't occur without watch. I'm on Ubuntu 17.10 with wayland.

    bug 
    opened by Rizhiy 15
  • No such file or directory: '/proc/30094/stat'

    I reinstalled gpustat via pip, but it still throws an error when I use gpustat:

    root$ gpustat --debug
    Error on querying NVIDIA devices. Use --debug flag for details
    Traceback (most recent call last):
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/gpustat/__main__.py", line 19, in print_gpustat
        gpu_stats = GPUStatCollection.new_query()
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/gpustat/core.py", line 396, in new_query
        gpu_info = get_gpu_info(handle)
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/gpustat/core.py", line 365, in get_gpu_info
        process = get_process_info(nv_process)
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/gpustat/core.py", line 294, in get_process_info
        ps_process = psutil.Process(pid=nv_process.pid)
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/__init__.py", line 339, in __init__
        self._init(pid)
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/__init__.py", line 366, in _init
        self.create_time()
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/__init__.py", line 697, in create_time
        self._create_time = self._proc.create_time()
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/_pslinux.py", line 1459, in wrapper
        return fun(self, *args, **kwargs)
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/_pslinux.py", line 1641, in create_time
        values = self._parse_stat_file()
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/_common.py", line 340, in wrapper
        return fun(self)
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/_pslinux.py", line 1498, in _parse_stat_file
        with open_binary("%s/%s/stat" % (self._procfs_path, self.pid)) as f:
      File "/home/zhudd/anaconda3/lib/python3.7/site-packages/psutil/_pslinux.py", line 205, in open_binary
        return open(fname, "rb", **kwargs)
    FileNotFoundError: [Errno 2] No such file or directory: '/proc/30094/stat'
    

    So, what's wrong with my GPU? Please help me.
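
    The FileNotFoundError here typically means the GPU process exited between the NVML query and the /proc read. A hedged sketch of the usual defensive pattern (cf. the zombie-process fix #95 mentioned in the release notes below); the helper name is illustrative, not gpustat's code:

      import psutil

      def get_process_info_safe(pid):
          # The process reported by NVML may already be gone when /proc/<pid>/stat
          # is read; treat it as missing instead of crashing.
          try:
              return psutil.Process(pid).as_dict(attrs=["username", "cmdline"])
          except (psutil.NoSuchProcess, FileNotFoundError):
              return None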

    bug 
    opened by zhudd-hub 14
  • Use NVIDIA's official pynvml binding

    Since 2021, NVIDIA has provided an official Python binding, pynvml (https://pypi.org/project/nvidia-ml-py/#history), which should replace the third-party community fork nvidia-ml-py3 that we have been using.

    The main motivations are (1) to use an official library and (2) to add MIG support. See #102 for more details.

    Need to test whether:

    • The new pynvml API works well on old & recent NVIDIA Drivers; maybe some monkey patching needed (see https://github.com/wookayin/gpustat/issues/102#issuecomment-892833816)
    • The new pynvml API works well on Windows (see #90)

    /cc @XuehaiPan @Stonesjtu

    Important Changes

    • The official Python binding nvidia-ml-py needs to be installed, not nvidia-ml-py3. If the legacy one is installed for some reason, an error will occur:

      ImportError: pynvml is missing or an outdated version is installed. 
      
    • To fix this error, please uninstall nvidia-ml-py3 and install nvidia-ml-py<=11.495.46 (please follow the instructions in the error message). Or you can bypass the validation if you really want.

    • Due to compatibility reasons, NVIDIA Driver version needs to be 450.66 or higher.
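
    A quick way to check that the official binding loads and that the driver meets the version requirement (a sketch only, not gpustat code):

      import pynvml  # provided by nvidia-ml-py, not nvidia-ml-py3

      # Initializing NVML and reading the driver version confirms that the
      # official binding works and the driver is new enough (>= 450.66).
      pynvml.nvmlInit()
      print("driver:", pynvml.nvmlSystemGetDriverVersion())
      pynvml.nvmlShutdown()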

    pynvml 
    opened by wookayin 13
  • pynvml does not support looking up process info

    When I call nvmlDeviceGetGraphicsRunningProcesses, it raises the exception below.

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    ~/gitProject/venv/siren/lib64/python3.6/site-packages/pynvml/nvml.py in _nvmlGetFunctionPointer(name)
        759         try:
    --> 760             _nvmlGetFunctionPointer_cache[name] = getattr(nvmlLib, name)
        761             return _nvmlGetFunctionPointer_cache[name]
    
    /usr/lib64/python3.6/ctypes/__init__.py in __getattr__(self, name)
        355             raise AttributeError(name)
    --> 356         func = self.__getitem__(name)
        357         setattr(self, name, func)
    
    /usr/lib64/python3.6/ctypes/__init__.py in __getitem__(self, name_or_ordinal)
        360     def __getitem__(self, name_or_ordinal):
    --> 361         func = self._FuncPtr((name_or_ordinal, self))
        362         if not isinstance(name_or_ordinal, int):
    
    AttributeError: /lib64/libnvidia-ml.so.1: undefined symbol: nvmlDeviceGetGraphicsRunningProcesses_v2
    
    During handling of the above exception, another exception occurred:
    
    NVMLError_FunctionNotFound                Traceback (most recent call last)
    <ipython-input-5-6d9d0902fdc2> in <module>
    ----> 1 nvmlDeviceGetGraphicsRunningProcesses(handle)
    
    ~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in nvmlDeviceGetGraphicsRunningProcesses(handle)
       2179
       2180 def nvmlDeviceGetGraphicsRunningProcesses(handle):
    -> 2181     return nvmlDeviceGetGraphicsRunningProcesses_v2(handle)
       2182
       2183 def nvmlDeviceGetAutoBoostedClocksEnabled(handle):
    
    ~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in nvmlDeviceGetGraphicsRunningProcesses_v2(handle)
       2147     # first call to get the size
       2148     c_count = c_uint(0)
    AttributeError                            Traceback (most recent call last)
    ~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in _nvmlGetFunctionPointer(name)
        759         try:
    --> 760             _nvmlGetFunctionPointer_cache[name] = getattr(nvmlLib, name)
        761             return _nvmlGetFunctionPointer_cache[name]
    
    /usr/lib64/python3.6/ctypes/__init__.py in __getattr__(self, name)
        355             raise AttributeError(name)
    --> 356         func = self.__getitem__(name)
        357         setattr(self, name, func)
    
    /usr/lib64/python3.6/ctypes/__init__.py in __getitem__(self, name_or_ordinal)
        360     def __getitem__(self, name_or_ordinal):
    --> 361         func = self._FuncPtr((name_or_ordinal, self))
        362         if not isinstance(name_or_ordinal, int):
    
    AttributeError: /lib64/libnvidia-ml.so.1: undefined symbol: nvmlDeviceGetGraphicsRunningProcesses_v2
    
    During handling of the above exception, another exception occurred:
    
    NVMLError_FunctionNotFound                Traceback (most recent call last)
    <ipython-input-6-85e61951ad1d> in <module>
    ----> 1 nvmlDeviceGetGraphicsRunningProcesses_v2(handle)
    
    ~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in nvmlDeviceGetGraphicsRunningProcesses_v2(handle)
       2147     # first call to get the size
       2148     c_count = c_uint(0)
    -> 2149     fn = _nvmlGetFunctionPointer("nvmlDeviceGetGraphicsRunningProcesses_v2")
       2150     ret = fn(handle, byref(c_count), None)
       2151
    
    ~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in _nvmlGetFunctionPointer(name)
        761             return _nvmlGetFunctionPointer_cache[name]
        762         except AttributeError:
    --> 763             raise NVMLError(NVML_ERROR_FUNCTION_NOT_FOUND)
        764     finally:
        765         # lock is always freed
    
    NVMLError_FunctionNotFound: Function Not Found
    

    So, I guess maybe a pynvml change led to this problem (#72).

    opened by hstk30 12
  • nvidia-smi is not recognized as an internal or external command: with 0.3.x versions on windows


    C:>gpustat -cp
    'nvidia-smi' is not recognized as an internal or external command, operable program or batch file.
    Error on calling nvidia-smi

    C:>nvidia-smi --query-gpu=index,uuid,name,temperature.gpu,utilization.gpu,memory.used,memory.total --format=csv,noheader,nounits
    0, GPU-9d01c9ef-1d73-7774-8b4f-5bee4b3bf644, GeForce GTX 1080 Ti, 28, 65, 9219, 11264
    1, GPU-9da3de3f-cdf2-8ca9-504d-fd9bc414a78e, GeForce GTX 1080 Ti, 22, 0, 140, 11264

    Any idea what might be the issue? Windows 10, Python 3.7.2, latest nvidia drivers, etc, as of the time of this post.

    duplicate 
    opened by gotonickpappas 12
  • Not supported?


    I tried it using

    gpustat -cpFP --watch

    I use Debian 11.

    Here my result :

    (screenshot)

    When I run something on the GPU, it shows me only the amount of memory used:

    (screenshot)

    Any hints? Am I missing something?

    Thanks.

    invalid question 
    opened by git2013vb 11
  • Please make a new release


    Hi!

    I'm tracking gpustat as a soft dependency for ray-project, and because the currently released version (0.6.0) is not installable on Windows, I'm having a hard time enabling this dependency in conda-forge.

    Since the compatibility issue has been fixed in master for quite some time, it would be really good if you could make a new release.

    References:

    • PR to conda-forge feedstock of gpustat: https://github.com/conda-forge/gpustat-feedstock/pull/2
    • PR to add ray-project to conda-forge (where gpustat is needed): https://github.com/conda-forge/staged-recipes/pull/11160
    opened by vnlitvinov 11
  • Add support for enc/dec gpu utilization (#79)


    This PR only adds encoder and decoder utilization to --json from the cmdline, or to the GPUStat object if gpustat is used as a library.

    The information is also exposed in the standard command-line output via the -e or --show-codec flag.

    See issue #79

    new feature 
    opened by ChaoticMind 11
  • Some low-level errors (like `pynvml.nvml.NVMLError_LibRmVersionMismatch`) result in nothing printed (std or diagnostic)


    Describe the bug

    Something caused a version mismatch somewhere and I can no longer use gpustat. Nothing at all is printed on stdout or stderr. Running with --debug prints nothing as well. I launched it as python -m pdb -m gpustat and stepped through until noticing an error raised in:

    /opt/conda/lib/python3.8/site-packages/pynvml/nvml.py(718)
    

    of type pynvml.nvml.NVMLError_LibRmVersionMismatch.

    Screenshots or Program Output

    Please provide the output of gpustat --debug and nvidia-smi. Or attach screenshots if applicable.

    Environment information:

    • OS: Ubuntu 20.04
    • NVIDIA Driver version: 510.73.08
    • The name(s) of GPU card: Tesla V100-SXM2
    • gpustat version: 1.0.0
    • pynvml version: 11.495.46

    Additional context

    Add any other context about the problem here.

    bug pynvml waiting for response 
    opened by munael 1
  • Add noexcept functions `gpu_count` and `is_available`

    As commented in https://github.com/wookayin/gpustat/issues/142#issuecomment-1336066463, this adds new noexcept functions gpu_count and is_available; a usage sketch follows below.

    Closes #142
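
    A usage sketch of the proposed API (the names gpu_count and is_available come from this PR and may not exist in released versions):

      import gpustat

      # Proposed noexcept helpers: report availability without raising,
      # instead of propagating NVML errors to the caller.
      if gpustat.is_available():
          print("GPUs detected:", gpustat.gpu_count())
      else:
          print("No usable NVIDIA GPU / driver found")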

    opened by XuehaiPan 0
  • On windows, error is raised if nvidia query cannot find a card


    Describe the bug

    python -m gpustat errors out if the user is not an admin, or if NVIDIA drivers are installed but no NVIDIA card is available.

    Screenshots or Program Output

    Please provide the output of gpustat --debug and nvidia-smi. Or attach screenshots if applicable.

    As a regular user:

    >python -m gpustat --debug
    Error on querying NVIDIA devices. Use --debug flag for details
    Traceback (most recent call last):
      File "d:\temp\ray_venv\lib\site-packages\gpustat\cli.py", line 20, in print_gpustat
        gpu_stats = GPUStatCollection.new_query(debug=debug)
      File "d:\temp\ray_venv\lib\site-packages\gpustat\core.py", line 362, in new_query
        N.nvmlInit()
      File "d:\temp\ray_venv\lib\site-packages\pynvml.py", line 1450, in nvmlInit
        nvmlInitWithFlags(0)
      File "d:\temp\ray_venv\lib\site-packages\pynvml.py", line 1440, in nvmlInitWithFlags
        _nvmlCheckReturn(ret)
      File "d:\temp\ray_venv\lib\site-packages\pynvml.py", line 765, in _nvmlCheckReturn
        raise NVMLError(ret)
    pynvml.NVMLError_NoPermission: Insufficient Permissions
    

    As an admin:

    Error on querying NVIDIA devices. Use --debug flag for details
    Traceback (most recent call last):
      File "d:\temp\ray_venv\lib\site-packages\gpustat\cli.py", line 18, in print_gpustat
        gpu_stats = GPUStatCollection.new_query(debug=debug)
      File "d:\temp\ray_venv\lib\site-packages\gpustat\core.py", line 370, in new_query
        N.nvmlInit()
      File "d:\temp\ray_venv\lib\site-packages\pynvml.py", line 1450, in nvmlInit
        nvmlInitWithFlags(0)
      File "d:\temp\ray_venv\lib\site-packages\pynvml.py", line 1440, in nvmlInitWithFlags
        _nvmlCheckReturn(ret)
      File "d:\temp\ray_venv\lib\site-packages\pynvml.py", line 765, in _nvmlCheckReturn
        raise NVMLError(ret)
    pynvml.NVMLError_DriverNotLoaded: Driver Not Loaded
    

    As a regular user

    >nvidia-smi
    NVIDIA-SMI has failed because you are not:
            a) running as an administrator or
            b) there is not at least one TCC device in the system
    

    As an admin

    >nvidia-smi
    NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running. This can also be happening if non-NVIDIA GPU is running as primary display, and NVIDIA GPU is in WDDM mode.
    

    Environment information:

    • OS: windows10
    • NVIDIA Driver version: 11.7 (Edit: changed from 11.3 to 11.7)
    • The name(s) of GPU card: None
    • gpustat version: 1.1.0
    • pynvml version: nvidia-ml-py 11.495.46

    Additional context

    Add any other context about the problem here.
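
    One way the requested behavior could look (a sketch only, not gpustat's actual implementation): treat NVML initialization failures as "no GPUs" instead of crashing.

      import pynvml

      def nvml_device_count_or_none():
          # Permission or missing-driver errors from nvmlInit are treated as
          # "no GPUs available" rather than raised to the user.
          try:
              pynvml.nvmlInit()
          except pynvml.NVMLError:
              return None
          try:
              return pynvml.nvmlDeviceGetCount()
          finally:
              pynvml.nvmlShutdown()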

    enhancement 
    opened by mattip 8
  • Option to display a bar besides the number to indicate memory usage


    Hi,

    Thanks for the program, very useful. I was thinking a nice additional option could be to show the 'fullness' of the GPU RAM with a bar instead of just the number, for the use case of quickly identifying (almost) empty GPUs when sharing a bunch of GPUs with labmates (e.g., in the first screenshot some of them are full and some have a lot of space left; a bar could show this more 'directly', I feel). See the second screenshot for roughly what I mean.
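
    An illustrative sketch of rendering such a bar (not part of gpustat; the helper name and style are made up for illustration):

      def memory_bar(used_mb, total_mb, width=20):
          # e.g. memory_bar(1097, 16130) -> "[█...................]  1097/16130 MB"
          filled = int(round(width * used_mb / total_mb)) if total_mb else 0
          return "[" + "█" * filled + "." * (width - filled) + f"] {used_mb:5d}/{total_mb} MB"

      print(memory_bar(11848, 12287))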

    new feature 
    opened by Natithan 2
  • Display thermal throttles


    nvidia-smi lists Max Clocks and current Clocks. It would be nice to be able to see these, and maybe display them as a percentage so you know how much you're being throttled. A pynvml sketch follows the example below.

    example:

        Clocks
            Graphics                          : 2025 MHz
            SM                                : 2025 MHz
            Memory                            : 10251 MHz
            Video                             : 1785 MHz
        Max Clocks
            Graphics                          : 2100 MHz
            SM                                : 2100 MHz
            Memory                            : 10501 MHz
            Video                             : 1950 MHz
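
    A sketch of reading these values via pynvml (standard NVML clock queries; not gpustat code):

      import pynvml

      pynvml.nvmlInit()
      handle = pynvml.nvmlDeviceGetHandleByIndex(0)

      # Current vs. maximum clocks, reported as a rough percentage.
      for name, clock in [("Graphics", pynvml.NVML_CLOCK_GRAPHICS),
                          ("SM", pynvml.NVML_CLOCK_SM),
                          ("Memory", pynvml.NVML_CLOCK_MEM)]:
          current = pynvml.nvmlDeviceGetClockInfo(handle, clock)
          maximum = pynvml.nvmlDeviceGetMaxClockInfo(handle, clock)
          print(f"{name}: {current}/{maximum} MHz ({100 * current // maximum}%)")

      pynvml.nvmlShutdown()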
    
    new feature 
    opened by JohnCoates 0
Releases(v1.0)
  • v1.0(Sep 4, 2022)

    Add Windows support, retire Python 2.x, use official NVML bindings, etc. GitHub Milestone: https://github.com/wookayin/gpustat/issues?q=milestone%3A1.0

    Breaking Changes

    • Retire Python 2 (#66). Add CI tests for python 3.8 and higher.
    • Use official nvidia python bindings (#107).
      • Due to API incompatibility issues, the nvidia driver version should be R450 or higher in order for process information to be correctly displayed.
      • NOTE: nvidia-ml-py<=11.495.46 is required (nvidia-ml-py3 shall not be used).
    • Use of '--gpuname-width' will truncate longer GPU names (#47).

    New Feature and Enhancements

    • Add windows support again, by switching to blessed (#78, @skjerns)
    • Add '--show-codec (-e)' option: display encoder/decoder utilization (#79, @ChaoticMind)
    • Add full process information (-f) (#65, @bethune-bryant)
    • Add '--show-all (-a)' flag (#64, @Michaelvll)
    • '--debug' will show more detailed stacktrace/exception information
    • Use unicode symbols (#58, @arinbjornk)
    • Include nvidia driver version into JSON output (#10)

    Bug Fixes

    • Fix color/highlight issues on power usage
    • Make color/highlight work correctly when TERM is not set
    • Do not list the same GPU process more than once (#84)
    • Fix a bug where querying zombie process can throw errors (#95)
    • Fix a bug where psutil may fail to get process info on Windows (#121, #123, @mattip)

    Etc.

    • Internal improvements on code style and tests
    • CI: Use Github Actions
  • v1.0.0rc1(Jul 5, 2022)

    • [Breaking changes] Retire Python 2 (#66). Add CI tests for python 3.8.
    • [Breaking changes] Backward-incompatible changes on JSON fields (#10)
    • [Breaking changes] Use official nvidia python bindings (#107).
      • Due to API incompatibility issues, the nvidia driver version should be R450 or higher in order for process information to be correctly displayed.
    • [New Feature] Add '--show-codec (-e)' option: display encoder/decoder utilization (#79)
    • [Enhancement] Re-add windows support, by switching to blessed (#78, @skjerns)
    • [Enhancement] Use unicode symbols (#58, @arinbjornk)
    • [Enhancement] Add full process information (-f) (#65, @bethune-bryant)
    • [Enhancement] Add '--show-all (-a)' flag (#64)
    • [Enhancement] '--debug' will show more stacktrace/exception information
    • [Bugfix] Fix color/highlight issues on power usage
    • [Bugfix] Make color/highlight work correctly when TERM is not set
    • [Bugfix] Do not list the same GPU process more than once (#84)
    • [Bugfix] Fix a bug where querying zombie process can throw errors (#95)
    • [Bugfix] Fix a bug where psutil may fail to get process info on Windows (#121, #123, @mattip)
    • [Etc] Internal improvements on code style and tests
    • [Etc] CI: Use Github Actions
  • v0.6.0(Jul 22, 2019)

    v0.6.0 (2019/07/22)

    • [Feature] Add a flag for fan speed (-F, --show-fan) (#62, #63), contributed by @bethune-bryant
    • [Enhancement] Align query datetime in the header with respect to --gpuname-width parameter.
    • [Enhancement] Alias gpustat --watch to -i/--interval option.
    • [Enhancement] Display NVIDIA driver version in the header (#53)
    • [Bugfix] Minor fixes on debug mode
    • [Etc] Travis: python 3.7

    Note: This will be the last version that supports python 2.7 and <3.4.

  • v0.5.0(Sep 10, 2018)

    Changelog

    • [Feature] Built-in watch mode (gpustat -i) (#7, #41).
      • Contributed by @drons and @Stonesjtu, Thanks!
    • [Bug] Fix the problem where an extra character was showing (#32)
    • [Bug] Fix a bug in json mode where process information is unavailable (#45)
    • [Etc.] Refactoring of internal code structure: gpustat is now a package (#33)
    • [Etc.] More unit tests and better use of code styles (flake8)

    See also: Milestone 0.5

  • v0.4.1(Dec 2, 2017)

  • v0.4.0(Nov 2, 2017)

    Changelog

    gpustat is no longer a zero-dependency script and now depends on some packages. Please install it using pip.

    • Use nvidia-ml-py bindings and psutil to replace command-line call of nvidia-smi and ps (#20, Thanks to @Stonesjtu).
    • The behavior when piping has changed: output is not colored by default, so use --color explicitly (e.g. watch --color -n1.0 gpustat --color)
    • Fix a bug in handling stale-state or zombie process (#16)
    • Include non-CUDA graphics applications in the process list (#18, Thanks to @kapsh)
    • Support power usage (#13, #28, Thanks to @cjw85)
    • Support --debug option
  • v0.3.2(Sep 17, 2017)

  • v0.3.1(Apr 10, 2017)

    Minor update. CHANGELOG:

    • Experimental JSON output feature (#10)
    • Add some properties and dict-style access for GPUStat class
    • Fix Python3 compatibility
  • v0.2(Nov 19, 2016)

Owner
Jongwook Choi
Researcher & Developer & Productivity Geek. PhD Student at @umich.