A Fast, Extensible Progress Bar for Python and CLI

Overview


tqdm derives from the Arabic word taqaddum (تقدّم) which can mean "progress," and is an abbreviation for "I love you so much" in Spanish (te quiero demasiado).

Instantly make your loops show a smart progress meter - just wrap any iterable with tqdm(iterable), and you're done!

from tqdm import tqdm
for i in tqdm(range(10000)):
    ...

76%|████████████████████████        | 7568/10000 [00:33<00:10, 229.00it/s]

trange(N) can also be used as a convenient shortcut for tqdm(range(N)).


It can also be executed as a module with pipes:

$ seq 9999999 | tqdm --bytes | wc -l
75.2MB [00:00, 217MB/s]
9999999

$ tar -zcf - docs/ | tqdm --bytes --total `du -sb docs/ | cut -f1` \
    > backup.tgz
 32%|██████████▍                      | 8.89G/27.9G [00:42<01:31, 223MB/s]

Overhead is low -- about 60ns per iteration (80ns with tqdm.gui), and is unit tested against performance regression. By comparison, the well-established ProgressBar has an 800ns/iter overhead.

In addition to its low overhead, tqdm uses smart algorithms to predict the remaining time and to skip unnecessary iteration displays, which allows for a negligible overhead in most cases.

tqdm works on any platform (Linux, Windows, Mac, FreeBSD, NetBSD, Solaris/SunOS), in any console or in a GUI, and is also friendly with IPython/Jupyter notebooks.

tqdm does not require any dependencies (not even curses!), just Python and an environment supporting carriage return \r and line feed \n control characters.


Installation

Latest PyPI stable release


pip install tqdm

Latest development release on GitHub


Pull and install pre-release devel branch:

pip install "git+https://github.com/tqdm/tqdm@devel#egg=tqdm"

Latest Conda release


conda install -c conda-forge tqdm

Latest Snapcraft release


There are 3 channels to choose from:

snap install tqdm  # implies --stable, i.e. latest tagged release
snap install tqdm --candidate  # master branch
snap install tqdm --edge  # devel branch

Note that snap binaries are purely for CLI use (not import-able), and automatically set up bash tab-completion.

Latest Docker release


docker pull tqdm/tqdm
docker run -i --rm tqdm/tqdm --help

Other

There are other (unofficial) places where tqdm may be downloaded, particularly for CLI use:


Changelog

The list of all changes is available on GitHub's Releases, on the wiki, or on the website.

Usage

tqdm is very versatile and can be used in a number of ways. The three main ones are given below.

Iterable-based

Wrap tqdm() around any iterable:

from tqdm import tqdm
from time import sleep

text = ""
for char in tqdm(["a", "b", "c", "d"]):
    sleep(0.25)
    text = text + char

trange(i) is a special optimised instance of tqdm(range(i)):

from tqdm import trange

for i in trange(100):
    sleep(0.01)

Instantiation outside of the loop allows for manual control over tqdm():

pbar = tqdm(["a", "b", "c", "d"])
for char in pbar:
    sleep(0.25)
    pbar.set_description("Processing %s" % char)

Manual

Manual control of tqdm() updates using a with statement:

with tqdm(total=100) as pbar:
    for i in range(10):
        sleep(0.1)
        pbar.update(10)

If the optional variable total (or an iterable with len()) is provided, predictive stats are displayed.
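For instance, a generator has no len(), so supplying total explicitly restores the predictive stats (a minimal sketch):

```python
from tqdm import tqdm

def stream():
    """A generator: it has no len(), so tqdm cannot infer the total."""
    for i in range(100):
        yield i

# without total=, only basic stats are shown; with it, ETA and a bar appear
results = [x for x in tqdm(stream(), total=100)]
```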

with is also optional (you can just assign tqdm() to a variable, but in this case don't forget to del or close() at the end):

pbar = tqdm(total=100)
for i in range(10):
    sleep(0.1)
    pbar.update(10)
pbar.close()

Module

Perhaps the most wonderful use of tqdm is in a script or on the command line. Simply inserting tqdm (or python -m tqdm) between pipes will pass through all stdin to stdout while printing progress to stderr.

The example below demonstrates counting the number of lines in all Python files in the current directory, with timing information included.

$ time find . -name '*.py' -type f -exec cat \{} \; | wc -l
857365

real    0m3.458s
user    0m0.274s
sys     0m3.325s

$ time find . -name '*.py' -type f -exec cat \{} \; | tqdm | wc -l
857366it [00:03, 246471.31it/s]
857365

real    0m3.585s
user    0m0.862s
sys     0m3.358s

Note that the usual arguments for tqdm can also be specified.

$ find . -name '*.py' -type f -exec cat \{} \; |
    tqdm --unit loc --unit_scale --total 857366 >> /dev/null
100%|█████████████████████████████████| 857K/857K [00:04<00:00, 246Kloc/s]

Backing up a large directory?

$ tar -zcf - docs/ | tqdm --bytes --total `du -sb docs/ | cut -f1` \
  > backup.tgz
 44%|██████████████▊                   | 153M/352M [00:14<00:18, 11.0MB/s]

This can be beautified further:

$ BYTES="$(du -sb docs/ | cut -f1)"
$ tar -cf - docs/ \
  | tqdm --bytes --total "$BYTES" --desc Processing | gzip \
  | tqdm --bytes --total "$BYTES" --desc Compressed --position 1 \
  > ~/backup.tgz
Processing: 100%|██████████████████████| 352M/352M [00:14<00:00, 30.2MB/s]
Compressed:  42%|█████████▎            | 148M/352M [00:14<00:19, 10.9MB/s]

Or done on a file level using 7-zip:

$ 7z a -bd -r backup.7z docs/ | grep Compressing \
  | tqdm --total $(find docs/ -type f | wc -l) --unit files \
  | grep -v Compressing
100%|██████████████████████████▉| 15327/15327 [01:00<00:00, 712.96files/s]

Pre-existing CLI programs already outputting basic progress information will benefit from tqdm's --update and --update_to flags:

$ seq 3 0.1 5 | tqdm --total 5 --update_to --null
100%|████████████████████████████████████| 5.0/5 [00:00<00:00, 9673.21it/s]
$ seq 10 | tqdm --update --null  # 1 + 2 + ... + 10 = 55 iterations
55it [00:00, 90006.52it/s]

FAQ and Known Issues


The most common issues relate to excessive output on multiple lines, instead of a neat one-line progress bar.

  • Consoles in general: require support for carriage return (CR, \r).
  • Nested progress bars:
    • Consoles in general: require support for moving cursors up to the previous line. For example, IDLE, ConEmu and PyCharm lack full support.
    • Windows: additionally may require the Python module colorama to ensure nested bars stay within their respective lines.
  • Unicode:
    • Environments which report that they support unicode will have solid smooth progressbars. The fallback is an ascii-only bar.
    • Windows consoles often only partially support unicode and thus often require explicit ascii=True. This is due to either normal-width unicode characters being incorrectly displayed as "wide", or some unicode characters not rendering.
  • Wrapping generators:
    • Generator wrapper functions tend to hide the length of iterables. tqdm does not.
    • Replace tqdm(enumerate(...)) with enumerate(tqdm(...)) or tqdm(enumerate(x), total=len(x), ...). The same applies to numpy.ndenumerate.
    • Replace tqdm(zip(a, b)) with zip(tqdm(a), b) or even zip(tqdm(a), tqdm(b)).
    • The same applies to itertools.
    • Some useful convenience functions can be found under tqdm.contrib.
  • Hanging pipes in python2: when using tqdm on the CLI, you may need to use Python 3.5+ for correct buffering.
  • No intermediate output in docker-compose: use docker-compose run instead of docker-compose up and tty: true.
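The generator-wrapping advice above can be sketched as follows (a minimal example):

```python
from tqdm import tqdm

data = ["a", "b", "c", "d"]

# tqdm(enumerate(data)) would hide len(data); keep tqdm innermost instead
indexed = [(i, x) for i, x in enumerate(tqdm(data))]

# or pass total= explicitly if tqdm must wrap the enumerate
indexed2 = [(i, x) for i, x in tqdm(enumerate(data), total=len(data))]

# zip: wrap the individual iterables rather than the zip object
pairs = [(a, b) for a, b in zip(tqdm(data), data)]
```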

If you come across any other difficulties, browse and file issues on GitHub.

Documentation


class tqdm():
  """
  Decorate an iterable object, returning an iterator which acts exactly
  like the original iterable, but prints a dynamically updating
  progressbar every time a value is requested.
  """

  def __init__(self, iterable=None, desc=None, total=None, leave=True,
               file=None, ncols=None, mininterval=0.1,
               maxinterval=10.0, miniters=None, ascii=None, disable=False,
               unit='it', unit_scale=False, dynamic_ncols=False,
               smoothing=0.3, bar_format=None, initial=0, position=None,
               postfix=None, unit_divisor=1000, write_bytes=None,
               lock_args=None, nrows=None, colour=None):

Parameters

  • iterable : iterable, optional

    Iterable to decorate with a progressbar. Leave blank to manually manage the updates.

  • desc : str, optional

    Prefix for the progressbar.

  • total : int or float, optional

    The number of expected iterations. If unspecified, len(iterable) is used if possible. If float("inf") or as a last resort, only basic progress statistics are displayed (no ETA, no progressbar). If gui is True and this parameter needs subsequent updating, specify an initial arbitrary large positive number, e.g. 9e9.

  • leave : bool, optional

    If [default: True], keeps all traces of the progressbar upon termination of iteration. If None, will leave only if position is 0.

  • file : io.TextIOWrapper or io.StringIO, optional

    Specifies where to output the progress messages (default: sys.stderr). Uses file.write(str) and file.flush() methods. For encoding, see write_bytes.

  • ncols : int, optional

    The width of the entire output message. If specified, dynamically resizes the progressbar to stay within this bound. If unspecified, attempts to use environment width. The fallback is a meter width of 10 and no limit for the counter and statistics. If 0, will not print any meter (only stats).

  • mininterval : float, optional

    Minimum progress display update interval [default: 0.1] seconds.

  • maxinterval : float, optional

    Maximum progress display update interval [default: 10] seconds. Automatically adjusts miniters to correspond to mininterval after long display update lag. Only works if dynamic_miniters or monitor thread is enabled.

  • miniters : int or float, optional

    Minimum progress display update interval, in iterations. If 0 and dynamic_miniters, will automatically adjust to equal mininterval (more CPU efficient, good for tight loops). If > 0, will skip display of specified number of iterations. Tweak this and mininterval to get very efficient loops. If your progress is erratic with both fast and slow iterations (network, skipping items, etc) you should set miniters=1.

  • ascii : bool or str, optional

    If unspecified or False, use unicode (smooth blocks) to fill the meter. The fallback is to use ASCII characters " 123456789#".

  • disable : bool, optional

    Whether to disable the entire progressbar wrapper [default: False]. If set to None, disable on non-TTY.

  • unit : str, optional

    String that will be used to define the unit of each iteration [default: it].

  • unit_scale : bool or int or float, optional

    If 1 or True, the number of iterations will be reduced/scaled automatically and a metric prefix following the International System of Units standard will be added (kilo, mega, etc.) [default: False]. If any other non-zero number, will scale total and n.

  • dynamic_ncols : bool, optional

    If set, constantly alters ncols and nrows to the environment (allowing for window resizes) [default: False].

  • smoothing : float, optional

    Exponential moving average smoothing factor for speed estimates (ignored in GUI mode). Ranges from 0 (average speed) to 1 (current/instantaneous speed) [default: 0.3].

  • bar_format : str, optional

    Specify a custom bar string formatting. May impact performance. [default: '{l_bar}{bar}{r_bar}'], where l_bar='{desc}: {percentage:3.0f}%|' and r_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'. Possible vars: l_bar, bar, r_bar, n, n_fmt, total, total_fmt, percentage, elapsed, elapsed_s, ncols, nrows, desc, unit, rate, rate_fmt, rate_noinv, rate_noinv_fmt, rate_inv, rate_inv_fmt, postfix, unit_divisor, remaining, remaining_s, eta. Note that a trailing ": " is automatically removed after {desc} if the latter is empty.

  • initial : int or float, optional

    The initial counter value. Useful when restarting a progress bar [default: 0]. If using float, consider specifying {n:.3f} or similar in bar_format, or specifying unit_scale.

  • position : int, optional

    Specify the line offset to print this bar (starting from 0). Automatic if unspecified. Useful to manage multiple bars at once (e.g. from threads).

  • postfix : dict or *, optional

    Specify additional stats to display at the end of the bar. Calls set_postfix(**postfix) if possible (dict).

  • unit_divisor : float, optional

    [default: 1000], ignored unless unit_scale is True.

  • write_bytes : bool, optional

    If (default: None) and file is unspecified, bytes will be written in Python 2. If True will also write bytes. In all other cases will default to unicode.

  • lock_args : tuple, optional

    Passed to refresh for intermediate output (initialisation, iterating, and updating).

  • nrows : int, optional

    The screen height. If specified, hides nested bars outside this bound. If unspecified, attempts to use environment height. The fallback is 20.

  • colour : str, optional

    Bar colour (e.g. 'green', '#00ff00').
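An illustrative sketch combining a few of these parameters (desc, unit, unit_scale, and a fixed-width {bar:20} in bar_format):

```python
from tqdm import trange

# desc prefixes the bar, unit/unit_scale format the rate,
# and bar_format fixes the meter itself at 20 characters
for _ in trange(100, desc="download", unit="B", unit_scale=True,
                bar_format="{l_bar}{bar:20}{r_bar}"):
    pass
```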

Extra CLI Options

  • delim : chr, optional
    Delimiting character [default: '\n']. Use '\0' for null. N.B.: on Windows systems, Python converts '\n' to '\r\n'.
  • buf_size : int, optional
    String buffer size in bytes [default: 256] used when delim is specified.
  • bytes : bool, optional
    If true, will count bytes, ignore delim, and default unit_scale to True, unit_divisor to 1024, and unit to 'B'.
  • tee : bool, optional
    If true, passes stdin to both stderr and stdout.
  • update : bool, optional
    If true, will treat input as newly elapsed iterations, i.e. numbers to pass to update(). Note that this is slow (~2e5 it/s) since every input must be decoded as a number.
  • update_to : bool, optional
    If true, will treat input as total elapsed iterations, i.e. numbers to assign to self.n. Note that this is slow (~2e5 it/s) since every input must be decoded as a number.
  • null : bool, optional
    If true, will discard input (no stdout).
  • manpath : str, optional
    Directory in which to install tqdm man pages.
  • comppath : str, optional
    Directory in which to place tqdm completion.
  • log : str, optional
    CRITICAL|FATAL|ERROR|WARN(ING)|[default: 'INFO']|DEBUG|NOTSET.
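For example, the bytes option in a pipeline (a sketch; /dev/zero stands in for any byte stream):

```shell
# --bytes implies counting bytes, with unit 'B', unit_scale on and
# unit_divisor 1024, so raw counts are shown as kB/MB/GB
head -c 1048576 /dev/zero | python -m tqdm --bytes > /dev/null
```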

Returns

  • out : decorated iterator.
class tqdm():
  def update(self, n=1):
      """
      Manually update the progress bar, useful for streams
      such as reading files.
      E.g.:
      >>> t = tqdm(total=filesize) # Initialise
      >>> for current_buffer in stream:
      ...    ...
      ...    t.update(len(current_buffer))
      >>> t.close()
      The last line is highly recommended, but possibly not necessary if
      ``t.update()`` will be called in such a way that ``filesize`` will be
      exactly reached and printed.

      Parameters
      ----------
      n  : int or float, optional
          Increment to add to the internal counter of iterations
          [default: 1]. If using float, consider specifying ``{n:.3f}``
          or similar in ``bar_format``, or specifying ``unit_scale``.

      Returns
      -------
      out  : bool or None
          True if a ``display()`` was triggered.
      """

  def close(self):
      """Cleanup and (if leave=False) close the progressbar."""

  def clear(self, nomove=False):
      """Clear current bar display."""

  def refresh(self, nolock=False, lock_args=None):
      """
      Force refresh the display of this bar.

      Parameters
      ----------
      nolock  : bool, optional
          If ``True``, does not lock.
          If [default: ``False``]: calls ``acquire()`` on internal lock.
      lock_args  : tuple, optional
          Passed to internal lock's ``acquire()``.
          If specified, will only ``display()`` if ``acquire()`` returns ``True``.
      """

  def unpause(self):
      """Restart tqdm timer from last print time."""

  def reset(self, total=None):
      """
      Resets to 0 iterations for repeated use.

      Consider combining with ``leave=True``.

      Parameters
      ----------
      total  : int or float, optional. Total to use for the new bar.
      """

  def set_description(self, desc=None, refresh=True):
      """
      Set/modify description of the progress bar.

      Parameters
      ----------
      desc  : str, optional
      refresh  : bool, optional
          Forces refresh [default: True].
      """

  def set_postfix(self, ordered_dict=None, refresh=True, **tqdm_kwargs):
      """
      Set/modify postfix (additional stats)
      with automatic formatting based on datatype.

      Parameters
      ----------
      ordered_dict  : dict or OrderedDict, optional
      refresh  : bool, optional
          Forces refresh [default: True].
      kwargs  : dict, optional
      """

  @classmethod
  def write(cls, s, file=sys.stdout, end="\n"):
      """Print a message via tqdm (without overlap with bars)."""

  @property
  def format_dict(self):
      """Public API for read-only member access."""

  def display(self, msg=None, pos=None):
      """
      Use ``self.sp`` to display ``msg`` in the specified ``pos``.

      Consider overloading this function when inheriting to use e.g.:
      ``self.some_frontend(**self.format_dict)`` instead of ``self.sp``.

      Parameters
      ----------
      msg  : str, optional. What to display (default: ``repr(self)``).
      pos  : int, optional. Position to ``moveto``
        (default: ``abs(self.pos)``).
      """

  @classmethod
  @contextmanager
  def wrapattr(cls, stream, method, total=None, bytes=True, **tqdm_kwargs):
      """
      stream  : file-like object.
      method  : str, "read" or "write". The result of ``read()`` and
          the first argument of ``write()`` should have a ``len()``.

      >>> with tqdm.wrapattr(file_obj, "read", total=file_obj.size) as fobj:
      ...     while True:
      ...         chunk = fobj.read(chunk_size)
      ...         if not chunk:
      ...             break
      """

  @classmethod
  def pandas(cls, *targs, **tqdm_kwargs):
      """Registers the current `tqdm` class with `pandas`."""

def trange(*args, **tqdm_kwargs):
    """
    A shortcut for `tqdm(xrange(*args), **tqdm_kwargs)`.
    On Python3+, `range` is used instead of `xrange`.
    """

Convenience Functions

def tqdm.contrib.tenumerate(iterable, start=0, total=None,
                            tqdm_class=tqdm.auto.tqdm, **tqdm_kwargs):
    """Equivalent of `numpy.ndenumerate` or builtin `enumerate`."""

def tqdm.contrib.tzip(iter1, *iter2plus, **tqdm_kwargs):
    """Equivalent of builtin `zip`."""

def tqdm.contrib.tmap(function, *sequences, **tqdm_kwargs):
    """Equivalent of builtin `map`."""
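A minimal sketch of these three helpers:

```python
from tqdm.contrib import tenumerate, tmap, tzip

# tenumerate: enumerate with a progress bar (also handles numpy arrays)
letters = [(i, ch) for i, ch in tenumerate(["a", "b", "c"])]

# tzip and tmap: zip and map with progress
sums = [x + y for x, y in tzip([1, 2, 3], [10, 20, 30])]
squares = list(tmap(lambda x: x * x, range(5)))
```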

Submodules

class tqdm.notebook.tqdm(tqdm.tqdm):
    """IPython/Jupyter Notebook widget."""

class tqdm.auto.tqdm(tqdm.tqdm):
    """Automatically chooses between `tqdm.notebook` and `tqdm.tqdm`."""

class tqdm.asyncio.tqdm(tqdm.tqdm):
  """Asynchronous version."""
  @classmethod
  def as_completed(cls, fs, *, loop=None, timeout=None, total=None,
                   **tqdm_kwargs):
      """Wrapper for `asyncio.as_completed`."""

class tqdm.gui.tqdm(tqdm.tqdm):
    """Matplotlib GUI version."""

class tqdm.tk.tqdm(tqdm.tqdm):
    """Tkinter GUI version."""

class tqdm.rich.tqdm(tqdm.tqdm):
    """`rich.progress` version."""

class tqdm.keras.TqdmCallback(keras.callbacks.Callback):
    """`keras` callback for epoch and batch progress."""

contrib

The tqdm.contrib package also contains experimental modules:

  • tqdm.contrib.itertools: Thin wrappers around itertools
  • tqdm.contrib.concurrent: Thin wrappers around concurrent.futures
  • tqdm.contrib.discord: Posts to Discord bots
  • tqdm.contrib.telegram: Posts to Telegram bots
  • tqdm.contrib.bells: Automagically enables all optional features
    • auto, pandas, discord, telegram

Examples and Advanced Usage

Description and additional stats

Custom information can be displayed and updated dynamically on tqdm bars with the desc and postfix arguments:

from tqdm import tqdm, trange
from random import random, randint
from time import sleep

with trange(10) as t:
    for i in t:
        # Description will be displayed on the left
        t.set_description('GEN %i' % i)
        # Postfix will be displayed on the right,
        # formatted automatically based on argument's datatype
        t.set_postfix(loss=random(), gen=randint(1,999), str='h',
                      lst=[1, 2])
        sleep(0.1)

with tqdm(total=10, bar_format="{postfix[0]} {postfix[1][value]:>8.2g}",
          postfix=["Batch", dict(value=0)]) as t:
    for i in range(10):
        sleep(0.1)
        t.postfix[1]["value"] = i / 2
        t.update()

Points to remember when using {postfix[...]} in the bar_format string:

  • postfix also needs to be passed as an initial argument in a compatible format, and
  • postfix will be auto-converted to a string if it is a dict-like object. To prevent this behaviour, insert an extra item into the dictionary where the key is not a string.

Additional bar_format parameters may also be defined by overriding format_dict, and the bar itself may be modified using ascii:

from tqdm import tqdm
class TqdmExtraFormat(tqdm):
    """Provides a `total_time` format parameter"""
    @property
    def format_dict(self):
        d = super(TqdmExtraFormat, self).format_dict
        total_time = d["elapsed"] * (d["total"] or 0) / max(d["n"], 1)
        d.update(total_time=self.format_interval(total_time) + " in total")
        return d

for i in TqdmExtraFormat(
      range(9), ascii=" .oO0",
      bar_format="{total_time}: {percentage:.0f}%|{bar}{r_bar}"):
    if i == 4:
        break
00:00 in total: 44%|0000.     | 4/9 [00:00<00:00, 962.93it/s]

Note that {bar} also supports a format specifier [width][type].

  • width
    • unspecified (default): automatic to fill ncols
    • int >= 0: fixed width overriding ncols logic
    • int < 0: subtract from the automatic default
  • type
    • a: ascii (ascii=True override)
    • u: unicode (ascii=False override)
    • b: blank (ascii=" " override)

This means a fixed bar with right-justified text may be created by using: bar_format="{l_bar}{bar:10}|{bar:-10b}right-justified"
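As a runnable sketch of that bar_format:

```python
from tqdm import trange

# {bar:10} fixes the meter at 10 characters; {bar:-10b} pads the remainder
# with blanks so the trailing text stays right-justified
for _ in trange(100,
                bar_format="{l_bar}{bar:10}|{bar:-10b}right-justified"):
    pass
```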

Nested progress bars

tqdm supports nested progress bars. Here's an example:

from tqdm.auto import trange
from time import sleep

for i in trange(4, desc='1st loop'):
    for j in trange(5, desc='2nd loop'):
        for k in trange(50, desc='3rd loop', leave=False):
            sleep(0.01)

On Windows colorama will be used if available to keep nested bars on their respective lines.

For manual control over positioning (e.g. for multi-processing use), you may specify position=n where n=0 for the outermost bar, n=1 for the next, and so on. However, it's best to check if tqdm can work without manual position first.

from time import sleep
from tqdm import trange, tqdm
from multiprocessing import Pool, RLock, freeze_support

L = list(range(9))

def progresser(n):
    interval = 0.001 / (n + 2)
    total = 5000
    text = "#{}, est. {:<04.2}s".format(n, interval * total)
    for _ in trange(total, desc=text, position=n):
        sleep(interval)

if __name__ == '__main__':
    freeze_support()  # for Windows support
    tqdm.set_lock(RLock())  # for managing output contention
    p = Pool(initializer=tqdm.set_lock, initargs=(tqdm.get_lock(),))
    p.map(progresser, L)

Note that in Python 3, tqdm.write is thread-safe:

from time import sleep
from tqdm import tqdm, trange
from concurrent.futures import ThreadPoolExecutor

L = list(range(9))

def progresser(n):
    interval = 0.001 / (n + 2)
    total = 5000
    text = "#{}, est. {:<04.2}s".format(n, interval * total)
    for _ in trange(total, desc=text):
        sleep(interval)
    if n == 6:
        tqdm.write("n == 6 completed.")
        tqdm.write("`tqdm.write()` is thread-safe in py3!")

if __name__ == '__main__':
    with ThreadPoolExecutor() as p:
        p.map(progresser, L)

Hooks and callbacks

tqdm can easily support callbacks/hooks and manual updates. Here's an example with urllib:

``urllib.request.urlretrieve`` documentation

[...]
If present, the hook function will be called once
on establishment of the network connection and once after each block read
thereafter. The hook will be passed three arguments; a count of blocks
transferred so far, a block size in bytes, and the total size of the file.
[...]
import os
import urllib.request
from tqdm import tqdm

class TqdmUpTo(tqdm):
    """Provides `update_to(n)` which uses `tqdm.update(delta_n)`."""
    def update_to(self, b=1, bsize=1, tsize=None):
        """
        b  : int, optional
            Number of blocks transferred so far [default: 1].
        bsize  : int, optional
            Size of each block (in tqdm units) [default: 1].
        tsize  : int, optional
            Total size (in tqdm units). If [default: None] remains unchanged.
        """
        if tsize is not None:
            self.total = tsize
        return self.update(b * bsize - self.n)  # also sets self.n = b * bsize

eg_link = "https://caspersci.uk.to/matryoshka.zip"
with TqdmUpTo(unit='B', unit_scale=True, unit_divisor=1024, miniters=1,
              desc=eg_link.split('/')[-1]) as t:  # all optional kwargs
    urllib.request.urlretrieve(eg_link, filename=os.devnull,
                               reporthook=t.update_to, data=None)
    t.total = t.n

Inspired by twine#242. Functional alternative in examples/tqdm_wget.py.

It is recommended to use miniters=1 whenever there are potentially large differences in iteration speed (e.g. downloading a file over a patchy connection).

Wrapping read/write methods

To measure throughput through a file-like object's read or write methods, use CallbackIOWrapper:

from tqdm.auto import tqdm
from tqdm.utils import CallbackIOWrapper

with tqdm(total=file_obj.size,
          unit='B', unit_scale=True, unit_divisor=1024) as t:
    fobj = CallbackIOWrapper(t.update, file_obj, "read")
    while True:
        chunk = fobj.read(chunk_size)
        if not chunk:
            break
    t.reset()
    # ... continue to use `t` for something else

Alternatively, use the even simpler wrapattr convenience function, which would condense both the urllib and CallbackIOWrapper examples down to:

import os
import urllib.request
from tqdm import tqdm

eg_link = "https://caspersci.uk.to/matryoshka.zip"
with tqdm.wrapattr(open(os.devnull, "wb"), "write",
                   miniters=1, desc=eg_link.split('/')[-1]) as fout:
    for chunk in urllib.request.urlopen(eg_link):
        fout.write(chunk)

The requests equivalent is nearly identical, albeit with a total:

import requests, os
from tqdm import tqdm

eg_link = "https://caspersci.uk.to/matryoshka.zip"
response = requests.get(eg_link, stream=True)
with tqdm.wrapattr(open(os.devnull, "wb"), "write",
                   miniters=1, desc=eg_link.split('/')[-1],
                   total=int(response.headers.get('content-length', 0))) as fout:
    for chunk in response.iter_content(chunk_size=4096):
        fout.write(chunk)

Custom callback

tqdm is known for intelligently skipping unnecessary displays. To make a custom callback take advantage of this, simply use the return value of update(). This is set to True if a display() was triggered.

from tqdm.auto import tqdm as std_tqdm

def external_callback(*args, **kwargs):
    ...

class TqdmExt(std_tqdm):
    def update(self, n=1):
        displayed = super(TqdmExt, self).update(n)
        if displayed:
            external_callback(**self.format_dict)
        return displayed

asyncio

Note that break isn't currently caught by asynchronous iterators. This means that tqdm cannot clean up after itself in this case:

from tqdm.asyncio import tqdm

async for i in tqdm(range(9)):
    if i == 2:
        break

Instead, either call pbar.close() manually or use the context manager syntax:

from tqdm.asyncio import tqdm

with tqdm(range(9)) as pbar:
    async for i in pbar:
        if i == 2:
            break
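The as_completed wrapper from tqdm.asyncio (listed under Submodules above) can also be used here; a minimal sketch:

```python
import asyncio
from tqdm.asyncio import tqdm

async def square(i):
    await asyncio.sleep(0.001 * i)
    return i * i

async def main():
    tasks = [square(i) for i in range(10)]
    # as_completed yields awaitables as they finish, with a progress bar
    return [await f for f in tqdm.as_completed(tasks)]

results = sorted(asyncio.run(main()))
```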

Pandas Integration

Due to popular demand we've added support for pandas -- here's an example for DataFrame.progress_apply and DataFrameGroupBy.progress_apply:

import pandas as pd
import numpy as np
from tqdm import tqdm

df = pd.DataFrame(np.random.randint(0, 100, (100000, 6)))

# Register `progress_apply` and `progress_map` with `pandas`
# (can use `tqdm.gui.tqdm`, `tqdm.notebook.tqdm`, optional kwargs, etc.)
tqdm.pandas(desc="my bar!")

# Now you can use `progress_apply` instead of `apply`
# and `progress_map` instead of `map`
df.progress_apply(lambda x: x**2)
# can also groupby:
# df.groupby(0).progress_apply(lambda x: x**2)

In case you're interested in how this works (and how to modify it for your own callbacks), see the examples folder or import the module and run help().

Keras Integration

A keras callback is also available:

from tqdm.keras import TqdmCallback

...

model.fit(..., verbose=0, callbacks=[TqdmCallback()])

IPython/Jupyter Integration

IPython/Jupyter is supported via the tqdm.notebook submodule:

from tqdm.notebook import trange, tqdm
from time import sleep

for i in trange(3, desc='1st loop'):
    for j in tqdm(range(100), desc='2nd loop'):
        sleep(0.01)

In addition to tqdm features, the submodule provides a native Jupyter widget (compatible with IPython v1-v4 and Jupyter), fully working nested bars and colour hints (blue: normal, green: completed, red: error/interrupt, light blue: no ETA), as demonstrated below.

Screenshot-Jupyter1 Screenshot-Jupyter2 Screenshot-Jupyter3

The notebook version supports percentage or pixels for overall width (e.g.: ncols='100%' or ncols='480px').

It is also possible to let tqdm automatically choose between console or notebook versions by using the autonotebook submodule:

from tqdm.autonotebook import tqdm
tqdm.pandas()

Note that this will issue a TqdmExperimentalWarning if run in a notebook since it is not meant to be possible to distinguish between jupyter notebook and jupyter console. Use auto instead of autonotebook to suppress this warning.

Note that notebooks will display the bar in the cell where it was created. This may be a different cell from the one where it is used. If this is not desired, either

  • delay the creation of the bar to the cell where it must be displayed, or
  • create the bar with display=False, and in a later cell call display(bar.container):

from tqdm.notebook import tqdm
pbar = tqdm(..., display=False)
# different cell
display(pbar.container)

The keras callback has a display() method which can be used likewise:

from tqdm.keras import TqdmCallback
cbk = TqdmCallback(display=False)
# different cell
cbk.display()
model.fit(..., verbose=0, callbacks=[cbk])

Another possibility is to have a single bar (near the top of the notebook) which is constantly re-used (using reset() rather than close()). For this reason, the notebook version (unlike the CLI version) does not automatically call close() upon Exception.

from tqdm.notebook import tqdm
pbar = tqdm()
# different cell
iterable = range(100)
pbar.reset(total=len(iterable))  # initialise with new `total`
for i in iterable:
    pbar.update()
pbar.refresh()  # force print final status but don't `close()`

Custom Integration

To change the default arguments (such as making dynamic_ncols=True), simply use built-in Python magic:

from functools import partial
from tqdm import tqdm as std_tqdm
tqdm = partial(std_tqdm, dynamic_ncols=True)

For further customisation, tqdm may be inherited from to create custom callbacks (as with the TqdmUpTo example above) or for custom frontends (e.g. GUIs such as notebook or plotting packages). In the latter case:

  1. def __init__() to call super().__init__(..., gui=True) to disable terminal status_printer creation.
  2. Redefine: close(), clear(), display().

Consider overloading display() to use e.g. self.frontend(**self.format_dict) instead of self.sp(repr(self)).
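A minimal sketch of such an overload (the CallbackTqdm class and its callback parameter are hypothetical, for illustration only):

```python
from tqdm import tqdm as std_tqdm

class CallbackTqdm(std_tqdm):
    """Hypothetical sketch: mirror every display() to a callback.
    (`callback` is an illustrative parameter, not part of tqdm's API.)"""
    def __init__(self, *args, callback=None, **kwargs):
        self._callback = callback or (lambda **kw: None)
        super().__init__(*args, **kwargs)

    def display(self, msg=None, pos=None):
        # forward the current state, then fall back to the terminal frontend
        self._callback(n=self.n, total=self.total)
        return super().display(msg=msg, pos=pos)
```

Here display() still delegates to the terminal; a full GUI subclass would instead pass gui=True to __init__ and redefine close() and clear() as well, as described above.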

Some submodule examples of inheritance:

Dynamic Monitor/Meter

You can use a tqdm as a meter which is not monotonically increasing. This could be because n decreases (e.g. a CPU usage monitor) or total changes.

One example would be recursively searching for files. The total is the number of objects found so far, while n is the number of those objects which are files (rather than folders):

from tqdm import tqdm
import os.path

def find_files_recursively(path, show_progress=True):
    files = []
    # total=1 assumes `path` is a file
    t = tqdm(total=1, unit="file", disable=not show_progress)
    if not os.path.exists(path):
        raise IOError("Cannot find: " + path)

    def append_found_file(f):
        files.append(f)
        t.update()

    def list_found_dir(path):
        """returns os.listdir(path) assuming os.path.isdir(path)"""
        listing = os.listdir(path)
        # subtract 1 since a "file" we found was actually this directory
        t.total += len(listing) - 1
        # fancy way to give info without forcing a refresh
        t.set_postfix(dir=path[-10:], refresh=False)
        t.update(0)  # may trigger a refresh
        return listing

    def recursively_search(path):
        if os.path.isdir(path):
            for f in list_found_dir(path):
                recursively_search(os.path.join(path, f))
        else:
            append_found_file(path)

    recursively_search(path)
    t.set_postfix(dir=path)
    t.close()
    return files

Using update(0) is a handy way to let tqdm decide when to trigger a display refresh to avoid console spamming.

Writing messages

This is a work in progress (see #737).

Since tqdm uses a simple printing mechanism to display progress bars, you should not write any message in the terminal using print() while a progressbar is open.

To write messages in the terminal without any collision with tqdm bar display, a .write() method is provided:

from tqdm.auto import tqdm, trange
from time import sleep

bar = trange(10)
for i in bar:
    # Print using tqdm class method .write()
    sleep(0.1)
    if not (i % 3):
        tqdm.write("Done task %i" % i)
    # Can also use bar.write()

By default, this will print to standard output (sys.stdout), but you can specify any file-like object using the file argument. For example, this can be used to redirect messages to a log file or a custom class.
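For example, messages can be sent to a log file while the bar itself stays on the console (a small sketch; the log file name is arbitrary):

```python
import sys
from tqdm import tqdm

# The bar renders on stderr; messages accumulate in the log file
# without disturbing the bar.
with open("progress.log", "w") as log:
    for i in tqdm(range(3), file=sys.stderr):
        tqdm.write("processed item %d" % i, file=log)
```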

Redirecting writing

If using a library that can print messages to the console, editing the library by replacing print() with tqdm.write() may not be desirable. In that case, redirecting sys.stdout to tqdm.write() is an option.

To redirect sys.stdout, create a file-like class that will write any input string to tqdm.write(), and supply the arguments file=sys.stdout, dynamic_ncols=True.

A reusable canonical example is given below:

from time import sleep
import contextlib
import sys
from tqdm import tqdm
from tqdm.contrib import DummyTqdmFile


@contextlib.contextmanager
def std_out_err_redirect_tqdm():
    orig_out_err = sys.stdout, sys.stderr
    try:
        sys.stdout, sys.stderr = map(DummyTqdmFile, orig_out_err)
        yield orig_out_err[0]
    # Relay exceptions
    except Exception as exc:
        raise exc
    # Always restore sys.stdout/err if necessary
    finally:
        sys.stdout, sys.stderr = orig_out_err

def some_fun(i):
    print("Fee, fi, fo,".split()[i])

# Redirect stdout to tqdm.write() (don't forget the `as save_stdout`)
with std_out_err_redirect_tqdm() as orig_stdout:
    # tqdm needs the original stdout
    # and dynamic_ncols=True to autodetect console width
    for i in tqdm(range(3), file=orig_stdout, dynamic_ncols=True):
        sleep(.5)
        some_fun(i)

# After the `with`, printing is restored
print("Done!")

Monitoring thread, intervals and miniters

tqdm implements a few tricks to increase efficiency and reduce overhead.

  • Avoid unnecessarily frequent bar refreshing: mininterval defines how long to wait between each refresh. tqdm always gets updated in the background, but it will display only every mininterval.
  • Reduce the number of calls to check the system clock/time.
  • mininterval is more intuitive to configure than miniters. A clever adjustment system, dynamic_miniters, will automatically adjust miniters to the number of iterations that fit into the time mininterval. Essentially, tqdm will check if it's time to print without actually checking time. This behaviour can still be bypassed by manually setting miniters.

However, consider a case with a combination of fast and slow iterations. After a few fast iterations, dynamic_miniters will set miniters to a large number. When iteration rate subsequently slows, miniters will remain large and thus reduce display update frequency. To address this:

  • maxinterval defines the maximum time between display refreshes. A concurrent monitoring thread checks for overdue updates and forces one where necessary.

The monitoring thread should not have a noticeable overhead, and guarantees updates at least every 10 seconds by default. This value can be directly changed by setting the monitor_interval of any tqdm instance (i.e. t = tqdm.tqdm(...); t.monitor_interval = 2). The monitor thread may be disabled application-wide by setting tqdm.tqdm.monitor_interval = 0 before instantiation of any tqdm bar.
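The interplay between these parameters can be sketched as follows (a toy example; the sleep durations are arbitrary):

```python
from time import sleep
from tqdm import tqdm

# Default behaviour: dynamic_miniters adjusts miniters so that the
# display refreshes roughly every mininterval.
for _ in tqdm(range(1000), mininterval=0.5):
    sleep(0.001)

# Check the clock on every iteration: higher overhead, but immune
# to the fast-then-slow pitfall described above.
for _ in tqdm(range(100), miniters=1, mininterval=0):
    sleep(0.01)

# Cap the time between refreshes; the monitoring thread forces an
# update if miniters grew too large for the current iteration rate.
for _ in tqdm(range(1000), maxinterval=2):
    sleep(0.001)
```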

Contributions

GitHub-Commits GitHub-Issues GitHub-PRs OpenHub-Status GitHub-Contributions CII Best Practices

All source code is hosted on GitHub. Contributions are welcome.

See the CONTRIBUTING file for more information.

Developers who have made significant contributions, ranked by SLoC (surviving lines of code, git fame -wMC --excl '\.(png|gif|jpg)$'), are:

Name ID SLoC Notes
Casper da Costa-Luis casperdcl ~82% primary maintainer Gift-Casper
Stephen Larroque lrq3000 ~11% team member
Martin Zugnoni martinzugnoni ~3%  
Guangshuo Chen chengs ~1%  
Kyle Altendorf altendky <1%  
Hadrien Mary hadim <1% team member
Matthew Stevens mjstevens777 <1%  
Ivan Ivanov obiwanus <1%  
Daniel Panteleit danielpanteleit <1%  
Jona Haag jonashaag <1%  
James E. King III jeking3 <1%  
Noam Yorav-Raphael noamraph <1% original author
Mikhail Korobov kmike <1% team member

Ports to Other Languages

A list is available on this wiki page.

LICENCE

Open Source (OSI approved): LICENCE

Citation information: DOI

README-Hits (Since 19 May 2016)

Comments
  • Automate `nested` with `position`

    Should fix #83.

    I initially planned to make a new class multi_tqdm() to centrally manage multiple tqdm bars, but I've found a way to do the same in a decentralized fashion.

    The only glitch is when all bars have leave=True, they will be overwritten by command prompt (just like the old issue with nested, but here we can't fix that).

    Sample code:

    from tqdm import tqdm, trange
    from time import sleep
    
    # Iteration-based usage
    for i in trange(2, desc='pos0 loop', position=0):
        for j in trange(3, desc='pos1 loop', position=1):
            for k in trange(4, desc='pos2 loop', position=2):
                sleep(0.1)
    
    # Manual usage
    t1 = tqdm(total=10, desc='pos0 bar', position=0)
    t2 = tqdm(total=10, desc='pos1 bar', position=1)
    t3 = tqdm(total=10, desc='pos2 bar', position=2)
    for i in range(10):
        t1.update()
        t3.update()
        t2.update()
        sleep(0.5)
    

    Should we keep this or rather try to make a centralized multi_tqdm() ? (but it will be uglier and a good deal slower).

    Todo (edited):

    • [x] Add unit test to get coverage 100%.
    • [x] Update readme.
    • [x] rebase
    • [x] fix tests
      • [x] ~~conflicts with~~ deprecate nested
      • [x] nested deprecation test
      • [x] position test
      • [x] fix timing test (Discrete Timer)
    • [x] fix py26 exceptions
    • [x] fix pypy exceptions
    • [x] fix py3 exceptions
    • [x] fix display of test output (why are there blank lines in the terminal when we use StringIO!?)
    • [x] fix del() exceptions in tests
    • [x] fix pypy on Travis
    • [x] answer this question on SO.
    • [x] make more efficient
    p3-enhancement 🔥 
    opened by lrq3000 79
  • Change tqdm project name

    I think the best way to do it is keep this repo, empty it and put a message in the README saying that the project has been moved to another repo.

    If others agree, this is the way I think we should do it :

    • [ ] find a name
    • [ ] rename the organization
    • [ ] rename the repo
    • [ ] tag a version on git (v1.0 ???)
    • [ ] create a new account on pypi
    • [ ] share password with @lrq3000, @casperdcl, @kmike and @hadim
    • [ ] and finally we can release it on pypi !!!

    PS: for the pypi release I would like us to register a new account on pypi with the project name as login and a password (shared by mail) to avoid any issue if someone disappears.

    opened by hadim 73
  • 'tqdm' object has no attribute 'pos'

    Hi,

    I sometimes get this error, with no clear explanation. I wrapped my iterator with tqdm and every now and then, after a few iterations, it comes up with this AttributeError: 'tqdm' object has no attribute 'pos'

    Redacted trace below. Any ideas?

    \\<inbox>\Inbox\Reports\GFH: 0emails [00:00, ?emails/s]
       File "P:\TS Projects\Python\python3\Lib\threading.py", line 911, in _bootstrap_inner
         self.run()
       File "P:\TS Projects\Python\python3\Lib\threading.py", line 859, in run
         self._target(*self._args, **self._kwargs)
       File "P:\TS Projects\Document Classification\dataExtractor.py", line 111, in startExtractionThread
         extractor.run("emails", inbox, outlook)
       File "P:\TS Projects\Document Classification\dataExtractor.py", line 90, in run
         self.extract_data(mainfolder, restriction, db, collection)
       File "P:\TS Projects\Document Classification\dataExtractor.py", line 79, in extract_data
         walk_folders(folder)
       File "P:\TS Projects\Document Classification\dataExtractor.py", line 78, in walk_folders
         walk_folders(flds)
       File "P:\TS Projects\Document Classification\dataExtractor.py", line 78, in walk_folders
         walk_folders(flds)
       File "P:\TS Projects\Document Classification\dataExtractor.py", line 36, in walk_folders
         for msg in self.iterator:
       File "P:\TS Projects\Python\venv3\lib\site-packages\tqdm-4.10.0-py3.4.egg\tqdm\_tqdm.py", line 883, in __iter__
         self.close()
       File "P:\TS Projects\Python\venv3\lib\site-packages\tqdm-4.10.0-py3.4.egg\tqdm\_tqdm.py", line 984, in close
         self._decr_instances(self)
       File "P:\TS Projects\Python\venv3\lib\site-packages\tqdm-4.10.0-py3.4.egg\tqdm\_tqdm.py", line 398, in _decr_instances
         if inst.pos > instance.pos:
     AttributeError: 'tqdm' object has no attribute 'pos'
    
    p0-bug-critical ☢ need-feedback 📢 synchronisation ⇶ 
    opened by jimanvlad 54
  • Each iteration of progressbar starts a new line in Jupyter

    If I'm running tqdm on a cell in Jupyter and cancel it, when I run tqdm again it prints on a new line for each iteration. Is this a common problem? I have seen this happen to others online but haven't seen a solution. I checked https://github.com/tqdm/tqdm/#help and didn't see the issue. If there is a solution please share. Thanks

    Example below:

     0%|          | 1/3542 [00:00<18:27,  3.20it/s]
    
      0%|          | 16/3542 [00:00<13:08,  4.47it/s]
    
      1%|          | 24/3542 [00:00<09:31,  6.15it/s]
    
     [... many similar lines omitted; each update was printed on a new line ...]
    
     48%|████▊     | 1684/3542 [00:08<00:05, 348.07it/s]
     49%|████▉     | 1748/3542 [00:08<00:05, 347.18it/s]
    
    invalid ⛔ p2-bug-warning ⚠ submodule-notebook 📓 
    opened by jolespin 53
  • [Regression] Tqdm freezes when iteration speed changes drastically

    That’s a regression that happened between version 1.0 and 2.0. Try this minimal example with several tqdm versions:

    from tqdm import tqdm
    from time import sleep
    
    for i in tqdm(range(10000)):
        if i > 3000:
            sleep(1)
    

    As you’ll see, it works great with tqdm 1.0, but it takes almost forever to update with version 2.0 and 3.0. In fact, it takes about 3000 seconds to update, because the first update did 3000 updates in one second, so tqdm assumes it’s ok to wait for 3000 more updates before updating. But that’s only an acceptable behaviour for loops with a rather constant iteration speed. The smoothing argument was developed for such cases (see #48, great work on this by the way!), but the current miniters/mininterval behaviour contradicts it.

    Of course, specifying miniters=1 gives the expected behaviour, so I’m wondering: why not do that by default?

    In tqdm 1.0, miniters was set to 1 and mininterval was set to 0.5, which meant: "we update the display at every iteration if the time taken by the iteration is longer than 0.5 seconds, otherwise we wait several iterations, enough to make at least a 0.5 second interval".

    Since tqdm 2.0, miniters is set to None and mininterval is set to 0.1. From what I understand, it means "we update the display after waiting for several iterations, enough to make at least a 0.1 second interval".

    Unfortunately, from what the example above shows, tqdm doesn’t respect this rule since we don’t have an update every 0.1 second. The behaviour seems more complex now, it tries to assume how much time tqdm should wait instead of basically counting when it should update its display, like it was in tqdm 1.0.

    opened by BertrandBordage 42
  • Monitoring thread to prevent tqdm taking too much time for display

    Added a monitoring thread that will check all tqdm instances regularly and reduce miniters if necessary. This should fix issues like #249 and #54 for good.

    About performance impact:

    • This solution adds a bit of additional overhead at the first tqdm instantiation because it creates a thread, but the bright side is that the thread then sleeps most of the time, and wakes up only to check each instance's miniters value and last printing time. By default, it wakes up every 10 seconds.
    • But the bad side is that this solution needs to access and modify self.miniters. No problem here, the problem is with tqdm.__iter__() since it doesn't do any access to self variables inside loops. So I had to modify the loop to access self.miniters. My reasoning is that accessing self.miniters will use less CPU than calling time.time(), but I didn't try it out. If someone wants to profile to see the impact of self.miniters, that would be great!

    /EDIT: profiled, here are the results:

    Normal tqdm (pre-PR):

    for i in tqdm.trange(int(1e8)):
        pass
    100%|#######################| 100000000/100000000 [00:20<00:00, 5000000.00it/s]
    (20.002348575184286, 'seconds')
    

    Normal tqdm no miniters:

    for i in tqdm.trange(int(1e8), miniters=1, mininterval=0.1):
        pass
    100%|#######################| 100000000/100000000 [00:40<00:00, 2499937.51it/s]
    (40.00375434790657, 'seconds')
    

    Monitoring thread PR:

    for i in tqdm.trange(int(1e8)):
        pass
    100%|#######################| 100000000/100000000 [00:22<00:00, 4423995.78it/s]
    (22.605596624972463, 'seconds')
    

    Note that for all other tests in examples/simple_examples.py, they all show similar performances between standard tqdm and monitoring thread PR (of course the no miniters is slower).

    So there is a small performance hit, which is still way better than removing miniters completely: removing miniters doubles the computation time, whereas the monitoring thread has an overhead of 10% (2s here). The perf hit doesn't even come from the monitoring thread itself, but rather from using self.miniters instead of miniters in tqdm.__iter__(). This seems to slow down by a linear constant, so it's indeed noticeable but it can be ok (or if we can find a way around it would be better but I can't think how). Makes tqdm go from 20s to 22s for the task above.

    Note that there is no performance impact on manual tqdm. The perf hit is only on iterable tqdm.

    TODO:

    • [x] Profile if self.miniters in tqdm.__iter__() isn't too much of a burden.
    • [x] Fix thread hanging up terminal if KeyboardInterrupt.
    • [x] Force display refresh on miniters update from monitoring thread (else user will have to wait until next iteration to see the update).
    • [x] Fix slowdown at closing (because of TMonitor.join()).
    • [x] Add unit test for monitoring thread and the special case in #249 (would need to modify TMonitor to support virtual discrete timer, just store in self._time and self._sleep like we do in tqdm).
    • [x] Add a note to Readme.
    • [x] Fix unit test to synchronize with thread and wait for it to wake up instead of sleeping (which makes Travis fail sometimes).
    • [x] Fix thread hanging up and making Travis test fail randomly.
    p3-enhancement 🔥 to-review 🔍 
    opened by lrq3000 41
  • Jupyterlab and tqdm_notebook

    The following code errors out with, NameError: name 'IntProgress' is not defined

    import tqdm
    tqdm.tqdm_notebook().pandas()
    df.progress_apply(func, axis=1)
    

    I imported ipywidgets.IntProgress with no luck.

    Python=3.6 Jupyter=4.2.1 Jupyter lab=0.18.1

    invalid ⛔ question/docs ‽ submodule-notebook 📓 
    opened by blahster 36
  • Pythonic submodules architecture

    Should fix #176, #245 and the second issue in #188 by reorganizing tqdm modules with a more pythonic architecture. This will allow a small performance boost (for all modules including core) as noted in https://github.com/tqdm/tqdm/pull/258#issuecomment-245119167.

    The main goal of reorganizing tqdm's modules architecture is to avoid unnecessary imports without relying on delayed imports nor wrappers (which bring a whole lot of other issues, like no help message in IPython or other interpreters and the inability to call class methods such as tqdm_notebook.write()).

    The minor goal is to take this reorganization opportunity to enhance the overall API (uniformization for example, but if you have other ideas/complaints, please shout out in the discussion!).

    Basically, I implemented the architecture suggested by @wernight:

    • each module is now exposed publicly (eg, _tqdm_notebook.py -> notebook.py):
    # Before:
    from tqdm import tqdm_gui
    # Now:
    from tqdm.gui import tqdm
    
    • submodules import and declaration in __all__ were removed from __init__.py
    • all classes are now named tqdm() and trange(), this uniformizes the usage of tqdm across all modules. For example, it will ease the end-user implementation of adapting import codes. Before:
    if ipython_notebook:
        from tqdm import tqdm_notebook as tqdm
        from tqdm import tnrange as trange
    else:
        from tqdm import tqdm, trange
    
    for element in tqdm(some_iterator):
        pass
    

    Now:

    if ipython_notebook:
        from tqdm.notebook import tqdm, trange
    else:
        from tqdm.core import tqdm, trange
    

    Note that I don't consider this change mandatory, but I wanted to profit of the opportunity that we are anyway changing the library architecture to see if API uniformization could be done. I am very open to feedback about this.

    • __init__.py still imports and expose tqdm.core as the default tqdm, so the old API for the core module is still compatible:
    # Both are equivalent
    from tqdm import tqdm, trange
    from tqdm.core import tqdm, trange
    

    This also implies that tqdm.core is always imported even if the user only uses a submodule such as tqdm.gui, but since anyway all current submodules subclass from tqdm.core, this doesn't change anything (and anyway importing tqdm.core is not really heavy since it doesn't rely on any third party module...).

    Do you find this new architecture and API easier to use? Any feedback is welcome! (Users are also welcome to participate!)

    TODO:

    • [ ] Update README
    • [ ] Write in CONTRIBUTION the smooth rolling scheme (and modules acceptance guidelines) as described in #252.
    • [ ] Write list of submodules with their status in README.
    • [ ] Test examples/ scripts.
    need-feedback 📢 
    opened by lrq3000 34
  • Sub stats showed as an unique progress bar

    I have thinking about this for a couple of days.

    Let's say I have two threads working in parallel on a single problem. The counter update would be something similar to this: long pause, an update (thread 1), short pause, another update (thread 2), long pause... Those two close updates break the speed and ETA estimation.

    Thinking of more sophisticated algorithms, I realized that the problem is actually pretty simple if we keep average speed for both threads separated and just accumulate it when printed.

    I realize that this code is a bit specialized, but I think it would be a nice feature. If you are not interested or you are worried about this change impacting performance, it would be nice to provide some hooks to specialize via the constructor or through subclassing.

    What do you think?

    Thanks.

    p3-enhancement 🔥 question/docs ‽ 
    opened by jcea 30
  • Parallelism safety (thread, multiprocessing)

    Implement parallelism safety in tqdm by providing a new set_lock() class method. This is a follow-up on #291.

    For Linux (and any platform supporting fork), no action is required from the user.

    For Windows, here is the canonical example usage:

    from time import sleep
    from tqdm import tqdm
    from multiprocessing import Pool, freeze_support, Lock
    
    def progresser(n):         
        text = "bar{}".format(n)
        for i in tqdm(range(5000), desc=text, position=n, leave=True):
            sleep(0.001)
    
    def init_child(write_lock):
        """
        Provide tqdm with the lock from the parent app.
        This is necessary on Windows to avoid racing conditions.
        """
        tqdm.set_lock(write_lock)
    
    if __name__ == '__main__':
        freeze_support()
        write_lock = Lock()
        L = list(range(10))
        Pool(len(L), initializer=init_child, initargs=(write_lock,)).map(progresser, L)
    

    Todo:

    • [ ] Unit test (using squash_ctrl and a fake IO, just check that there are 10 lines at the end, each with a different number, just like the example case provided) with the sample code above as a unit test + with mp.set_start_method('spawn') to force mimicking Windows "no-fork" spawning of processes. Else it's impossible to test on Linux.
    • [x] Flake8
    • [x] add in the documentation how to use tqdm with locks on Windows (as it should be transparent on Linux)
    • [ ] LIMITATION: tqdm.write() won't work, because there is no way to implement it without a centralized manager that is aware of all bars. See #143 for a possible solution. Or maybe we can change _instances type to a multiprocessing.Queue() (or another type that is shareable across multiprocesses)?
    • [x] update documentation (e.g. #439)
    p0-bug-critical ☢ to-review 🔍 need-feedback 📢 
    opened by lrq3000 29
  • jitter in IPython notebook

    Hey,

    This is how tqdm 2.2.4 looks in IPython notebook: http://www.youtube.com/watch?v=GjRHAmj_xfc

    This is how it used to look in 1.0: http://www.youtube.com/watch?v=t7e6IEdEaTc

    Red color is fine (we're using stderr now, and it makes sense), but I think text shouldn't jump by default, and progress bar looks untidy (it is not image compression artifact - there are vertical lines, and the last symbol doesn't have the same height as other symbols).

    I also find old elapsed: ... left: ... to be easier to understand than new ...<..., but that's a personal preference :)

    Code:

    import time
    import random
    from tqdm import tqdm
    
    x = range(343)
    
    for el in tqdm(x, "Loading", mininterval=0, leave=True):
        time.sleep(random.random() * 0.01)
    
    p0-bug-critical ☢ 
    opened by kmike 29
  • multi threading + print statement breaks tqdm

    • [x] I have marked all applicable categories:
      • [ ] exception-raising bug
      • [x] visual output bug
    • [x] I have visited the source website, and in particular read the known issues
    • [x] I have searched through the issue tracker for duplicates
    • [x] I have mentioned version numbers, operating system and environment, where applicable:
      import tqdm, sys
      print(tqdm.__version__, sys.version, sys.platform)
      

    tqdm: 4.64.0 sys: 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] platform: win32

    Ok so basically I'm using tqdm to show progress when downloading a file. My program downloads various different files concurrently, using the threading module. So basically I create a Thread object for every file I want to download, add it to the queue, and then every task gets executed. The function being run downloads the file, then at the end prints a message to the console.

    Now I think that while more than one tqdm bar is active in a console, a print statement from one of them sort of breaks the whole console layout, resulting in things like this: https://cdn.discordapp.com/attachments/1057766895970947082/1057778949041700974/image.png

    How do I solve it?

    opened by SRK7Kyros 2
  • Add support for more time formats in `rate_inv_fmt`

    • [X] I have marked all applicable categories:
      • [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
      • [X] new feature request
    • [X] I have visited the [source website], and in particular read the [known issues]
    • [X] I have searched through the [issue tracker] for duplicates
    • [ ] I have mentioned version numbers, operating system and environment, where applicable:

    When setting rate_inv_fmt in format_meter function, it would be nice to use different time formats when unit_scale is True (maybe create a format_rate function to use here instead of format_sizeof?).

    This way, we could see information like 1.35min/it, 8.55h/it, or even 1.5d/it.

    opened by george-gca 0
  • Add support for days in `format_interval`

    • [X] I have marked all applicable categories:
      • [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
      • [X] new feature request
    • [X] I have visited the [source website], and in particular read the [known issues]
    • [X] I have searched through the [issue tracker] for duplicates
    • [ ] I have mentioned version numbers, operating system and environment, where applicable:

    Since it is quite common nowadays (especially because of machine learning and deep learning) to run a code for more than just hours, it would be nice to add support for days in format_interval function.

    opened by george-gca 0
  • Tqdm process_map doesn't work with PyPy

    • [x] I have marked all applicable categories:
      • [x] exception-raising bug
      • [ ] visual output bug
    • [x] I have visited the [source website], and in particular read the [known issues]
    • [x] I have searched through the [issue tracker] for duplicates
    • [x] I have mentioned version numbers, operating system and environment, where applicable:
    4.64.1 3.9.15 (21401ebc2df332b6be6e3d364a985e951a72bbbd, Dec 05 2022, 18:37:18)
    [PyPy 7.3.10 with MSC v.1929 64 bit (AMD64)] win32
    
        values = process_map(self.count, candidates, max_workers=8, ncols=100, desc="Candidates", position=0)
      File "C:\Program Files\PyPy\lib\site-packages\tqdm\contrib\concurrent.py", line 130, in process_map
        return _executor_map(ProcessPoolExecutor, fn, *iterables, **tqdm_kwargs)
      File "C:\Program Files\PyPy\lib\site-packages\tqdm\contrib\concurrent.py", line 76, in _executor_map
        return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))
      File "C:\Program Files\PyPy\Lib\concurrent\futures\process.py", line 752, in map
        results = super().map(partial(_process_chunk, fn),
      File "C:\Program Files\PyPy\Lib\concurrent\futures\_base.py", line 598, in map
        fs = [self.submit(fn, *args) for args in zip(*iterables)]
      File "C:\Program Files\PyPy\Lib\concurrent\futures\_base.py", line 598, in <listcomp>
        fs = [self.submit(fn, *args) for args in zip(*iterables)]
      File "C:\Program Files\PyPy\Lib\concurrent\futures\process.py", line 723, in submit
        self._adjust_process_count()
    AttributeError: 'ProcessPoolExecutor' object has no attribute '_adjust_process_count'
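
    A possible workaround sketch (assumption: the mapped work is I/O-bound enough that threads suffice) is to map with ThreadPoolExecutor, which never touches the CPython-private _adjust_process_count attribute that is missing under PyPy; tqdm's own thread_map in tqdm.contrib.concurrent is the equivalent with a progress bar:

```python
from concurrent.futures import ThreadPoolExecutor

def thread_map_plain(fn, iterable, max_workers=8):
    # Thread-based stand-in for process_map (hypothetical helper name):
    # ThreadPoolExecutor does not depend on ProcessPoolExecutor internals,
    # so the same call pattern also runs on PyPy. Wrap the iterator in
    # tqdm(...) to restore the progress bar (omitted here to keep the
    # sketch dependency-free).
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        return list(ex.map(fn, iterable))
```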
    
    opened by eblis 1
  • upstream asyncio changes cause breakage

    upstream asyncio changes cause breakage

    Upstream: https://github.com/python/cpython/issues/100160

    Affected Python versions: 3.10.9, 3.11.1

    ============================= test session starts ==============================
    platform linux -- Python 3.10.9, pytest-7.1.3, pluggy-1.0.0 -- /nix/store/l12hgx09yh3bmhxi9i9g6riwgimybf5l-python3-3.10.9/bin/python3.10
    cachedir: .pytest_cache
    rootdir: /build/tqdm-4.64.1, configfile: setup.cfg, testpaths: tests
    plugins: asyncio-0.19.0, timeout-2.1.0
    asyncio: mode=strict
    timeout: 30.0s
    timeout method: signal
    timeout func_only: False
    collected 147 items / 7 deselected / 140 selected                              
    
    tests/tests_asyncio.py::test_break ERROR                                 [  0%]
    tests/tests_asyncio.py::test_break ERROR                                 [  0%]
    tests/tests_asyncio.py::test_generators FAILED                           [  1%]
    tests/tests_asyncio.py::test_generators ERROR                            [  1%]
    tests/tests_asyncio.py::test_range FAILED                                [  2%]
    tests/tests_asyncio.py::test_range ERROR                                 [  2%]
    tests/tests_asyncio.py::test_nested FAILED                               [  2%]
    tests/tests_asyncio.py::test_nested ERROR                                [  2%]
    tests/tests_asyncio.py::test_coroutines FAILED                           [  3%]
    tests/tests_asyncio.py::test_coroutines ERROR                            [  3%]
    tests/tests_asyncio.py::test_as_completed[0.1] FAILED                    [  4%]
    tests/tests_asyncio.py::test_as_completed[0.1] ERROR                     [  4%]
    tests/tests_asyncio.py::test_gather FAILED                               [  5%]
    tests/tests_asyncio.py::test_gather ERROR                                [  5%]
    tests/tests_concurrent.py::test_thread_map PASSED                        [  5%]
    

    in particular:

    ==================================== ERRORS ====================================
    _________________________ ERROR at setup of test_break _________________________
    /nix/store/lgvhc70zxvn2y6855ir6h3s0pkvqab7q-python3.10-pytest-asyncio-0.19.0/lib/python3.10/site-packages/pytest_asyncio/plugin.py:380: in pytest_fixture_setup
        old_loop = policy.get_event_loop()
    /nix/store/l12hgx09yh3bmhxi9i9g6riwgimybf5l-python3-3.10.9/lib/python3.10/asyncio/events.py:666: in get_event_loop
        warnings.warn('There is no current event loop',
    E   DeprecationWarning: There is no current event loop
    _______________________ ERROR at teardown of test_break ________________________
    /nix/store/lgvhc70zxvn2y6855ir6h3s0pkvqab7q-python3.10-pytest-asyncio-0.19.0/lib/python3.10/site-packages/pytest_asyncio/plugin.py:359: in pytest_fixture_post_finalizer
        loop = policy.get_event_loop()
    /nix/store/l12hgx09yh3bmhxi9i9g6riwgimybf5l-python3-3.10.9/lib/python3.10/asyncio/events.py:666: in get_event_loop
        warnings.warn('There is no current event loop',
    E   DeprecationWarning: There is no current event loop
    _____________________ ERROR at teardown of test_generators _____________________
    /nix/store/lgvhc70zxvn2y6855ir6h3s0pkvqab7q-python3.10-pytest-asyncio-0.19.0/lib/python3.10/site-packages/pytest_asyncio/plugin.py:359: in pytest_fixture_post_finalizer
        loop = policy.get_event_loop()
    /nix/store/l12hgx09yh3bmhxi9i9g6riwgimybf5l-python3-3.10.9/lib/python3.10/asyncio/events.py:666: in get_event_loop
        warnings.warn('There is no current event loop',
    E   DeprecationWarning: There is no current event loop
    [..]
    =================================== FAILURES ===================================
    _______________________________ test_generators ________________________________
    /nix/store/lgvhc70zxvn2y6855ir6h3s0pkvqab7q-python3.10-pytest-asyncio-0.19.0/lib/python3.10/site-packages/pytest_asyncio/plugin.py:452: in inner
        task = asyncio.ensure_future(coro, loop=_loop)
    /nix/store/l12hgx09yh3bmhxi9i9g6riwgimybf5l-python3-3.10.9/lib/python3.10/asyncio/tasks.py:615: in ensure_future
        return _ensure_future(coro_or_future, loop=loop)
    /nix/store/l12hgx09yh3bmhxi9i9g6riwgimybf5l-python3-3.10.9/lib/python3.10/asyncio/tasks.py:636: in _ensure_future
        return loop.create_task(coro_or_future)
    /nix/store/l12hgx09yh3bmhxi9i9g6riwgimybf5l-python3-3.10.9/lib/python3.10/asyncio/base_events.py:436: in create_task
        self._check_closed()
    /nix/store/l12hgx09yh3bmhxi9i9g6riwgimybf5l-python3-3.10.9/lib/python3.10/asyncio/base_events.py:515: in _check_closed
        raise RuntimeError('Event loop is closed')
    E   RuntimeError: Event loop is closed
    __________________________________ test_range __________________________________
    /nix/store/lgvhc70zxvn2y6855ir6h3s0pkvqab7q-python3.10-pytest-asyncio-0.19.0/lib/python3.10/site-packages/pytest_asyncio/plugin.py:452: in inner
        task = asyncio.ensure_future(coro, loop=_loop)
    /nix/store/l12hgx09yh3bmhxi9i9g6riwgimybf5l-python3-3.10.9/lib/python3.10/asyncio/tasks.py:615: in ensure_future
        return _ensure_future(coro_or_future, loop=loop)
    /nix/store/l12hgx09yh3bmhxi9i9g6riwgimybf5l-python3-3.10.9/lib/python3.10/asyncio/tasks.py:636: in _ensure_future
        return loop.create_task(coro_or_future)
    /nix/store/l12hgx09yh3bmhxi9i9g6riwgimybf5l-python3-3.10.9/lib/python3.10/asyncio/base_events.py:436: in create_task
        self._check_closed()
    /nix/store/l12hgx09yh3bmhxi9i9g6riwgimybf5l-python3-3.10.9/lib/python3.10/asyncio/base_events.py:515: in _check_closed
        raise RuntimeError('Event loop is closed')
    E   RuntimeError: Event loop is closed
    [..]
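
    The upstream change can be seen in isolation; as a minimal sketch (assuming Python 3.10.9+ or 3.11.1+), asyncio.run creates and closes its own loop, avoiding the deprecated no-running-loop get_event_loop() lookup that trips the pytest-asyncio fixtures above:

```python
import asyncio

async def count():
    # Trivial coroutine standing in for the failing test bodies.
    return sum(range(10))

# asyncio.run manages the event loop explicitly, so no "There is no
# current event loop" DeprecationWarning is emitted.
result = asyncio.run(count())
```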
    
    opened by fabianhjr 0
Releases: v4.64.1
Owner: tqdm developers (developing and maintaining the tqdm progress bar project)