NumPy aware dynamic Python compiler using LLVM

Overview

Numba


A Just-In-Time Compiler for Numerical Functions in Python

Numba is an open source, NumPy-aware optimizing compiler for Python sponsored by Anaconda, Inc. It uses the LLVM compiler project to generate machine code from Python syntax.

Numba can compile a large subset of numerically-focused Python, including many NumPy functions. Additionally, Numba has support for automatic parallelization of loops, generation of GPU-accelerated code, and creation of ufuncs and C callbacks.
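
For example, a minimal sketch of a JIT-compiled, NumPy-flavoured loop:

import numpy as np
from numba import njit

@njit  # compiled to machine code on first call
def monte_carlo_pi(n):
    acc = 0
    for _ in range(n):
        x, y = np.random.random(), np.random.random()
        if x * x + y * y <= 1.0:
            acc += 1
    return 4.0 * acc / n

print(monte_carlo_pi(1_000_000))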

For more information about Numba, see the Numba homepage: https://numba.pydata.org

Supported Platforms

  • Operating systems and CPUs:
    • Linux: x86 (32-bit), x86_64, ppc64le (POWER8 and 9), ARMv7 (32-bit), ARMv8 (64-bit).
    • Windows: x86, x86_64.
    • macOS: x86_64 (M1/Arm64, unofficial support only).
    • *BSD: (unofficial support only).
  • (Optional) Accelerators and GPUs:
    • NVIDIA GPUs (Kepler architecture or later) via CUDA driver on Linux and Windows.

Dependencies

  • Python versions: 3.7-3.10
  • llvmlite 0.38.*
  • NumPy >=1.18 (can build with 1.11 for ABI compatibility).

Optionally:

  • SciPy >=1.0.0 (for numpy.linalg support).

Installing

The easiest way to install Numba and get updates is by using the Anaconda Distribution: https://www.anaconda.com/download

$ conda install numba
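
Numba is also published on PyPI, so installing with pip (assuming a platform with pre-built wheels) is an alternative:

$ pip install numba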

For more options, see the Installation Guide: https://numba.readthedocs.io/en/stable/user/installing.html

Documentation

https://numba.readthedocs.io/en/stable/index.html

Contact

Numba has a Discourse forum for discussions.

Continuous Integration

Azure Pipelines
Comments
  • Allow masking threads out at runtime

    Fixes https://github.com/numba/numba/issues/2713

    Still TODO here:

    • [x] For some reason this doesn't work correctly with workqueue. If you mask the thread count to fewer than the maximum number of threads, it only uses 1 thread.
    • [x] Automated tests
    • [x] Add some kind of example to the docs
    • [x] Add tests using Python threading
    • [x] ~Address TBB sometimes using fewer threads than requested in the tests~ (stu: edit, can't/won't fix)
    • [x] ~Consolidate exception code in get_num_threads~
    • [x] ~Fix test failure with Python threading~ (stu: edit, see https://github.com/numba/numba/pull/5044, exact patch that fixes it is https://github.com/numba/numba/pull/5044/commits/851437184d0de1dbdd425aee3896bbe01da7670e)
    • [x] Warning from compilation
    • [ ] More strenuous testing
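
    As a minimal sketch, the runtime thread-masking API added here (exposed as numba.set_num_threads/numba.get_num_threads in released Numba) can be used like this:

    import numpy as np
    from numba import njit, prange, set_num_threads, get_num_threads, config

    @njit(parallel=True)
    def parallel_sum(x):
        acc = 0.0
        for i in prange(x.shape[0]):  # runs on the currently unmasked threads
            acc += x[i]
        return acc

    x = np.arange(1_000_000, dtype=np.float64)
    set_num_threads(2)                          # mask the pool down to 2 threads
    print(get_num_threads(), parallel_sum(x))   # get_num_threads() -> 2
    set_num_threads(config.NUMBA_NUM_THREADS)   # restore the full thread pool
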
    BuildFarm Passed 5 - Ready to merge 
    opened by asmeurer 227
  • test case failure ( segmentation fault ) in ppc64le

    Hi All,

    I was trying to build numba on Ubuntu/ppc64le. I have installed the required dependencies: llvmlite, numpy, funcsigs.

    Machine + other details.

    # arch
    ppc64le
    
    # cat /etc/os-release
    NAME="Ubuntu"
    
    # llvm-config --version
    4.0.1
    
    # ls -l /usr/local/lib/python2.7/dist-packages/ | grep llvmlite
    drwxr-sr-x  6 root staff  4096 Jul  5 15:53 llvmlite
    drwxr-sr-x  2 root staff  4096 Jul  5 15:39 llvmlite-0.16.0+0.g964cf1d.dirty.egg-info
    drwxr-sr-x  2 root staff  4096 Jul  5 15:53 llvmlite-0.19.0.dev0+22.g6ac74c8.egg-info
    
    # ls -l /usr/local/lib/python2.7/dist-packages/ | grep numpy
    drwxr-sr-x 16 root staff  4096 Jul  5 14:59 numpy
    drwxr-sr-x  2 root staff  4096 Jul  5 14:59 numpy-1.13.0.dist-info
    
    # ls -l /usr/local/lib/python2.7/dist-packages/ | grep funcsigs
    drwxr-sr-x  4 root staff  4096 Jul  5 15:31 funcsigs-1.0.2-py2.7.egg
    
    
    1. Now facing the below issue while building "numba"
    # pip install -r requirements.txt
    Requirement already satisfied: numpy>=1.7 in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 1))
    Collecting llvmlite>=0.19 (from -r requirements.txt (line 3))
      Could not find a version that satisfies the requirement llvmlite>=0.19 (from -r requirements.txt (line 3)) (from versions: 0.2.0, 0.2.1, 0.2.2, 0.4.0, 0.5.0, 0.6.0, 0.7.0, 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0.1, 0.12.1, 0.13.0, 0.14.0, 0.15.0, 0.16.0, 0.17.0, 0.17.1, 0.18.0)
    No matching distribution found for llvmlite>=0.19 (from -r requirements.txt (line 3))
    
    

    Thereafter I made the change below to ensure we check for the correct llvmlite version tag in "requirements.txt":

    llvmlite>=v0.19.0.dev
    
    

    I guess the above change is needed until the tag changes from 0.19.0.dev --> 0.19.0, as seen here: https://github.com/numba/llvmlite/tags

    2) After the above change, I was able to successfully run the next build steps:

    $ python setup.py build_ext --inplace
    $ python setup.py install
    

    Numba also gets installed, as seen below:

    # ls -l /usr/local/lib/python2.7/dist-packages/ | grep numba
    drwxr-sr-x  4 root staff  4096 Jul  6 14:24 numba-0.34.0rc1+0.g11ae5f1.dirty-py2.7-linux-ppc64le.egg
    drwxr-sr-x  4 root staff  4096 Jul  5 15:31 numba-0.34.0rc1-py2.7-linux-ppc64le.egg
    
    

    However, when I try to run the test cases for Numba, I get a segmentation fault:

    # python runtests.py -v
    skipped CUDA tests
    skipped CUDA tests
    test_gufunc (numba.tests.npyufunc.test_gufunc.TestGUFunc) ... ok
    test_guvectorize_decor (numba.tests.npyufunc.test_gufunc.TestGUFunc) ... ok
    test_ufunc_like (numba.tests.npyufunc.test_gufunc.TestGUFunc) ... ok
    test_gufunc (numba.tests.npyufunc.test_gufunc.TestGUFuncParallel) ... ok
    test_guvectorize_decor (numba.tests.npyufunc.test_gufunc.TestGUFuncParallel) ... ok
    test_ufunc_like (numba.tests.npyufunc.test_gufunc.TestGUFuncParallel) ... ok
    test_ndim_mismatch (numba.tests.npyufunc.test_gufunc.TestGUVectorizeScalar) ... ok
    test_scalar_input (numba.tests.npyufunc.test_gufunc.TestGUVectorizeScalar) ... ok
    test_scalar_input_core_type (numba.tests.npyufunc.test_gufunc.TestGUVectorizeScalar) ... ok
    test_scalar_input_core_type_error (numba.tests.npyufunc.test_gufunc.TestGUVectorizeScalar) ... ok
    test_scalar_output (numba.tests.npyufunc.test_gufunc.TestGUVectorizeScalar) ... ok
    test_ndim_mismatch (numba.tests.npyufunc.test_gufunc.TestGUVectorizeScalarParallel) ... ok
    test_scalar_input (numba.tests.npyufunc.test_gufunc.TestGUVectorizeScalarParallel) ... ok
    test_scalar_input_core_type (numba.tests.npyufunc.test_gufunc.TestGUVectorizeScalarParallel) ... ok
    test_scalar_input_core_type_error (numba.tests.npyufunc.test_gufunc.TestGUVectorizeScalarParallel) ... ok
    test_scalar_output (numba.tests.npyufunc.test_gufunc.TestGUVectorizeScalarParallel) ... ok
    test_gil_reacquire_deadlock (numba.tests.npyufunc.test_parallel_ufunc_issues.TestParGUfuncIssues) ... Segmentation fault
    
    

    Any pointers/inputs to resolve the test case failure, and is the change made to "requirements.txt" correct?

    opened by ghatwala 135
  • Specify synchronization semantics of CUDA Array Interface

    This PR adds a new key, stream, to the CUDA Array Interface (CAI) specification, and updates the CAI implementation in Numba to follow these changes. The semantics of this have been iterated towards through the discussion in this PR, so the original PR description suggests a different idea. However, I've left the original description in place below so that the discussion can be followed if necessary.

    To save anyone generating docs locally (or trying to read them in the diff), the specification as suggested in this PR is available at: https://gmarkall.github.io/numba-rtd-theme/cuda/cuda_array_interface.html. The majority of the text added in this PR is the Synchronization section.

    The Numba implementation follows that described in the specification, and adds tests to ensure that it:

    • Correctly populates the stream field of the interface when acting as a Producer, and
    • Synchronizes at the correct points when acting as a Consumer.

    Original PR description

    Following on from discussions in Issues #4933 and #4886, this commit clarifies the synchronization and lifetime of __cuda_array_interface__ objects. In summary:

    • Synchronization: Producers and consumers of arrays on the CUDA array interface should operate on those arrays in the default stream, or synchronized on the default stream, in order to implicitly be in sync. In special cases (e.g. where the same stream is used across the producer / consumer boundary) the synchronization on the default stream may be elided.
    • Lifetime: Consuming the CUDA array interface does not extend the lifetime of the object owning the data. The consumer must ensure that a reference to the owner is kept as long as the data is required. I think this isn't really a new requirement, but codifies something that was implicitly required in past versions.

    I've avoided referring to the "legacy default stream" as that serializes everything (see e.g. https://devblogs.nvidia.com/gpu-pro-tip-cuda-7-streams-simplify-concurrency/) - I think we really do just want to synchronize with the default stream (happy to be shown why that's not the case if it isn't though :-) )

    The version number is bumped. I think a bump is important for these changes because it is possible to implement v2 correctly right now and still not match the synchronization semantics specified in this PR, so it is needed to be sure both sides of the interface agree on the semantics.

    Some changes are also made to Numba to support the correct implementation of these semantics by users:

    • from_cuda_array_interface and as_cuda_array now have a True-by-default sync kwarg, which indicates Numba should bind the new device array to the default stream.
    • __cuda_array_interface__ for a device array records an event on the stream the array is bound to and inserts a wait on the event in the default stream.

    This is based on #5136, as it uses the default_stream() function to get the default stream.
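
    As an illustration, a minimal sketch (assuming a CUDA-capable machine and a Numba version implementing these semantics) of inspecting the interface a Numba device array exports as a Producer:

    import numpy as np
    from numba import cuda

    d_arr = cuda.to_device(np.arange(16, dtype=np.float32))
    cai = d_arr.__cuda_array_interface__
    print(cai["shape"], cai["typestr"], cai["version"])
    # The 'stream' entry tells a Consumer which stream to synchronize with
    # (None means no synchronization is required).
    print(cai.get("stream"))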

    BuildFarm Passed CUDA 5 - Ready to merge 
    opened by gmarkall 86
  • LLVM 8 segfault/Invalid PPC CTR loop! on ppc64le (again)

    Reporting a bug

    Having updated LLVM support to LLVM8 https://github.com/numba/llvmlite/pull/478 and https://github.com/numba/numba/pull/4022 , it has become apparent that ppc64le builds based on this LLVM version are failing. Observations:

    • https://reviews.llvm.org/D53383 was added, which switched the relocation model for ppc64le from PIC to Static. Changing nothing in the Numba source leads to the default relocation model of Static being used, and huge numbers of SIGSEGVs occur when running the test suite (even for simple tests, and especially if gufuncs are used).
    • Changing the Numba source in this area: https://github.com/numba/numba/blob/e8ac4951affacd25c63ba2c18d62a3f12ed7e0ba/numba/targets/codegen.py#L793-L798 so that ppc64le uses the PIC relocation model prevents the segfaults, but the process then aborts with SIGABRT from UNREACHABLE executed at lib/Target/PowerPC/PPCCTRLoops.cpp:793! due to Invalid PPC CTR loop!, which is the same thing that happened with LLVM 6.
    bug llvm ISA: POWER 
    opened by stuartarchibald 63
  • CUDA on WSL2 Support

    Feature request

    I just tried to use CUDA on WSL2, with Numba from Anaconda 3. Many CUDA features work well, such as nvcc, nvidia-smi, and Python libraries such as CuPy, but Numba CUDA does not.

    Numba works well when running simple @jit compilation. I've added the log from running numba -s below.

    __Hardware Information__
    Machine                                       : x86_64
    CPU Name                                      : skylake
    CPU Count                                     : 8
    Number of accessible CPUs                     : 8
    List of accessible CPUs cores                 : 0 1 2 3 4 5 6 7
    CFS Restrictions (CPUs worth of runtime)      : None
    
    CPU Features                                  : 64bit adx aes avx avx2 bmi bmi2
                                                    clflushopt cmov cx16 cx8 f16c fma
                                                    fsgsbase fxsr invpcid lzcnt mmx
                                                    movbe pclmul popcnt prfchw rdrnd
                                                    rdseed sahf sse sse2 sse3 sse4.1
                                                    sse4.2 ssse3 xsave xsavec xsaveopt
                                                    xsaves
    
    Memory Total (MB)                             : 24048
    Memory Available (MB)                         : 19547
    
    __OS Information__
    Platform Name                                 : Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.10
    Platform Release                              : 5.10.16.3-microsoft-standard-WSL2
    OS Name                                       : Linux
    OS Version                                    : #1 SMP Fri Apr 2 22:23:49 UTC 2021
    OS Specific Version                           : ?
    Libc Version                                  : glibc 2.31
    
    __Python Information__
    Python Compiler                               : GCC 7.3.0
    Python Implementation                         : CPython
    Python Version                                : 3.8.8
    Python Locale                                 : ja_JP.UTF-8
    
    __LLVM Information__
    LLVM Version                                  : 10.0.1
    
    __CUDA Information__
    CUDA Device Initialized                       : False
    CUDA Driver Version                           : ?
    CUDA Detect Output:
    None
    CUDA Libraries Test Output:
    None
    
    __ROC information__
    ROC Available                                 : False
    ROC Toolchains                                : None
    HSA Agents Count                              : 0
    HSA Agents:
    None
    HSA Discrete GPUs Count                       : 0
    HSA Discrete GPUs                             : None
    
    __SVML Information__
    SVML State, config.USING_SVML                 : False
    SVML Library Loaded                           : False
    llvmlite Using SVML Patched LLVM              : True
    SVML Operational                              : False
    
    __Threading Layer Information__
    TBB Threading Layer Available                 : True
    +-->TBB imported successfully.
    OpenMP Threading Layer Available              : True
    +-->Vendor: GNU
    Workqueue Threading Layer Available           : True
    +-->Workqueue imported successfully.
    
    __Numba Environment Variable Information__
    None found.
    
    __Conda Information__
    Conda Build                                   : 3.21.4
    Conda Env                                     : 4.10.1
    Conda Platform                                : linux-64
    Conda Python Version                          : 3.8.8.final.0
    Conda Root Writable                           : True
    
    
    __Warning log__
    Warning (cuda): CUDA device intialisation problem. Message:Error at driver init:
    [100] Call to cuInit results in CUDA_ERROR_NO_DEVICE:
    Exception class: <class 'numba.cuda.cudadrv.error.CudaSupportError'>
    Warning (roc): Error initialising ROC: No ROC toolchains found.
    Warning (roc): No HSA Agents found, encountered exception when searching: Error at driver init:
    NUMBA_HSA_DRIVER /opt/rocm/lib/libhsa-runtime64.so is not a valid file path.  Note it must be a filepath of the .so/.dll/.dylib or the driver:
    Warning (no file): /sys/fs/cgroup/cpuacct/cpu.cfs_quota_us
    Warning (no file): /sys/fs/cgroup/cpuacct/cpu.cfs_period_us

    CUDA bug - incorrect behavior 
    opened by Sahuta 60
  • Bump minimum supported Python version to 3.8

    Per https://github.com/numba/numba/blob/aaa6a0099ac217959126d195ab43d37000f9624a/CHANGE_LOG#L5-L6

    and NEP 29's drop schedule ("On Dec 26, 2021 drop support for Python 3.7 (initially released on Jun 27, 2018)"),

    this PR executes on dropping support for Python versions prior to 3.8 for the next minor release of Numba (0.57.0).

    highpriority BuildFarm Passed 5 - Ready to merge Effort - long 
    opened by jamesobutler 59
  • Seemingly random segfault on macOS if function is in larger library

    Reporting a bug

    • [x] I am using the latest released version of Numba (most recent is visible in the change log (https://github.com/numba/numba/blob/master/CHANGE_LOG).
    • [x] I have included below a minimal working reproducer (if you are unsure how to write one see http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports).

    Hi,

    First of all, sorry for this small report and the few examples, but at this point this seems so untraceable to me that I'm hoping for any input to track down the issue. Maybe it's even a severely stupid mistake on my part that I just can't find. I was getting segfaults from a Numba function and traced it down to the state I will outline here, but at this point I can't narrow it down any further.

    I have a function which operates on arrays. I have simplified it a great deal, so I know it doesn't make much sense any more, but here it goes.

    import numpy as np
    import numba
    
    @numba.jit("f8[:,:](f8[:,:,:],f8[:,:],f8,f8,f8[:])", nopython=True, parallel=True,
               nogil=True)
    def evalManyIndividualQ3D(individuals, X, p1, p2, p3):
        fitnesses = np.zeros((individuals.shape[0], 4))
        nF = individuals.shape[0]
        for i in numba.prange(nF):
            individual = individuals[i]
            P = np.random.random((3,4))
            fitnesses[i] = np.random.random((4,))
        return fitnesses
    

    For debugging, I'm using synthetic input data:

    n = 15
    m = 3
    inputInd = np.random.random((500, n, m))
    inputArray = np.random.random((n, m))
    p1 = 25e-3
    p2 = 55.
    p3 = np.array([320., 240.])
    
    ret = evalManyIndividualQ3D(inputInd, inputArray, p1, p2, p3)
    

    Running this as a small script works. Running it from an interactive session works. However, I have a large library of Numba functions in which the one above is included, just somewhere in there, same syntax, copy & paste. If I then add the execution with the same synthetic input data after the library (compiling the full library including the above function), only calling the function as above,

    ret = evalManyIndividualQ3D(inputInd, inputArray, p1, p2, p3)
    

    I'm getting a segfault. No traceback, nothing. In the terminal it's zsh: segmentation fault; Jupyter just hangs completely.

    This happens on macOS 10.15. A difference worth mentioning is that during compilation of the library I'm getting some warnings:

    NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float64, 2d, A), array(float64, 2d, A))
      warnings.warn(NumbaPerformanceWarning(msg))
    

    Those products are not in the function or connected to the function that crashes!

    At this point, I'm happy for any type of input since I can't find a reason.

    bug ParallelAccelerator threading 
    opened by max3-2 58
  • Bounds checking

    This is still a work in progress, but feedback is welcome. I'm still new to LLVM so let me know if I should be generating the code in a better way.

    I have changed the API of get_item_pointer[2] to include the context argument. This is required to raise an exception.

    Still TODO:

    • [x] Add tests
    • [x] Add the flag to the public API (right now it is enabled by default for testing purposes).
    • [x] Document the flag
    • [x] Figure out why the parallel tests fail with bounds checking enabled
    • [x] Figure out why the cuda tests fail with bounds checking enabled
    • [ ] Add a CI run with bounds checking globally enabled
    • [x] See if missing broadcast errors are related to this https://github.com/numba/numba/pull/4432#issuecomment-527571792
    • [x] Fix memory leak in the tests
    • [x] Add support for boundschecking fancy indexing

    TODOs that should probably wait for a future PR:

    • [ ] Make the error message match object mode. This would require being more fancy in the way the exception is generated. For now, if NUMBA_FULL_TRACEBACK=1 is set, the index, shape, and axis are printed as a debug message.
    • [ ] Make the error message show the location in the code

    I'll need help on how to do the last item.

    There is also a boundcheck flag in some places in the code, which doesn't seem to do anything. I have named my flag boundscheck with an s, as that seemed better, but I don't particularly care if you decide another name is better.
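
    For reference, a minimal sketch of the flag as it ended up in released Numba, where boundscheck=True turns out-of-bounds access into an IndexError:

    import numpy as np
    from numba import njit

    @njit(boundscheck=True)
    def read_past_end(a):
        # One past the last valid index: with bounds checking enabled this
        # raises IndexError instead of silently reading out-of-bounds memory.
        return a[a.shape[0]]

    try:
        read_past_end(np.arange(3))
    except IndexError as e:
        print("caught:", e)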

    5 - Ready to merge 
    opened by asmeurer 53
  • Python 3.9 Support

    Reporting a bug

    • [X] I have tried using the latest released version of Numba (most recent is visible in the change log (https://github.com/numba/numba/blob/master/CHANGE_LOG).
    • [X] I have included below a minimal working reproducer (if you are unsure how to write one see http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports).

    I'm seeing this warning pop up for a clean installation of numba with Python 3.9:

    python3 -m pip install numba
    
    Collecting numba
      Using cached numba-0.51.2.tar.gz (2.1 MB)
    Processing ./.cache/pip/wheels/40/08/53/26580f3607587bd3fa1a18619841d1dcfedcabf2be52f8e2cd/llvmlite-0.34.0-cp39-cp39-linux_x86_64.whl
    Processing ./.cache/pip/wheels/a3/17/dd/f2dba23a35bb6008732772ccfb13d3d0e537fbc6919ce6862b/numpy-1.19.2-cp39-cp39-linux_x86_64.whl
    Requirement already satisfied: setuptools in /usr/local/lib/python3.9/site-packages (from numba) (50.3.0)
    Building wheels for collected packages: numba
      Building wheel for numba (setup.py) ... error
      ERROR: Command errored out with exit status 1:
       command: /usr/local/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-t2pl9xsf/numba/setup.py'"'"'; __file__='"'"'/tmp/pip-install-t2pl9xsf/numba/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-3k5ws828
           cwd: /tmp/pip-install-t2pl9xsf/numba/
      Complete output (7 lines):
      Traceback (most recent call last):
        File "<string>", line 1, in <module>
        File "/tmp/pip-install-t2pl9xsf/numba/setup.py", line 354, in <module>
          metadata['ext_modules'] = get_ext_modules()
        File "/tmp/pip-install-t2pl9xsf/numba/setup.py", line 87, in get_ext_modules
          import numpy.distutils.misc_util as np_misc
      ModuleNotFoundError: No module named 'numpy'
      ----------------------------------------
      ERROR: Failed building wheel for numba
      Running setup.py clean for numba
      ERROR: Command errored out with exit status 1:
       command: /usr/local/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-t2pl9xsf/numba/setup.py'"'"'; __file__='"'"'/tmp/pip-install-t2pl9xsf/numba/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' clean --all
           cwd: /tmp/pip-install-t2pl9xsf/numba
      Complete output (7 lines):
      Traceback (most recent call last):
        File "<string>", line 1, in <module>
        File "/tmp/pip-install-t2pl9xsf/numba/setup.py", line 354, in <module>
          metadata['ext_modules'] = get_ext_modules()
        File "/tmp/pip-install-t2pl9xsf/numba/setup.py", line 87, in get_ext_modules
          import numpy.distutils.misc_util as np_misc
      ModuleNotFoundError: No module named 'numpy'
      ----------------------------------------
      ERROR: Failed cleaning build dir for numba
    Failed to build numba
    Installing collected packages: llvmlite, numpy, numba
        Running setup.py install for numba ... done
      DEPRECATION: numba was installed using the legacy 'setup.py install' method, because a wheel could not be built for it. pip 21.0 will remove support for this functionality. A possible replacement is to fix the wheel build issue reported above. You can find discussion regarding this at https://github.com/pypa/pip/issues/8368.
    Successfully installed llvmlite-0.34.0 numba-0.51.2 numpy-1.19.2
    

    I can see on the README that these versions are currently recommended:

    • Python versions: 3.6-3.8
    • llvmlite 0.33.*

    Are Python 3.9 and llvmlite 0.34.* not supported? Pip is currently warning about a wheel build failure, but numba will install.

    feature_request 
    opened by mjsteinbaugh 50
  • Implement np.is* functions

    Still a work in progress, but any feedback is appreciated. The goal is to implement all the np.is* functions listed here.

    At the moment, only np.iscomplexobj and np.isrealobj have been implemented.

    Implementation and tests are based on NumPy counterparts.

    • [x] np.isclose - moved to PR https://github.com/numba/numba/pull/7067
    • [x] np.iscomplex
    • [x] np.iscomplexobj
    • [x] np.isneginf
    • [x] np.isposinf
    • [x] np.isreal
    • [x] np.isrealobj
    • [x] np.isscalar

    5 - Ready to merge Effort - long 
    opened by guilhermeleobas 46
  • Combined parfor chunking and caching PRs.

    This replaces #6025 and #7522. There was overlap between these two PRs around using the dynamic thread count, so rather than delaying the merge I went ahead and combined them.

    This combined PR provides an API for selecting a parfor chunk size to deal with load-balancing issues, and it eliminates all use of static thread counts in generated parfor code. Thus, parfor code (even with reductions) is now cacheable, and if you change the chunk size or thread count after reloading from the cache, the new values are applied correctly.
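
    For reference, a minimal sketch of the chunk-size API (exposed as numba.set_parallel_chunksize/numba.get_parallel_chunksize in released Numba) used to balance uneven parfor iterations:

    import numpy as np
    from numba import njit, prange, set_parallel_chunksize, get_parallel_chunksize

    @njit(parallel=True)
    def uneven_work(n):
        acc = 0.0
        for i in prange(n):
            for j in range(i):      # later iterations do more work
                acc += j * 1e-9
        return acc

    old = get_parallel_chunksize()
    set_parallel_chunksize(8)       # hand out iterations in chunks of 8
    print(uneven_work(10_000))
    set_parallel_chunksize(old)     # restore the previous setting (0 = default scheduling)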

    Closes #2556 Closes #3144

    ParallelAccelerator BuildFarm Passed 5 - Ready to merge Effort - long 
    opened by DrTodd13 42
  • Access to outer variables in nested functions

    Reporting a bug

    • [x] I have tried using the latest released version of Numba (most recent is visible in the change log (https://github.com/numba/numba/blob/main/CHANGE_LOG).
    • [x] I have included a self contained code sample to reproduce the problem. i.e. it's possible to run as 'python bug.py'.

    I don't know if it is a bug. In nested functions, variables defined before the inner function definition can be accessed in both Python and nopython mode, while variables defined after the inner function definition cannot be accessed in nopython mode but can be accessed in Python mode. Examples are below:

    import numba
    
    
    @numba.njit
    def func1():
        msg = "func1() called"
    
        def wrapper():
            print(msg)
    
        wrapper()
    
    
    @numba.njit
    def func2():
        def wrapper():
            print(msg)
    
        msg = "func2() called"
        wrapper()
    
    
    if __name__ == "__main__":
        print("\nPython mode\n")
        func1.py_func()  # works
        func2.py_func()  # works
    
        print("\nNumba mode\n")
        func1()  # works
        func2()  # fails
    

    Here is the error message:

    PS C:\Users\Hailin\OneDrive\Desktop\numba-test> python .\test.py
    
    Python mode
    
    func1() called
    func2() called
    
    Numba mode
    
    func1() called
    Traceback (most recent call last):
      File "C:\Users\Hailin\AppData\Local\Programs\Python\Python310\lib\site-packages\numba\core\ir.py", line 267, in get
        return self._con[name]
    KeyError: 'msg'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "C:\Users\Hailin\AppData\Local\Programs\Python\Python310\lib\site-packages\numba\core\ir.py", line 1124, in get_exact
        return self.localvars.get(name)
      File "C:\Users\Hailin\AppData\Local\Programs\Python\Python310\lib\site-packages\numba\core\ir.py", line 269, in get
        raise NotDefinedError(name)
    numba.core.errors.NotDefinedError: The compiler failed to analyze the bytecode. Variable 'msg' is not defined.
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "C:\Users\Hailin\AppData\Local\Programs\Python\Python310\lib\site-packages\numba\core\ir.py", line 267, in get
        return self._con[name]
    KeyError: 'msg'
    
    opened by haiiliin 0
  • building a code that utilizes Numba and mpi4py

    Hi, I don't think this has been addressed before. I am trying to simulate atomic diffusion using a Monte Carlo method. First I decompose the domain into subdomains and operate on each subdomain on a different processor. Of course, a lot of MPI.Send() and MPI.Recv() communication is required for my problem. I don't use Numba in the functions that call Send() and Recv(), because I don't think Numba supports that and I believe there is no reason to do so. Here is my problem: I make buffers for every subdomain with unique sizes and Send() them to nearby processors so that they can Recv() them. Then I use Numba to act on a specific region of the subdomain in addition to the buffers that a processor has just received. My code looks as follows:

    # quad 0
    #@njit
    def get_state_quad_a(ind_1st_nei, z_current):
        output = np.zeros((ind_1st_nei.shape[0], ind_1st_nei.shape[1]), dtype=np.ubyte)

        for atom in range(len(ind_1st_nei)):
            for nei in range(24):
                if ((ind_1st_nei[atom][nei][0] < 0) and (ind_1st_nei[atom][nei][1] < 0)):
                    output[atom][nei] = (buff_corn[ind_1st_nei[atom][nei][0] + NUMBER_NEIGHBOR, ind_1st_nei[atom][nei][1] + NUMBER_NEIGHBOR, (ind_1st_nei[atom][nei][2] - z_current + 1)])
                elif ((ind_1st_nei[atom][nei][0] < 0) and (ind_1st_nei[atom][nei][1] > 0)):
                    output[atom][nei] = (buff_vert[ind_1st_nei[atom][nei][0] + NUMBER_NEIGHBOR, ind_1st_nei[atom][nei][1], (ind_1st_nei[atom][nei][2] - z_current + 1)])
                elif ((ind_1st_nei[atom][nei][0] > 0) and (ind_1st_nei[atom][nei][1] < 0)):
                    output[atom][nei] = (buff_horz[ind_1st_nei[atom][nei][0], ind_1st_nei[atom][nei][1] + NUMBER_NEIGHBOR, (ind_1st_nei[atom][nei][2] - z_current + 1)])
                else:
                    output[atom][nei] = State[ind_1st_nei[atom][nei][0], ind_1st_nei[atom][nei][1], ind_1st_nei[atom][nei][2]]
        return output
    

    So basically this function reads the information in ind_1st_nei, which is a 3D array (physically this is the location of sites in the array, used to determine the location of atoms). I use that array to read State (also a 3D array) at the current step to tell me what type of atom is there (0, 1, 2, 3, ... etc). When some conditions hold (near subdomain boundaries), I need to read the buffers (buff_vert, buff_corn and buff_horz) instead. When I don't njit, the results come out fine and I read the correct neighbor states; however, when I use Numba, at some point the simulation messes up, the output array comes out incorrect, and it is hard to predict where the error is. I am afraid there's a race condition happening, but I believe this should not be the case, because every processor should technically have its own memory. Any suggestions? Thanks a lot.

    Reporting a bug

    • [ ] I have tried using the latest released version of Numba (most recent is visible in the change log (https://github.com/numba/numba/blob/main/CHANGE_LOG).
    • [ ] I have included a self contained code sample to reproduce the problem. i.e. it's possible to run as 'python bug.py'.
    more info needed 
    opened by ahmad681 6
  • NumPy 1.24 (PR for review)

    This PR updates Numba to support NumPy 1.24, and is ready for review. Presently CI will fail due to the lack of NumPy 1.24 packages in Anaconda, but this should be resolved in time.

    Each individual commit message details the changes made and their rationale - each should be reviewable as an individual change.

    See testing with two other PRs:

    • #8690 tests the changes in this branch with the old build matrix, demonstrating that these changes don't introduce any issues that would have been caught by the old slicing with NumPy versions.
    • #8620 contains the actual history of the development of these changes, and tests every slice with a pip-installed NumPy 1.24.1.
    3 - Ready for Review 
    opened by gmarkall 3
  • Memory leak when called function raises error

    Reporting a bug

    • [x] I have tried using the latest released version of Numba (most recent is visible in the change log (https://github.com/numba/numba/blob/main/CHANGE_LOG).
    • [x] I have included a self contained code sample to reproduce the problem. i.e. it's possible to run as 'python bug.py'.
    import gc
    
    import numpy as np
    
    from numba import njit
    from numba.core.extending import register_jitable
    from numba.core.runtime import rtsys
    
    
    @register_jitable
    def raise_error():
        raise ValueError("test")
    
    
    @njit(parallel=False)
    def leak():
        data = np.zeros((100, 2))
        raise_error()
        return data
    
    
    print(rtsys.get_allocation_stats())
    try:
        leak()
    except Exception as e:
        ...
    gc.collect()
    print(rtsys.get_allocation_stats())
    

    Prints:

    nrt_mstats(alloc=0, free=0, mi_alloc=0, mi_free=0)
    nrt_mstats(alloc=1, free=0, mi_alloc=1, mi_free=0)
    

    I'm not 100% sure, but I think it's also causing leaks when calling np.argmin or np.argmax with the axis argument and a 0-length sequence:

    
    @njit()
    def jitargmin(arr, axis):
        return np.argmin(arr, axis)
    
    print(rtsys.get_allocation_stats())
    try:
        result = jitargmin(np.zeros((3, 0)), axis=1)
    except Exception as e:
        ...
    gc.collect()
    print(rtsys.get_allocation_stats())
    

    printing:

    nrt_mstats(alloc=0, free=0, mi_alloc=0, mi_free=0)
    nrt_mstats(alloc=6, free=4, mi_alloc=5, mi_free=3)
    
    duplicate bug - memory leak 
    opened by Tobi995 5
  • No definition for lowering <built-in method twoD_impl of _dynfunc._Closure object at 0x2b60ce355458>(array(float32, 2d, A), omitted(default=None)) -> float64

    I tried my code as below:

    from numpy import linalg
    from numba import jit, cuda, float32

    @jit
    def compute(points):
        return linalg.norm(points)

    compute_gpu = cuda.jit(float32(float32[:, :]), device=True)(compute)
    

    However, it keeps throwing an error:

    After I upgraded the version to 0.56.4, it throws this error:

    Unknown attribute 'norm' of type Module(<module 'numpy.linalg' from '/mnt/home/subowen/.local/lib/python3.9/site-packages/numpy/linalg/__init__.py'>)
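
    Since numpy.linalg is not supported in the CUDA target, a device function has to compute the norm by hand. A minimal, hypothetical sketch (names are illustrative, not from the report):

    import math
    from numba import cuda, float32

    @cuda.jit(float32(float32[:, :]), device=True)
    def frob_norm(points):
        # Accumulate the sum of squares manually; the declared signature
        # casts the result back to float32.
        acc = 0.0
        for i in range(points.shape[0]):
            for j in range(points.shape[1]):
                acc += points[i, j] * points[i, j]
        return math.sqrt(acc)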
    
    question 
    opened by bowensu123 1
Releases (0.56.4)
  • 0.56.4 (Nov 4, 2022)

    This is a bugfix release to fix a regression in the CUDA target in relation to the .view() method on CUDA device arrays that is present when using NumPy version 1.23.0 or later.

  • 0.56.3 (Oct 14, 2022)

    This is a bugfix release to remove the version restriction applied to the setuptools package and to fix a bug in the CUDA target in relation to copying zero length device arrays to zero length host arrays.

  • 0.56.2 (Sep 5, 2022)

  • 0.56.0 (Jul 26, 2022)

    This release continues to add new features, bug fixes and stability improvements to Numba. Please note that this will be the last release that has support for Python 3.7, as the next release series (Numba 0.57) will support Python 3.11! Also note that this will be the last release to support linux-32 packages produced by the Numba team.

  • 0.55.2 (May 26, 2022)

  • 0.55.1 (Jan 28, 2022)

    This is a bugfix release that closes all the remaining issues from the accelerated release of 0.55.0 and also any release critical regressions discovered since then.

  • 0.55.0 (Jan 14, 2022)

    This release includes a significant number of important dependency upgrades along with a number of new features and bug fixes. Most importantly, this release adds support for Python 3.10 and NumPy 1.21.

  • 0.54.1 (Oct 8, 2021)

    This is a bugfix release for 0.54.0. It fixes a regression in structured array type handling, a potential leak on initialization failure in the CUDA target, a regression caused by Numba’s vendored cloudpickle module resetting dynamic classes and a few minor testing/infrastructure related problems.

    Please see details in the release note: https://numba.readthedocs.io/en/0.54.1/release-notes.html#version-0-54-1-7-october-2021

  • 0.54.0 (Sep 23, 2021)

    This release includes a significant number of new features, important refactoring, critical bug fixes and a number of dependency upgrades.

    Please see details in the release notes at https://numba.readthedocs.io/en/0.54.0/release-notes.html

  • 0.53.1 (Apr 1, 2021)

    This is a bugfix release for 0.53.0. It contains the following four pull-requests which fix two critical regressions and two build failures reported by the openSUSE team:

    • PR #6851 set non-reported llvm timing values to 0.0
    • PR #6837 Ignore warnings from packaging module when testing import behaviour.
    • PR #6828 Fix regression in CUDA: Set stream in mapped and managed array device_setup
    • PR #6826 Fix regression on gufunc serialization
  • 0.53.0 (Mar 16, 2021)

  • 0.52.0 (Dec 17, 2020)

Owner
Numba
Array-oriented Python JIT compiler