PyMatting: A Python Library for Alpha Matting

Overview

We introduce the PyMatting package for Python which implements various methods to solve the alpha matting problem.

[Image: lemur matting example]

Given an input image and a hand-drawn trimap (top row), alpha matting estimates the alpha channel of a foreground object which can then be composed onto a different background (bottom row).
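The compositing step mentioned above can be sketched with plain NumPy. This is a minimal illustration of the compositing equation I = alpha * F + (1 - alpha) * B, not PyMatting's API; the toy arrays are made up for demonstration:

```python
import numpy as np

# Compositing equation: I = alpha * F + (1 - alpha) * B,
# applied per pixel and per color channel.
alpha = np.array([[0.0, 0.5, 1.0]])   # toy 1x3 alpha matte
F = np.ones((1, 3, 3))                # white foreground (RGB)
B = np.zeros((1, 3, 3))               # black background (RGB)

# Broadcast alpha over the color channel dimension.
composite = alpha[..., np.newaxis] * F + (1 - alpha[..., np.newaxis]) * B
```

Pixels with alpha 0.0 take the background color, pixels with alpha 1.0 the foreground color, and fractional alpha blends the two.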

PyMatting provides:

  • Alpha matting implementations for:
    • Closed Form Alpha Matting [1]
    • Large Kernel Matting [2]
    • KNN Matting [3]
    • Learning Based Digital Matting [4]
    • Random Walk Matting [5]
  • Foreground estimation implementations for:
    • Closed Form Foreground Estimation [1]
    • Fast Multi-Level Foreground Estimation (CPU, CUDA and OpenCL) [6]
  • Fast multithreaded KNN search
  • Preconditioners to accelerate the convergence rate of conjugate gradient descent:
    • The incomplete thresholded Cholesky decomposition (Incomplete is part of the name. The implementation is quite complete.)
    • The V-Cycle Geometric Multigrid preconditioner
  • Readable code leveraging NumPy, SciPy and Numba
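As a rough illustration of why a preconditioner helps, here is a generic Jacobi-preconditioned conjugate gradient solve with SciPy. This is a sketch on a small toy system, not PyMatting's own implementation (which ships incomplete Cholesky and multigrid preconditioners for the matting Laplacian):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# Symmetric positive-definite system (a 1D Laplacian, similar in
# spirit to the matting Laplacian but much smaller).
n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

# Jacobi (diagonal) preconditioner: applies an approximation of
# A^-1 cheaply in every CG iteration.
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: v / d)

x, info = cg(A, b, M=M, maxiter=1000)
# info == 0 signals convergence; a good preconditioner reduces the
# number of iterations needed on ill-conditioned systems.
```

PyMatting's preconditioners plug into the same role as `M` here, but approximate the matting Laplacian's inverse far more accurately than a diagonal.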

Getting Started

Requirements

Minimal requirements

  • numpy>=1.16.0
  • pillow>=5.2.0
  • numba>=0.47.0
  • scipy>=1.1.0

Additional requirements for GPU support

  • cupy-cuda90>=6.5.0 or similar
  • pyopencl>=2019.1.2

Requirements to run the tests

  • pytest>=5.3.4

Installation with PyPI

pip3 install pymatting

Installation from Source

git clone https://github.com/pymatting/pymatting
cd pymatting
pip3 install .

Example

from pymatting import cutout

cutout(
    # input image path
    "data/lemur/lemur.png",
    # input trimap path
    "data/lemur/lemur_trimap.png",
    # output cutout path
    "lemur_cutout.png")

More advanced examples

Trimap Construction

All implemented methods rely on trimaps, which roughly classify the image into foreground, background and unknown regions. Trimaps are expected to be numpy.ndarrays of type np.float64 with the same shape as the input image but only one color channel. A trimap value of 0.0 denotes a pixel that is 100% background, and a value of 1.0 denotes a pixel that is 100% foreground. All other values mark unknown pixels whose alpha will be estimated by the algorithm.
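For example, a trimap can be assembled directly with NumPy. This is a minimal sketch with hard-coded region masks; in practice the known regions would come from a hand-drawn annotation:

```python
import numpy as np

height, width = 4, 6

# Start with every pixel marked unknown (any value strictly
# between 0.0 and 1.0 counts as unknown).
trimap = np.full((height, width), 0.5, dtype=np.float64)

trimap[:, :2] = 0.0   # leftmost columns: known background
trimap[:, -2:] = 1.0  # rightmost columns: known foreground
```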

Testing

Run the tests from the main directory:

 python3 tests/download_images.py
 pip3 install -r requirements_tests.txt
 pytest

Currently 89% of the code is covered by tests.

Upgrade

pip3 install --upgrade pymatting
python3 -c "import pymatting"

The last line is necessary to rebuild the ahead-of-time compiled module. Without it, the module will be rebuilt on first import, but the old module will already be loaded at that point, which might cause compatibility issues. Simply re-running the code should usually fix it.

Bug Reports, Questions and Pull-Requests

Please see our community guidelines.

Authors

  • Thomas Germer
  • Tobias Uelwer
  • Stefan Conrad
  • Stefan Harmeling

See also the list of contributors who participated in this project.

License

This project is licensed under the MIT License - see the LICENSE.md file for details.

Citing

If you found PyMatting to be useful for your work, please consider citing our paper:

@article{Germer2020,
  doi = {10.21105/joss.02481},
  url = {https://doi.org/10.21105/joss.02481},
  year = {2020},
  publisher = {The Open Journal},
  volume = {5},
  number = {54},
  pages = {2481},
  author = {Thomas Germer and Tobias Uelwer and Stefan Conrad and Stefan Harmeling},
  title = {PyMatting: A Python Library for Alpha Matting},
  journal = {Journal of Open Source Software}
}

References

[1] Anat Levin, Dani Lischinski, and Yair Weiss. A closed-form solution to natural image matting. IEEE transactions on pattern analysis and machine intelligence, 30(2):228–242, 2007.

[2] Kaiming He, Jian Sun, and Xiaoou Tang. Fast matting using large kernel matting laplacian matrices. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2165–2172. IEEE, 2010.

[3] Qifeng Chen, Dingzeyu Li, and Chi-Keung Tang. Knn matting. IEEE transactions on pattern analysis and machine intelligence, 35(9):2175–2188, 2013.

[4] Yuanjie Zheng and Chandra Kambhamettu. Learning based digital matting. In 2009 IEEE 12th international conference on computer vision, 889–896. IEEE, 2009.

[5] Leo Grady, Thomas Schiwietz, Shmuel Aharon, and Rüdiger Westermann. Random walks for interactive alpha-matting. In Proceedings of VIIP, volume 2005, 423–429. 2005.

[6] Thomas Germer, Tobias Uelwer, Stefan Conrad, and Stefan Harmeling. Fast multi-level foreground estimation. arXiv preprint arXiv:2006.14970, 2020.

Lemur image by Mathias Appel from https://www.flickr.com/photos/mathiasappel/25419442300/ licensed under CC0 1.0 Universal (CC0 1.0) Public Domain License.

Comments
  • [Question❓] All unknown region input

    When I input a trimap that is all unknown region, i.e. no foreground and no background, it raises an error here: https://github.com/pymatting/pymatting/blob/master/pymatting/util/util.py#L491.

    opened by michaelowenliu 11
  • Got Segmentation Fault when calling estimate_alpha_knn

    Got this error both on macOS 10.14 and Ubuntu 16.04

    When installing the package I used the --ignore-installed llvmlite flag for pip because I got: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it, which would lead to only a partial uninstall. I am not sure if this is relevant.

    pytest
    ============================================================================== test session starts ===============================================================================
    platform darwin -- Python 3.7.4, pytest-5.2.1, py-1.8.0, pluggy-0.13.0
    rootdir: /Users/user/pymatting-master
    plugins: arraydiff-0.3, remotedata-0.3.2, doctestplus-0.4.0, openfiles-0.4.0
    collected 11 items                                                                                                                                                               
    
    tests/test_boxfilter.py .                                                                                                                                                  [  9%]
    tests/test_cg.py .                                                                                                                                                         [ 18%]
    tests/test_estimate_alpha.py F                                                                                                                                             [ 27%]
    tests/test_foreground.py .                                                                                                                                                 [ 36%]
    tests/test_ichol.py .                                                                                                                                                      [ 45%]
    tests/test_kdtree.py Fatal Python error: Segmentation fault
    
    Current thread 0x00000001086d7dc0 (most recent call first):
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pymatting/util/kdtree.py", line 280 in __init__
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pymatting/util/kdtree.py", line 347 in knn
      File "/Users/user/pymatting-master/tests/test_kdtree.py", line 20 in run_kdtree
      File "/Users/user/pymatting-master/tests/test_kdtree.py", line 46 in test_kdtree
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/python.py", line 170 in pytest_pyfunc_call
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/python.py", line 1423 in runtest
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 125 in pytest_runtest_call
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 201 in <lambda>
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 229 in from_call
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 201 in call_runtest_hook
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 176 in call_and_report
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 95 in runtestprotocol
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 80 in pytest_runtest_protocol
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/main.py", line 256 in pytest_runtestloop
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/main.py", line 235 in _main
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/main.py", line 191 in wrap_session
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/main.py", line 228 in pytest_cmdline_main
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py", line 90 in main
      File "/Users/user/opt/anaconda3/bin/pytest", line 11 in <module>
    [1]    99661 segmentation fault  pytest
    
    opened by ntuLC 10
  • [Question❓]is there a way to speed it up?

    Hey! This tool gives me very good results, but by my measurements it took nearly 10 minutes to process 1,000 images, not counting I/O time. 1,000 images generally correspond to a 30-second video, so this efficiency is not ideal. Looking forward to a reply.

    opened by JSHZT 6
  • [BUG 🐛] No module named 'pymatting_aot.aot'

    Bug description

    $ python test2.py
    Failed to import ahead-of-time-compiled modules. This is expected on first import.
    Compiling modules and trying again (this might take a minute).
    Traceback (most recent call last):
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting_aot/cc.py", line 21, in <module>
        import pymatting_aot.aot
    ModuleNotFoundError: No module named 'pymatting_aot.aot'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "test2.py", line 1, in <module>
        from pymatting import cutout
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting/__init__.py", line 2, in <module>
        import pymatting_aot.cc
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting_aot/cc.py", line 28, in <module>
        compile_modules()
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting_aot/cc.py", line 6, in compile_modules
        cc = CC("aot")
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/numba/pycc/cc.py", line 65, in __init__
        self._toolchain = Toolchain()
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/numba/pycc/platform.py", line 78, in __init__
        self._raise_external_compiler_error()
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/numba/pycc/platform.py", line 121, in _raise_external_compiler_error
        raise RuntimeError(msg)
    RuntimeError: Attempted to compile AOT function without the compiler used by `numpy.distutils` present. If using conda try:

    #> conda install gcc_linux-64 gxx_linux-64

    To Reproduce

    Installed pymatting with pip install rembg on Fedora 33, within a venv for Python 3.8. Created test2.py:

    from pymatting import cutout

    cutout(
        # input image path
        "data/lemur/lemur.png",
        # input trimap path
        "data/lemur/lemur_trimap.png",
        # output cutout path
        "lemur_cutout.png")

    Launched: python test2.py within the venv for Python 3.8.

    Expected behavior

    Runs without errors.

    Library versions:

    python --version --version
    Python 3.8.6 (default, Sep 25 2020, 00:00:00) 
    [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
    
    python -c "import numpy; numpy.show_config()"
    blas_mkl_info:
      NOT AVAILABLE
    blis_info:
      NOT AVAILABLE
    openblas_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    blas_opt_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    lapack_mkl_info:
      NOT AVAILABLE
    openblas_lapack_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    lapack_opt_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    
    python -c "import scipy;scipy.show_config()"
    lapack_mkl_info:
      NOT AVAILABLE
    openblas_lapack_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    lapack_opt_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    blas_mkl_info:
      NOT AVAILABLE
    blis_info:
      NOT AVAILABLE
    openblas_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    blas_opt_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    
    python -c "import numba;print('Numba version:', numba.__version__)"
    Numba version: 0.51.2
    
    python -c "import PIL;print('PIL version:', PIL.__version__)"
    PIL version: 8.0.1
    
    python -c "from pymatting.__about__ import __version__;print('PyMatting version:', __version__)"
    Failed to import ahead-of-time-compiled modules. This is expected on first import.
    Compiling modules and trying again (this might take a minute).
    Traceback (most recent call last):
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting_aot/cc.py", line 21, in <module>
        import pymatting_aot.aot
    ModuleNotFoundError: No module named 'pymatting_aot.aot'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting/__init__.py", line 2, in <module>
        import pymatting_aot.cc
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting_aot/cc.py", line 28, in <module>
        compile_modules()
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting_aot/cc.py", line 6, in compile_modules
        cc = CC("aot")
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/numba/pycc/cc.py", line 65, in __init__
        self._toolchain = Toolchain()
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/numba/pycc/platform.py", line 78, in __init__
        self._raise_external_compiler_error()
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/numba/pycc/platform.py", line 121, in _raise_external_compiler_error
        raise RuntimeError(msg)
    RuntimeError: Attempted to compile AOT function without the compiler used by `numpy.distutils` present. If using conda try:
    
    #> conda install gcc_linux-64 gxx_linux-64
    
    
    opened by vlsav 6
  • [Question❓] Tests for GPU implementation skipped, because of missing packages

    Hi

    I have set up pymatting in a container environment and executed the tests. Pytest was able to complete, however I got the following warning:

    tests/test_foreground.py::test_foreground
      /pymatting/tests/test_foreground.py:32: UserWarning: Tests for GPU implementation skipped, because of missing packages.
        "Tests for GPU implementation skipped, because of missing packages."

    -- Docs: https://docs.pytest.org/en/stable/warnings.html

    I noticed that a similar issue was reported earlier as well, but I couldn't find a conclusion.

    I have Nvidia GPUs, but somehow they are not being detected. I have individually installed cupy, pyopencl, libcutensor and some other packages; here is the output for the installed CUDA packages:

    dpkg --list | grep cuda
    ii  cuda-command-line-tools-10-2  10.2.89-1                           amd64        CUDA command-line tools
    ii  cuda-compat-10-2              440.95.01-1                         amd64        CUDA Compatibility Platform
    ii  cuda-compiler-10-2            10.2.89-1                           amd64        CUDA compiler
    ii  cuda-cudart-10-2              10.2.89-1                           amd64        CUDA Runtime native Libraries
    ii  cuda-cudart-dev-10-2          10.2.89-1                           amd64        CUDA Runtime native dev links, headers
    ii  cuda-cufft-10-2               10.2.89-1                           amd64        CUFFT native runtime libraries
    ii  cuda-cufft-dev-10-2           10.2.89-1                           amd64        CUFFT native dev links, headers
    ii  cuda-cuobjdump-10-2           10.2.89-1                           amd64        CUDA cuobjdump
    ii  cuda-cupti-10-2               10.2.89-1                           amd64        CUDA profiling tools runtime libs.
    ii  cuda-cupti-dev-10-2           10.2.89-1                           amd64        CUDA profiling tools interface.
    ii  cuda-curand-10-2              10.2.89-1                           amd64        CURAND native runtime libraries
    ii  cuda-curand-dev-10-2          10.2.89-1                           amd64        CURAND native dev links, headers
    ii  cuda-cusolver-10-2            10.2.89-1                           amd64        CUDA solver native runtime libraries
    ii  cuda-cusolver-dev-10-2        10.2.89-1                           amd64        CUDA solver native dev links, headers
    ii  cuda-cusparse-10-2            10.2.89-1                           amd64        CUSPARSE native runtime libraries
    ii  cuda-cusparse-dev-10-2        10.2.89-1                           amd64        CUSPARSE native dev links, headers
    ii  cuda-driver-dev-10-2          10.2.89-1                           amd64        CUDA Driver native dev stub library
    ii  cuda-gdb-10-2                 10.2.89-1                           amd64        CUDA-GDB
    ii  cuda-libraries-10-2           10.2.89-1                           amd64        CUDA Libraries 10.2 meta-package
    ii  cuda-libraries-dev-10-2       10.2.89-1                           amd64        CUDA Libraries 10.2 development meta-package
    ii  cuda-license-10-2             10.2.89-1                           amd64        CUDA licenses
    ii  cuda-memcheck-10-2            10.2.89-1                           amd64        CUDA-MEMCHECK
    ii  cuda-minimal-build-10-2       10.2.89-1                           amd64        Minimal CUDA 10.2 toolkit build packages.
    ii  cuda-misc-headers-10-2        10.2.89-1                           amd64        CUDA miscellaneous headers
    ii  cuda-npp-10-2                 10.2.89-1                           amd64        NPP native runtime libraries
    ii  cuda-npp-dev-10-2             10.2.89-1                           amd64        NPP native dev links, headers
    ii  cuda-nvcc-10-2                10.2.89-1                           amd64        CUDA nvcc
    ii  cuda-nvdisasm-10-2            10.2.89-1                           amd64        CUDA disassembler
    ii  cuda-nvgraph-10-2             10.2.89-1                           amd64        NVGRAPH native runtime libraries
    ii  cuda-nvgraph-dev-10-2         10.2.89-1                           amd64        NVGRAPH native dev links, headers
    ii  cuda-nvjpeg-10-2              10.2.89-1                           amd64        NVJPEG native runtime libraries
    ii  cuda-nvjpeg-dev-10-2          10.2.89-1                           amd64        NVJPEG native dev links, headers
    ii  cuda-nvml-dev-10-2            10.2.89-1                           amd64        NVML native dev links, headers
    ii  cuda-nvprof-10-2              10.2.89-1                           amd64        CUDA Profiler tools
    ii  cuda-nvprune-10-2             10.2.89-1                           amd64        CUDA nvprune
    ii  cuda-nvrtc-10-2               10.2.89-1                           amd64        NVRTC native runtime libraries
    ii  cuda-nvrtc-dev-10-2           10.2.89-1                           amd64        NVRTC native dev links, headers
    ii  cuda-nvtx-10-2                10.2.89-1                           amd64        NVIDIA Tools Extension
    ii  cuda-sanitizer-api-10-2       10.2.89-1                           amd64        CUDA Sanitizer API
    hi  libcudnn7                     7.6.5.32-1+cuda10.2                 amd64        cuDNN runtime libraries
    ii  libcudnn7-dev                 7.6.5.32-1+cuda10.2                 amd64        cuDNN development libraries and headers
    hi  libnccl-dev                   2.7.8-1+cuda10.2                    amd64        NVIDIA Collectives Communication Library (NCCL) Development Files
    hi  libnccl2                      2.7.8-1+cuda10.2                    amd64        NVIDIA Collectives Communication Library (NCCL) Runtime
    

    Could you please advise on what package might be missing? Thank you.

    opened by ghazni123 6
  • pytest Error: tests/test_lkm.py:81: AssertionError

    === warnings summary ===

    tests/test_foreground.py::test_foreground
      /home/ferg/git/pymatting/tests/test_foreground.py:31: UserWarning: Tests for GPU implementation skipped, because of missing packages.

    I'm on Fedora 31; here are my pip3 package versions, which are above the required dependencies.

    Requirement already satisfied: numpy in /usr/lib64/python3.7/site-packages (1.17.4)
    Requirement already satisfied: pillow in /usr/lib64/python3.7/site-packages (6.1.0)
    Requirement already satisfied: numba in /home/ferg/.local/lib/python3.7/site-packages (0.48.0)
    Requirement already satisfied: scipy in /home/ferg/.local/lib/python3.7/site-packages (1.4.1)

    opened by 3dsf 5
  • Include a MANIFEST.in file

    I'm attempting to get this package into a binary format on conda-forge; would it be possible to include a MANIFEST.in file? Currently some of the required files are not included in the sdist (e.g. requirements.txt).

    https://packaging.python.org/guides/using-manifest-in/

    opened by thewchan 4
  • [Question❓] What exactly is a trimap?

    While reading about trimaps, I found two different definitions:

    • An image consisting of only 3 colors: black, white and a single shade of grey
    • An image consisting of black, white and shades of grey (where all shades of grey correspond to unknown region)

    Which one is correct?

    opened by Nkap23 4
  • @vlsav, thanks for reporting this issue! Have you tried running `conda install gcc_linux-64 gxx_linux-64` (as suggested)?

    Originally posted by @tuelwer in https://github.com/pymatting/pymatting/issues/37#issuecomment-731645867

    opened by dreamer121121 4
  • ValueError on import

    Hi, I installed the library, but there is a problem importing your package.

    I am using Python 3.8.1 with:

    numpy=1.18.1 (>=1.16.0)
    pillow=6.2.1 (>=5.2.0)
    numba=0.47.0 (>=0.44.0)
    scipy=1.3.3 (>=1.1.0)

    *** ValueError: Failed in nopython mode pipeline (step: convert to parfors) Cannot add edge as dest node 26 not in nodes {130, 132, 262, 264, 528, 30, 418, 302, 564, 565, 566, 568, 322, 450, 196, 452, 324, 340, 212, 86, 214, 348, 94, 228, 356, 494, 118, 246, 248, 378, 380}

    (you can read all here: https://gyazo.com/b6b9756f0c8d75a30a63dada09c5f82e)

    Thank you for your work :+1:

    opened by Mathux 4
  • [BUG 🐛] PyMatting crashes when I use it in torch dataloader.

    Bug description

    I used pymatting in torch data preprocessing, but the new version of pymatting does not seem to support multi-threading. Version 1.0.4 works, however.

    To Reproduce

    PyMatting 1.1.4, Torch 1.10, 5900X with a 3090, CUDA 11.4. torch.dataset/dataloader with num_workers>=1.

    opened by Windaway 3
  • Tests require missing images

    Several tests (for example, the one in test_estimate_alpha.py) fail because a required image is missing:

    FAILED tests/test_estimate_alpha.py::test_alpha - FileNotFoundError: [Errno 2] No such file or directory: 'data/input_training_lowres/GT01.png'
    FAILED tests/test_laplacians.py::test_laplacians - FileNotFoundError: [Errno 2] No such file or directory: 'data/input_training_lowres/GT01.png'
    FAILED tests/test_lkm.py::test_lkm - FileNotFoundError: [Errno 2] No such file or directory: 'data/input_training_lowres/GT01.png'
    FAILED tests/test_preconditioners.py::test_preconditioners - FileNotFoundError: [Errno 2] No such file or directory: 'data/input_training_lowres/GT01.png'
    

    Could/should GT01.png be included?

    opened by jangop 4
  • `setup.py` should define actual dependencies

    Currently, the packages specified in requirements.txt are copied into setup.py:

        install_requires=load_text("requirements.txt").strip().split("\n"),
    

    This is bad practice and can cause problems downstream.

    requirements.txt should be used to define a repeatable installation, such as a development environment or a production environment. As such, versions of dependencies contained therein should be as specific as possible.

    install_requires should be used to indicate dependencies necessary to run the package. As such, versions of dependencies contained therein should be as broad as possible.

    Please see “install_requires vs requirements files” on python.org or “requirements.txt vs setup.py” on stackoverflow for more information.

    I'd be happy to contribute a PR with loose dependency specifications in setup.py and concrete specifications in requirements.txt.
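    A loose specification along those lines might look like this. This is a sketch, not the project's actual setup.py; the version bounds are taken from the requirements listed in the README above and are illustrative:

```python
# setup.py (sketch): broad runtime requirements in install_requires,
# while requirements.txt pins exact versions for reproducible
# development or production environments.
from setuptools import setup

setup(
    name="pymatting",
    install_requires=[
        "numpy>=1.16.0",
        "pillow>=5.2.0",
        "numba>=0.47.0",
        "scipy>=1.1.0",
    ],
)
```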

    opened by jangop 5
  • Make PyMatting available on conda-forge

    opened by tuelwer 4
  • Foreground background estimation for TensorFlow version[Question❓]

    Hi,

    Thank you for your amazing repo. I tried to convert estimate_fg_bg_numpy.py to TensorFlow. However, the inference speed is not satisfactory: on a 1080Ti GPU, the cupy version costs just 2 ms, while the TensorFlow version costs 20 ms at 144x256 resolution. Do you know how to correctly port the numpy code to TensorFlow? Thank you very much.

    import numpy as np
    from PIL import Image
    import time
    import tensorflow as tf
    
    
    def inv2(mat):
        a = mat[..., 0, 0]
        b = mat[..., 0, 1]
        c = mat[..., 1, 0]
        d = mat[..., 1, 1]
    
        inv_det = 1 / (a * d - b * c)
    
        inv00 = inv_det * d
        inv01 = inv_det * -b
        inv10 = inv_det * -c
        inv11 = inv_det * a
        inv00 = inv00[:, tf.newaxis, tf.newaxis]
        inv01 = inv01[:, tf.newaxis, tf.newaxis]
        inv10 = inv10[:, tf.newaxis, tf.newaxis]
        inv11 = inv11[:, tf.newaxis, tf.newaxis]
        inv_temp1 = tf.concat([inv00, inv10], axis=1)
        inv_temp2 = tf.concat([inv01, inv11], axis=1)
        inv = tf.concat([inv_temp1, inv_temp2], axis=2)
    
        return inv
    
    
    def pixel_coordinates(w, h, flat=False):
        x, y = tf.meshgrid(np.arange(w), np.arange(h))
    
        if flat:
            x = tf.reshape(x, [-1])
            y = tf.reshape(y, [-1])
    
        return x, y
    
    
    def vec_vec_outer(a, b):
        return tf.einsum("...i,...j", a, b)
    
    def estimate_fb_ml(
            input_image,
            input_alpha,
            min_size=2,
            growth_factor=2,
            regularization=1e-5,
            n_iter_func=2,
            print_info=True,):
    
        h0, w0 = 144, 256
    
        # Find initial image size.
        w = int(np.ceil(min_size * w0 / h0))
        h = min_size
    
        # Generate initial foreground and background from input image
        F = tf.image.resize_nearest_neighbor(input_image[tf.newaxis], [h, w])[0]
        B = F * 1.0
        while True:
            if print_info:
                print("New level of size: %d-by-%d" % (w, h))
            # Resize image and alpha to size of current level
            image = tf.image.resize_nearest_neighbor(input_image[tf.newaxis], [h, w])[0]
            alpha = tf.image.resize_nearest_neighbor(input_alpha[tf.newaxis, :, :, tf.newaxis], [h, w])[0, :, :, 0]
            # Iterate a few times
            n_iter = n_iter_func
            for iteration in range(n_iter):
                x, y = pixel_coordinates(w, h, flat=True) # w: 4, h: 2
                # Make alpha into a vector
                a = tf.reshape(alpha, [-1])
                # Build system of linear equations
                U = tf.stack([a, 1 - a], axis=1)
                A = vec_vec_outer(U, U) # 8 x 2 x 2
                b = vec_vec_outer(U, tf.reshape(image, [w*h, 3])) # 8 x 2 x 3
                # For each neighbor
                for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                    x2 = tf.clip_by_value(x + dx, 0, w - 1)
                    y2 = tf.clip_by_value(y + dy, 0, h - 1)
                    # Vectorized neighbor coordinates
                    j = x2 + y2 * w
                    # Gradient of alpha
                    a_j = tf.nn.embedding_lookup(a, j)
                    da = regularization + tf.abs(a - a_j)
                    # Update matrix of linear equation system
                    A00 = A[:, 0, 0] + da
                    A01 = A[:, 0, 1]
                    A10 = A[:, 1, 0]
                    A11 = A[:, 1, 1] + da
                    A00 = A00[:, tf.newaxis, tf.newaxis]
                    A01 = A01[:, tf.newaxis, tf.newaxis]
                    A10 = A10[:, tf.newaxis, tf.newaxis]
                    A11 = A11[:, tf.newaxis, tf.newaxis]
                    A_temp1 = tf.concat([A00, A10], axis=1)
                    A_temp2 = tf.concat([A01, A11], axis=1)
                    A = tf.concat([A_temp1, A_temp2], axis=2)
                    # Update rhs of linear equation system
                    F_resp = tf.reshape(F, [w * h, 3])
                    F_resp_j = tf.nn.embedding_lookup(F_resp, j)
                    B_resp = tf.reshape(B, [w * h, 3])
                    B_resp_j = tf.nn.embedding_lookup(B_resp, j)
                    da_resp = tf.reshape(da, [w * h, 1])
                    b0 = b[:, 0, :] + da_resp * F_resp_j
                    b1 = b[:, 1, :] + da_resp * B_resp_j
                    b = tf.concat([b0[:, tf.newaxis, :], b1[:, tf.newaxis, :]], axis=1)
                # Solve linear equation system for foreground and background
                fb = tf.clip_by_value(tf.matmul(inv2(A), b), 0, 1)
    
                F = tf.reshape(fb[:, 0, :], [h, w, 3])
                B = tf.reshape(fb[:, 1, :], [h, w, 3])
    
            # If original image size is reached, return result
            if w >= w0 and h >= h0:
                return F, B
    
            # Grow image size to next level
            w = min(w0, int(np.ceil(w * growth_factor)))
            h = min(h0, int(np.ceil(h * growth_factor)))
    
            F = tf.image.resize_nearest_neighbor(F[tf.newaxis], [h, w])[0]
            B = tf.image.resize_nearest_neighbor(B[tf.newaxis], [h, w])[0]
    
    
    
    ######################################################################
    def estimate_foreground_background_tf():
        image_np = np.array(Image.open("./image.png").resize([256, 144]))[:, :, :3] / 255
        alpha_np = np.array(Image.open("./alpha.png").resize([256, 144])) / 255
        image = tf.placeholder(tf.float32, [144, 256, 3])
        alpha = tf.placeholder(tf.float32, [144, 256])
        foreground, background = estimate_fb_ml(image, alpha, n_iter_func=2)
        sess = tf.Session()
        for i in range(10):
            s = time.time()
            sess.run(foreground, feed_dict={image: image_np, alpha: alpha_np})
            e = time.time()
            print("time: ", e - s)
    
    
    ######################################################################
    def main():
        estimate_foreground_background_tf()
    
    
    if __name__ == "__main__":
        main()
    
    
    opened by MingtaoGuo 1
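The `inv2` helper at the top of the snippet above builds the closed-form inverse of a batch of 2×2 matrices, inv([[a, b], [c, d]]) = 1/(ad − bc) · [[d, −b], [−c, a]]. As a sanity check, here is the same formula in plain NumPy compared against `numpy.linalg.inv` (the name `inv2_np` is illustrative, not part of PyMatting):

```python
import numpy as np

def inv2_np(mat):
    # Closed-form inverse of a batch of 2x2 matrices [[a, b], [c, d]]:
    # inv = 1/(a*d - b*c) * [[d, -b], [-c, a]]
    a = mat[..., 0, 0]
    b = mat[..., 0, 1]
    c = mat[..., 1, 0]
    d = mat[..., 1, 1]
    inv_det = 1.0 / (a * d - b * c)
    inv = np.empty_like(mat)
    inv[..., 0, 0] = inv_det * d
    inv[..., 0, 1] = -inv_det * b
    inv[..., 1, 0] = -inv_det * c
    inv[..., 1, 1] = inv_det * a
    return inv

# Compare against the general-purpose inverse on a small batch.
mats = np.array([[[2.0, 1.0], [0.5, 3.0]],
                 [[4.0, -1.0], [2.0, 1.0]]])
print(np.allclose(inv2_np(mats), np.linalg.inv(mats)))  # True
```

Avoiding `tf.linalg.inv`/`np.linalg.inv` in favor of this closed form is what makes the per-pixel 2×2 solves in the snippet cheap.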
  • [BUG 🐛] division by zero error in estimate_foreground_ml

    I am getting division-by-zero errors in estimate_foreground_ml().

    What I tried:

    • pymatting 1.1.1 and 1.1.3
    • Making sure both the image and the mask are not uniform (I've seen the error even when both have min_val=0 and max_val=1)
    • Default parameters and several variations

    The environment is Google Colab. Sometimes this (or something else in PyMatting) also causes the Colab runtime itself to crash and disconnect.

    opened by eyaler 14
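Short of a fix in the library, errors like this are often caused by inputs that are not what the solver expects (integer dtypes, values in 0–255 instead of 0–1, or an alpha map with a stray channel axis). The sketch below normalizes both arrays before the call; `sanitize_inputs` is an illustrative helper, not part of PyMatting, and is not guaranteed to resolve this particular report:

```python
import numpy as np

def sanitize_inputs(image, alpha):
    # estimate_foreground_ml expects floating-point arrays in [0, 1];
    # integer or out-of-range inputs are a common source of degenerate
    # (division-by-zero) behaviour.
    image = np.asarray(image, dtype=np.float64)
    alpha = np.asarray(alpha, dtype=np.float64)
    if image.max() > 1.0:
        image = image / 255.0  # assume 8-bit input
    if alpha.max() > 1.0:
        alpha = alpha / 255.0
    if alpha.ndim == 3:
        alpha = alpha[:, :, 0]  # collapse an accidental channel axis
    return np.clip(image, 0.0, 1.0), np.clip(alpha, 0.0, 1.0)

image, alpha = sanitize_inputs(
    np.array([[[255, 0, 0], [0, 255, 0]]], dtype=np.uint8),  # uint8 image
    np.array([[0.0, 1.0]]),                                  # float alpha
)
# foreground = estimate_foreground_ml(image, alpha)  # from pymatting
```

If the error persists on sanitized inputs, increasing the solver's regularization parameter is another knob worth trying before filing a reproduction.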
Releases (v1.1.2)