Python class that generates pixel art from images

Overview

Super Pyxelate converts images to 8-bit pixel art. It is an improved, faster implementation of the original Pyxelate algorithm with palette transfer support and enhanced dithering.

Super Pyxelate is currently in beta.

[Image: pixel art corgi]

Usage

from skimage import io
from pyxelate import Pyx, Pal

# load image with 'skimage.io.imread()'
image = io.imread("examples/blazkowicz.jpg")  

downsample_by = 14  # new image will be 1/14th of the original in size
palette = 7  # find 7 colors

# 1) Instantiate Pyx transformer
pyx = Pyx(factor=downsample_by, palette=palette)

# 2) fit an image, allow Pyxelate to learn the color palette
pyx.fit(image)

# 3) transform image to pixel art using the learned color palette
new_image = pyx.transform(image)

# save new image with 'skimage.io.imsave()'
io.imsave("pixel.png", new_image)

[Image: example results ("definitely not cherry picking")]

Pyxelate extends scikit-learn transformers, allowing the same learned palette to be reused on other, aesthetically similar images (so it's somewhat like an 8-bit style transfer):

car = io.imread("examples/f1.jpg")
robocop = io.imread("examples/robocop.jpg")

# fit a model on each
pyx_car = Pyx(factor=5, palette=8, dither="none").fit(car)
pyx_robocop = Pyx(factor=6, palette=7, dither="naive").fit(robocop)

"""
pyx_car.transform(car)
pyx_car.transform(robocop)
pyx_robocop.transform(car)
pyx_robocop.transform(robocop)
"""

[Image: fit / transform / palette results]

For a single image, fit() and transform() can be combined into one call with fit_transform():

# fit() and transform() on image with alpha channel
trex = io.imread("examples/trex.png")
trex_p = Pyx(factor=9, palette=4, dither="naive", alpha=.6).fit_transform(trex)

[Image: transparency for sprites]

Hyperparameters for Pyx()

Parameter Description
height The height of the transformed image. If only height is set, the width of the transformed image will be calculated to maintain the aspect ratio of the original.
width The width of the transformed image. If only width is set, the height of the transformed image will be calculated to maintain the aspect ratio of the original.
factor The size of the transformed image will be 1. / factor of the original. Can be used instead of setting width or height.
upscale Resizes the pixels of the transformed image by upscale. Can be a positive int or a tuple of ints for (h, w). Default is 1.
palette The number of colors in the transformed image.
- If it's an int that is larger than 2, Pyxelate will search for this many colors automatically. Default is 8.
- If it's a Pal palette enum object, Pyxelate will use palette transfer to match these colors.
dither The type of dithering to use on the transformed image (see more examples below):
- "none" no dithering is applied (default, takes no additional time)
- "naive" Pyxelate's naive dithering based on probability mass function (use for images with alpha channel)
- "bayer" Bayer-like ordered dithering using a 4x4 Bayer Matrix (fastest dithering method, use for large images)
- "floyd" Floyd-Steinberg inspired error diffusion dithering (slowest)
- "atkinson" Atkinson inspired error diffusion dithering (slowest)
alpha For images with transparency, pixels of the transformed image will be visible above and invisible below this threshold. Default is 0.6.
sobel The size of the Sobel operator (the N*N area used to calculate gradients for downsampling); must be an int larger than 1. Default is 3; try 2 for much faster but less accurate output.
depth How many times the Pyxelate algorithm should be applied to downsample the image. More iterations result in blockier aesthetics. Must be a positive int; it is very time-consuming and should rarely exceed 3. Raise it only for very small images. Default is 1.
boost Adjusts contrast and applies preprocessing to the image before transformation for better results. If you see unwanted dark pixels in your image, set this to False. Default is True.
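
For example, a quick sketch that combines several of these parameters (the values here are illustrative, not recommendations):

pyx = Pyx(
    height=64,        # output is 64 pixels tall; width is derived from the aspect ratio
    upscale=4,        # enlarge each output pixel to a 4x4 block
    palette=6,        # search for 6 colors automatically
    dither="bayer",   # fast ordered dithering
    boost=True,       # keep the default preprocessing
)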

Showcase of available dithering methods: [Image: dithering methods comparison]

See more examples in the example Jupyter Notebook.

Assigning existing palette

Common retro palettes are available in Pal:

from pyxelate import Pyx, Pal

vangogh = io.imread("examples/vangogh.jpg")

vangogh_apple = Pyx(factor=12, palette=Pal.APPLE_II_HI, dither="atkinson").fit_transform(vangogh)
vangogh_mspaint = Pyx(factor=6, palette=Pal.MICROSOFT_WINDOWS_PAINT, dither="none").fit_transform(vangogh)

Ever wondered how classical paintings would look in MS Paint? You can also assign your own palette:

my_pal = Pal.from_hex(["#FFFFFF", "#000000"])

# same but defined with RGB values
my_pal = Pal.from_rgb([[255, 255, 255], [0, 0, 0]])

Fitting the same existing palette on different images will also yield different results in transform().
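
As a small illustration (reusing images loaded in the earlier examples; Pal.PICO_8 is one of the built-in palettes):

# the same built-in palette, fitted to two different images
pyx_a = Pyx(factor=8, palette=Pal.PICO_8).fit(vangogh)
pyx_b = Pyx(factor=8, palette=Pal.PICO_8).fit(trex)

# both outputs use PICO_8 colors, but the learned mappings differ
out_a = pyx_a.transform(vangogh)
out_b = pyx_b.transform(vangogh)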

Installation

pip install git+https://github.com/sedthh/pyxelate.git --upgrade

Pyxelate relies on a handful of scientific Python libraries to run; they are listed in requirements.txt.

FAQ

The source code is available under the MIT license, but I would appreciate credit if your work uses Pyxelate (for instance, you may add me to the Special Thanks section in the credits of your video game)!

How does it work?

Pyxelate downsamples images by (iteratively) dividing them into 3x3 tiles and calculating the orientation of edges inside each tile. Each tile is downsampled to a single pixel value based on the angle and magnitude of these gradients, resulting in an approximation of pixel art. This method was inspired by the Histogram of Oriented Gradients (HOG) computer vision technique.
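
The sketch below illustrates the tile-downsampling idea in simplified form; it is a toy version of "pick each tile's pixel based on its gradients", not Pyxelate's actual implementation:

import numpy as np
from skimage.color import rgb2gray
from skimage.filters import sobel_h, sobel_v

def downsample_tile(tile):
    """Reduce a 3x3 RGB tile to a single pixel, guided by edge gradients."""
    gray = rgb2gray(tile)
    magnitude = np.hypot(sobel_h(gray), sobel_v(gray))
    if magnitude.max() < 1e-3:
        # flat tile: fall back to the mean color
        return tile.reshape(-1, 3).mean(axis=0)
    # otherwise keep the pixel lying on the strongest edge
    i, j = np.unravel_index(np.argmax(magnitude), magnitude.shape)
    return tile[i, j]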

Then an unsupervised machine learning method, a Bayesian Gaussian Mixture model, is fitted (instead of conventional K-means) to find a reduced palette. The tied Gaussians give a better estimate than Euclidean distance and allow smaller centroids to appear and then lose importance to larger ones further away. The probability mass function returned by the uncalibrated model is then used as the basis for the different dithering techniques.
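
A minimal sketch of this palette-reduction step (mirroring the idea with scikit-learn, not Pyxelate's exact code; image is assumed to be an RGB numpy array):

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

pixels = image[:, :, :3].reshape(-1, 3) / 255.0

bgm = BayesianGaussianMixture(
    n_components=8,          # upper bound on the palette size
    covariance_type="tied",  # the "tied" Gaussians mentioned above
    max_iter=100,
).fit(pixels)

palette = bgm.means_               # the reduced color palette
probs = bgm.predict_proba(pixels)  # per-pixel PMF, reusable for dithering
quantized = palette[np.argmax(probs, axis=1)].reshape(image[:, :, :3].shape)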

Preprocessing and color space conversion tricks are also applied for better results.

PROTIPs

  • There is no one-size-fits-all setting; experiment with different parameters for the best results! A setting that produces a visually pleasing result on one image might not work well for another.
  • The bigger the resulting image, the longer the process will take. Note that most parts of the algorithm are O(H*W), so an image that is twice as large in each dimension will take roughly 4 times longer to compute.
  • Assigning existing palettes will take longer for larger palettes, because the LAB color distance has to be calculated between each pair of colors separately.
  • Dithering takes time (especially atkinson), as the dithering methods are mostly implemented in plain Python with loops.

TODOs

  • Add CLI tool for Pyxelate so images can be batch converted from command line.
  • Re-implement Pyxelate for animations / sequence of frames in video.
  • Include PIPENV python environment files instead of just setup.py.
  • Implement Yliluoma's ordered dithering algorithm and experiment with improving visuals through gamma correction.
  • Write a whitepaper on the Pyxelate algorithm.
Comments
  • [Suggestion] Alternatives to hog for faster runtime

    The new version of the program runs significantly slower than the previous version (with the speedup provided by #18). A bit of profiling reveals the hog method as the main culprit. I've implemented a few alternative algorithms I thought might show similar results, and benchmarked the time necessary to do a fit_transform using these different methods. I feel like the 2x2 sobel could be a pretty solid alternative to hog, what do you think?

    (All benchmarks run with dither="naive", palette=6 and boost=True)
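
    (Roughly, such a timing could be reproduced with a loop like the following; this is a hypothetical sketch, not the issue's exact benchmark script:)

    import time
    from skimage import io
    from pyxelate import Pyx

    image = io.imread("examples/robocop.jpg")
    for factor in (3, 6, 10):
        start = time.perf_counter()
        Pyx(factor=factor, palette=6, dither="naive", boost=True).fit_transform(image)
        print(f"factor={factor}: {time.perf_counter() - start:.2f}s")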

    There's a small mistake in the labels: 3x3 sobel is in fact 2x2, and vice versa.

    [Benchmark images: robocop, br, corgi, and palms test images at factors 3, 6, and 10]

    opened by Seon82 4
  • Faster convolutions

    Optimized the code for convolutions, improving the speed of _wrapper by about 60.

    The core idea is to compute the convolutions on the whole image at once instead of doing it block by block, which helps quite a bit with the numpy overhead. I also tried to accelerate the convolving itself by:

    • determining the convolution for 4 "base kernels" ([[1,0],[0,0]], [[0,1],[0,0]], etc...)
    • calculating every convolution in CONVOLUTIONS as a linear combination of these pre-computed base convolutions
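
    A minimal numpy/scipy sketch of the linear-combination idea (not the PR's actual code):

    import numpy as np
    from scipy.ndimage import convolve

    image = np.random.rand(64, 64)

    # the four 2x2 "base kernels", each with a single 1 entry
    positions = [(0, 0), (0, 1), (1, 0), (1, 1)]
    base_convs = []
    for i, j in positions:
        k = np.zeros((2, 2))
        k[i, j] = 1.0
        base_convs.append(convolve(image, k, mode="reflect"))

    # convolution is linear in the kernel, so any 2x2 kernel is a
    # weighted sum of the base results, e.g. [[-1, 1], [-1, 1]]:
    kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])
    result = sum(w * c for w, c in zip(kernel.ravel(), base_convs))
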
    opened by Seon82 4
  • Dithering area inconsistent when running image sequences.

    Hi,

    First of all, what a great tool! This has a great creative potential and I have been using it for a while now.

    One issue I'm having: with image sequences, I get some areas of dithering that jump around when run through pyxelate. I know it is a difficult problem to solve, as the shading of those areas changes from frame to frame. I can try to share some test images if you are interested. I wish there were a way to lock the dithering samples.

    What I tried, is generating the palette from a single image:

    pyx = Pyx(upscale=1, factor=2, dither="naive", alpha=0.4, sobel=5, palette=7)
    pyx.fit(init_image)
    

    init_image being a single image out of the whole sequence.

    But that did not solve my issues.

    opened by mbilyanov 2
  • Segmentation fault running example on Mac Big Sur

    Hey, I'm having an odd issue.

    Environment:

    OS: macOS Big Sur 10.16 20G165 x86_64
    
    $ uname -a
    Darwin slim-Macbook.local 20.6.0 Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64 x86_64 i386 MacBookPro15,2 Darwin
    $ xcode-select --version
    xcode-select version 2384
    $ python
    Python 3.9.7 (default, Sep 14 2021, 16:22:39)
    [Clang 12.0.5 (clang-1205.0.22.9)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import sys
    >>> print(sys.version)
    3.9.7 (default, Sep 14 2021, 16:22:39)
    [Clang 12.0.5 (clang-1205.0.22.9)]
    $ pip list
    Package         Version
    --------------- ---------
    cycler          0.10.0
    imageio         2.9.0
    joblib          1.0.1
    kiwisolver      1.3.2
    llvmlite        0.37.0
    matplotlib      3.4.3
    networkx        2.6.3
    numba           0.54.0
    numpy           1.20.3
    Pillow          8.3.2
    pip             21.2.3
    pyparsing       2.4.7
    python-dateutil 2.8.2
    PyWavelets      1.1.1
    scikit-image    0.18.3
    scikit-learn    1.0
    scipy           1.7.1
    setuptools      57.4.0
    six             1.16.0
    threadpoolctl   2.2.0
    tifffile        2021.8.30
    

    When I clone the repository and run the example notebook everything runs perfectly.

    However when I try to copy the example code and run it standalone (or simply edit the notebook with an image of my own) I get a segmentation fault.

    Any idea why this is happening? The segfault occurs with the example blazkowicz.jpg image too, even though it runs fine in the notebook on first run.

    I am using Python 3.9.7, by the way. I've also tried installing all deps in both a virtual env and globally. On a MacBook, do I need more resources?

    EDIT: attempting to run the notebook again (specifically the first example with blazkowicz) causes the jupyter kernel to die.

    Here is my stacktrace:

    ERROR:asyncio:Exception in callback <TaskWakeupMethWrapper object at 0x112d2a0d0>(<Future finis...C: 1\r\n\r\n'>)
    handle: <Handle <TaskWakeupMethWrapper object at 0x112d2a0d0>(<Future finis...C: 1\r\n\r\n'>)>
    Traceback (most recent call last):
      File "/Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/asyncio/events.py", line 80, in _run
        self._context.run(self._callback, *self._args)
    RuntimeError: Cannot enter into task <Task pending name='Task-4' coro=<HTTP1ServerConnection._server_request_loop() running at /Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/site-packages/tornado/http1connection.py:823> wait_for=<Future finished result=b'GET /api/co...PC: 1\r\n\r\n'> cb=[IOLoop.add_future.<locals>.<lambda>() at /Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/site-packages/tornado/ioloop.py:688]> while another task <Task pending name='Task-2' coro=<KernelManager._async_start_kernel() running at /Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/site-packages/jupyter_client/manager.py:336>> is being executed.
    ERROR:asyncio:Exception in callback <TaskWakeupMethWrapper object at 0x112f06ca0>(<Future finis...db1"\r\n\r\n'>)
    handle: <Handle <TaskWakeupMethWrapper object at 0x112f06ca0>(<Future finis...db1"\r\n\r\n'>)>
    Traceback (most recent call last):
      File "/Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/asyncio/events.py", line 80, in _run
        self._context.run(self._callback, *self._args)
    RuntimeError: Cannot enter into task <Task pending name='Task-5' coro=<HTTP1ServerConnection._server_request_loop() running at /Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/site-packages/tornado/http1connection.py:823> wait_for=<Future finished result=b'GET /kernel...9db1"\r\n\r\n'> cb=[IOLoop.add_future.<locals>.<lambda>() at /Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/site-packages/tornado/ioloop.py:688]> while another task <Task pending name='Task-2' coro=<KernelManager._async_start_kernel() running at /Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/site-packages/jupyter_client/manager.py:336>> is being executed.
    

    Diving deeper into the pyxelate source with the example code and a local copy of the codebase, I pinpointed the exact call that causes the segfault in the pyx module: it is the BayesianGaussianMixture.fit() method. The specific line in pyx.py is the super().fit(X) call on line 74.


    I also found a relevant issue regarding segfaults on the auto-sklearn repository: https://github.com/automl/auto-sklearn/issues/688

    Looks like there was some success using the auto-sklearn Docker container, but pyxelate does not appear to use auto-sklearn, so I'm not sure whether this container would fix the issue (I will attempt to run it in this container after posting).

    opened by limsammy 2
  • Move from skimage to opencv

    Optimizing the transform function as much as possible could be quite interesting for future image sequence conversions.

    A bit of profiling seems to reveal that calls to skimage functions are the major bottleneck (most notably equalize_adapthist, resize, median, and rgb<->hsv conversions, which account for ~80% of the time spent in transform when boost=True). These functions all have heavily optimized cv2 equivalents, maybe we could speed up the code by using them instead?
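
    For illustration, hypothetical cv2 substitutions for those calls might look like this (a sketch assuming opencv-python is installed; not merged code):

    import cv2
    import numpy as np

    img = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)

    # skimage.transform.resize -> cv2.resize (note: dsize is (width, height))
    small = cv2.resize(img, (160, 120), interpolation=cv2.INTER_AREA)

    # skimage rgb2hsv / hsv2rgb -> cv2.cvtColor
    hsv = cv2.cvtColor(small, cv2.COLOR_RGB2HSV)
    rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)

    # skimage equalize_adapthist -> CLAHE on the lightness channel
    lab = cv2.cvtColor(small, cv2.COLOR_RGB2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    equalized = cv2.cvtColor(lab, cv2.COLOR_LAB2RGB)

    # skimage.filters.median -> cv2.medianBlur
    denoised = cv2.medianBlur(equalized, 3)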

    Timer unit: 1e-06 s
    
    Total time: 0.374083 s
    File: <ipython-input-6-3ea2832d4ec7>
    Function: transform at line 315
    
    Line #      Hits         Time  Per Hit   % Time  Line Contents
    ==============================================================
       315                                               def transform(self, X, y=None):
       316                                                   """Transform image to pyxelated version"""
       317         1         26.0     26.0      0.0          assert self.is_fitted, "Call 'fit(image_as_numpy)' first before calling 'transform(image_as_numpy)'!"
       318         1          4.0      4.0      0.0          h, w, d = X.shape
       319         1          3.0      3.0      0.0          if self.find_palette:
       320         1          3.0      3.0      0.0              assert h * w > self.palette, "Too many colors for such a small image! Use a larger image or a smaller palette."
       321                                                   else:
       322                                                       assert h * w > len(self.palette), "Too many colors for such a small image! Use a larger image or a smaller palette."
       323                                                   
       324         1          8.0      8.0      0.0          new_h, new_w = self._get_size(h, w)  # get desired size depending on settings
       325         1          3.0      3.0      0.0          if d > 3:
       326                                                       # image has alpha channel
       327                                                       X_ = self._dilate(X)
       328                                                       alpha_mask = resize(X_[:, :, 3], (new_h, new_w), anti_aliasing=True)
       329                                                   else:
       330                                                       # image has no alpha channel
       331         1          2.0      2.0      0.0              X_ = X
       332         1          2.0      2.0      0.0              alpha_mask = None
       333         1          3.0      3.0      0.0          if self.depth:
       334                                                       # change size depending on the number of iterations
       335         1          5.0      5.0      0.0              new_h, new_w = new_h * (self.sobel ** self.depth), new_w * (self.sobel ** self.depth)
       336         1      49371.0  49371.0     13.2          X_ = resize(X_[:, :, :3], (new_h, new_w), anti_aliasing=True)  # colors are now 0. - 1.        
       337                                                   
       338         1          5.0      5.0      0.0          if self.boost:
       339                                                       # adjust contrast
       340         1     113935.0 113935.0     30.5              X_ = rgb2hsv(equalize_adapthist(X_))
       341         1       1638.0   1638.0      0.4              X_[:, :, 1:] *= self.HIST_BRIGHTNESS
       342         1      45119.0  45119.0     12.1              X_ = hsv2rgb(np.clip(X_, 0., 1.))
       343                                                   
       344                                                   # pyxelate iteratively
       345         2          8.0      4.0      0.0          for _ in range(self.depth):
       346         1          2.0      2.0      0.0              if self.boost and d == 3:
       347                                                           # remove noise
       348         1      78951.0  78951.0     21.1                  X_ = self._median(X_)
       349         1      16055.0  16055.0      4.3              X_ = self._pyxelate(X_)  # downsample in each iteration
       350                                                       
       351         1          3.0      3.0      0.0          final_h, final_w, _ = X_.shape
       352         1          2.0      2.0      0.0          if self.find_palette:
       353         1         63.0     63.0      0.0              X_ = ((X_ - .5) * self.SCALE_RGB) + .5  # values were already altered before in .fit()
       354         1          9.0      9.0      0.0          reshaped = np.reshape(X_, (final_h * final_w, 3))
       355                                                       
       356                                                   # add dithering
       357         1          2.0      2.0      0.0          if self.dither is None or self.dither == "none":
       358                                                       probs = self.model.predict(reshaped)
       359                                                       X_ = self.colors[probs]
       360         1          1.0      1.0      0.0          elif self.dither == "naive":
       361                                                       # pyxelate dithering based on BGM probability density
       362         1       4953.0   4953.0      1.3              probs = self.model.predict_proba(reshaped)
       363         1         92.0     92.0      0.0              p = np.argmax(probs, axis=1)
       364         1       1055.0   1055.0      0.3              X_ = self.colors[p]
       365         1         86.0     86.0      0.0              probs[np.arange(len(p)), p] = 0
       366         1        116.0    116.0      0.0              p2 = np.argmax(probs, axis=1)  # second best
       367         1        517.0    517.0      0.1              v1 = np.max(probs, axis=1) > (1.  / (len(self.colors) + 1))
       368         1        612.0    612.0      0.2              v2 = np.max(probs, axis=1) > (1.  / (len(self.colors) * self.DITHER_NAIVE_BOOST + 1))
       369         1          2.0      2.0      0.0              pad = not bool(final_w % 2)
       370      8763      10951.0      1.2      2.9              for i in range(0, len(X_), 2):
       371      8762      11332.0      1.3      3.0                  m = (i // final_w) % 2
       372      8762      10834.0      1.2      2.9                  if pad:
       373                                                               i += m
       374      8762      10942.0      1.2      2.9                  if m:
       375      4312       6475.0      1.5      1.7                      if v1[i]:
       376       862       2319.0      2.7      0.6                          X_[i] = self.colors[p2[i]]
       377      4450       5665.0      1.3      1.5                  elif v2[i]:
       378      1065       2790.0      2.6      0.7                      X_[i] = self.colors[p2[i]]
       379                                                   elif self.dither == "bayer":
       380                                                       # Bayer-like dithering
       381                                                       self._warn_on_dither_with_alpha(d)
       382                                                       probs = self.model.predict_proba(reshaped)
       383                                                       probs = [convolve(probs[:, i].reshape((final_h, final_w)), self.DITHER_BAYER_MATRIX, mode="reflect") for i in range(len(self.colors))]
       384                                                       probs = np.argmin(probs, axis=0)
       385                                                       X_ = self.colors[probs]
       386                                                   elif self.dither == "floyd":
       387                                                       # Floyd-Steinberg-like algorithm
       388                                                       self._warn_on_dither_with_alpha(d)
       389                                                       X_ = self._dither_floyd(reshaped, (final_h, final_w))
       390                                                   elif self.dither == "atkinson":
       391                                                       # Atkinson-like algorithm
       392                                                       self._warn_on_dither_with_alpha(d)
       393                                                       res = np.zeros((final_h + 2, final_w + 3), dtype=int)
       394                                                       X_ = np.pad(X_, ((0, 2), (1, 2), (0, 0)), "reflect")
       395                                                       for y in range(final_h):
       396                                                           for x in range(1, final_w+1):
       397                                                               pred = self.model.predict_proba(X_[y, x, :3].reshape(-1, 3))
       398                                                               res[y, x] = np.argmax(pred)
       399                                                               quant_error = (X_[y, x, :3] - self.model.means_[res[y, x]]) / 8.
       400                                                               X_[y, x+1, :3] += quant_error
       401                                                               X_[y, x+2, :3] += quant_error
       402                                                               X_[y+1, x-1, :3] += quant_error
       403                                                               X_[y+1, x, :3] += quant_error
       404                                                               X_[y+1, x+1, :3] += quant_error
       405                                                               X_[y+2, x, :3] += quant_error
       406                                                       # fix edges
       407                                                       res = res[:final_h, 1:final_w+1]
       408                                                       X_ = self.colors[res.reshape(final_h * final_w)]
       409                                                   
       410         1         14.0     14.0      0.0          X_ = np.reshape(X_, (final_h, final_w, 3))  # reshape to actual image dimensions
       411         1          1.0      1.0      0.0          if alpha_mask is not None:
       412                                                       # attach lost alpha layer
       413                                                       alpha_mask[alpha_mask >= self.alpha] = 255
       414                                                       alpha_mask[alpha_mask < self.alpha] = 0
       415                                                       X_ = np.dstack((X_[:, :, :3], alpha_mask.astype(int)))
       416                                                   
       417                                                   # return upscaled image
       418         1         88.0     88.0      0.0          X_ = np.repeat(np.repeat(X_, self.upscale[0], axis=0), self.upscale[1], axis=1)
       419         1         16.0     16.0      0.0          return X_.astype(np.uint8)
    
    opened by Seon82 2
  • Sobel filter

    • Replaced hog by sobel filter.
    • The size of the sobel filter can be chosen by using the sobel parameter when initializing a Pyx object (defaults to 3).
    • Other methods that depend on 3x3 squares (Pyx._median, Pyx._dilate) remain independent of the sobel parameter.
    • Pyx._pad now takes a pad_size argument instead of always padding to a divisor of 3.
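
    Illustrative usage of the new parameter (a sketch):

    pyx_fast = Pyx(factor=5, palette=7, sobel=2)  # 2x2 Sobel: faster, slightly less accurate
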
    opened by Seon82 2
  • cupy as alternative to numpy on critical sections

    So, I looked around online a bit and stumbled across cupy, a library that basically wraps numpy functionality and runs it on the GPU to perform highly concurrent calculations faster.

    I tinkered around a bit but never really got to a state where I could test it effectively, mainly because I am an absolute Python scrub and also have no clue about image computation whatsoever. But I am hoping that someone else can implement it into the code, just to see if it gives any performance upgrade on larger images.

    Currently it's not that trivial to set up an environment for it, but I got it running on my Arch Linux with a GeForce 1050 Ti. See the cupy GitHub page and the cupy installation instructions.
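
    For reference, the drop-in idea looks roughly like this (a sketch that assumes a working CUDA toolkit and cupy install):

    import numpy as np
    import cupy as cp

    x_cpu = np.random.rand(2048, 2048).astype(np.float32)

    x_gpu = cp.asarray(x_cpu)         # copy host -> device
    y_gpu = cp.tanh(x_gpu) @ x_gpu.T  # same API as numpy, but runs on the GPU
    y_cpu = cp.asnumpy(y_gpu)         # copy device -> host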

    opened by NilsKrause 2
  • This isn't a pixel art filter but a color limiter filter, a pretty average one

    The name is misleading: pixel art is not just any image with 8 colours. It's art made in a way that appears to have depth, shadows, and highlights, using pixels deliberately to minimize color banding; in your case it's full-on 100% color banding. Maybe aim for 16 colours, or try the dither patterns common in pixel art to make it look better. I know it is not a simple color limiter, but the result still aesthetically looks like one.

    opened by 2blackbar 1
  • Unable to use height and width arguments

    Description

    Whenever I try to convert an image while giving height and width arguments, I get a ValueError from the factor being set, even though I only supplied the input, output, height, and width.

    PS C:\...\pyxelate\pyxelate> python main.py "testIcon.jpg" "output.png" --height 32 --width 32
    Pyxelating testIcon.jpg...
    Traceback (most recent call last):
      File "main.py", line 202, in <module>
        main()
      File "main.py", line 195, in main
        convert(args)
      File "main.py", line 37, in convert
        pyx = get_model(args)
      File "main.py", line 23, in get_model
        return Pyx(
      File "C:\...\pyxelate\pyxelate\pyx.py", line 121, in __init__
        raise ValueError("You can only set either height + width or the downscaling factor, but not both!")
    ValueError: You can only set either height + width or the downscaling factor, but not both!
    

    https://github.com/sedthh/pyxelate/blob/fbbcfbc2894c8bbf825b0667923dca45d617b523/pyxelate/pyx.py#L120-L121

    Seems like the argument parsing is defaulting the factor to 1

    https://github.com/sedthh/pyxelate/blob/fbbcfbc2894c8bbf825b0667923dca45d617b523/pyxelate/main.py#L91

    Setting the default to None makes the command work as intended (outputs a 32x32 image)

    opened by MikkyD23 1
  • Add command-line wrapper for Pyxelate.

    This PR adds a small wrapper around Pyxelate which can be invoked with the "pyxelate" command once installed.

    Example usage:

    $ pyxelate input.jpg output.png --factor 10 --palette PICO_8
    

    Here is the full help message:

    $ pyxelate --help
    usage: pyxelate [-h] [--width WIDTH] [--height HEIGHT] [--factor FACTOR]
                    [--upscale UPSCALE] [--depth DEPTH] [--palette PALETTE]
                    [--dither {none,naive,bayer,floyd,atkinson}] [--sobel SOBEL]
                    [--alpha ALPHA] [--noboost] [--quiet]
                    INFILE OUTFILE
    
    positional arguments:
      INFILE                Input image filename.
      OUTFILE               Output image filename.
    
    optional arguments:
      -h, --help            show this help message and exit
      --width WIDTH         Output image width.
      --height HEIGHT       Output image height.
      --factor FACTOR       Downsample factor.
      --upscale UPSCALE     Upscale factor for output pixels.
      --depth DEPTH         Number of times to downscale.
      --palette PALETTE     Number of colors in output palette, or a palette name.
                            Valid choices are: ['TELETEXT', 'BBC_MICRO',
                            'CGA_MODE4_PAL1', 'CGA_MODE5_PAL1', 'CGA_MODE4_PAL2',
                            'ZX_SPECTRUM', 'APPLE_II_LO', 'APPLE_II_HI',
                            'COMMODORE_64', 'GAMEBOY_COMBO_UP',
                            'GAMEBOY_COMBO_DOWN', 'GAMEBOY_COMBO_LEFT',
                            'GAMEBOY_COMBO_RIGHT', 'GAMEBOY_A_UP',
                            'GAMEBOY_A_DOWN', 'GAMEBOY_A_LEFT', 'GAMEBOY_A_RIGHT',
                            'GAMEBOY_B_UP', 'GAMEBOY_B_DOWN', 'GAMEBOY_B_LEFT',
                            'GAMEBOY_B_RIGHT', 'GAMEBOY_ORIGINAL',
                            'GAMEBOY_POCKET', 'GAMEBOY_VIRTUALBOY',
                            'MICROSOFT_WINDOWS_16', 'MICROSOFT_WINDOWS_20',
                            'MICROSOFT_WINDOWS_PAINT', 'PICO_8', 'MSX',
                            'MONO_OBRADINN_IBM', 'MONO_OBRADINN_MAC', 'MONO_BJG',
                            'MONO_BW', 'MONO_PHOSPHOR_AMBER',
                            'MONO_PHOSPHOR_LTAMBER', 'MONO_PHOSPHOR_GREEN1',
                            'MONO_PHOSPHOR_GREEN2', 'MONO_PHOSPHOR_GREEN3',
                            'MONO_PHOSPHOR_APPLE', 'APPLE_II_MONO',
                            'MONO_PHOSPHOR_APPLEC', 'APPLE_II_MONOC']
      --dither {none,naive,bayer,floyd,atkinson}
                            Type of dithering to use.
      --sobel SOBEL         Size of the Sobel operator.
      --alpha ALPHA         Alpha threshold for output pixel visibility.
      --noboost             By default, adjust contrast and apply preprocessing on
                            the image before transformation for better results. In
                            case you see unwanted dark pixels in your image, use
                            --noboost.
      --quiet               Suppress logging output.
    
    opened by mdwelsh 1
  • 2x2 Block Bottleneck Solution Idea

    I'm having a hard time wrapping my head around what's going on, but it looks like the problem area is just a shitload of 2x2 images going through convolution.

    What if it was just one big image? For example, convolution on an image with this kernel:

    0 0 0
    0 0 1
    0 0 0
    

    should be the same as moving the image over 1 pixel, and convolution on an image with this kernel:

     0 0 0
    -1 0 1
     0 0 0
    

    should be the same as duplicating the image, moving one right 1 pixel, moving the other duplicate left 1 pixel and multiplying the values by -1, and adding them together.

    If there's an issue with 2x2 blocks overlapping due to the entire image moving, I think you could do like mod(x - (floor(x / 2) * 2) + offset, 2) + (floor(x / 2) * 2) on the texture coordinates when in 0 to (width-1) range rather than 0-1, thinking in GLSL at least. It's been a while since I've done GLSL though so please don't take my word for it.
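
    In numpy terms, the shifting idea could be sketched like this (hypothetical, not repo code; the shift direction depends on the convolution/correlation convention):

    import numpy as np

    img = np.arange(25, dtype=float).reshape(5, 5)

    # a one-hot kernel is equivalent to shifting the whole image one pixel
    shifted = np.roll(img, -1, axis=1)

    # so [[0,0,0],[-1,0,1],[0,0,0]] becomes a difference of two shifts
    central_diff = np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)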

    opened by torridgristle 1
  • question about the _svd function

    Hi sedthh, this is great work that I really appreciate!

    But I am confused about the _svd function.

    X_ is in the range [0., 1.] before calling self._svd. https://github.com/sedthh/pyxelate/blob/ae2de9249d11063d0c1563b8e30a634c7d07faf8/pyxelate/pyx.py#L362-L365

    But in the _svd function, the result is still divided by 255. What is the purpose of casting the color range to [0, 1. / 255.]? https://github.com/sedthh/pyxelate/blob/ae2de9249d11063d0c1563b8e30a634c7d07faf8/pyxelate/pyx.py#L337

    Thank you for your great work, and I am looking forward to hearing from you!

    opened by joe-zxh 2
  • Numpy error when trying to run the example

    Hi, running the example on ArchLinux with numpy 1.21.5 (downgraded from numpy 1.22.3 which is current), I’m getting this error:

      File "/usr/bin/pyxelate", line 33, in <module>
        sys.exit(load_entry_point('pyxelate==2.1.1', 'console_scripts', 'pyxelate')())
      File "/usr/bin/pyxelate", line 25, in importlib_load_entry_point
        return next(matches).load()
      File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 171, in load
        module = import_module(match.group('module'))
      File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
      File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
      File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
      File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
      File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/usr/lib/python3.10/site-packages/pyxelate/__init__.py", line 11, in <module>
        from .pyx import Pyx
      File "/usr/lib/python3.10/site-packages/pyxelate/pyx.py", line 10, in <module>
        from skimage.transform import resize
      File "/usr/lib/python3.10/site-packages/skimage/__init__.py", line 151, in <module>
        from ._shared import geometry
      File "skimage/_shared/geometry.pyx", line 1, in init skimage._shared.geometry
    ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
    opened by knochenhans 0
  • Suggest to allow define part of the palette

    Right now there are only two ways to define the palette: full manual control, or automatic detection. However, when using auto detection on an image with rich colors, it often fails to detect a key color that occupies only a small area but is critical (e.g. eyes: they are small, but they matter).

    Therefore, I suggest that auto detection (say, with a 5-color palette) should also accept manually defined colors within that palette; in my example above, I could pin down the eyes' color myself.

    (This project is really a good work, thank you!)
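
    In the meantime, one possible workaround is to detect most colors automatically and append the critical ones by hand via Pal.from_rgb (a hypothetical sketch; the filename and colors are made up):

    import numpy as np
    from skimage import io
    from sklearn.cluster import KMeans
    from pyxelate import Pyx, Pal

    image = io.imread("examples/portrait.jpg")  # hypothetical image

    # find 4 dominant colors automatically
    pixels = image[:, :, :3].reshape(-1, 3)
    dominant = KMeans(n_clusters=4, n_init=10).fit(pixels).cluster_centers_

    # pin the eye color manually to complete a 5-color palette
    eye_color = [70, 40, 20]  # hypothetical RGB value
    my_pal = Pal.from_rgb(np.vstack([dominant, eye_color]).astype(int).tolist())

    new_image = Pyx(factor=8, palette=my_pal).fit_transform(image)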

    opened by BugsSeeker 1
  • Add contours for more "comic look"

    I played around a bit with active black contours (like here) before and after applying the transformation, and I think it would be a great addition to the library; it should be pretty straightforward to implement. It can give the results a more comic-like look.

    opened by h4gen 1
Releases(2.0.2)
  • 2.0.2(Apr 11, 2021)

    • Palette transfer + common palettes
    • More dithering options
    • Improved pre-, and postprocessing methods
    • Sobel operator instead of HOG (huge speed boost)
  • 1.2.1(Apr 6, 2021)

Owner
Richard Nagyfi
Senior Data Scientist https://www.facebook.com/sedthh https://medium.com/@sedthh https://www.linkedin.com/in/sedthh/