🔎 Super-scale your images and run experiments with Residual Dense and Adversarial Networks.

Image Super-Resolution (ISR)

Overview

The goal of this project is to upscale and improve the quality of low resolution images.

This project contains Keras implementations of different Residual Dense Networks for Single Image Super-Resolution (ISR) as well as scripts to train these networks using content and adversarial loss components.

The implemented networks include:

  • RDN: the super-scaling Residual Dense Network described in Residual Dense Network for Image Super-Resolution (Zhang et al. 2018)
  • RRDN: the super-scaling Residual in Residual Dense Network described in ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks (Wang et al. 2018)
  • Cut_VGG19: a multi-output version of the Keras VGG19 network, used for deep feature extraction in the perceptual loss
  • Discriminator: a discriminator network used for adversarial training

Read the full documentation at: https://idealo.github.io/image-super-resolution/.

Docker scripts and Google Colab notebooks are available to carry out training and prediction. We also provide scripts to facilitate training on the cloud with AWS and nvidia-docker with only a few commands.

ISR is compatible with Python 3.6 and is distributed under the Apache 2.0 license. We welcome any kind of contribution. If you wish to contribute, please see the Contribute section.

Contents

  • Troubleshooting
  • Pre-trained networks
  • Installation
  • Usage
  • Additional Information
  • Contribute
  • Citation
  • Maintainers
  • Copyright

Troubleshooting

Training not delivering good/patchy results

When training your own model, start with only the PSNR loss (50+ epochs, depending on the dataset) and only then introduce the GAN and feature (perceptual) loss components. This is controlled through the loss weights argument.

These are only sample values; you will need to tune these parameters for your dataset.

PSNR only:

loss_weights = {
  'generator': 1.0,
  'feature_extractor': 0.0,
  'discriminator': 0.00
}

Later:

loss_weights = {
  'generator': 0.0,
  'feature_extractor': 0.0833,
  'discriminator': 0.01
}
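
A compact sketch of the first, PSNR-only phase is shown below; directory paths, patch size, epoch count, and the monitored metric name are placeholders to adapt to your setup, and the full GAN-phase configuration is shown in the Training section further down.

from ISR.models import RRDN
from ISR.train import Trainer

scale = 2
rrdn = RRDN(
    arch_params={'C': 4, 'D': 3, 'G': 64, 'G0': 64, 'T': 10, 'x': scale},
    patch_size=40
)

# Phase 1: pixel/PSNR loss only, no discriminator or feature extractor.
trainer = Trainer(
    generator=rrdn,
    discriminator=None,
    feature_extractor=None,
    lr_train_dir='low_res/training/images',
    hr_train_dir='high_res/training/images',
    lr_valid_dir='low_res/validation/images',
    hr_valid_dir='high_res/validation/images',
    loss_weights={'generator': 1.0, 'feature_extractor': 0.0, 'discriminator': 0.0},
    learning_rate={'initial_value': 0.0004, 'decay_factor': 0.5, 'decay_frequency': 30},
    flatness={'min': 0.0, 'max': 0.15, 'increase': 0.01, 'increase_frequency': 5},
    dataname='image_dataset',
    log_dirs={'logs': './logs', 'weights': './weights'},
    weights_generator=None,
    weights_discriminator=None,
    n_validation=40,
)

trainer.train(
    epochs=50,  # 50+ PSNR-only epochs, depending on the dataset
    steps_per_epoch=500,
    batch_size=16,
    monitored_metrics={'val_generator_PSNR_Y': 'max'},
)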

Weights loading

If you are having trouble loading your own weights or the pre-trained weights (AttributeError: 'str' object has no attribute 'decode'), try:

pip install 'h5py==2.10.0' --force-reinstall

See the related issue for more background.

Pre-trained networks

The weights used to produce these images are available directly when creating the model object.

Currently 4 models are available:

  • RDN: psnr-large, psnr-small, noise-cancel
  • RRDN: gans

Example usage:

model = RRDN(weights='gans')

The network parameters will be chosen automatically (see Additional Information).
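
For example, all four pre-trained variants can be created like this (the weights are downloaded on first use):

from ISR.models import RDN, RRDN

rdn_large = RDN(weights='psnr-large')    # larger PSNR-driven RDN
rdn_small = RDN(weights='psnr-small')    # smaller PSNR-driven RDN
rdn_nc = RDN(weights='noise-cancel')     # artefact-cancelling RDN, GAN-trained
rrdn_gan = RRDN(weights='gans')          # RRDN trained with adversarial and VGG feature losses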

Basic model

RDN model, PSNR driven. Choose the option weights='psnr-large' or weights='psnr-small' when creating an RDN model.

butterfly-sample
Low resolution image (left), ISR output (center), bicubic scaling (right). Click to zoom.

GANS model

RRDN model, trained with adversarial and VGG feature losses. Choose the option weights='gans' when creating an RRDN model.

baboon-comparison
RRDN GANS model (left), bicubic upscaling (right).
-> more detailed comparison

Artefact Cancelling GANS model

RDN model, trained with adversarial and VGG feature losses. Choose the option weights='noise-cancel' when creating an RDN model.

temple-comparison
Standard vs GANS model. Click to zoom.
sandal-comparison
RDN GANS artefact cancelling model (left), RDN standard PSNR driven model (right).
-> more detailed comparison

Installation

There are two ways to install the Image Super-Resolution package:

  • Install ISR from PyPI (recommended):
pip install ISR
  • Install ISR from the GitHub source:
git clone https://github.com/idealo/image-super-resolution
cd image-super-resolution
python setup.py install

Usage

Prediction

Load image and prepare it

import numpy as np
from PIL import Image

img = Image.open('data/input/test_images/sample_image.jpg')
lr_img = np.array(img)

Load a pre-trained model and run prediction (check the prediction tutorial under notebooks for more details)

from ISR.models import RDN

rdn = RDN(weights='psnr-small')
sr_img = rdn.predict(lr_img)
Image.fromarray(sr_img)

Large image inference

To predict on large images and avoid memory allocation errors, use the by_patch_of_size option for the predict method, for instance

sr_img = model.predict(image, by_patch_of_size=50)

Check the documentation of the ImageModel class for further details.
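
Putting the pieces together, here is a minimal sketch (the input path matches the earlier example; the output filename is an arbitrary choice) that loads a pre-trained model, predicts patch-wise, and saves the result:

import numpy as np
from PIL import Image
from ISR.models import RDN

model = RDN(weights='psnr-small')
lr_img = np.array(Image.open('data/input/test_images/sample_image.jpg'))

# Predict in 50x50 patches to keep memory usage bounded on large inputs.
sr_img = model.predict(lr_img, by_patch_of_size=50)
Image.fromarray(sr_img).save('sample_image_x2.png')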

Training

Create the models

from ISR.models import RRDN
from ISR.models import Discriminator
from ISR.models import Cut_VGG19

lr_train_patch_size = 40
layers_to_extract = [5, 9]
scale = 2
hr_train_patch_size = lr_train_patch_size * scale

rrdn  = RRDN(arch_params={'C':4, 'D':3, 'G':64, 'G0':64, 'T':10, 'x':scale}, patch_size=lr_train_patch_size)
f_ext = Cut_VGG19(patch_size=hr_train_patch_size, layers_to_extract=layers_to_extract)
discr = Discriminator(patch_size=hr_train_patch_size, kernel_size=3)

Create a Trainer object using the desired settings and give it the models (f_ext and discr are optional)

from ISR.train import Trainer
loss_weights = {
  'generator': 0.0,
  'feature_extractor': 0.0833,
  'discriminator': 0.01
}
losses = {
  'generator': 'mae',
  'feature_extractor': 'mse',
  'discriminator': 'binary_crossentropy'
}

log_dirs = {'logs': './logs', 'weights': './weights'}

learning_rate = {'initial_value': 0.0004, 'decay_factor': 0.5, 'decay_frequency': 30}

flatness = {'min': 0.0, 'max': 0.15, 'increase': 0.01, 'increase_frequency': 5}

trainer = Trainer(
    generator=rrdn,
    discriminator=discr,
    feature_extractor=f_ext,
    lr_train_dir='low_res/training/images',
    hr_train_dir='high_res/training/images',
    lr_valid_dir='low_res/validation/images',
    hr_valid_dir='high_res/validation/images',
    loss_weights=loss_weights,
    learning_rate=learning_rate,
    flatness=flatness,
    dataname='image_dataset',
    log_dirs=log_dirs,
    weights_generator=None,
    weights_discriminator=None,
    n_validation=40,
)

Start training

trainer.train(
    epochs=80,
    steps_per_epoch=500,
    batch_size=16,
    monitored_metrics={'val_PSNR_Y': 'max'}
)
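
To resume a run, or to start the adversarial phase from a PSNR-only pre-training as suggested in the Troubleshooting section, point weights_generator at a previously saved checkpoint. The sketch below reuses the models and settings defined above; the checkpoint path is a placeholder for whatever the earlier session wrote under log_dirs['weights'].

trainer = Trainer(
    generator=rrdn,
    discriminator=discr,
    feature_extractor=f_ext,
    lr_train_dir='low_res/training/images',
    hr_train_dir='high_res/training/images',
    lr_valid_dir='low_res/validation/images',
    hr_valid_dir='high_res/validation/images',
    loss_weights=loss_weights,
    learning_rate=learning_rate,
    flatness=flatness,
    dataname='image_dataset',
    log_dirs=log_dirs,
    # Placeholder: the .hdf5 checkpoint saved by the previous session.
    weights_generator='./weights/rrdn-C4-D3-G64-G064-T10-x2/<session>/<checkpoint>.hdf5',
    weights_discriminator=None,
    n_validation=40,
)

trainer.train(
    epochs=80,
    steps_per_epoch=500,
    batch_size=16,
    monitored_metrics={'val_PSNR_Y': 'max'}
)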

Additional Information

You can read about how we trained these network weights in our Medium posts.

RDN Pre-trained weights

The weights of the RDN network trained on the DIV2K dataset are available in weights/sample_weights/rdn-C6-D20-G64-G064-x2/PSNR-driven/rdn-C6-D20-G64-G064-x2_PSNR_epoch086.hdf5.
The model was trained using C=6, D=20, G=64, G0=64 as parameters (see architecture for details) for 86 epochs of 1000 batches of 8 32x32 augmented patches taken from LR images.

The artefact-cancelling weights, obtained with a combination of different training sessions using different datasets and a perceptual loss with VGG19 and GAN, can be found at weights/sample_weights/rdn-C6-D20-G64-G064-x2/ArtefactCancelling/rdn-C6-D20-G64-G064-x2_ArtefactCancelling_epoch219.hdf5. We recommend using these weights only when cancelling compression artefacts is a desirable effect.
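
If you keep these sample weights locally, they can also be loaded manually into a matching architecture; a minimal sketch (assuming the repository's weights/ folder is checked out next to your script):

from ISR.models import RDN

# The architecture must match the weights file: C=6, D=20, G=64, G0=64, x2.
rdn = RDN(arch_params={'C': 6, 'D': 20, 'G': 64, 'G0': 64, 'x': 2})
rdn.model.load_weights(
    'weights/sample_weights/rdn-C6-D20-G64-G064-x2/ArtefactCancelling/'
    'rdn-C6-D20-G64-G064-x2_ArtefactCancelling_epoch219.hdf5'
)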

RDN Network architecture

The main parameters of the architecture structure are:

  • D - number of Residual Dense Blocks (RDB)
  • C - number of convolutional layers stacked inside a RDB
  • G - number of feature maps of each convolutional layer inside the RDBs
  • G0 - number of feature maps for convolutions outside of the RDBs and of each RDB output


source: Residual Dense Network for Image Super-Resolution
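
These letters map directly onto the arch_params dictionary accepted by the model constructors (plus x for the upscaling factor). As an illustration, a hypothetical smaller x3 RDN could be built like this:

from ISR.models import RDN

# Hypothetical configuration: 10 RDBs of 4 conv layers, 32 feature maps, x3 upscaling.
rdn_small_x3 = RDN(arch_params={'C': 4, 'D': 10, 'G': 32, 'G0': 32, 'x': 3})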

RRDN Network architecture

The main parameters of the architecture structure are:

  • T - number of Residual in Residual Dense Blocks (RRDB)
  • D - number of Residual Dense Blocks (RDB) inside each RRDB
  • C - number of convolutional layers stacked inside a RDB
  • G - number of feature maps of each convolutional layer inside the RDBs
  • G0 - number of feature maps for convolutions outside of the RDBs and of each RDB output


source: ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks
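
Here T is the extra parameter compared to the RDN. For instance, the configuration used by the pre-trained x4 GAN weights referenced elsewhere in this document can be built as:

from ISR.models import RRDN

# T=10 RRDBs, each containing D=3 RDBs of C=4 conv layers, G=G0=32 feature maps, x4 upscaling.
rrdn = RRDN(arch_params={'C': 4, 'D': 3, 'G': 32, 'G0': 32, 'T': 10, 'x': 4})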

Contribute

We welcome all kinds of contributions: models trained on different datasets, new model architectures, and/or hyperparameter combinations that improve the performance of the currently published models.

We will publish the performance of new models in this repository.

See the Contribution guide for more details.

Bump version

To bump up the version, use

bumpversion {part} setup.py

Citation

Please cite our work in your publications if it helps your research.

@misc{cardinale2018isr,
  title={ISR},
  author={Francesco Cardinale et al.},
  year={2018},
  howpublished={\url{https://github.com/idealo/image-super-resolution}},
}

Maintainers

Copyright

See LICENSE for details.

Comments
  • Any chance on getting the sample weights? Drive, Dropbox...

    Can someone please share the old weights files? rdn-C6-D20-G64-G064-x2_ArtefactCancelling_epoch219.hdf5 and rdn-C6-D20-G64-G064-x2/PSNR-driven/rdn-C6-D20-G64-G064-x2_PSNR_epoch086.hdf5

    opened by talvasconcelos 13
  • Cannot get my trained model

    After training for 80 epochs, I cannot find my model under the project folder. The code just ended, but did not tell me where the weights were saved. Has anyone had the same problem?

    opened by Flyzzz 8
  • Unable to open .hdf5 file (file signature not found)

    Hello, when I tried to run the prediction using this piece of code:

    import tensorflow as tf
    from ISR.models import RDN
    import h5py

    rdn = RDN(arch_params={'C':6, 'D':20, 'G':64, 'G0':64, 'x':2})
    rdn.model.load_weights('weights/sample_weights/rdn-C6-D20-G64-G064-x2/ArtefactCancelling/rdn-C6-D20-G64-G064-x2_ArtefactCancelling_epoch219.hdf5')

    I got an error


    OSError                                   Traceback (most recent call last)
    <ipython-input> in <module>
          3 import h5py
          4 rdn = RDN(arch_params={'C':6, 'D':20, 'G':64, 'G0':64, 'x':2})
    ----> 5 rdn.model.load_weights('weights/sample_weights/rdn-C6-D20-G64-G064-x2/ArtefactCancelling/rdn-C6-D20-G64-G064-x2_ArtefactCancelling_epoch219.hdf5')
          6
          7 sr_img = rdn.predict(lr_img)

    ~\Anaconda3\lib\site-packages\keras\engine\network.py in load_weights(self, filepath, by_name, skip_mismatch, reshape)
       1155         if h5py is None:
       1156             raise ImportError('load_weights requires h5py.')
    -> 1157         with h5py.File(filepath, mode='r') as f:
       1158             if 'layer_names' not in f.attrs and 'model_weights' in f:
       1159                 f = f['model_weights']

    ~\Anaconda3\lib\site-packages\h5py\_hl\files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, rdcc_nslots, rdcc_nbytes, rdcc_w0, track_order, **kwds)
        392             fid = make_fid(name, mode, userblock_size,
        393                            fapl, fcpl=make_fcpl(track_order=track_order),
    --> 394                            swmr=swmr)
        395
        396         if swmr_support:

    ~\Anaconda3\lib\site-packages\h5py\_hl\files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
        168         if swmr and swmr_support:
        169             flags |= h5f.ACC_SWMR_READ
    --> 170         fid = h5f.open(name, flags, fapl=fapl)
        171     elif mode == 'r+':
        172         fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)

    h5py\_objects.pyx in h5py._objects.with_phil.wrapper()

    h5py\_objects.pyx in h5py._objects.with_phil.wrapper()

    h5py\h5f.pyx in h5py.h5f.open()

    OSError: Unable to open file (file signature not found)

    So any help please, has anyone faced the same problem? How can I fix it?

    Thanks a lot!

    opened by Elwarfalli 8
  • Running prediction on GPU

    Great project, thank you!

    Just wondering if you have been able to run the predictions (not the training) on GPU. Using the Dockerfile.gpu I find that ISR will use the CPU to calculate the prediction and not the GPU.

    bug 
    opened by cesarandreslopez 8
  • Unable to save trained model

    Hi, I successfully ran the training code, but at the end it seems the model cannot be saved. After some research on Google, I found the issue is probably related to a TensorFlow bug ('failed to serialize as JSON'). I got this warning too, please see below.

    Did anyone encounter the same issue, any workaround? Thanks a lot.


    WARNING:tensorflow:Model failed to serialize as JSON. Ignoring... can't pickle _thread.RLock objects
    Epoch 0/1
    Current learning rate: 0.00039999998989515007
    100%|██████████| 1/1 [00:12<00:00, 12.36s/it]
    Epoch 0 took 12.4s
    160/1 [==========================================================================] - 11s 71ms/sample - loss: 0.1855 - generator_loss: 0.1919 - discriminator_loss: 0.6483 - feature_extractor_loss: 0.9845 - feature_extractor_1_loss: 6.6397 - generator_PSNR_Y: 14.3487
    val_PSNR_Y is NOT among the model metrics, removing it.
    {'val_loss': 0.3240301303565502, 'val_generator_loss': 0.19187796, 'val_discriminator_loss': 0.64834464, 'val_feature_extractor_loss': 0.9844631, 'val_feature_extractor_1_loss': 6.6397047, 'val_generator_PSNR_Y': 14.348654, 'train_d_real_loss': 0.9044446, 'train_d_real_accuracy': 0.240625, 'train_d_fake_loss': 0.91425353, 'train_d_fake_accuracy': 0.70671874, 'train_loss': 0.14742404, 'train_generator_loss': 0.076132715, 'train_discriminator_loss': 0.6488205, 'train_feature_extractor_loss': 0.44856423, 'train_feature_extractor_1_loss': 2.9352493, 'train_generator_PSNR_Y': 16.832394}

    opened by leowang7 6
  • Local Jupyter server giving error for weight directory

    I am running a Jupyter notebook on localhost and all the commands run flawlessly, except when I start to initialize training I get the errors below. I tried a Google Colab notebook without any problems but cannot replicate the same thing with a local Jupyter server.

    trainer.train( epochs=1, steps_per_epoch=20, batch_size=4, monitored_metrics={'val_PSNR_Y': 'max'} )

    NotADirectoryError                        Traceback (most recent call last)
    <ipython-input> in <module>
          3     steps_per_epoch=20,
          4     batch_size=4,
    ----> 5     monitored_metrics={'val_PSNR_Y': 'max'}
          6 )

    ~\AppData\Local\Programs\Python\Python37\lib\site-packages\ISR\train\trainer.py in train(self, epochs, steps_per_epoch, batch_size, monitored_metrics)
        290         self.settings['training_parameters']['batch_size'] = batch_size
        291         starting_epoch = self.helper.initialize_training(
    --> 292             self
        293         )  # load_weights, creates folders, creates basename
        294

    ~\AppData\Local\Programs\Python\Python37\lib\site-packages\ISR\utils\train_helper.py in initialize_training(self, object)
        300
        301         self.callback_paths = self._make_callback_paths()
    --> 302         self.callback_paths['weights'].mkdir(parents=True)
        303         self.callback_paths['logs'].mkdir(parents=True)
        304         object.settings['training_parameters']['starting_epoch'] = last_epoch

    ~\AppData\Local\Programs\Python\Python37\lib\pathlib.py in mkdir(self, mode, parents, exist_ok)
       1256             self._raise_closed()
       1257         try:
    -> 1258             self._accessor.mkdir(self, mode)
       1259         except FileNotFoundError:
       1260             if not parents or self.parent == self:

    NotADirectoryError: [WinError 267] The directory name is invalid: 'weights\rrdn-C4-D3-G64-G064-T10-x2\2019-10-28_21:32'

    opened by janakptl00 6
  • Getting strange results when training an x3 upscale ESRGAN model

    Dear all,

    When I try to train an ArtefactCancelling ESRGAN model, I get some result images with strange patterns.

    I attached some sample output images to the issue.

    I first train the model with only MAE loss, as in the script below.

    lr_train_patch_size = 50
    layers_to_extract = [5, 9]
    
    scale = 3
    hr_train_patch_size = lr_train_patch_size * scale
    
    rrdn  = RRDN(arch_params={'C':4, 'D':3, 'G':32, 'G0':32, 'T':10, 'x':scale}, patch_size=lr_train_patch_size)
    
    from ISR.train import Trainer
    loss_weights = {
      'generator': 1,
      'feature_extractor': 0.0,
      'discriminator': 0.0
    }
    losses = {
      'generator': 'mae',
      'feature_extractor': 'mse',
      'discriminator': 'binary_crossentropy'
    }
    
    log_dirs = {'logs': './logs', 'weights': './weights'}
    
    learning_rate = {'initial_value': 0.0004, 'decay_factor': 0.5, 'decay_frequency': 30}
    
    flatness = {'min': 0.0, 'max': 0.15, 'increase': 0.01, 'increase_frequency': 5}
    
    trainer = Trainer(
        generator=rrdn,
        discriminator=None,
        feature_extractor=None,
        lr_train_dir='train_LR',
        hr_train_dir= 'train_HR',
        lr_valid_dir='valid_LR',
        hr_valid_dir='valid_HR',
        loss_weights=loss_weights,
        learning_rate=learning_rate,
        flatness=flatness,
        dataname='hotel',
        log_dirs=log_dirs,
        weights_generator=None,
        weights_discriminator=None,
        n_validation=40,
    )
    
    trainer.train(
        epochs=100,
        steps_per_epoch=700,
        batch_size=16,
        monitored_metrics={'val_generator_PSNR_Y': 'max','val_generator_loss': 'min','val_loss': 'min'}
    )
    
    

    Then I train the model with the following configuration:

    lr_train_patch_size = 50
    layers_to_extract = [5, 9]
    
    scale = 3
    hr_train_patch_size = lr_train_patch_size * scale
    
    rrdn  = RRDN(arch_params={'C':4, 'D':3, 'G':32, 'G0':32, 'T':10, 'x':scale}, patch_size=lr_train_patch_size)
    f_ext = Cut_VGG19(patch_size=hr_train_patch_size, layers_to_extract=layers_to_extract)
    discr = Discriminator(patch_size=hr_train_patch_size, kernel_size=3)
    
    from ISR.train import Trainer
    loss_weights = {
      'generator': 0.1,
      'feature_extractor': 0.8,
      'discriminator': 0.1
    }
    losses = {
      'generator': 'mae',
      'feature_extractor': 'mse',
      'discriminator': 'binary_crossentropy'
    }
    
    log_dirs = {'logs': './logs', 'weights': './weights'}
    
    learning_rate = {'initial_value': 0.0004, 'decay_factor': 0.5, 'decay_frequency': 30}
    
    flatness = {'min': 0.0, 'max': 0.15, 'increase': 0.01, 'increase_frequency': 5}
    
    trainer = Trainer(
        generator=rrdn,
        discriminator=discr,
        feature_extractor=f_ext,
        lr_train_dir='train_LR',
        hr_train_dir= 'train_HR',
        lr_valid_dir='valid_LR',
        hr_valid_dir='valid_HR',
        loss_weights=loss_weights,
        learning_rate=learning_rate,
        flatness=flatness,
        dataname='hotel',
        log_dirs=log_dirs,
        weights_generator='weights/rrdn-C4-D3-G32-G032-T10-x3/rrdn-C4-D3-G32-G032-T10-x3_perceptual_epoch099.hdf5',
        weights_discriminator=None,
        n_validation=40,
    )
    
    trainer.train(
        epochs=400,
        steps_per_epoch=500,
        batch_size=16,
        monitored_metrics={'val_generator_loss': 'min', 'val_generator_PSNR_Y': 'max','val_loss': 'min'}
    )
    
    

    Can somebody give me some hints about what may cause these unwanted image patterns?

    Best Regards, Even

    opened by JunbinWang 6
  • Out of memory error when model is called repeatedly

    Hi, the model with pretrained weights works correctly when making a prediction on a single image, but when I try to make predictions on a lot of images it gives an out-of-memory error, or the system kills it because it consumes all the memory. Sometimes it does make predictions for 5 to 10 images before giving the out-of-memory error. I think this is because Keras keeps building computation graphs before the previous graphs are freed, so free memory keeps decreasing. Is it possible for the model to predict on one image first, then the next, and so on? My code:

    rdn.model.load_weights('image-super-resolution/weights/sample_weights/rdn-C6-D20-G64-G064-x2/ArtefactCancelling/rdn-C6-D20-G64-G064-x2_ArtefactCancelling_epoch219.hdf5')
    for f in files:
        img = Image.open(f)
        lr_img = np.array(img)
        sr_img = rdn.predict(lr_img)
        hr_img = Image.fromarray(sr_img)
        hr_img.save("model_"+f)
        print(f+" is done")
    

    The error I am getting is this

    2019-07-15 10:54:26.136079: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
    2019-07-15 10:54:26.149052: E tensorflow/stream_executor/cuda/cuda_driver.cc:300] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
    2019-07-15 10:54:26.149107: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] retrieving CUDA diagnostic information for host: host_tower
    2019-07-15 10:54:26.149118: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:170] hostname: cledl2-Precision-7820-Tower
    2019-07-15 10:54:26.149170: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:194] libcuda reported version is: 390.116.0
    2019-07-15 10:54:26.149202: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:198] kernel reported version is: 390.116.0
    2019-07-15 10:54:26.149212: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:305] kernel version seems to match DSO: 390.116.0
    2019-07-15 10:58:10.703357: W tensorflow/core/framework/allocator.cc:122] Allocation of 5865369600 exceeds 10% of system memory.
    b320_150dpi_25-58.jpg is done
    2019-07-15 11:00:52.459730: W tensorflow/core/framework/allocator.cc:122] Allocation of 4101580800 exceeds 10% of system memory.
    b320_150dpi_25-64.jpg is done
    2019-07-15 11:03:38.891812: W tensorflow/core/framework/allocator.cc:122] Allocation of 4270080000 exceeds 10% of system memory.
    b320_150dpi_25-10.jpg is done
    2019-07-15 11:06:22.367807: W tensorflow/core/framework/allocator.cc:122] Allocation of 4185067520 exceeds 10% of system memory.
    b320_150dpi_25-35.jpg is done
    2019-07-15 11:10:14.411686: W tensorflow/core/framework/allocator.cc:122] Allocation of 6017474560 exceeds 10% of system memory.
    b210_150dpi_91.jpg is done
    b209_150dpi_15.jpg is done
    b320_150dpi_25-12.jpg is done
    2019-07-15 11:26:49.040817: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at concat_op.cc:153 : Resource exhausted: OOM when allocating tensor with shape[1,2368,1386,1280] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
    Traceback (most recent call last):
      File "inference.py", line 38, in <module>
        sr_img = rdn.predict(lr_img)
      File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/ISR-2.0.5-py3.5.egg/ISR/models/imagemodel.py", line 21, in predict
      File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/Keras-2.2.4-py3.5.egg/keras/engine/training.py", line 1169, in predict
        steps=steps)
      File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/Keras-2.2.4-py3.5.egg/keras/engine/training_arrays.py", line 294, in predict_loop
        batch_outs = f(ins_batch)
      File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/Keras-2.2.4-py3.5.egg/keras/backend/tensorflow_backend.py", line 2715, in __call__
        return self._call(inputs)
      File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/Keras-2.2.4-py3.5.egg/keras/backend/tensorflow_backend.py", line 2675, in _call
        fetched = self._callable_fn(*array_vals)
      File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1439, in __call__
        run_metadata_ptr)
      File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
        c_api.TF_GetCode(self.status.status))
    tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1,2368,1386,1280] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
             [[{{node LRLs_Concat/concat}} = ConcatV2[N=20, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](LRL_1/add, LRL_2/add, LRL_3/add, LRL_4/add, LRL_5/add, LRL_6/add, LRL_7/add, LRL_8/add, LRL_9/add, LRL_10/add, LRL_11/add, LRL_12/add, LRL_13/add, LRL_14/add, LRL_15/add, LRL_16/add, LRL_17/add, LRL_18/add, LRL_19/add, LRL_20/add, RDB_Concat_20_1/concat/axis)]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
    

    And more often the program gets killed by os.

    opened by MNMaqsood 6
  • What changes should be made to run on a GPU machine

    I've used this model with pre-trained weights for prediction and it's working perfectly (thank you for that!). The issue is that it's running on the CPU and taking a long time to upscale a single image (about 2 minutes). I have a GPU in my machine and this module is not utilizing it. What should be changed in the code for prediction (as well as for training) so that this module uses my (precious) GPU resources?

    Thanks in advance. Waiting for your reply

    opened by MNMaqsood 6
  • OSError while loading sample weights

    While I was loading the model with sample weights in Google Colab, this exception occurred:

    Using TensorFlow backend.
    WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    Colocations handled automatically by placer.
    ---------------------------------------------------------------------------
    OSError                                   Traceback (most recent call last)
    <ipython-input-5-913aee289ebe> in <module>()
          2 
          3 rdn = RDN(arch_params={'C':6, 'D':20, 'G':64, 'G0':64, 'x':2})
    ----> 4 rdn.model.load_weights('/content/image-super-resolution/weights/sample_weights/rdn-C6-D20-G64-G064-x2_div2k-e086.hdf5')
          5 
          6 sr_img = rdn.model.predict(lr_img)[0]
    
    /usr/local/lib/python3.6/dist-packages/keras/engine/network.py in load_weights(self, filepath, by_name, skip_mismatch, reshape)
       1155         if h5py is None:
       1156             raise ImportError('`load_weights` requires h5py.')
    -> 1157         with h5py.File(filepath, mode='r') as f:
       1158             if 'layer_names' not in f.attrs and 'model_weights' in f:
       1159                 f = f['model_weights']
    
    /usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, **kwds)
        310             with phil:
        311                 fapl = make_fapl(driver, libver, **kwds)
    --> 312                 fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
        313 
        314                 if swmr_support:
    
    /usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
        140         if swmr and swmr_support:
        141             flags |= h5f.ACC_SWMR_READ
    --> 142         fid = h5f.open(name, flags, fapl=fapl)
        143     elif mode == 'r+':
        144         fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
    
    h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
    
    h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
    
    h5py/h5f.pyx in h5py.h5f.open()
    
    OSError: Unable to open file (file signature not found)
    

    I found that this error might mean that the file is corrupted. Or maybe it's the fault of Google Colab 🤔
    opened by pniedzwiedzinski 6
  • Error with Pretrained Model When Converting to Tensorflow.js

    Hi,

    I am trying to convert the model weights into a JSON file using the instructions here: https://js.tensorflow.org/tutorials/import-keras.html. The model converts correctly but I keep getting an 'Uncaught (in promise) TypeError: Cannot read property 'model_config' of null' error when trying to load the model in JavaScript. I think it has something to do with not using model.save() when saving the Keras model in Python. Any help will be appreciated. Thanks.

    opened by GregFrench 6
  • Download model weights connection timed out

    Hi, I am trying to download the weights of the model, but get:

    Downloading data from https://public-asai-dl-models.s3.eu-central-1.amazonaws.com/ISR/rrdn-C4-D3-G32-G032-T10-x4-GANS/rrdn-C4-D3-G32-G032-T10-x4_epoch299.hdf5
    
    File "flask_cam.py", line 89, in gradcam
        rdn = RRDN(weights="gans")
      File "/dev_data/wlh/conda/envs/noisy/lib/python3.8/site-packages/ISR-2.2.0-py3.8.egg/ISR/models/rrdn.py", line 91, in __init__
        weights_path = tf.keras.utils.get_file(fname=fname, origin=url)
      File "/dev_data/wlh/conda/envs/noisy/lib/python3.8/site-packages/keras-2.11.0-py3.8.egg/keras/utils/data_utils.py", line 304, in get_file
        raise Exception(error_msg.format(origin, e.errno, e.reason))
    Exception: URL fetch failure on https://public-asai-dl-models.s3.eu-central-1.amazonaws.com/ISR/rrdn-C4-D3-G32-G032-T10-x4-GANS/rrdn-C4-D3-G32-G032-T10-x4_epoch299.hdf5: None -- [Errno 110] Connection timed out
    

    Then I turned to loading the model with the weights in weights/xxx/xxx.hdf5, and got:

      File "flask_cam.py", line 90, in gradcam
        rdn.model.load_weights(
      File "/dev_data/wlh/conda/envs/noisy/lib/python3.8/site-packages/keras-2.11.0-py3.8.egg/keras/utils/traceback_utils.py", line 70, in error_handler
        raise e.with_traceback(filtered_tb) from None
      File "/dev_data/wlh/conda/envs/noisy/lib/python3.8/site-packages/h5py/_hl/files.py", line 406, in __init__
        fid = make_fid(name, mode, userblock_size,
      File "/dev_data/wlh/conda/envs/noisy/lib/python3.8/site-packages/h5py/_hl/files.py", line 173, in make_fid
        fid = h5f.open(name, flags, fapl=fapl)
      File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
      File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
      File "h5py/h5f.pyx", line 88, in h5py.h5f.open
    OSError: Unable to open file (file signature not found)
    

    Maybe the network wall in China blocks the connection. Are there any solutions? Thanks in advance!

    opened by Waterkin 0
  • Predicted image contains many abnormal pixel blocks

    Hi, I trained a model for 400 epochs as provided in the ISR_Traininig_Tutorial.ipynb, but after predicting with my model the high-resolution image contains many abnormal pixel blocks (see the image attached to the issue).

    Wondering how this happens? Can anyone give a hint about this issue? Thanks a lot.

    opened by ZouYao0720 0
  • ISR dependency functools32 fails to install using Poetry

    While getting requirements to build wheel for functools32, Poetry throws an error stating "This backport is for Python 2.7 only."

    I've searched the issues and have not found any regarding functools32. Is there a workaround? Thank you.

    Poetry's pyproject.toml:

    [tool.poetry]
    name = "Imgovore-SuperScaler"
    version = "0.1.0"
    description = ""
    authors = ["anon <[email protected]>"]
    readme = "README.md"
    
    [tool.poetry.dependencies]
    python = "^3.9.13"
    httpx = "^0.23.0"
    ISR = "*"
    
    [build-system]
    requires = ["poetry-core"]
    build-backend = "poetry.core.masonry.api"
    

    Error message:

      • Installing functools32 (3.2.3-2): Failed
    
      CalledProcessError
    
      Command '['/media/anon/36782A484452D4B2/Projects/dalleflow/.venv/bin/python', '/home/anon/.local/share/pypoetry/venv/lib/python3.10/site-packages/virtualenv/seed/wheels/embed/pip-22.1.2-py3-none-any.whl/pip', 'install', '--use-pep517', '--disable-pip-version-check', '--prefix', '/media/anon/36782A484452D4B2/Projects/dalleflow/.venv', '--no-deps', '/home/anon/.cache/pypoetry/artifacts/40/ec/4a/429967dd7cfd0d2348afa71339ab609621377e44b77bc9be27e030b55e/functools32-3.2.3-2.tar.gz']' returned non-zero exit status 1.
    
      at /usr/lib/python3.10/subprocess.py:524 in run
           520│             # We don't call process.wait() as .__exit__ does that for us.
           521│             raise
           522│         retcode = process.poll()
           523│         if check and retcode:
        →  524│             raise CalledProcessError(retcode, process.args,
           525│                                      output=stdout, stderr=stderr)
           526│     return CompletedProcess(process.args, retcode, stdout, stderr)
           527│ 
           528│ 
    
    The following error occurred when trying to handle this error:
    
    
      EnvCommandError
    
      Command ['/media/anon/36782A484452D4B2/Projects/dalleflow/.venv/bin/python', '/home/anon/.local/share/pypoetry/venv/lib/python3.10/site-packages/virtualenv/seed/wheels/embed/pip-22.1.2-py3-none-any.whl/pip', 'install', '--use-pep517', '--disable-pip-version-check', '--prefix', '/media/anon/36782A484452D4B2/Projects/dalleflow/.venv', '--no-deps', '/home/anon/.cache/pypoetry/artifacts/40/ec/4a/429967dd7cfd0d2348afa71339ab609621377e44b77bc9be27e030b55e/functools32-3.2.3-2.tar.gz'] errored with the following return code 1, and output: 
      Processing /home/anon/.cache/pypoetry/artifacts/40/ec/4a/429967dd7cfd0d2348afa71339ab609621377e44b77bc9be27e030b55e/functools32-3.2.3-2.tar.gz
        Installing build dependencies: started
        Installing build dependencies: finished with status 'done'
        Getting requirements to build wheel: started
        Getting requirements to build wheel: finished with status 'error'
        error: subprocess-exited-with-error
        
        × Getting requirements to build wheel did not run successfully.
        │ exit code: 1
        ╰─> [1 lines of output]
            This backport is for Python 2.7 only.
            [end of output]
        
        note: This error originates from a subprocess, and is likely not a problem with pip.
      error: subprocess-exited-with-error
      
      × Getting requirements to build wheel did not run successfully.
      │ exit code: 1
      ╰─> See above for output.
      
      note: This error originates from a subprocess, and is likely not a problem with pip.
      
    
      at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/poetry/utils/env.py:1497 in _run
          1493│                 output = subprocess.check_output(
          1494│                     command, stderr=subprocess.STDOUT, env=env, **kwargs
          1495│                 )
          1496│         except CalledProcessError as e:
        → 1497│             raise EnvCommandError(e, input=input_)
          1498│ 
          1499│         return decode(output)
          1500│ 
          1501│     def execute(self, bin: str, *args: str, **kwargs: Any) -> int:
    
    The following error occurred when trying to handle this error:
    
    
      PoetryException
    
      Failed to install /home/anon/.cache/pypoetry/artifacts/40/ec/4a/429967dd7cfd0d2348afa71339ab609621377e44b77bc9be27e030b55e/functools32-3.2.3-2.tar.gz
    
      at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/poetry/utils/pip.py:55 in pip_install
           51│ 
           52│     try:
           53│         return environment.run_pip(*args)
           54│     except EnvCommandError as e:
        →  55│         raise PoetryException(f"Failed to install {path.as_posix()}") from e
           56│ 
    
    

    Poetry config:

    Poetry-version = 1.2.0b2
    cache-dir = "/home/anon/.cache/pypoetry"
    experimental.new-installer = true
    experimental.system-git-client = false
    installer.max-workers = null
    installer.no-binary = null
    installer.parallel = true
    virtualenvs.create = true
    virtualenvs.in-project = true
    virtualenvs.options.always-copy = false
    virtualenvs.options.no-pip = false
    virtualenvs.options.no-setuptools = false
    virtualenvs.options.system-site-packages = false
    virtualenvs.path = "{cache-dir}/virtualenvs"  # /home/anon/.cache/pypoetry/virtualenvs
    virtualenvs.prefer-active-python = true
    virtualenvs.prompt = "{project_name}-py{python_version}"
    
    opened by johnziebro 2
  • ValueError: Input 0 of layer "generator" is incompatible with the layer

    Hi, I'm getting this error when I try to predict after training:

    ValueError: Input 0 of layer "generator" is incompatible with the layer: expected shape=(None, 40, 40, 3), found shape=(None, 64, 64, 3)
    

    I trained on a set of 64x64 images, with 512x512 upscaled versions. I split the original full set of 64x64 images into a training set and a validation set, and tried to predict with the validation set. That's when I got this error. I'm not sure why the generator is expecting a 40x40 image as input, given that these weights were trained on 64x64 images.

    Here is the full code for training / running:

    from ISR.train import Trainer
    from ISR.models import RRDN, Cut_VGG19, Discriminator
    import os
    from PIL import Image
    import numpy as np
    
    loss_weights = {'generator': 0.0, 'feature_extractor': 0.0833, 'discriminator': 0.01}
    losses = {'generator': 'mae', 'feature_extractor': 'mse', 'discriminator': 'binary_crossentropy'}
    log_dirs = {'logs': '/workspace/image-super-resolution/logs', 'weights': '/workspace/image-super-resolution/weights'}
    learning_rate = {'initial_value': 0.0004, 'decay_factor': 0.5, 'decay_frequency': 30}
    flatness = {'min': 0.0, 'max': 0.15, 'increase': 0.01, 'increase_frequency': 5}
    
    # model hyperparams
    lr_train_patch_size = 40
    layers_to_extract = [5, 9]
    scale = 4
    hr_train_patch_size = lr_train_patch_size * scale
    # pretrained_weights_loc = "/workspace/rrdn-C4-D3-G32-G032-T10-x4_epoch299.hdf5"
    
    arch_params = {'C': 4, 'D': 3, 'G': 32, 'G0':32, 'T': 10, 'x': scale}
    rrdn = RRDN(arch_params=arch_params, patch_size=lr_train_patch_size, weights='gans')
    f_ext = Cut_VGG19(patch_size=hr_train_patch_size, layers_to_extract=layers_to_extract)
    discr = Discriminator(patch_size=hr_train_patch_size, kernel_size=3)
    
    
    trainer = Trainer(generator=rrdn, 
    discriminator=discr, 
    feature_extractor=f_ext,
    lr_train_dir="/workspace/IR_preprocessed/train/lr_64", 
    hr_train_dir="/workspace/IR_preprocessed/train/hr_512",
    loss_weights=loss_weights, 
    learning_rate=learning_rate, 
    flatness=flatness,
    dataname="IR_dataset", 
    log_dirs=log_dirs, 
    weights_generator=None,
    weights_discriminator=None,
    n_validation=40, 
    lr_valid_dir="/workspace/IR_preprocessed/val/lr_64",
    hr_valid_dir="/workspace/IR_preprocessed/val/hr_512", 
    )
    
    trainer.train(epochs=3, steps_per_epoch=10, batch_size=16, monitored_metrics={"val_generator_PSNR_Y": "max"})
    
    # run validation
    saved_weights = "/workspace/image-super-resolution/weights/rrdn-C4-D3-G32-G032-T10-x4/2022-05-03_1809/rrdn-C4-D3-G32-G032-T10-x4_best-val_generator_PSNR_Y_epoch003.hdf5"
    rrdn.model.load_weights(saved_weights)
    lr_valid_dir = "/workspace/IR_preprocessed/val/lr_64"
    for imgfile in os.listdir(lr_valid_dir):
        if imgfile.endswith(".png"):
            imgfile = os.path.join(lr_valid_dir, imgfile)
            print(f"processing {imgfile}...")
            img = Image.open(imgfile)
            lr_img = np.array(img)
            sr_img = rrdn.predict(lr_img)
    
    
    opened by loftusa 0
Releases(v2.2.0)
  • v2.2.0(Jan 8, 2020)

    ✨ New features and improvements

    • Add model API, download the weights directly from S3 instead of using Git LFS #59
    • Upgrade to TensorFlow 2.0 #44
Owner
idealo
idealo's technology org page, Germany's largest price comparison service. Visit us at https://idealo.github.io/.