Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

Overview

Super Resolution Examples

We run this script under TensorFlow 2.0 and TensorLayer 2.0+. For the TensorLayer 1.4 version, please check the release.

🚀 🚀 🚀 🚀 🚀 🚀 THIS PROJECT WILL BE CLOSED AND MOVED TO THIS FOLDER IN A MONTH.

SRGAN Architecture

TensorFlow Implementation of "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network"

Results

Prepare Data and Pre-trained VGG

    1. You need to download the pretrained VGG19 model from here, as tutorial_models_vgg19.py shows.
    2. You need high-resolution images for training.
    • In this experiment, I used images from the DIV2K - bicubic downscaling x4 competition, so the hyper-parameters in config.py (such as the number of epochs) were selected based on that dataset. If you switch to a larger dataset, you can reduce the number of epochs.
    • If you don't want to use the DIV2K dataset, you can also use the Yahoo MirFlickr25k dataset; simply download it with train_hr_imgs = tl.files.load_flickr25k_dataset(tag=None) in main.py.
    • If you want to use your own images, set the path to your image folder via config.TRAIN.hr_img_path in config.py (a loading sketch follows this list).
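
For reference, loading the HR training images from that folder might look like the sketch below. It relies on the TensorLayer helpers load_file_list and read_images; the '.*.png' pattern and the thread count are assumptions you should adapt to your data.

    import tensorlayer as tl
    from config import config

    # List and read every HR training image in the configured folder.
    train_hr_img_list = sorted(tl.files.load_file_list(
        path=config.TRAIN.hr_img_path, regx='.*.png', printable=False))
    train_hr_imgs = tl.vis.read_images(
        train_hr_img_list, path=config.TRAIN.hr_img_path, n_threads=16)
    print("loaded %d HR training images" % len(train_hr_imgs))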

Run

  • Set your image folder in config.py.
config.TRAIN.img_path = "your_image_folder/"
  • Start training.
python train.py
  • Start evaluation.
python train.py --mode=evaluate
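
For context, the --mode switch above follows the usual argparse pattern; the sketch below illustrates that pattern with placeholder train/evaluate functions and is not a verbatim copy of this repo's train.py.

    import argparse

    def train():
        print("training...")    # stand-in for the real training loop

    def evaluate():
        print("evaluating...")  # stand-in for the real evaluation

    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('--mode', type=str, default='srgan',
                            help="srgan (train) or evaluate")
        args = parser.parse_args()
        if args.mode == 'srgan':
            train()
        elif args.mode == 'evaluate':
            evaluate()
        else:
            raise ValueError("unknown --mode: " + args.mode)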

Reference

Author

Citation

If you find this project useful, we would be grateful if you cite the TensorLayer paper:

@article{tensorlayer2017,
author = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
journal = {ACM Multimedia},
title = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
url = {http://tensorlayer.org},
year = {2017}
}

Other Projects

Discussion

License

  • For academic and non-commercial use only.
  • For commercial use, please contact [email protected].
Comments
  • I deleted one subpixel convolution, and then there was a problem.

    I removed one subpixel convolution so that the picture is upscaled 2x instead of 4x. Then the following error message appeared: ValueError: Dimension 2 in both shapes must be equal, but are 256 and 64. Shapes are [1,1,256,3] and [1,1,64,3]. for 'Assign_171' (op: 'Assign') with input shapes: [1,1,256,3], [1,1,64,3]. Something is wrong. Do you know how to resolve this error? Please help.

    opened by bluewidy 11
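
    A hedged note on the error above: each sub-pixel (pixel-shuffle) stage trades channels for resolution, so removing one changes how many channels reach the final 3-channel convolution, and a checkpoint saved for the 4x generator no longer fits. The sketch below only illustrates the channel arithmetic with the raw TensorFlow op; it is not the repo's model code.

        import tensorflow as tf

        # Pixel shuffle (the op behind SubpixelConv2d): a 2x stage turns (H, W, C) into (2H, 2W, C/4).
        x = tf.random.normal([1, 24, 24, 256])
        y = tf.nn.depth_to_space(x, block_size=2)
        print(y.shape)  # (1, 48, 48, 64)

        # The saved 4x checkpoint ends with a 1x1 conv kernel of shape [1, 1, 64, 3], i.e. it
        # expects 64 channels after the last pixel shuffle. With one shuffle stage removed,
        # 256 channels reach that conv instead, the new kernel is [1, 1, 256, 3], and the old
        # weights cannot be assigned. A 2x generator has to be trained from scratch (or loaded
        # while skipping the mismatched layers).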
  • Problems running on Windows

    python train.py

    2019-10-08 20:55:32.978162: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
    2019-10-08 20:55:35.246078: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
    2019-10-08 20:55:35.333633: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: Quadro P1000 major: 6 minor: 1 memoryClockRate(GHz): 1.5185 pciBusID: 0000:01:00.0
    2019-10-08 20:55:35.341328: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
    2019-10-08 20:55:35.348135: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
    2019-10-08 20:55:35.351468: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    2019-10-08 20:55:35.359874: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: Quadro P1000 major: 6 minor: 1 memoryClockRate(GHz): 1.5185 pciBusID: 0000:01:00.0
    2019-10-08 20:55:35.366853: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
    2019-10-08 20:55:35.372725: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
    2019-10-08 20:55:36.070183: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
    2019-10-08 20:55:36.075760: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
    2019-10-08 20:55:36.079969: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
    2019-10-08 20:55:36.084090: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3005 MB memory) -> physical GPU (device: 0, name: Quadro P1000, pci bus id: 0000:01:00.0, compute capability: 6.1)
    2019-10-08 20:55:36.279220: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
    2019-10-08 20:55:37.369250: W tensorflow/stream_executor/cuda/redzone_allocator.cc:312] Internal: Invoking ptxas not supported on Windows Relying on driver to perform ptx compilation. This message will be only logged once.

    Traceback (most recent call last):
      File "train.py", line 202, in <module>
        train()
      File "train.py", line 74, in train
        G = get_G((batch_size, 96, 96, 3))
      File "D:\Users<Username>\Downloads\srgan-master\model.py", line 27, in get_G
        n = BatchNorm(gamma_init=g_init)(n)
    NameError: name 'BatchNorm' is not defined

    Packages I noticed installed / needed to install:

    tensorboard 2.0.0, tensorflow-estimator 2.0.0, tensorflow-gpu 2.0.0, tensorlayer 2.1.0

    Pillow 6.2.0, google-pasta 0.1.7, Lasagne 0.1, Markdown 3.1.1

    pip 19.2.3, Python 3.7.4

    OS: Windows 10, CUDA Toolkit 10.1 and 10.0, GPU: Nvidia Quadro P1000, CPU: Intel Core i7-8750H

    opened by mcDandy 10
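
    A hedged note on the NameError above: in TensorLayer 2.x the batch-normalization layer is exported as tensorlayer.layers.BatchNorm (with BatchNorm1d/2d/3d variants), so the usual fix is to make sure model.py actually imports it. A minimal sketch, assuming TensorLayer 2.x:

        # at the top of model.py, assuming TensorLayer 2.x
        from tensorlayer.layers import (Input, Conv2d, BatchNorm, Elementwise,
                                        SubpixelConv2d, Flatten, Dense)

        # If your TensorLayer release only ships the explicit variant, alias it instead:
        # from tensorlayer.layers import BatchNorm2d as BatchNorm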
  • InvalidArgumentError: Matrix size-incompatible: In[0]: [4,4096], In[1]: [256,1] [Op:MatMul] name: MatMul/

    Traceback (most recent call last):
      File "", line 1, in <module>
        runfile('/home/dongwen/Desktop/SRGAN/train.py', wdir='/home/dongwen/Desktop/SRGAN')
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 827, in runfile
        execfile(filename, namespace)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 110, in execfile
        exec(compile(f.read(), filename, 'exec'), namespace)
      File "/home/dongwen/Desktop/SRGAN/train.py", line 292, in <module>
        train()
      File "/home/dongwen/Desktop/SRGAN/train.py", line 148, in train
        logits_fake = D(fake_patchs)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorlayer/models/core.py", line 296, in __call__
        return self.forward(inputs, **kwargs)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorlayer/models/core.py", line 339, in forward
        memory[node.name] = node(node_input)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorlayer/layers/core.py", line 431, in __call__
        outputs = self.layer.forward(inputs, **kwargs)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorlayer/layers/dense/base_dense.py", line 106, in forward
        z = tf.matmul(inputs, self.W)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py", line 2580, in matmul
        a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 5753, in mat_mul
        _six.raise_from(_core._status_to_exception(e.code, message), None)
      File "<string>", line 3, in raise_from
    InvalidArgumentError: Matrix size-incompatible: In[0]: [4,4096], In[1]: [256,1] [Op:MatMul] name: MatMul/

    opened by yonghuixu 10
  • The BatchNorm is not defined

    Traceback (most recent call last):
      File "train.py", line 202, in <module>
        train()
      File "train.py", line 74, in train
        G = get_G((batch_size, 96, 96, 3))
      File "/xx/SR/srgan-tf/model.py", line 27, in get_G
        n = BatchNorm(gamma_init=g_init)(n)
    NameError: name 'BatchNorm' is not defined

    Why?

    opened by rophen2333 8
  • The process is automatically killed before running adversarial learning.

    When I run the python train.py --mode=evaluate command, I get the following error:

    Traceback (most recent call last):
      File "train.py", line 204, in <module>
        evaluate()
      File "train.py", line 172, in evaluate
        G.load_weights(os.path.join(checkpoint_dir, 'g.h5'))
      File "/home/himanshu/BTP_AB/lib/python3.7/site-packages/tensorlayer/models/core.py", line 944, in load_weights
        raise FileNotFoundError("file {} doesn't exist.".format(filepath))
    FileNotFoundError: file models/g.h5 doesn't exist.

    It may be because the process is killed after the initial learning phase (lines 89 to 105 in train.py) and before the adversarial learning phase (lines 106 to 132 in train.py).

    opened by amanattrish 8
  • "'time' is not defined" error while training

    NameError: name 'time' is not defined, the same as #76 and #91, but I can't find a solution... Where exactly should I add import time in model.py? I tried about 10 times to add import time in model.py, but it didn't work.

    opened by bberry25 8
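
    For the NameError above, the import has to go at the top of whichever file actually calls time.time() (the file named in the traceback, usually train.py rather than model.py). A minimal sketch:

        # top of train.py (or whichever file the traceback points at)
        import time

        step_time = time.time()
        # ... training step ...
        print("step took %.3fs" % (time.time() - step_time))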
  • Running out of memory

    I am running it on Google Colab and have been assigned a Tesla K80 GPU.

    Even 12 GB of RAM is not sufficient. I wonder if someone else is facing the same problem!

    The resulting error is:

    2019-07-25 11:02:16.370912: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 28311552 exceeds 10% of system memory.

    Traceback (most recent call last):
      File "train.py", line 357, in <module>
        train()
      File "train.py", line 100, in train
        grad = tape.gradient(mse_loss, G.weights)
    AttributeError: 'Model' object has no attribute 'weights'

    Note: the pre-trained model vgg19.npy is in the "srgan/models" directory.

    opened by amanattrish 7
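
    A hedged note on the AttributeError above: a TensorLayer 2.x Model exposes trainable_weights (and all_weights), not weights, so the gradient lines in train.py usually need to use trainable_weights. The separate allocation warning is a different issue and is normally eased by lowering batch_size in config.py. A minimal, self-contained sketch of the attribute name:

        import tensorflow as tf
        import tensorlayer as tl
        from tensorlayer.layers import Input, Dense

        # Tiny stand-in model, just to show the attribute name in TensorLayer 2.x.
        ni = Input([8, 4])
        nn = Dense(n_units=2)(ni)
        M = tl.models.Model(inputs=ni, outputs=nn)
        print(len(M.trainable_weights))  # this is the list to pass to tape.gradient(...)

        # In train.py the corresponding lines would read, e.g.:
        #     grad = tape.gradient(mse_loss, G.trainable_weights)
        #     g_optimizer_init.apply_gradients(zip(grad, G.trainable_weights))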
  • Is there any pretrained model?

    I found this model hard to train with my dataset on my machine. Is there any pretrained model? Maybe a pretrained model could save my hard-working graphics card.

    opened by tuxzz 7
  • Generated image with serious checkerboard (or mosaic) artifacts

    Hello, author. I ran your code without modification, but the resulting image has serious checkerboard / mosaic artifacts. Could you tell me why? Have you met this problem before? What should I do to solve it? Thank you very much.

    "this is original image" 0801

    "this is resulted image" girl_gen

    opened by yugsdu 6
  • Please download vgg19.npz from : https://github.com/machrisaa/tensorflow-vgg

    [!] Load checkpoint/g_srgan.npz failed!
    [!] Load checkpoint/g_srgan_init.npz failed!
    [!] Load checkpoint/d_srgan.npz failed!
    Please download vgg19.npz from : https://github.com/machrisaa/tensorflow-vgg

    opened by alanMachineLeraning 6
  • Matrix size-incompatible: In[0]: [1,18432], In[1]: [512,1]

    My training code:

        # initialize learning (G)
        n_step_epoch = round(n_epoch_init // batch_size)
        for step, (lr_patchs, hr_patchs) in enumerate(train_ds):
            step_time = time.time()
            with tf.GradientTape() as tape:
                fake_hr_patchs = G(lr_patchs)
                mse_loss = tl.cost.mean_squared_error(fake_hr_patchs, hr_patchs, is_mean=True)
            grad = tape.gradient(mse_loss, G.trainable_weights)
            g_optimizer_init.apply_gradients(zip(grad, G.trainable_weights))
            step += 1
            epoch = step//n_step_epoch
            print("Epoch: [{}/{}] step: [{}/{}] time: {}s, mse: {} ".format(
                epoch, n_epoch_init, step, n_step_epoch, time.time() - step_time, mse_loss))
            if (epoch != 0) and (step % n_step_epoch == 0):
                tl.vis.save_images(fake_hr_patchs.numpy(), [ni, ni], save_dir_gan + '/train_g_init_{}.png'.format(epoch))
            if (epoch >= n_epoch_init):
                break
    
        # adversarial learning (G, D)
        n_step_epoch = round(n_epoch // batch_size)
        for step, (lr_patchs, hr_patchs) in enumerate(train_ds):
            with tf.GradientTape(persistent=True) as tape:
                fake_patchs = G(lr_patchs)
                logits_fake = D(fake_patchs)
                logits_real = D(hr_patchs)
                feature_fake = VGG((fake_patchs+1)/2.)
                feature_real = VGG((hr_patchs+1)/2.)
                d_loss1 = tl.cost.sigmoid_cross_entropy(logits_real, tf.ones_like(logits_real))
                d_loss2 = tl.cost.sigmoid_cross_entropy(logits_fake, tf.zeros_like(logits_fake))
                d_loss = d_loss1 + d_loss2
                g_gan_loss = 1e-3 * tl.cost.sigmoid_cross_entropy(logits_fake, tf.ones_like(logits_fake))
                mse_loss = tl.cost.mean_squared_error(fake_patchs, hr_patchs, is_mean=True)
                vgg_loss = 2e-6 * tl.cost.mean_squared_error(feature_fake, feature_real, is_mean=True)
                g_loss = mse_loss + vgg_loss + g_gan_loss
            grad = tape.gradient(g_loss, G.trainable_weights)
            g_optimizer.apply_gradients(zip(grad, G.trainable_weights))
            grad = tape.gradient(d_loss, D.weights)
            d_optimizer.apply_gradients(zip(grad, D.trainable_weights))
            step += 1
            epoch = step//n_step_epoch
            print("Epoch: [{}/{}] step: [{}/{}] time: {}s, g_loss(mse:{}, vgg:{}, adv:{}) d_loss: {}".format(
                epoch, n_epoch_init, step, n_step_epoch, time.time() - step_time, mse_loss, vgg_loss, g_gan_loss, d_loss))
    
            # update learning rate
            if epoch != 0 and (epoch % decay_every == 0):
                new_lr_decay = lr_decay**(epoch // decay_every)
                lr_v.assign(lr_init * new_lr_decay)
                log = " ** new learning rate: %f (for GAN)" % (lr_init * new_lr_decay)
                print(log)
    
            if (epoch != 0) and (step % n_step_epoch == 0):
                tl.vis.save_images(fake_hr_patchs.numpy(), [ni, ni], save_dir_gan + '/train_g_{}.png'.format(epoch))
                G.save_weights(checkpoint_dir + '/g_{}.h5'.format(tl.global_flag['mode']))
                D.save_weights(checkpoint_dir + '/d_{}.h5'.format(tl.global_flag['mode']))
            if (epoch >= n_epoch):
                break
    

    My error:

     File "train.py", line 370, in <module>
    
      File "train.py", line 125, in train
        with tf.GradientTape(persistent=True) as tape:
      File "F:\Python\Python37\lib\site-packages\tensorlayer\models\core.py", line 295, in __call__
        return self.forward(inputs, **kwargs)
      File "F:\Python\Python37\lib\site-packages\tensorlayer\models\core.py", line 338, in forward
        memory[node.name] = node(node_input)
      File "F:\Python\Python37\lib\site-packages\tensorlayer\layers\core.py", line 433, in __call__
        outputs = self.layer.forward(inputs, **kwargs)
      File "F:\Python\Python37\lib\site-packages\tensorlayer\layers\dense\base_dense.py", line 106, in forward
        z = tf.matmul(inputs, self.W)
      File "F:\Python\Python37\lib\site-packages\tensorflow\python\util\dispatch.py", line 180, in wrapper
        return target(*args, **kwargs)
      File "F:\Python\Python37\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2647, in matmul
        a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
      File "F:\Python\Python37\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 6285, in mat_mul
        _six.raise_from(_core._status_to_exception(e.code, message), None)
      File "<string>", line 3, in raise_from
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix size-incompatible: In[0]: [1,18432], In[1]: [512,1] [Op:MatMul] name: MatMul/
    

    My loading of images:

        def generator_train():
            i = 0
            while i < len(train_hr_imgs):
                yield train_hr_imgs[i], train_lr_imgs[i]
                i+=1
        def _map_fn_train(imgh, imgl):
            hr_patch = imgh
            lr_patch = imgl
            
            hr_patch = hr_patch / (255. / 2.)
            hr_patch = hr_patch - 1.
            
            lr_patch = lr_patch / (255. / 2.)
            lr_patch = lr_patch - 1.
            
            return lr_patch, hr_patch
        train_ds = tf.data.Dataset.from_generator(generator_train, output_types=(tf.float32, tf.float32))
        train_ds = train_ds.map(_map_fn_train, num_parallel_calls=multiprocessing.cpu_count())
    

    I prescale the input images to 384 (HR) and 96 (LR)

    Any idea how to fix this?

    opened by Kjos 5
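
    A hedged guess at the MatMul shape errors above: the Dense layer at the end of the discriminator gets its input size fixed when the model is built, so the HR patches fed during training must have exactly the spatial size the model was built for (this repo builds the generator for 96x96 LR inputs, i.e. 384x384 HR patches at 4x, as the earlier tracebacks on this page show). Below is a preprocessing sketch that keeps that geometry; the crop size is an assumption taken from the sizes used elsewhere on this page, not a verified fix for this specific report.

        import tensorflow as tf

        def _map_fn_train(img):
            # Random 384x384 HR crop in [-1, 1]; the LR patch is a 4x downscale to 96x96.
            # Any other size changes the discriminator's flattened feature length and
            # triggers the Matrix size-incompatible error in its final Dense layer.
            img = tf.cast(img, tf.float32)
            hr_patch = tf.image.random_crop(img, [384, 384, 3])
            hr_patch = hr_patch / (255. / 2.) - 1.
            lr_patch = tf.image.resize(hr_patch, size=[96, 96])
            return lr_patch, hr_patch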
  • Question about the pretrained net

    In eval mode (model = eval), it shows: RuntimeError: Weights named 'conv2d_1/filters:0' not found in network. Hint: set argument skip=Ture if you want to skip redundant or mismatch weights.

    I downloaded g.npz and d.npz and the pretrained VGG19 from your README.

    opened by jinyu-118 0
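
    A hedged note on the RuntimeError above: TensorLayer's Model.load_weights takes a skip argument, as the error hint says, which skips weights whose names don't match the current network (this is only meaningful for name-keyed formats such as hdf5 or npz_dict, and the skipped layers then keep their random initialization). If the downloaded .npz files are plain in-order weights from an older release, they may simply belong to a different model layout. Roughly:

        import os
        # per the hint in the error message; checkpoint_dir and G as in train.py's evaluate()
        G.load_weights(os.path.join(checkpoint_dir, 'g.h5'), skip=True)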
  • How do the weights of different loss functions affect the performance of the network in GAN-based SISR?

    The total perceptual loss in the SRGAN paper is a weighted sum of the content loss and the adversarial loss.

    Total loss = Content loss + 10^(-3) × Adversarial loss. Why is 10^(-3) used? What is its impact on performance if some other value is used? Or does it affect the number of iterations needed to train the network?

    opened by KhushbooChauddhary 1
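
    For context on the question above: this repo's training loop (quoted earlier on this page) uses exactly that weighting, scaling the adversarial term by 1e-3 and the VGG feature term by 2e-6 before adding them to the pixel MSE, and the SRGAN paper itself reports that the adversarial term trades a little PSNR for better perceived texture. The toy arithmetic below only makes the weighting concrete; the loss values are dummy stand-ins, not measurements.

        import tensorflow as tf

        # Dummy scalar stand-ins; in train.py these are the real MSE, VGG-feature and
        # adversarial losses from the quoted loop.
        mse_loss, vgg_loss, adv_loss = tf.constant(1.0), tf.constant(1.0), tf.constant(1.0)

        adv_weight = 1e-3   # the 10^-3 from the SRGAN paper, also used by this repo
        g_loss = mse_loss + vgg_loss + adv_weight * adv_loss
        # With equal raw losses, the adversarial term contributes only ~0.05% of g_loss,
        # so it nudges the generator toward fooling D without overriding the content terms.
        print(float(g_loss))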
  • tensorlayerx.nn.layers.deprecated.NonExistingLayerError: SequentialLayer(layer) --> Sequential(layer)(in)

    Using TensorFlow backend.
    2022-04-26 18:53:56.612121: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2022-04-26 18:53:56.616458: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.

    Traceback (most recent call last):
      File "/home/wajoud/srgan/train.py", line 105, in <module>
        G = SRGAN_g()
      File "/home/wajoud/srgan/srgan.py", line 34, in __init__
        self.residual_block = self.make_layer()
      File "/home/wajoud/srgan/srgan.py", line 46, in make_layer
        return SequentialLayer(layer_list)
      File "/home/wajoud/anaconda3/envs/srgan/lib/python3.9/site-packages/tensorlayerx/nn/layers/deprecated.py", line 451, in SequentialLayer
        raise NonExistingLayerError("SequentialLayer(layer) --> Sequential(layer)(in)" + log)
    tensorlayerx.nn.layers.deprecated.NonExistingLayerError: SequentialLayer(layer) --> Sequential(layer)(in)
    Hint: 1) downgrade TL from version TensorLayerX to TensorLayer2.x. 2) check the documentation of TF version 2.x and TL version X

    I am facing this issue; can someone help me out? Thank you.

    opened by wajoud 2
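
    A hedged reading of the error above: srgan.py is using the old TensorLayer 2.x SequentialLayer under TensorLayerX, and the traceback's own hint gives the two options: downgrade to TensorLayer 2.x (which is what this README targets), or switch to the TensorLayerX name. The sketch below only restates that rename; treat it as an assumption about your tensorlayerx version.

        # TensorLayer 2.x (what srgan.py currently does):
        #     self.residual_block = SequentialLayer(layer_list)

        # TensorLayerX, per the hint "SequentialLayer(layer) --> Sequential(layer)(in)":
        from tensorlayerx.nn import Sequential
        #     self.residual_block = Sequential(layer_list)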
  • Was anyone able to replicate the results from the paper ?

    In the paper, the authors tested on the Set5, Set14, and BSD datasets. Was anyone able to replicate the same results? @Laicheng0830, can you share your benchmarking results (image results / PSNR / SSIM metrics)? Also, is it possible to share your validation loss plots?

    opened by f2015238 1
  • Improve the documentation

    One thing that really bothers me is bad documentation, and this project is no exception. How is a new user (in my case, a 3D designer) supposed to know what the bicubic LR, valid, and train images are, what evaluating means, or what to do with VGG19? It is frustrating that someone builds an AI model but doesn't document it, and reading the issues, half of them could be easily solved with decent documentation (not even good documentation).

    opened by b-aaz 0
Releases(1.4.1)
Owner
TensorLayer Community
A neutral open community to promote AI technology.