Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

Overview

Super Resolution Examples

We run this script under TensorFlow 2.0 and TensorLayer 2.0+. For the TensorLayer 1.4 version, please check the release.

🚀 🚀 🚀 🚀 🚀 🚀 THIS PROJECT WILL BE CLOSED AND MOVED TO THIS FOLDER IN A MONTH.

SRGAN Architecture

TensorFlow Implementation of "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network"

Results

Prepare Data and Pre-trained VGG

    1. You need to download the pretrained VGG19 model from here, as tutorial_models_vgg19.py shows.
    2. You need high-resolution images for training.
    • In this experiment, I used images from the DIV2K - bicubic downscaling x4 competition, so the hyper-parameters in config.py (such as the number of epochs) were selected based on that dataset; if you switch to a larger dataset, you can reduce the number of epochs.
    • If you don't want to use the DIV2K dataset, you can also use Yahoo MirFlickr25k; simply download it with train_hr_imgs = tl.files.load_flickr25k_dataset(tag=None) in main.py.
    • If you want to use your own images, set the path to your image folder via config.TRAIN.hr_img_path in config.py, as in the sketch below.
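
As a minimal sketch (assuming config.py exposes the usual easydict-style config object, as the TensorLayer examples do; the folder path below is a placeholder):

    import tensorlayer as tl
    from config import config  # this repo's config.py

    # Point training at your own high-resolution images (placeholder path).
    config.TRAIN.hr_img_path = "path/to/your_hr_images/"

    # Or skip DIV2K and load Yahoo MirFlickr25k directly, as main.py does:
    train_hr_imgs = tl.files.load_flickr25k_dataset(tag=None)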

Run

  • Set your image folder in config.py.

    config.TRAIN.img_path = "your_image_folder/"

  • Start training.

    python train.py

  • Start evaluation.

    python train.py --mode=evaluate

Reference

Author

Citation

If you find this project useful, we would be grateful if you cite the TensorLayer paper:

@article{tensorlayer2017,
author = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
journal = {ACM Multimedia},
title = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
url = {http://tensorlayer.org},
year = {2017}
}

Other Projects

Discussion

License

  • For academic and non-commercial use only.
  • For commercial use, please contact [email protected].
Comments
  • I deleted one subpixel convolution, and then there was a problem.

    I removed one subpixel convolution so that the network upscales the picture twice instead of quadrupling it. Then the following error message appeared: ValueError: Dimension 2 in both shapes must be equal, but are 256 and 64. Shapes are [1,1,256,3] and [1,1,64,3]. for 'Assign_171' (op: 'Assign') with input shapes: [1,1,256,3], [1,1,64,3]. Something is wrong. Do you know how to resolve this error? Please help.
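
    For what it's worth, a minimal sketch of the channel arithmetic involved (the 24×24 feature size is made up; only the channel counts matter): each subpixel (pixel-shuffle) block with scale 2 doubles the spatial size and cuts the channel count by 4, so removing one block leaves 256 channels instead of 64 going into the final convolution, and the freshly built [1,1,256,3] kernel can no longer be assigned the saved [1,1,64,3] weights. The modified ×2 network would likely need retraining (or at least re-initializing the mismatched layers) rather than loading the ×4 checkpoint.

        import tensorflow as tf

        # Hypothetical feature map: batch 1, 24x24 spatial size, 256 channels.
        x = tf.random.normal([1, 24, 24, 256])

        # Pixel shuffle (subpixel convolution) with scale 2: spatial size doubles,
        # channel count drops by a factor of 4.
        y = tf.nn.depth_to_space(x, block_size=2)
        print(y.shape)  # (1, 48, 48, 64) -> the final conv expects 64 input channels

        # With one such block removed, the final conv sees 256 channels, so its
        # kernel is built as [1, 1, 256, 3] and the saved [1, 1, 64, 3] cannot load.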

    opened by bluewidy 11
  • Problems running on Windows

    python train.py

    2019-10-08 20:55:32.978162: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
    2019-10-08 20:55:35.246078: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
    2019-10-08 20:55:35.333633: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: Quadro P1000 major: 6 minor: 1 memoryClockRate(GHz): 1.5185 pciBusID: 0000:01:00.0
    2019-10-08 20:55:35.341328: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
    2019-10-08 20:55:35.348135: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
    2019-10-08 20:55:35.351468: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    2019-10-08 20:55:35.359874: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: Quadro P1000 major: 6 minor: 1 memoryClockRate(GHz): 1.5185 pciBusID: 0000:01:00.0
    2019-10-08 20:55:35.366853: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
    2019-10-08 20:55:35.372725: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
    2019-10-08 20:55:36.070183: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
    2019-10-08 20:55:36.075760: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
    2019-10-08 20:55:36.079969: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
    2019-10-08 20:55:36.084090: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3005 MB memory) -> physical GPU (device: 0, name: Quadro P1000, pci bus id: 0000:01:00.0, compute capability: 6.1)
    2019-10-08 20:55:36.279220: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
    2019-10-08 20:55:37.369250: W tensorflow/stream_executor/cuda/redzone_allocator.cc:312] Internal: Invoking ptxas not supported on Windows Relying on driver to perform ptx compilation. This message will be only logged once.
    Traceback (most recent call last):
      File "train.py", line 202, in <module>
        train()
      File "train.py", line 74, in train
        G = get_G((batch_size, 96, 96, 3))
      File "D:\Users<Username>\Downloads\srgan-master\model.py", line 27, in get_G
        n = BatchNorm(gamma_init=g_init)(n)
    NameError: name 'BatchNorm' is not defined

    Packages I noticed installed / needed to install:

    tensorboard 2.0.0 tensorflow-estimator 2.0.0 tensorflow-gpu 2.0.0 tensorlayer 2.1.0

    Pillow 6.2.0 google-pasta 0.1.7 Lasagne 0.1 Markdown 3.1.1

    pip 19.2.3 Python 3.7.4

    OS: Windows 10; CUDA Toolkit 10.1 and 10.0; GPU: Nvidia Quadro P1000; CPU: Intel Core i7-8750H

    opened by mcDandy 10
  • InvalidArgumentError: Matrix size-incompatible: In[0]: [4,4096], In[1]: [256,1] [Op:MatMul] name: MatMul/

    Traceback (most recent call last):
      File "", line 1, in <module>
        runfile('/home/dongwen/Desktop/SRGAN/train.py', wdir='/home/dongwen/Desktop/SRGAN')
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 827, in runfile
        execfile(filename, namespace)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 110, in execfile
        exec(compile(f.read(), filename, 'exec'), namespace)
      File "/home/dongwen/Desktop/SRGAN/train.py", line 292, in <module>
        train()
      File "/home/dongwen/Desktop/SRGAN/train.py", line 148, in train
        logits_fake = D(fake_patchs)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorlayer/models/core.py", line 296, in __call__
        return self.forward(inputs, **kwargs)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorlayer/models/core.py", line 339, in forward
        memory[node.name] = node(node_input)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorlayer/layers/core.py", line 431, in __call__
        outputs = self.layer.forward(inputs, **kwargs)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorlayer/layers/dense/base_dense.py", line 106, in forward
        z = tf.matmul(inputs, self.W)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py", line 2580, in matmul
        a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
      File "/home/dongwen/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 5753, in mat_mul
        _six.raise_from(_core._status_to_exception(e.code, message), None)
      File "<string>", line 3, in raise_from
    InvalidArgumentError: Matrix size-incompatible: In[0]: [4,4096], In[1]: [256,1] [Op:MatMul] name: MatMul/

    opened by yonghuixu 10
  • The BatchNorm is not defined

    Traceback (most recent call last):
      File "train.py", line 202, in <module>
        train()
      File "train.py", line 74, in train
        G = get_G((batch_size, 96, 96, 3))
      File "/xx/SR/srgan-tf/model.py", line 27, in get_G
        n = BatchNorm(gamma_init=g_init)(n)
    NameError: name 'BatchNorm' is not defined

    Why?
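
    A likely cause, offered only as a guess: the BatchNorm class is not imported into model.py's namespace (or the installed TensorLayer version does not provide it). A minimal sketch of the explicit import, assuming TensorLayer 2.1+ where tensorlayer.layers exports BatchNorm:

        # model.py -- ensure BatchNorm is imported explicitly (TensorLayer >= 2.1).
        from tensorlayer.layers import BatchNorm, Conv2d, Input

        # ... inside get_G(), the call then resolves:
        # n = BatchNorm(gamma_init=g_init)(n)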

    opened by rophen2333 8
  • The process is automatically killed before running adversarial learning.

    When I run the python train.py --mode=evaluate command, I get the following error:

    Traceback (most recent call last):
      File "train.py", line 204, in <module>
        evaluate()
      File "train.py", line 172, in evaluate
        G.load_weights(os.path.join(checkpoint_dir, 'g.h5'))
      File "/home/himanshu/BTP_AB/lib/python3.7/site-packages/tensorlayer/models/core.py", line 944, in load_weights
        raise FileNotFoundError("file {} doesn't exist.".format(filepath))
    FileNotFoundError: file models/g.h5 doesn't exist.

    It may be because the process gets killed after running through the initialization learning (lines 89 to 105 in train.py) and before the adversarial learning (lines 106 to 132 in train.py).
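
    For reference, a minimal sketch of the dependency involved (the models/g.h5 path comes from the error above): evaluation can only run after training has saved the generator weights at least once.

        import os

        # g.h5 is written by the training loop; until then, --mode=evaluate has nothing to load.
        checkpoint = os.path.join("models", "g.h5")
        if not os.path.exists(checkpoint):
            raise SystemExit("models/g.h5 not found -- run `python train.py` to completion first.")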

    opened by amanattrish 8
  • "'time' is not defined" error while training

    NameError: name 'time' is not defined, same as #76 and #91, but I can't find a solution. Where exactly should I add import time in model.py? I tried about 10 times to add import time in model.py, but it didn't work.
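
    A hedged suggestion: the time.time() calls sit in train.py's training loop (see the code quoted in a later issue), so the import belongs at the top of train.py rather than model.py. A minimal sketch:

        # top of train.py
        import time

        step_time = time.time()
        # ... one training step ...
        print("step took {:.3f}s".format(time.time() - step_time))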

    opened by bberry25 8
  • Running out of memory

    I am running it on Google Colab and have been assigned a Tesla K80 GPU (screenshot: GPU_name).

    Even 12 GB of RAM is not sufficient. I wonder if someone else is facing the same problem! (screenshot: GPU_problem)

    Consequently, the error that pops up is:

    2019-07-25 11:02:16.370912: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 28311552 exceeds 10% of system memory.
    Traceback (most recent call last):
      File "train.py", line 357, in <module>
        train()
      File "train.py", line 100, in train
        grad = tape.gradient(mse_loss, G.weights)
    AttributeError: 'Model' object has no attribute 'weights'

    Note:

    The pre-trained model vgg19.npy is in the "srgan/models" directory.
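
    On the AttributeError at the end of that trace, a hedged note: recent TensorLayer exposes a Model's parameters as trainable_weights rather than weights (the training code quoted in a later issue already passes G.trainable_weights to tape.gradient). A minimal, self-contained sketch under that assumption:

        import tensorflow as tf
        import tensorlayer as tl
        from tensorlayer.layers import Dense, Input

        # Tiny stand-in model; the point is only the attribute name.
        ni = Input([8, 16])
        nn = Dense(n_units=4)(ni)
        M = tl.models.Model(inputs=ni, outputs=nn)
        M.train()

        x = tf.random.normal([8, 16])
        with tf.GradientTape() as tape:
            y = M(x)
            loss = tf.reduce_mean(tf.square(y))
        grad = tape.gradient(loss, M.trainable_weights)  # not M.weights
        print(len(grad))  # one gradient tensor per trainable parameter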

    opened by amanattrish 7
  • Is there any pretrained model?

    I found this model hard to train with my dataset on my machine. Is there any pretrained model? Maybe a pretrained model could save my hard-working graphics card.

    opened by tuxzz 7
  • Generated image has serious checkerboard (or mosaic) artifacts

    Hello, author. I ran your code without modification, but the resulting image has serious checkerboard / mosaic artifacts. Could you tell me why? Have you encountered this problem before? What should I do to solve it? Thank you very much.

    "this is original image" 0801

    "this is resulted image" girl_gen

    opened by yugsdu 6
  • Please download vgg19.npz from : https://github.com/machrisaa/tensorflow-vgg

    [!] Load checkpoint/g_srgan.npz failed!
    [!] Load checkpoint/g_srgan_init.npz failed!
    [!] Load checkpoint/d_srgan.npz failed!
    Please download vgg19.npz from : https://github.com/machrisaa/tensorflow-vgg

    opened by alanMachineLeraning 6
  • Matrix size-incompatible: In[0]: [1,18432], In[1]: [512,1]

    My training code:

        # initialize learning (G)
        n_step_epoch = round(n_epoch_init // batch_size)
        for step, (lr_patchs, hr_patchs) in enumerate(train_ds):
            step_time = time.time()
            with tf.GradientTape() as tape:
                fake_hr_patchs = G(lr_patchs)
                mse_loss = tl.cost.mean_squared_error(fake_hr_patchs, hr_patchs, is_mean=True)
            grad = tape.gradient(mse_loss, G.trainable_weights)
            g_optimizer_init.apply_gradients(zip(grad, G.trainable_weights))
            step += 1
            epoch = step//n_step_epoch
            print("Epoch: [{}/{}] step: [{}/{}] time: {}s, mse: {} ".format(
                epoch, n_epoch_init, step, n_step_epoch, time.time() - step_time, mse_loss))
            if (epoch != 0) and (step % n_step_epoch == 0):
                tl.vis.save_images(fake_hr_patchs.numpy(), [ni, ni], save_dir_gan + '/train_g_init_{}.png'.format(epoch))
            if (epoch >= n_epoch_init):
                break
    
        # adversarial learning (G, D)
        n_step_epoch = round(n_epoch // batch_size)
        for step, (lr_patchs, hr_patchs) in enumerate(train_ds):
            with tf.GradientTape(persistent=True) as tape:
                fake_patchs = G(lr_patchs)
                logits_fake = D(fake_patchs)
                logits_real = D(hr_patchs)
                feature_fake = VGG((fake_patchs+1)/2.)
                feature_real = VGG((hr_patchs+1)/2.)
                d_loss1 = tl.cost.sigmoid_cross_entropy(logits_real, tf.ones_like(logits_real))
                d_loss2 = tl.cost.sigmoid_cross_entropy(logits_fake, tf.zeros_like(logits_fake))
                d_loss = d_loss1 + d_loss2
                g_gan_loss = 1e-3 * tl.cost.sigmoid_cross_entropy(logits_fake, tf.ones_like(logits_fake))
                mse_loss = tl.cost.mean_squared_error(fake_patchs, hr_patchs, is_mean=True)
                vgg_loss = 2e-6 * tl.cost.mean_squared_error(feature_fake, feature_real, is_mean=True)
                g_loss = mse_loss + vgg_loss + g_gan_loss
            grad = tape.gradient(g_loss, G.trainable_weights)
            g_optimizer.apply_gradients(zip(grad, G.trainable_weights))
            grad = tape.gradient(d_loss, D.weights)
            d_optimizer.apply_gradients(zip(grad, D.trainable_weights))
            step += 1
            epoch = step//n_step_epoch
            print("Epoch: [{}/{}] step: [{}/{}] time: {}s, g_loss(mse:{}, vgg:{}, adv:{}) d_loss: {}".format(
                epoch, n_epoch_init, step, n_step_epoch, time.time() - step_time, mse_loss, vgg_loss, g_gan_loss, d_loss))
    
            # update learning rate
            if epoch != 0 and (epoch % decay_every == 0):
                new_lr_decay = lr_decay**(epoch // decay_every)
                lr_v.assign(lr_init * new_lr_decay)
                log = " ** new learning rate: %f (for GAN)" % (lr_init * new_lr_decay)
                print(log)
    
            if (epoch != 0) and (step % n_step_epoch == 0):
                tl.vis.save_images(fake_hr_patchs.numpy(), [ni, ni], save_dir_gan + '/train_g_{}.png'.format(epoch))
                G.save_weights(checkpoint_dir + '/g_{}.h5'.format(tl.global_flag['mode']))
                D.save_weights(checkpoint_dir + '/d_{}.h5'.format(tl.global_flag['mode']))
            if (epoch >= n_epoch):
                break
    

    My error:

     File "train.py", line 370, in <module>
    
      File "train.py", line 125, in train
        with tf.GradientTape(persistent=True) as tape:
      File "F:\Python\Python37\lib\site-packages\tensorlayer\models\core.py", line 295, in __call__
        return self.forward(inputs, **kwargs)
      File "F:\Python\Python37\lib\site-packages\tensorlayer\models\core.py", line 338, in forward
        memory[node.name] = node(node_input)
      File "F:\Python\Python37\lib\site-packages\tensorlayer\layers\core.py", line 433, in __call__
        outputs = self.layer.forward(inputs, **kwargs)
      File "F:\Python\Python37\lib\site-packages\tensorlayer\layers\dense\base_dense.py", line 106, in forward
        z = tf.matmul(inputs, self.W)
      File "F:\Python\Python37\lib\site-packages\tensorflow\python\util\dispatch.py", line 180, in wrapper
        return target(*args, **kwargs)
      File "F:\Python\Python37\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2647, in matmul
        a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
      File "F:\Python\Python37\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 6285, in mat_mul
        _six.raise_from(_core._status_to_exception(e.code, message), None)
      File "<string>", line 3, in raise_from
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix size-incompatible: In[0]: [1,18432], In[1]: [512,1] [Op:MatMul] name: MatMul/
    

    My loading of images:

        def generator_train():
            i = 0
            while i < len(train_hr_imgs):
                yield train_hr_imgs[i], train_lr_imgs[i]
                i+=1
        def _map_fn_train(imgh, imgl):
            hr_patch = imgh
            lr_patch = imgl
            
            hr_patch = hr_patch / (255. / 2.)
            hr_patch = hr_patch - 1.
            
            lr_patch = lr_patch / (255. / 2.)
            lr_patch = lr_patch - 1.
            
            return lr_patch, hr_patch
        train_ds = tf.data.Dataset.from_generator(generator_train, output_types=(tf.float32, tf.float32))
        train_ds = train_ds.map(_map_fn_train, num_parallel_calls=multiprocessing.cpu_count())
    

    I prescale the input images to 384 (HR) and 96 (LR).

    Any idea how to fix this?

    opened by Kjos 5
  • Question about the pretrained net

    When running with model = eval, it shows: RuntimeError: Weights named 'conv2d_1/filters:0' not found in network. Hint: set argument skip=Ture if you want to skip redundant or mismatch weights.

    I downloaded g.npz and d.npz and the pretrained VGG19 from your README.
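
    A hedged sketch of what the hint suggests, assuming TensorLayer 2.x Model.load_weights() and that the downloaded g.npz was saved in a name-keyed format (otherwise skip has no effect and the checkpoint must match the network exactly):

        from model import get_G  # this repo's generator definition

        G = get_G((1, 96, 96, 3))           # build the generator as train.py does
        G.load_weights('g.npz', skip=True)  # skip redundant or mismatched weights
        G.eval()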

    opened by jinyu-118 0
  • How do the weights of different loss functions affect the performance of the network in GAN-based SISR?

    The total perceptual loss in the SRGAN paper is a weighted sum of the content loss and the adversarial loss.

    Total loss = Content loss + (10^(-3)) × Adversarial loss. Please explain why 10^(-3) is used. What is its impact on performance if some other value is used? Or does it affect the number of iterations needed to train the network?
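
    The 10^(-3) factor is the weight the paper (and this repo's train.py, quoted in an earlier issue) puts on the adversarial term so that it nudges the generator toward photo-realistic texture without overwhelming the content (MSE/VGG) terms; as a rough rule of thumb, a larger weight tends to give sharper but more artifact-prone outputs, while a smaller one behaves closer to a plain MSE-trained network. A minimal sketch of where the weight sits, with made-up numbers standing in for the loss tensors computed in train.py:

        import tensorflow as tf

        # Stand-in values for the loss terms computed in train.py.
        mse_loss = tf.constant(0.020)   # pixel-wise content loss
        vgg_loss = tf.constant(0.010)   # VGG feature (perceptual) loss
        adv_raw  = tf.constant(0.700)   # raw sigmoid cross-entropy vs. "real" labels

        adv_weight = 1e-3               # the 10^(-3) factor from the SRGAN paper
        g_loss = mse_loss + vgg_loss + adv_weight * adv_raw
        print(float(g_loss))            # vary adv_weight to probe the trade-off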

    opened by KhushbooChauddhary 1
  • tensorlayerx.nn.layers.deprecated.NonExistingLayerError: SequentialLayer(layer) --> Sequential(layer)(in)

    Using TensorFlow backend.
    2022-04-26 18:53:56.612121: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
    To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2022-04-26 18:53:56.616458: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
    Traceback (most recent call last):
      File "/home/wajoud/srgan/train.py", line 105, in <module>
        G = SRGAN_g()
      File "/home/wajoud/srgan/srgan.py", line 34, in __init__
        self.residual_block = self.make_layer()
      File "/home/wajoud/srgan/srgan.py", line 46, in make_layer
        return SequentialLayer(layer_list)
      File "/home/wajoud/anaconda3/envs/srgan/lib/python3.9/site-packages/tensorlayerx/nn/layers/deprecated.py", line 451, in SequentialLayer
        raise NonExistingLayerError("SequentialLayer(layer) --> Sequential(layer)(in)" + log)
    tensorlayerx.nn.layers.deprecated.NonExistingLayerError: SequentialLayer(layer) --> Sequential(layer)(in)
    Hint: 1) downgrade TL from version TensorLayerX to TensorLayer2.x. 2) check the documentation of TF version 2.x and TL version X

    I'm facing this issue; can someone help me out? Thank you!

    opened by wajoud 2
  • Was anyone able to replicate the results from the paper?

    In the paper, the authors tested on the Set5, Set14, and BSD datasets; was anyone able to replicate the same results? @Laicheng0830, can you share your benchmarking results (image results / PSNR / SSIM metrics)? Also, is it possible to share your validation loss plots?

    opened by f2015238 1
  • Improve the documentation

    One thing that boils my blood is bad documentation, and this project doesn't lack it a little bit. No, really: how is a new user (in my case, a 3D designer) supposed to know what the bicubic LR valid and train images are, what evaluating means, or what to do with VGG19? It's pathetic that someone made an AI but can't document it. Reading the issues, half of them could easily be solved with decent documentation (not even good documentation).

    opened by b-aaz 0
Releases: 1.4.1
Owner
TensorLayer Community
A neutral open community to promote AI technology.