A resource for learning about ML, DL, PyTorch and TensorFlow. Feedback always appreciated :)

Overview



Machine Learning Collection

In this repository you will find tutorials and projects related to Machine Learning. I try to make the code as clear as possible, and the goal is for it to be used as a learning resource and a way to look up solutions to specific problems. For most of them I have also done video explanations on YouTube if you want a walkthrough of the code. If you have any questions or suggestions for future videos, I prefer that you ask on YouTube. This repository is contribution friendly, so if you feel you want to add something then I'd happily merge a PR 😃

Table Of Contents

Machine Learning

PyTorch Tutorials

If you have any specific video suggestion please make a comment on YouTube :)

Basics

More Advanced

Object Detection

Object Detection Playlist

Generative Adversarial Networks

GAN Playlist

Architectures

TensorFlow Tutorials

If you have any specific video suggestion please make a comment on YouTube :)

Beginner Tutorials

CNN Architectures

Comments
  • ProGAN Pretrained weights link is broken!

    When I click to download the pretrained weights I get redirected to https://github.com/aladdinpersson/Machine-Learning-Collection/tree/master/ML/Pytorch/GANs/ProGAN

    opened by extremety1989 11
  • ProGan RuntimeError

    I downloaded the celeba_hq image dataset, modified config.py (DATASET = 'celeba_hq'), and modified train.py at main() (# import sys # sys.exit()). Then when I run python train.py I get this error:

    return F.conv_transpose2d(
    RuntimeError: Expected 4-dimensional input for 4-dimensional weight [512, 512, 4, 4], but got 2-dimensional input of size [256, 512] instead
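
    For reference, a hedged sketch of what this error usually means (an assumption, not a confirmed diagnosis of this particular config): the generator's first ConvTranspose2d expects a 4-D latent of shape (N, z_dim, 1, 1), but the noise was sampled as a 2-D (N, z_dim) matrix, matching the [256, 512] in the message.

    import torch

    z_dim, batch_size = 512, 256
    noise = torch.randn(batch_size, z_dim)          # 2-D, as the error reports
    noise = noise.reshape(batch_size, z_dim, 1, 1)  # 4-D, as conv_transpose2d expects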

    opened by extremety1989 9
  • Transformer Question, and Request

    Learning PyTorch and love your videos. Your code is so clean and your explanations so crisp.

    Question/Bug?: In SelfAttention you split values, keys, and queries by the number of heads, then pass each split into a Linear with the same input and output dimension. Why not keep the full dimension (i.e., not split) and let the Linear do the reduction? That would allow the Linear to learn what to take out of the input.

    btw, https://github.com/tunz/transformer-pytorch/blob/master/model/transformer.py, class MultiHeadAttention(nn.Module) does this (if I interpret their code correctly).

    The paper https://arxiv.org/pdf/1706.03762.pdf indicates "learned linear projections to dk, dk and dv dimensions".

    If I'm all wrong, I would love to be corrected, as I am learning. If I'm right, I would also love to know that I'm starting to understand this stuff.
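
    For reference, a hedged sketch (not the tutorial's verbatim code) of the two projection styles under discussion: per-head slices each passed through a d_head-to-d_head Linear, versus one full d_model-to-d_model Linear whose output is reshaped into heads, as in the linked MultiHeadAttention.

    import torch
    import torch.nn as nn

    d_model, heads = 256, 8
    d_head = d_model // heads
    x = torch.randn(4, 10, d_model)  # (batch, seq, embed)

    # Style A: split into heads first, then project each d_head slice.
    per_head = nn.Linear(d_head, d_head, bias=False)
    a = per_head(x.reshape(4, 10, heads, d_head))

    # Style B: project the full embedding, then split into heads; this
    # Linear can mix information across the whole d_model dimension.
    full = nn.Linear(d_model, d_model, bias=False)
    b = full(x).reshape(4, 10, heads, d_head)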

    Request: I'm starting to understand the power of torch.einsum but I am sure I am missing a bunch. Can you do a video on this?
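
    A small torch.einsum example in the spirit of the request, using the labeled-axes contraction that computes attention scores (shapes are illustrative):

    import torch

    n, h, q_len, k_len, d = 2, 8, 5, 5, 32
    queries = torch.randn(n, q_len, h, d)
    keys = torch.randn(n, k_len, h, d)

    # "nqhd,nkhd->nhqk": dot every query with every key, per batch and head.
    energy = torch.einsum("nqhd,nkhd->nhqk", queries, keys)
    print(energy.shape)  # torch.Size([2, 8, 5, 5])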

    Regards, John

    opened by johngrabner 4
  • Inside your Seq2Seq (Transformers), Observe the parameter forward_expansion.

    You've defined forward_expansion as 4,

    See the implementation of the FeedForward network at the end of encoders & decoders

    So, putting your variable into the code, it will look like:

    self.linear1 = Linear(d_model,  4, **factory_kwargs)
    self.dropout = Dropout(dropout)
    self.linear2 = Linear(4, d_model, **factory_kwargs)
    

    where d_model = 512 (emb_size)

    See PyTorch's official implementation for more clarity.

    I would suggest changing that variable to something bigger like 512, 1024 or 2048. I am surprised that even after using 4 as dim_feedforward (or what you call forward_expansion), you're getting great results.
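
    Editor's note, hedged: if the tutorial's feed-forward block multiplies forward_expansion into the layer width (worth double-checking against its source) rather than using it directly, the hidden size is 512 * 4 = 2048 and the two implementations only differ in convention. A minimal sketch of both:

    import torch.nn as nn

    d_model, forward_expansion = 512, 4

    # Multiplier convention: the hidden width is 512 * 4 = 2048.
    feed_forward = nn.Sequential(
        nn.Linear(d_model, forward_expansion * d_model),
        nn.ReLU(),
        nn.Linear(forward_expansion * d_model, d_model),
    )

    # Absolute convention: PyTorch's built-in layer takes the width itself.
    layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, dim_feedforward=2048)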

    opened by KrishPro 3
  • Getting error while executing Semantic segmentation w. UNET in pytorch

    Hi, I watched your recent tutorial on semantic segmentation with PyTorch. Being new to PyTorch, I was looking for a tutorial with good explanations, especially for the segmentation module, and your tutorial came as a great help. I tried to implement your approach with a UNet network for segmentation on Google Colab but am getting an error. I tried to fix it but had no luck. Can you please help me fix the error? The error I am getting is:


    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>()
         85
         86 if __name__ == "__main__":
    ---> 87     main()

    <ipython-input> in main()
         67
         68     for epoch in range(Num_epochs):
    ---> 69         train_fn(train_loader, model, optimizer, loss_fn, scaler)

    <ipython-input> in train_fn(loader, model, optimizer, loss_fn, scaler)
          2     loop = tqdm(loader)
          3
    ----> 4     for batch_idx, (data, targets) in enumerate(loop):
          5         data = data.to(device=device)
          6         targets = targets.float().unsqueeze(1).to(device=device)

    /usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self)
    -> 1104         for obj in iterable:
       1105             yield obj

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self)
    --> 435         data = self._next_data()

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
    --> 475         data = self._dataset_fetcher.fetch(index)  # may raise StopIteration

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
    ---> 44             data = [self.dataset[idx] for idx in possibly_batched_index]

    <ipython-input> in __getitem__(self, index)
         18         if self.transform is not None:
    ---> 19             augmentations = self.transform(image=image, mask=mask)
         20             image = augmentations["image"]
         21             mask = augmentations["mask"]

    TypeError: 'int' object is not callable
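
    Since the failing call is self.transform(image=image, mask=mask), the transform attribute has ended up as an int rather than a callable, often from a positional-argument mix-up when constructing the dataset. A minimal sketch of how such a transform is typically built with albumentations (the commented dataset line uses assumed names, not the tutorial's verbatim code):

    import albumentations as A
    from albumentations.pytorch import ToTensorV2

    # A.Compose returns a callable; passing anything else (e.g. an int by
    # position) produces "'int' object is not callable" inside __getitem__.
    train_transform = A.Compose([
        A.Resize(height=160, width=240),
        A.Normalize(mean=(0.0, 0.0, 0.0), std=(1.0, 1.0, 1.0), max_pixel_value=255.0),
        ToTensorV2(),
    ])

    # dataset = SegmentationDataset(image_dir, mask_dir, transform=train_transform)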

    opened by sautami26 3
  • Model overfitting for 20 classes ( PASCAL VOC 2007 + 2012 dataset )

    Hi Aladdin, thank you so much for your video and explanations.
    I am currently doing a project on object detection, and your video helped me a lot. Thank you once again.

    I have a problem with overfitting in the model. I am getting a test mAP of 10% and a train mAP of 90%. I trained on PASCAL VOC 2007 + VOC 2012 data. I have tried every way I could think of to reduce the overfitting (dropout layer, weight decay, adding 5k more images, data augmentation, using pretrained extraction weights, step LR, etc.), keeping everything as close as possible to the original paper. It's been a month now and I am still not able to figure out why. Could you please help me? (I have used your code for everything.) It would be a great help if you could suggest something with respect to your code.

    P.S.: I used the same code modified for 2 classes and 5 classes, and I got good results: 2 classes: test mAP 50%; 5 classes: test mAP 60%.

    opened by 100daggers 3
  • Expected object of scalar type Long but got scalar type Float for sequence element 1 in sequence argument at position #1 'tensors'

    Hi,

    I rewrote the code along with watching your tutorial. When I run the training procedure, I get the following error:

    Traceback (most recent call last):
      File "/home/niko/programs/pycharm-community-2019.2.1/helpers/pydev/pydevd.py", line 1415, in _exec
        pydev_imports.execfile(file, globals, locals)  # execute the script
      File "/home/niko/programs/pycharm-community-2019.2.1/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
        exec(compile(contents+"\n", file, 'exec'), glob, loc)
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/train_original.py", line 147, in <module>
        main()
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/train_original.py", line 126, in main
        train_loader, model, iou_threshold=0.5, threshold=0.4
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/utils.py", line 255, in get_bboxes
        true_bboxes = cellboxes_to_boxes(labels)
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/utils.py", line 322, in cellboxes_to_boxes
        converted_pred = convert_cellboxes(out).reshape(out.shape[0], S * S, -1)
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/utils.py", line 315, in convert_cellboxes
        (predicted_class, best_confidence, converted_bboxes), dim=-1
    RuntimeError: Expected object of scalar type Long but got scalar type Float for sequence element 1 in sequence argument at position #1 'tensors'
    

    Then I tried to copy the exact same code from your train.py and dataset.py files, but the error still persisted. I guess __getitem__ in dataset.py should return long instead of float types for the bounding boxes. Do you know what might be the cause of the error above?
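
    Editor's note, hedged: the more common fix goes the other way, casting the Long class indices to float at the torch.cat in convert_cellboxes (names follow the traceback above; treat this as an assumption, not a verified patch):

    converted_preds = torch.cat(
        (predicted_class.float(), best_confidence, converted_bboxes), dim=-1
    )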

    opened by nikogamulin 3
  • CNN Architecture implementations in Tensorflow

    Your YT contents are awesome!! Especially those CNN-architectures-from-scratch videos. I myself am trying to learn DL, and your videos helped me understand the concepts better when I was reading the academic papers.

    Not very long ago, I started implementing some of the popular CNN architectures with Tensorflow 2.0 in my repo, and I think it would be good to PR those here so the rest can check out both the PyTorch and Tensorflow implementations.

    I am not super good with Tensorflow, so if there's something that can be improved, feel free to give comments.

    I have implemented

    • AlexNet
    • GoogLeNet / Inception V1
    • LeNet5
    • ResNet
    • VGGNet
    opened by the-robot 3
  • error when running code

    When running your code I get this error:

    RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select

    Any tips? Thank you!
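
    For reference, the usual cause of this error is a device mismatch: the model (or an nn.Embedding inside it) lives on CUDA while the index tensor fed to it stays on the CPU. A minimal self-contained sketch:

    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    embed = nn.Embedding(num_embeddings=100, embedding_dim=16).to(device)

    tokens = torch.randint(0, 100, (4, 7))  # created on the CPU
    out = embed(tokens.to(device))          # indices moved to the model's device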

    opened by javismiles 3
  • Why do you need to slice captions?

    https://github.com/AladdinPerzon/Machine-Learning-Collection/blob/9f3b2a82c0b8b6ba8c16293d8118d8d8c888f8e6/ML/Pytorch/more_advanced/image_captioning/train.py#L82

    Hello, thank you for your version of the image captioning solution! However, one thing is not clear to me. Why would you do that slice? If I understood correctly, captions in that case is a padded batch of captions, so it looks like:

    1 1 1 1 1 2
    1 1 1 2 0 0
    1 1 1 1 2 0

    and if you make a slice [:, :-1] that would be:

    1 1 1 1 1
    1 1 1 2 0
    1 1 1 1 2

    (1 is any token, 2 is <EOS> and 0 is padding)

    So if you want to get rid of the <EOS> tokens, that would not work.
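
    A small sketch of the slice in question, using the toy batch above (1 = any word, 2 = <EOS>, 0 = <PAD>): dropping the last column removes <EOS> only from the longest caption, and a padding zero from the others.

    import torch

    captions = torch.tensor([
        [1, 1, 1, 1, 1, 2],
        [1, 1, 1, 2, 0, 0],
        [1, 1, 1, 1, 2, 0],
    ])
    print(captions[:, :-1])
    # tensor([[1, 1, 1, 1, 1],
    #         [1, 1, 1, 2, 0],
    #         [1, 1, 1, 1, 2]])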

    opened by concrete13377 3
  • Image captioning: all training example output is <UNK>

    When training for image captioning, in the first epoch, the print_examples function returns the following

    Example 1 CORRECT: Dog on a beach by the ocean
    Example 1 OUTPUT: chasing stores mossy participates player brush museum phone handle drops native punk buried alongside cellphones very bags hairy paintball mouths mats markings volleyball backpacker dressed backpacks legos light bitten various pillow singing attempt superman weather try gnawing ceiling shaped tree someone phone scarf crouching courtyard cows indoors seeds hits hits
    Example 2 CORRECT: Child holding red frisbee outdoors
    Example 2 OUTPUT: chasing stores mossy bushes tags hardwood tulips chin lining gnawing taken tinkerbell both kind cable tile colorfully shepherd dangling skinny cake scene tattooed swimmer beverage come points come 23 wheels puppy scenic ring snake one piggy snowboard camera slightly fireworks nature try gnawing ceiling shaped tree someone phone scarf crouching
    Example 3 CORRECT: Bus driving by parked cars
    Example 3 OUTPUT: trucks each that cheerleader hawk jeeps formal ring skeleton forested various plastic goofy snowmobile dances very wearing seaweed cards kick works baseman past daughter football waterfalls bathroom motorcycle bar bikers phone following kid ring past converse nose nose college wide skyscraper rough holding bending seeds broken kissing follows pouring pouring
    Example 4 CORRECT: A small boat in the ocean
    Example 4 OUTPUT: chasing stores mossy bushes tags hardwood tulips chin lining gnawing taken tinkerbell both kind cable tile colorfully shepherd dangling skinny cake scene tattooed swimmer beverage come points come 23 wheels puppy scenic ring snake one piggy snowboard camera slightly fireworks nature try gnawing ceiling shaped tree someone phone scarf crouching
    Example 5 CORRECT: A cowboy riding a horse in the desert
    Example 5 OUTPUT: avoid windsurfing alongside roof between enjoys dimly artists artists others biting upon holding silhouette ascending apples curve tennis o leaves gives dinner chasing picnic pack ceremony kayak kayak office festive hikes covered visible signs dancing construction construction when hiking pillow foot leotard about all pit between stool ear sports cigarette
    

    however, after the first epoch and later, the print_examples function returns:

    Example 1 CORRECT: Dog on a beach by the ocean                                  
    Example 1 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    Example 2 CORRECT: Child holding red frisbee outdoors
    Example 2 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    Example 3 CORRECT: Bus driving by parked cars
    Example 3 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    Example 4 CORRECT: A small boat in the ocean
    Example 4 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    Example 5 CORRECT: A cowboy riding a horse in the desert
    Example 5 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    

    I'm not sure what's going on.

    opened by tbass134 3
  • Issues with the YOLO V1 Loss Function

    I recently decided to try to make a YOLO V1 implementation as my first serious project, based on your guide, but doing all the pre-training and training of the full model myself. I have succeeded in making a sort-of-working model, though there are probably still some mistakes, as it is not optimal. For reference, my repository is here.

    Doing this led me to notice some issues with your implementation of the loss function:

    • Your target confidence (the tensor torch.flatten(exists_box * target[..., 20:21])) is going to be 1 for every cell where a box exists, and 0 for every cell where one does not. In fact target[..., 20:21] is the same thing as exists_box. This is not true to the paper, which instead asks that, in the case of a responsible predictor, the target confidence be equal to the IOU of the currently predicted box with the ground truth box. The correct target tensor is exists_box * iou_maxes.unsqueeze(3) (not tested working, but this is the right idea). There is actually currently an open pull request (#44) which would fix this.
    • Your no-object loss does not factor in non-responsible predictors which share a cell with a responsible predictor, which it should, as the "1_ij^noobj" from the paper will be 1 for these.
    • You set your MSE function with reduction='sum', but then do not normalize for batch size. This means that the loss scales linearly with the batch size, which results in much larger losses (forcing low learning rates), and is also an entanglement of hyperparameters, which is bad. The correct implementation is to calculate sum-squared error for each sample in the batch independently, then average them. To fix this, replace return loss with return loss / float(predictions.size()[0]) (you will have to use a larger learning rate, but this is a good thing!); see the sketch after this list.
    • Those flatten layers are totally unnecessary, or rather, they do nothing: torch MSE is smart enough to accept any two tensors of the same dimension.
    • In dataset.py, you have your width and height target values for each box calculated relative to the cell dimensions: width_cell, height_cell = (width * self.S, height * self.S,). This is incorrect; they are supposed to be relative to the dimensions of the entire image (even though x and y are relative to the dimensions of a cell!). The reason for this, as stated in the paper, is so that each element of [x, y, w, h] will be between 0.0 and 1.0. To fix this, just remove the multiplication by self.S. This will also need to be fixed on the other end, when you convert predicted labels back to boxes for visualization. This is really more about the data loading than the loss function, but because it unbalances the loss function it has the same sort of effect: failing to fix this causes mode collapse on object classification when you try to generalize the model.
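
    A minimal sketch of the batch normalization suggested in the third bullet: keep sum-squared error, then divide the total by the batch size so the loss no longer scales with it.

    import torch
    import torch.nn as nn

    mse = nn.MSELoss(reduction="sum")  # summed, as in the tutorial

    def batch_normalized_loss(predictions: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        loss = mse(predictions, target)           # sum-squared error over everything
        return loss / float(predictions.size(0))  # average over the batch dimension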

    Obviously your project is just about overfitting the model, and none of these issues are apparent when attempting to overfit. They do, however, cause serious issues when you are trying to train the whole thing. If you want to fix it, feel free to refer to my re-implementation of the loss function, which should be compatible with yours but is rewritten to mimic the paper's formula as closely as possible. Do bear in mind, though, that mine evidently isn't perfect either (I can't get my model stable under a 1e-2 learning rate, indicating a probable scaling mistake somewhere).

    opened by a-g-moore 0
  • StyleGAN - what's exactly wrong with it?

    I ran the code and it seems to work. Could you please clarify what's wrong with your implementation? Is it too slow, are the generated faces poor, or is it something else?

    opened by moneroexamples 0
  • warning: Embedding dir exists, did you set global_step for add_embedding()?

    I was doing the PyTorch TensorBoard tutorial. While running the pytorch_tensorboard_.py notebook file, I get this:

    warning: Embedding dir exists, did you set global_step for add_embedding()?

    Somebody else has also faced this (possibly in a different setting). I don't understand the cause and effect of this; any help?

    Thanks in advance!
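
    For reference, a minimal sketch of the usual fix: give add_embedding a distinct global_step per call so each embedding lands in its own subdirectory instead of colliding with an existing one.

    import torch
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter("runs/embedding_demo")
    features = torch.randn(100, 32)             # dummy feature vectors
    labels = [str(i % 10) for i in range(100)]  # one label per row
    writer.add_embedding(features, metadata=labels, global_step=0)
    writer.close()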

    opened by massisenergy 0
  • Error while training YOLOv3 on COCO dataset

    Training on the PASCAL VOC dataset works fine, but while training on the COCO dataset I get the following error:

    File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\albumentations\core\bbox_utils.py", line 417, in check_bbox raise ValueError(f"Expected {name} for bbox {bbox} to be in the range [0.0, 1.0], got {value}.") ValueError: Expected x_min for bbox (-0.0020920502092049986, 0.09853100000000004, 0.327091949790795, 0.681844, 0.0) to be in the range [0.0, 1.0], got -0.0020920502092049986.

    opened by michalt38 0
  • Query: DCGAN implementation saving results/sampling

    Thank you for the tutorial. I have followed it and coded along in parallel. I have been able to train the model; however, I wanted to know how to save samples from the model so that I can visualize the results of different model architectures. I request you to guide me. Regards, Prabhav

    P.S. It would be great if you could share how to use multiple GPUs in a single node for training.
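
    Not the tutorial's verbatim code, but a common sampling pattern: run a batch of noise through the trained generator and save an image grid with torchvision (gen, Z_DIM and device follow the tutorial's naming and are assumptions here).

    import torch
    from torchvision.utils import save_image

    with torch.no_grad():
        noise = torch.randn(64, Z_DIM, 1, 1, device=device)  # Z_DIM: assumed latent size
        fake = gen(noise)                                    # gen: the trained generator
        save_image(fake, "dcgan_samples.png", normalize=True)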

    opened by KomputerMaster64 0
Owner
Aladdin Persson
I'm a math geek who likes programming. Particularly interested in machine learning, algorithms and software development.