A simple command-line tool for text-to-image generation, using OpenAI's CLIP and a BigGAN.

Overview

Sample generations, captioned with the text prompts that produced them (images omitted here):

  • artificial intelligence
  • cosmic love and attention
  • fire in the sky
  • a pyramid made of ice
  • a lonely house in the woods
  • marriage in the mountains
  • lantern dangling from a tree in a foggy graveyard
  • a vivid dream
  • balloons over the ruins of a city
  • the death of the lonesome astronomer - by moirage
  • the tragic intimacy of the eternal conversation with oneself - by moirage
  • demon fire - by WiseNat

Big Sleep

Ryan Murdock has done it again, combining OpenAI's CLIP and the generator from a BigGAN! This repository wraps up his work so it is easily accessible to anyone who owns a GPU.

You will be able to have the GAN dream up images using natural language with a one-line command in the terminal.

Original notebook: Open In Colab

Simplified notebook: Open In Colab

Install

$ pip install big-sleep

Usage

$ dream "a pyramid made of ice"

Images will be saved to the directory from which the command is invoked.

Advanced

You can invoke this in code with

from big_sleep import Imagine

dream = Imagine(
    text = "fire in the sky",
    lr = 5e-2,
    save_every = 25,
    save_progress = True
)

dream()

You can now train on more than one phrase by using the delimiter "|"

Train on Multiple Phrases

In this example we train on three phrases:

  • an armchair in the form of pikachu
  • an armchair imitating pikachu
  • abstract
from big_sleep import Imagine

dream = Imagine(
    text = "an armchair in the form of pikachu|an armchair imitating pikachu|abstract",
    lr = 5e-2,
    save_every = 25,
    save_progress = True
)

dream()

Penalize certain prompts as well!

In this example we train on the three phrases from before, and penalize the phrases:

  • blur
  • zoom
from big_sleep import Imagine

dream = Imagine(
    text = "an armchair in the form of pikachu|an armchair imitating pikachu|abstract",
    text_min = "blur|zoom",
)
dream()

You can also set a new text prompt by using the .set_text() method

dream.set_text("a quiet pond underneath the midnight moon")

And reset the latents with .reset()

dream.reset()

To save the progression of images during training, you simply have to supply the --save-progress flag

$ dream "a bowl of apples next to the fireplace" --save-progress --save-every 100

Due to the class-conditioned nature of the GAN, Big Sleep often steers off the manifold into noise. You can use a flag to save the highest-scoring image (per the CLIP critic) to {filepath}.best.png in your folder.

$ dream "a room with a view of the ocean" --save-best

Experimentation

You can restrict the number of BigGAN classes that Big Sleep may use with the --max-classes flag, as follows (e.g. 15 classes). This may lead to extra stability during training, at the cost of some expressivity.

$ dream 'a single flower in a withered field' --max-classes 15

Alternatives

Deep Daze - CLIP and a deep SIREN network

Citations

@misc{unpublished2021clip,
    title  = {CLIP: Connecting Text and Images},
    author = {Alec Radford and Ilya Sutskever and Jong Wook Kim and Gretchen Krueger and Sandhini Agarwal},
    year   = {2021}
}
@misc{brock2019large,
    title   = {Large Scale GAN Training for High Fidelity Natural Image Synthesis}, 
    author  = {Andrew Brock and Jeff Donahue and Karen Simonyan},
    year    = {2019},
    eprint  = {1809.11096},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG}
}
Comments
  • Possible bug in latent vector loss calculation?

    I'm confused by this and wondering if it could be a bug. It seems as though latents is of size (32, 128), which means that for array in latents: iterates 32 times. However, the results from these iterations aren't stored anywhere, so at best they are a waste of time and at worst they cause a miscalculation. Perhaps the intention was to accumulate the kurtoses and skews for each array in latents, and then compute lat_loss using all the accumulated values?

    for array in latents:
        mean = torch.mean(array)
        diffs = array - mean
        var = torch.mean(torch.pow(diffs, 2.0))
        std = torch.pow(var, 0.5)
        zscores = diffs / std
        skews = torch.mean(torch.pow(zscores, 3.0))
        kurtoses = torch.mean(torch.pow(zscores, 4.0)) - 3.0
    
    lat_loss = lat_loss + torch.abs(kurtoses) / num_latents + torch.abs(skews) / num_latents
    

    Occurs at https://github.com/lucidrains/big-sleep/blob/main/big_sleep/big_sleep.py#L211
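
    A hedged sketch of the fix the commenter seems to have in mind: accumulate the skews and kurtoses over all 32 latent vectors before adding them to the loss (this is the suggested interpretation, not the repository's current code).

    skews_total = 0.
    kurtoses_total = 0.
    for array in latents:
        mean = torch.mean(array)
        diffs = array - mean
        var = torch.mean(torch.pow(diffs, 2.0))
        std = torch.pow(var, 0.5)
        zscores = diffs / std
        # accumulate instead of overwriting on every iteration
        skews_total = skews_total + torch.mean(torch.pow(zscores, 3.0))
        kurtoses_total = kurtoses_total + torch.mean(torch.pow(zscores, 4.0)) - 3.0

    lat_loss = lat_loss + torch.abs(kurtoses_total) / num_latents + torch.abs(skews_total) / num_latents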

    opened by walmsley 14
  • Failure on first epoch

    It opens the folder where the picture should be saved, but this error shows up immediately:

    RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasGemmEx( handle, opa, opb, m, n, k, &falpha, a, CUDA_R_16F, lda, b, CUDA_R_16F, ldb, &fbeta, c, CUDA_R_16F, ldc, CUDA_R_32F, CUBLAS_GEMM_DFALT_TENSOR_OP)

    torch version: 1.7.1; torch.cuda.is_available() == True

    What am I missing?
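
    This class of CUBLAS failure can indicate a mismatch between the installed torch build and the local GPU/driver setup. A minimal diagnostic sketch (not part of big-sleep) to see what the environment actually reports:

    import torch

    print(torch.__version__)              # installed torch build
    print(torch.version.cuda)             # CUDA version that build was compiled against
    print(torch.cuda.get_device_name(0))  # the GPU torch will use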

    opened by nhalsteadvt 11
  • Generated images are completely black?! 😡 What am I doing wrong?

    Hello, I am on Windows 10, and my gpu is a PNY Nvidia GTX 1660 TI 6 Gb. I installed Big Sleep like so:

    • conda create --name bigsleep python=3.8
    • conda activate bigsleep
    • conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch (as per Pytorch website instructions)
    • pip install big-sleep

    The problem is that when I launch dream --text="a beautiful mind" --num-cutouts=64 --image-size=512 --save_every=10 --seed=12345 the generated images are completely black (although the inference process seems to run nicely and without errors).

    Things I've tried:

    • installing previous pytorch version with conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
    • removing the big-sleep conda environment completely and recreating it anew
    • uninstalling nvidia drivers and performing a new clean driver install (I tried both Nvidia Studio drivers and Nvidia Game Ready drivers)
    • uninstalling and reinstalling Conda completely

    But nothing helped... and at this point I don't know what else to try...

    The only interesting piece of information I could gather is that, for some reason, this problem also happens with another text-to-image project called v-diffusion-pytorch, where, similar to Big Sleep, the inference process appears to run correctly but the generated images are all black.

    I think there must be some simple detail I'm overlooking... which is making me go insane... 😡 Please let me know if you think you can help! Thanks!
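
    For what it's worth, a quick way to confirm that an output really is all black rather than just extremely dark (a minimal sketch; the file name below is only an example):

    import numpy as np
    from PIL import Image

    img = np.array(Image.open("a_beautiful_mind.png"))
    print(img.min(), img.max())  # an all-black image prints 0 0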

    opened by illtellyoulater 7
  • dream.set_text("a quiet pond underneath the midnight moon") = object has no attribute 'set_text'

    Looks like def set_text is now def set_clip_encoding? Also, various settings aren't honoured using dream.set_clip_encoding then dream (such as save_best = True). Would it be possible / make sense to retain settings from the previous dream?
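
    A minimal sketch of the renamed call, assuming it accepts the new prompt through a text argument in the same way the old .set_text() took the prompt string:

    dream.set_clip_encoding(text = "a quiet pond underneath the midnight moon")
    dream()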

    opened by nerdyrodent 6
  • Overwrite Progress Bars instead of printing each iteration

    Update tqdm calls to overwrite a single line. This leads to less console spam. Also use tqdm for the image update notifications (I think the math on this is wrong, please check).

    opened by DrJKL 6
  • Can't install because of conflicting dependencies

    OS: Arch GNU/Linux. Python: 3.10.1. Command run: pip3 install big-sleep. Error:

      Downloading big_sleep-0.0.1-py3-none-any.whl (1.4 MB)
         |████████████████████████████████| 1.4 MB 9.1 MB/s
    ERROR: Cannot install big-sleep==0.0.1, big-sleep==0.0.2, big-sleep==0.1.0, big-sleep==0.1.1, big-sleep==0.1.2, big-sleep==0.1.4, big-sleep==0.2.0, big-sleep==0.2.2, big-sleep==0.2.3, big-sleep==0.2.4, big-sleep==0.2.5, big-sleep==0.2.6, big-sleep==0.2.7, big-sleep==0.2.8, big-sleep==0.2.9, big-sleep==0.3.0, big-sleep==0.3.1, big-sleep==0.3.2, big-sleep==0.3.3, big-sleep==0.3.4, big-sleep==0.3.5, big-sleep==0.3.6, big-sleep==0.3.7, big-sleep==0.3.8, big-sleep==0.4.0, big-sleep==0.4.1, big-sleep==0.4.10, big-sleep==0.4.11, big-sleep==0.4.2, big-sleep==0.4.3, big-sleep==0.4.4, big-sleep==0.4.5, big-sleep==0.4.6, big-sleep==0.4.7, big-sleep==0.4.8, big-sleep==0.4.9, big-sleep==0.5.0, big-sleep==0.5.1, big-sleep==0.5.2, big-sleep==0.5.3, big-sleep==0.6.0, big-sleep==0.6.1, big-sleep==0.6.2, big-sleep==0.7.0, big-sleep==0.7.1, big-sleep==0.7.2, big-sleep==0.8.0, big-sleep==0.8.1, big-sleep==0.8.2, big-sleep==0.8.3, big-sleep==0.8.4 and big-sleep==0.8.5 because these package versions have conflicting dependencies.
    
    The conflict is caused by:
        big-sleep 0.8.5 depends on torchvision>=0.8.2
        big-sleep 0.8.4 depends on torchvision>=0.8.2
        big-sleep 0.8.3 depends on torchvision>=0.8.2
        big-sleep 0.8.2 depends on torchvision>=0.8.2
        big-sleep 0.8.1 depends on torchvision>=0.8.2
        big-sleep 0.8.0 depends on torchvision>=0.8.2
        big-sleep 0.7.2 depends on torchvision>=0.8.2
        big-sleep 0.7.1 depends on torchvision>=0.8.2
        big-sleep 0.7.0 depends on torchvision>=0.8.2
        big-sleep 0.6.2 depends on torchvision>=0.8.2
        big-sleep 0.6.1 depends on torchvision>=0.8.2
        big-sleep 0.6.0 depends on torchvision>=0.8.2
        big-sleep 0.5.3 depends on torchvision>=0.8.2
        big-sleep 0.5.2 depends on torchvision>=0.8.2
        big-sleep 0.5.1 depends on torchvision>=0.8.2
        big-sleep 0.5.0 depends on torchvision>=0.8.2
        big-sleep 0.4.11 depends on torchvision>=0.8.2
        big-sleep 0.4.10 depends on torchvision>=0.8.2
        big-sleep 0.4.9 depends on torchvision>=0.8.2
        big-sleep 0.4.8 depends on torchvision>=0.8.2
        big-sleep 0.4.7 depends on torchvision>=0.8.2
        big-sleep 0.4.6 depends on torchvision>=0.8.2
        big-sleep 0.4.5 depends on torchvision>=0.8.2
        big-sleep 0.4.4 depends on torchvision>=0.8.2
        big-sleep 0.4.3 depends on torchvision>=0.8.2
        big-sleep 0.4.2 depends on torchvision>=0.8.2
        big-sleep 0.4.1 depends on torchvision>=0.8.2
        big-sleep 0.4.0 depends on torchvision>=0.8.2
        big-sleep 0.3.8 depends on torchvision>=0.8.2
        big-sleep 0.3.7 depends on torchvision>=0.8.2
        big-sleep 0.3.6 depends on torchvision>=0.8.2
        big-sleep 0.3.5 depends on torchvision>=0.8.2
        big-sleep 0.3.4 depends on torchvision>=0.8.2
        big-sleep 0.3.3 depends on torchvision>=0.8.2
        big-sleep 0.3.2 depends on torchvision>=0.8.2
        big-sleep 0.3.1 depends on torchvision>=0.8.2
        big-sleep 0.3.0 depends on torchvision>=0.8.2
        big-sleep 0.2.9 depends on torchvision>=0.8.2
        big-sleep 0.2.8 depends on torchvision>=0.8.2
        big-sleep 0.2.7 depends on torch>=1.7.1
        big-sleep 0.2.6 depends on torch>=1.7.1
        big-sleep 0.2.5 depends on torch>=1.7.1
        big-sleep 0.2.4 depends on torch>=1.7.1
        big-sleep 0.2.3 depends on torch>=1.7.1
        big-sleep 0.2.2 depends on torch>=1.7.1
        big-sleep 0.2.0 depends on torch>=1.7.1
        big-sleep 0.1.4 depends on torch>=1.7.1
        big-sleep 0.1.2 depends on torch>=1.7.1
        big-sleep 0.1.1 depends on torch>=1.7.1
        big-sleep 0.1.0 depends on torch>=1.7.1
        big-sleep 0.0.2 depends on torch>=1.7.1
        big-sleep 0.0.1 depends on torch>=1.7.1
    
    opened by ThatOneCalculator 5
  • Method 'forward' is not defined

    I installed the module via

    $ pip install deep-daze

    and just tried the provided example with

    $ imagine "a house in the forest"

    but after it loaded something for a few minutes (the first time I ran the command) it throws this error:

    Traceback (most recent call last):

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-2-32b6fbd8f807> in <module>()
    ----> 1 from deep_daze import Imagine
         2 
         3 imagine = Imagine(
         4     text = 'cosmic love and attention',
         5     num_layers = 24,
    
    E:\Anaconda\lib\site-packages\deep_daze\__init__.py in <module>()
    ----> 1 from deep_daze.deep_daze import DeepDaze, Imagine
    
    E:\Anaconda\lib\site-packages\deep_daze\deep_daze.py in <module>()
        37 signal.signal(signal.SIGINT, signal_handling)
        38 
    ---> 39 perceptor, normalize_image = load()
        40 
        41 # Helpers
    
    E:\Anaconda\lib\site-packages\deep_daze\clip.py in load()
       190                     node.copyAttributes(device_node)
       191 
    --> 192     model.apply(patch_device)
       193     patch_device(model.encode_image)
       194     patch_device(model.encode_text)
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       471         """
       472         for module in self.children():
    --> 473             module.apply(fn)
       474         fn(self)
       475         return self
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       471         """
       472         for module in self.children():
    --> 473             module.apply(fn)
       474         fn(self)
       475         return self
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       471         """
       472         for module in self.children():
    --> 473             module.apply(fn)
       474         fn(self)
       475         return self
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       471         """
       472         for module in self.children():
    --> 473             module.apply(fn)
       474         fn(self)
       475         return self
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       471         """
       472         for module in self.children():
    --> 473             module.apply(fn)
       474         fn(self)
       475         return self
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       471         """
       472         for module in self.children():
    --> 473             module.apply(fn)
       474         fn(self)
       475         return self
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       472         for module in self.children():
       473             module.apply(fn)
    --> 474         fn(self)
       475         return self
       476 
    
    E:\Anaconda\lib\site-packages\deep_daze\clip.py in patch_device(module)
       181 
       182     def patch_device(module):
    --> 183         graphs = [module.graph] if hasattr(module, "graph") else []
       184         if hasattr(module, "forward1"):
       185             graphs.append(module.forward1.graph)
    
    E:\Anaconda\lib\site-packages\torch\jit\_script.py in graph(self)
       447             ``forward`` method. See :ref:`interpreting-graphs` for details.
       448             """
    --> 449             return self._c._get_method("forward").graph
       450 
       451         @property
    
    RuntimeError: Method 'forward' is not defined.
    

    My system: Windows 10, GeForce GTX 1060 6GB, pytorch 1.8.0+cu111, python 3.7.0.

    opened by Jeffrey0Liao 4
  • Question about Colab environments affecting results

    This isn't really an issue, but I've been using this package to morph from one text input to the next by updating the input while it's running and storing all intermediate output. I'm using variations of the notebook I've checked in here: https://github.com/lots-of-things/Story2Hallucination

    When doing this I've gotten to really see inside how the algorithm converges on its solution. And I've noticed that there are at least two distinct modes. Sometimes, the algorithm quickly converges and sticks and other times the algorithm wobbles around an image and then much more easily warps to something new.

    Here are two images with the same input and framerate: jerky/sticky output

    wobbly/warpy output

    If these two modes happened randomly I'd understand it, but here is the really strange part: the behavior is consistent within the same Colab environment. If I pull up a Colab env and run the notebook and it goes the sticky way, then no matter how many times I restart the run, it will always be sticky. I have to factory reset the env to get it to change. And if it starts out the wobbly way, it'll stay wobbly.

    It sounds bizarre but is there any reason this would be possible? I know there are different CUDA environments, but not sure if/why that would make it so different.

    opened by stedn 4
  • Backslash character in file names.

    I'm now getting file names containing "\" which were previously removed automatically. This is causing havoc when I attempt to use Windows to read files over the network which were created on my Linux system.
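
    Until the sanitization is restored upstream, one possible workaround is to rename the outputs yourself; a minimal sketch (safe_filename is a hypothetical helper, not part of big-sleep):

    import re

    def safe_filename(name):
        # replace backslashes and other characters Windows rejects with underscores
        return re.sub(r'[\\/:*?"<>|]', '_', name)

    print(safe_filename('a vivid \\ dream.png'))  # -> 'a vivid _ dream.png'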

    opened by pmonck 3
  • Image saving error

    Hello, I am using Big Sleep straight from the command line on Windows with no extra flags, but for some reason whenever I get to image update 2 or 5 (most times 2) it errors out with OSError: [Errno 22] Invalid argument. I couldn't find any fix that would work in this situation. Any and all help with solving this error is greatly appreciated.

    opened by Wemmons831 2
  • Not Working with Nemo File Manager

    Installed successfully on Arch 5.11.16 with pip install big-sleep and it worked great, but after a reboot it fails to generate images. After running any command (prime-run dream --num-cutouts=25 --save-progress --save-every 100 "whatever" or just dream --num-cutouts 25 "whatever") it opens the file manager (Nemo) to the directory, but no images are generated. It sits there, still using RAM but no GPU or CPU. Previously I was able to get it to work by reinstalling torch and big-sleep, but that doesn't solve the problem anymore.

    Edit: After letting it sit for a few minutes, cancelling the command with CTRL+C shows "detecting keyboard interrupt, gracefully exiting" with empty progress bars before exiting.

    Running with the --open-folder False option fixes this

    opened by tilktilk5 2
  • Man in Sky

    A man who is levitating above the city, towards the clouds, part of his body is disintegrating and turning into binary numbers. At the bottom of the sky there is a transparent, blinding hand that receives it and welcomes it

    opened by FrankLaheraOcallaghan 0
  • Error: tensor is not a torch image

    I tried to run several examples on GCP with torch installed correctly and a GPU.

    dream = Imagine(
        text = "an armchair in the form of pikachu",
    )
    dream()
    

    Sadly, I get this error:

    TypeError: tensor is not a torch image.
    

    Maybe someone else has encountered that?

    opened by dokato 0
  • How to use the CUDA to big-sleep?

    I'm trying to run big-sleep on my Ubuntu machine, which has CUDA installed. Big-sleep runs normally, but while it is running my GPU is not used and CPU usage reaches 100%. For a 512*512 image I need to train for 6 hours, which is too long.
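
    A quick way to check whether PyTorch can actually see the GPU (a minimal diagnostic sketch, not part of big-sleep):

    import torch

    print(torch.cuda.is_available())  # False means torch cannot see the GPU, which would explain the CPU-only behaviour
    print(torch.__version__)          # a "+cpu" suffix indicates a CPU-only wheel
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))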

    opened by andy1994220 0
  • Does it work on mac ?

    Hello, newbie here. I'm on a Mac M1. Installation worked with "pip install big-sleep", but then when I try "dream..." it says "command not found". Does anyone know why, and how to make it work? Thanks a lot!

    opened by NtnmrnNtnmrn 6
  • Option for initializing the generation with an image file?

    Hi @lucidrains, @afiaka87, @DrJKL, @enricoros, and all other contributors to this amazing project.

    I am trying to figure out the following: I saw there is an "--img" flag for using an image as prompt, but is there a way to use an image as initializer for the generation?

    If not, then do you have any plans to implement it? I'd say this is probably one of the most important features still missing from Big Sleep and... I would be happy to contribute to this myself, but this is not my field (yet!) so I honestly have no idea where to start...

    But! If you think this could be similar to how other projects implemented it (for example deep daze), then I could take a look at it and try to make my way around it... and I could probably get somewhere...

    So yeah, let me know what you think!

    opened by illtellyoulater 15