⭐ If you are also interested in open-ended text generation and would like to see more details of our contrastive search decoding method, please refer to our SimCTG [paper] and [repo].
⭐ Replicate has provided a great web [demo] of MAGIC that is super easy to use and interact with. Check it out!
Generative language models (LMs) such as GPT-2/3 can be prompted to generate text of remarkable quality. While they are designed for text-prompted generation, it remains an open question how the generation process can be guided by modalities beyond text, such as images. In this work, we propose a training-free framework, called MAGIC (iMAge-Guided text generatIon with CLIP), for plugging visual controls into the generation process and enabling LMs to perform multimodal tasks (e.g., image captioning) in a zero-shot manner. MAGIC is a simple yet efficient plug-and-play framework that directly combines an off-the-shelf LM (i.e., GPT-2) and an image-text matching model (i.e., CLIP) for image-grounded text generation. During decoding, MAGIC influences the generation of the LM by introducing a CLIP-induced score, called the magic score, which regularizes the generated result to be semantically related to a given image while remaining coherent with the previously generated context. Notably, the proposed decoding scheme does not involve any gradient update operation and is therefore computationally efficient. On the challenging task of zero-shot image captioning, MAGIC outperforms the state-of-the-art method by notable margins with a nearly 27 times decoding speedup. MAGIC is a flexible framework and is theoretically compatible with any text generation task that incorporates image grounding. In the experiments, we showcase that it is also capable of performing visually grounded story generation given both an image and a text prompt.
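For readers who want to see the decoding rule more concretely, the token selection at step t can be written as follows (notation lightly adapted from the paper: $V^{(k)}$ is the set of top-$k$ candidate tokens predicted by the LM, $s(\cdot,\cdot)$ is the token-level cosine similarity used by contrastive search's degeneration penalty, $[x_{<t}\!:\!v]$ is the prefix extended with candidate $v$, and $\mathrm{CLIP}(\mathcal{I}, x)$ is the CLIP image-text similarity):

```latex
x_t = \underset{v \in V^{(k)}}{\arg\max} \Big\{
        (1-\alpha)\, p_{\theta}(v \mid x_{<t})
        \;-\; \alpha \max_{1 \le j \le t-1} s\big(h_v, h_{x_j}\big)
        \;+\; \beta\, f_{\text{magic}}\big(v \mid \mathcal{I}, x_{<t}, V^{(k)}\big)
      \Big\},
\qquad
f_{\text{magic}}\big(v \mid \mathcal{I}, x_{<t}, V^{(k)}\big)
    = \frac{\exp\big(\mathrm{CLIP}(\mathcal{I}, [x_{<t}:v])\big)}
           {\sum_{z \in V^{(k)}} \exp\big(\mathrm{CLIP}(\mathcal{I}, [x_{<t}:z])\big)}
```

Setting $\beta = 0$ recovers vanilla contrastive search, which is the image-free baseline used in the story generation comparison below.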
2. News:
[2022/05/06] MAGIC is publicly released!
3. Citation:
If you find our paper and resources useful, please kindly leave a star and cite our papers. Thanks!
```bibtex
@article{DBLP:journals/corr/abs-2205-02655,
  author     = {Yixuan Su and Tian Lan and Yahui Liu and Fangyu Liu and Dani Yogatama and Yan Wang and Lingpeng Kong and Nigel Collier},
  title      = {Language Models Can See: Plugging Visual Controls in Text Generation},
  journal    = {CoRR},
  volume     = {abs/2205.02655},
  year       = {2022},
  url        = {https://doi.org/10.48550/arXiv.2205.02655},
  doi        = {10.48550/arXiv.2205.02655},
  eprinttype = {arXiv},
  eprint     = {2205.02655},
  timestamp  = {Wed, 11 May 2022 17:29:40 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2205-02655.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}

@article{DBLP:journals/corr/abs-2202-06417,
  author     = {Yixuan Su and Tian Lan and Yan Wang and Dani Yogatama and Lingpeng Kong and Nigel Collier},
  title      = {A Contrastive Framework for Neural Text Generation},
  journal    = {CoRR},
  volume     = {abs/2202.06417},
  year       = {2022},
  url        = {https://arxiv.org/abs/2202.06417},
  eprinttype = {arXiv},
  eprint     = {2202.06417},
  timestamp  = {Fri, 18 Feb 2022 12:23:53 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2202-06417.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
5. Zero-Shot Image Captioning:
5.1. Implementation of Experiments:
To ensure the reproducibility of our work, we provide all related resources to implement our experiments on the task of zero-shot image captioning. Please find more details [here].
5.2. Example Usage of Magic Search:
In the following, we illustrate how to perform zero-shot image captioning with magic search. Specifically, we show how to generate the results as shown in our case study in the paper.
To generate the caption of an example image, we first load the image as:
```python
from PIL import Image                 # to load images
from IPython.display import display   # to display images

image_name_list = ['COCO_val2014_000000336777.jpg', 'COCO_val2014_000000182784.jpg', 'COCO_val2014_000000299319.jpg', 'COCO_val2014_000000516750.jpg',
                   'COCO_val2014_000000207151.jpg', 'COCO_val2014_000000078707.jpg', 'COCO_val2014_000000027440.jpg', 'COCO_val2014_000000033645.jpg',
                   'COCO_val2014_000000348905.jpg', 'COCO_val2014_000000545385.jpg', 'COCO_val2014_000000210032.jpg', 'COCO_val2014_000000577526.jpg']

index = 1
'''
   You can easily reproduce all results shown in our case study (index from 0 to 3)
   and the results in the appendix (index from 4 to 11).
'''
image_path = r'./image_captioning/example_images/' + image_name_list[index]
image_instance = Image.open(image_path)
display(image_instance)
```
5.2.5. Zero-Shot Image Captioning with Magic Search:
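The code in this subsection assumes that the captioning language model (generation_model), the CLIP image-text matching model (clip), and the start-token ids (input_ids) have already been loaded; the exact loading code is part of the resources linked in Section 5.1. As a rough, hedged sketch of that kind of setup with generic Hugging Face checkpoints (the names 'gpt2' and 'openai/clip-vit-base-patch32' are placeholders, not our adapted checkpoints, and the repo wraps such models with its own classes):

```python
# Hedged setup sketch only -- placeholder checkpoints, not the repo's exact code.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer, CLIPModel, CLIPProcessor

lm_tokenizer = GPT2Tokenizer.from_pretrained('gpt2')                           # placeholder LM checkpoint
lm = GPT2LMHeadModel.from_pretrained('gpt2').eval()                            # MAGIC uses a GPT-2 adapted on MSCOCO caption text
clip_model = CLIPModel.from_pretrained('openai/clip-vit-base-patch32').eval()  # CLIP provides the magic score
clip_processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32')

# the decoder starts from a single start token of shape (1, 1)
input_ids = torch.LongTensor([lm_tokenizer.bos_token_id]).view(1, -1)
```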
Now, let's generate the image caption with magic search!
```python
'''
   setup the configurations of magic search
      k: the k in magic search
      alpha: the alpha in magic search
      beta: the beta in magic search
      decoding_len: the number of tokens to generate
'''
k, alpha, beta, decoding_len = 45, 0.1, 2.0, 16
eos_token = '<|endoftext|>'
output = generation_model.magic_search(input_ids, k, alpha, decoding_len, beta, image_instance, clip, 60)
print(output)
'''
   A large cow standing in a street stall.
'''
```
5.2.6. Reproduce Our Results in the Paper:
If you would like to reproduce all the results shown in the case study and appendix of our paper, you can run this demo file as
```
python image_caption_demo.py
```
6. Visually Grounded Story Generation:
6.1. Implementation of Experiments:
To ensure the reproducibility of our work, we provide all related resources to implement our experiments on the task of visually grounded story generation. Please find more details [here].
6.2. Example Usage of Magic Search:
In the following, we illustrate how to perform visually grounded story generation with magic search. Specifically, we show how to generate the results as shown in our case study in the paper.
6.2.1. Load Language Model:
We first load the language model and prepare the story title.
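The generation step in Section 6.2.3 expects a generation_model that exposes magic_search / fast_contrastive_search, plus the story title encoded as a tensor of token ids (title_ids). A minimal, hedged sketch of the title preparation (the 'gpt2' tokenizer below is a generic placeholder; the repo ships its own fine-tuned story-generation checkpoint and model wrapper):

```python
# Hedged sketch: encode the story title as token ids for the decoder.
# 'gpt2' is a placeholder; the repo provides its own story-generation model and wrapper.
import torch
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
title = 'Ice Cream Tasting <|endoftext|>'                         # story title used in this example
title_ids = torch.LongTensor(tokenizer.encode(title)).view(1, -1)
```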
6.2.2. Get the Image Related to the Story Title:
Next, let's get the images that are related to the story title. We provide two ways of doing it, as shown below:
6.2.2.1. Retrieve from Image Index:
The first way is to retrieve the images from a constructed image index. Before running the following commands, please make sure you have built the image index from scratch as described [here] or downloaded our provided image index as described [here].
After the image index is ready, we can load it and retrieve the images most related to the story title as
```python
image_name_list, image_instance_list = index.search_image(title, top_k=1)
'''
   image_name_list: the list of names of the retrieved images
   image_instance_list: the list of images that we retrieve
'''
```
Let's take a look at the retrieved image:
```python
from IPython.display import display
# display the top-1 image
display(image_instance_list[0])
```
6.2.2.2. Directly Load Image:
Alternatively, if you have not prepared the image index, we have provided the example image in the repo, and you can directly load it.
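A minimal sketch of loading the example image directly with PIL (the path and file name below are placeholders for the example image shipped in the repo):

```python
# Hedged sketch -- the path and file name are placeholders, not the repo's actual example image.
from PIL import Image

image_path = r'./story_generation/example_images/example_image.jpg'   # placeholder path
image_instance_list = [Image.open(image_path)]   # keep the same list interface as the retrieval path above
```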
6.2.3. Visually Grounded Story Generation with Magic Search:
[Note] Recall that, in this example, our story title is 'Ice Cream Tasting <|endoftext|>'.
Now, let's generate the story conditioned on the retrieved image
```python
from IPython.display import display

k, alpha, beta, decoding_len = 5, 0.6, 0.15, 100
'''
   The k, alpha, beta correspond to the k, alpha, beta in magic search.
'''
image_instance = image_instance_list[0]
eos_token = r'<|endoftext|>'
output, _ = generation_model.magic_search(title_ids, k, alpha, decoding_len, beta, image_instance, clip, 60, eos_token)
_, generated_story = generation_model.parse_generated_result(output, num_of_sentences_to_keep=5)
print(generated_story)
display(image_instance)
'''
   My family went to a ice cream shop. They ordered three flavors of ice cream. The first one was
   strawberry, the second was chocolate, and the third was orange. I was excited to try all three
   flavors. It was very good and I had a great time at the ice cream shop.
'''
```
Then, let's see what we can get using the vanilla contrastive search without the image grounding.
```python
k, alpha, decoding_len = 5, 0.6, 100
'''
   The k and alpha correspond to the k and alpha in contrastive search.
'''
eos_token = r'<|endoftext|>'
output, _ = generation_model.fast_contrastive_search(title_ids, k, alpha, decoding_len, eos_token)
_, generated_story = generation_model.parse_generated_result(output, num_of_sentences_to_keep=5)
print(generated_story)
'''
   My family went to a ice cream shop. We ordered the Ice Cream Truck. It was delicious. The customer
   service was terrible. We had to leave for another day.
'''
```
6.2.4. Reproduce Our Results in the Paper:
If you would like to reproduce all the results shown in the case study and appendix of our paper, you can run this demo file as
```
python story_generation_demo.py
```
7. Contact
If you have any questions, feel free to contact me via (ys484 at cam.ac.uk).
8. MAGIC Elsewhere
We thank the community for its efforts in extending MAGIC!
Replicate has provided a great [demo] of MAGIC that is super easy to use. Thanks for the effort!