Transformers and related deep network architectures are summarized and implemented here.

Overview

Transformers: from NLP to CV

cover

This is a practical introduction to Transformers, from Natural Language Processing (NLP) to Computer Vision (CV).

  1. Introduction
  2. ViT: Transformers for Computer Vision
  3. Visualizing the attention Open In Colab
  4. MLP-Mixer Open In Colab
  5. Hybrid MLP-Mixer + ViT Open In Colab
  6. ConvMixer Open In Colab
  7. Hybrid ConvMixer + MLP-Mixer Open In Colab

1) Introduction

What is wrong with RNNs and CNNs?

Learning representations of variable-length data is a basic building block of sequence-to-sequence learning for neural machine translation, summarization, etc.

  • Recurrent Neural Networks (RNNs) are a natural fit for variable-length sentences and sequences of pixels, but their sequential computation inhibits parallelization, and they offer no explicit modeling of long- and short-range dependencies.
  • Convolutional Neural Networks (CNNs) are trivial to parallelize (per layer) and exploit local dependencies. However, long-distance dependencies require many layers.

Attention!

The Transformer architecture was proposed in the paper Attention is All You Need. As mentioned in the paper:

"We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely"

"Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train"

Machine Translation (MT) is the task of translating a sentence x from one language (the source language) into a sentence y in another language (the target language). One basic and well-known neural network architecture for neural machine translation (NMT) is the sequence-to-sequence (seq2seq) model, which involves two RNNs.

  • Encoder: an RNN that encodes the input sequence into a single vector (the sentence encoding).
  • Decoder: an RNN that generates the output sequence conditioned on the encoder's output (a conditional language model). A minimal code sketch of this setup is given below.

seqseq
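
To make the encoder-decoder setup concrete, here is a minimal seq2seq sketch. PyTorch is used purely for illustration, with hypothetical vocabulary and hidden sizes; the models in the linked notebooks may differ.

```python
import torch
import torch.nn as nn

# Minimal seq2seq sketch: the encoder compresses the source sentence into its
# final hidden state, which conditions the decoder (hypothetical sizes).
vocab_src, vocab_tgt, emb, hid = 1000, 1200, 64, 128

embed_src = nn.Embedding(vocab_src, emb)
embed_tgt = nn.Embedding(vocab_tgt, emb)
encoder = nn.GRU(emb, hid, batch_first=True)
decoder = nn.GRU(emb, hid, batch_first=True)
out_proj = nn.Linear(hid, vocab_tgt)

src = torch.randint(0, vocab_src, (8, 15))      # batch of source sentences
tgt = torch.randint(0, vocab_tgt, (8, 12))      # shifted target sentences
_, sentence_encoding = encoder(embed_src(src))  # (1, 8, hid): the single bottleneck vector
dec_out, _ = decoder(embed_tgt(tgt), sentence_encoding)
logits = out_proj(dec_out)                      # (8, 12, vocab_tgt): next-token scores
```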

The problem with vanilla seq2seq is the information bottleneck: the encoding of the source sentence has to capture all of its information in a single vector.

As mentioned in the paper Neural Machine Translation by Jointly Learning to Align and Translate:

"A potential issue with this encoder–decoder approach is that a neural network needs to be able to compress all the necessary information of a source sentence into a fixed-length vector. This may make it difficult for the neural network to cope with long sentences, especially those that are longer than the sentences in the training corpus."

attention001.gif

Attention provides a solution to the bottleneck problem.

  • Core idea: on each step of the decoder, use a direct connection to the encoder to focus on a particular part of the source sequence. Attention is basically a technique to compute a weighted sum of the values (the encoder states), dependent on a query (the decoder state).

The main idea of attention can be summarized as mentioned in OpenAI's article:

"... every output element is connected to every input element, and the weightings between them are dynamically calculated based upon the circumstances, a process called attention."

Query and Values

  • In the seq2seq + attention model, each decoder hidden state (query) attends to all the encoder hidden states (values)
  • The weighted sum is a selective summary of the information contained in the values, where the query determines which values to focus on.
  • Attention is a way to obtain a fixed-size representation of an arbitrary set of representations (the values), dependent on some other representation (the query); a minimal code sketch follows below.
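
Here is a minimal sketch of this weighted-sum view of attention (hypothetical sizes, PyTorch for illustration only):

```python
import torch
import torch.nn.functional as F

def attention(query, values):
    # query:  (d,)    a decoder hidden state
    # values: (T, d)  the encoder hidden states
    scores = values @ query                # (T,) similarity of the query to each value
    weights = F.softmax(scores, dim=0)     # attention distribution, sums to 1
    context = weights @ values             # (d,) weighted sum = selective summary
    return context, weights

# toy example: 5 encoder states of size 8
values = torch.randn(5, 8)
query = torch.randn(8)
context, weights = attention(query, values)
print(weights.sum())  # tensor(1.)
```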

2) Transformers for Computer Vision

Transformer-based architectures are used not only for NLP but also for computer vision tasks. One important example is the Vision Transformer (ViT), which represents a direct application of Transformers to image classification, without any image-specific inductive biases. As mentioned in the paper:

"We show that reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks"

"Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks"

vit

As we can see, the input image is split into patches, which are treated the same way as tokens (words) in an NLP application. Position embeddings are added to the patch embeddings to retain positional information. Similar to BERT's class token, a classification token is prepended to the sequence, and a classification head attached to it is used during pre-training and fine-tuning. The model is trained on image classification in a supervised fashion.
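
A minimal sketch of this patch-embedding step, assuming hypothetical CIFAR-10-like sizes (PyTorch for illustration; the linked notebooks may use different sizes and a different framework):

```python
import torch
import torch.nn as nn

img_size, patch_size, embed_dim = 32, 4, 128   # hypothetical sizes
num_patches = (img_size // patch_size) ** 2    # 8 * 8 = 64 patches

# Patch embedding as a strided convolution: one projection per non-overlapping patch.
to_patches = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)
cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))

x = torch.randn(8, 3, img_size, img_size)              # a batch of images
patches = to_patches(x).flatten(2).transpose(1, 2)     # (8, 64, 128): patches x channels
tokens = torch.cat([cls_token.expand(8, -1, -1), patches], dim=1) + pos_embed
print(tokens.shape)   # torch.Size([8, 65, 128]) -> fed to the Transformer encoder
```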

Multi-head attention

The intuition is similar to having multiple filters in CNNs. Here we can have multi-head attention, which gives the network more capacity and the ability to learn different attention patterns. By having multiple different layers that generate (or project) the vectors of queries, keys and values, we can learn multiple representations of these queries, keys and values.

mha

Each token is projected (in a learnable way) into three vectors Q, K, and V (a code sketch follows the list below):

  • Q: Query vector: What I want
  • K: Key vector: What type of info I have
  • V: Value vector: What actual info I have
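
A quick sketch of multi-head self-attention over a sequence of patch tokens, using PyTorch's nn.MultiheadAttention for illustration (the sizes below are hypothetical):

```python
import torch
import torch.nn as nn

embed_dim, num_heads = 64, 4              # hypothetical sizes
tokens = torch.randn(1, 145, embed_dim)   # batch of 1: 144 patch tokens + 1 class token

# Each head learns its own Q/K/V projections internally.
mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
out, attn_weights = mha(tokens, tokens, tokens)   # self-attention: Q = K = V = tokens
print(out.shape)            # torch.Size([1, 145, 64])
print(attn_weights.shape)   # torch.Size([1, 145, 145]) -- averaged over the heads
```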

3) Visualizing the attention

Open In Colab

The basic ViT architecture is used, but with only one transformer layer with one (or four) head(s) for simplicity. The model is trained on the CIFAR-10 classification task. The image is split into 12 × 12 = 144 patches as usual, and after training we can inspect the 144 × 144 attention scores (where each patch can attend to all the others).

imgpatches

The attention map represents the correlation (attention) between all the tokens; each row sums to 1, representing the probability distribution of attention from a query patch over all the other patches.

attmap
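
A sketch of how such an attention map can be computed from the query and key projections of a single head (random tensors stand in for the trained model's projections here):

```python
import torch
import torch.nn.functional as F

# Hypothetical: q, k are the query/key projections of the 144 patch tokens
# from the single transformer layer (one head), each of shape (144, d).
d = 64
q, k = torch.randn(144, d), torch.randn(144, d)

attn_map = F.softmax(q @ k.T / d ** 0.5, dim=-1)   # (144, 144)
print(attn_map.sum(dim=-1)[:3])                    # each row sums to 1
# attn_map[i] is the attention distribution from query patch i over all patches;
# reshaping a row to (12, 12) and plotting it gives maps like the one above.
```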

Long-distance attention: we can see two interesting patterns, where a background patch attends to other distant background patches, and an airplane patch attends to other distant airplane patches.

attpattern

We can try more heads and more transformer layers and inspect the attention patterns.

attanim


4) MLP-Mixer

Open In Colab

MLP-Mixer is proposed in the paper MLP-Mixer: An all-MLP Architecture for Vision. As mentioned in the paper:

"While convolutions and attention are both sufficient for good performance, neither of them is necessary!"

"Mixer is a competitive but conceptually and technically simple alternative, that does not use convolutions or self-attention"

Mixer accepts a sequence of linearly projected image patches (tokens) shaped as a “patches × channels” table as an input, and maintains this dimensionality. Mixer makes use of two types of MLP layers:

mixer

  • Channel-mixing MLPs allow communication between different channels; they operate on each token independently and take individual rows of the table as inputs.
  • Token-mixing MLPs allow communication between different spatial locations (tokens); they operate on each channel independently and take individual columns of the table as inputs.

These two types of layers are interleaved to enable interaction of both input dimensions.
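
A minimal sketch of one Mixer layer combining these two MLPs (PyTorch for illustration, hypothetical sizes; LayerNorm and skip connections around each MLP follow the block structure in the paper):

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    # One Mixer layer over a (num_tokens x dim) table of patch tokens.
    def __init__(self, num_tokens=64, dim=128, token_hidden=64, channel_hidden=512):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mix = nn.Sequential(          # acts on columns (across patches)
            nn.Linear(num_tokens, token_hidden), nn.GELU(),
            nn.Linear(token_hidden, num_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mix = nn.Sequential(        # acts on rows (within a patch)
            nn.Linear(dim, channel_hidden), nn.GELU(),
            nn.Linear(channel_hidden, dim))

    def forward(self, x):                        # x: (batch, num_tokens, dim)
        x = x + self.token_mix(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        x = x + self.channel_mix(self.norm2(x))
        return x

x = torch.randn(8, 64, 128)
print(MixerBlock()(x).shape)   # torch.Size([8, 64, 128])
```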

"The computational complexity of the network is linear in the number of input patches, unlike ViT whose complexity is quadratic"

"Unlike ViTs, Mixer does not use position embeddings"

It is commonly observed that the first layers of CNNs tend to learn detectors that act on pixels in local regions of the image. In contrast, Mixer allows for global information exchange in the token-mixing MLPs.

"Recall that the token-mixing MLPs allow global communication between different spatial locations."

vizmixer

The figure shows hidden units of the four token-mixing MLPs of Mixer trained on the CIFAR-10 dataset.


5) Hybrid MLP-Mixer and ViT

Open In Colab

We can use both the MLP-Mixer and ViT in one network architecture to get the best of both worlds.

mixvit

Adding a few self-attention sublayers to Mixer is expected to offer a simple way to trade off speed for accuracy; a minimal sketch of such a hybrid stack is shown below.
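
The sketch below interleaves a Mixer-style token-mixing block with a standard Transformer encoder layer (PyTorch for illustration, hypothetical sizes; not necessarily the exact architecture used in the notebook):

```python
import torch
import torch.nn as nn

dim, num_tokens, hidden = 128, 64, 256   # hypothetical sizes

class TokenMixing(nn.Module):
    # Mixer-style token-mixing MLP with LayerNorm and a skip connection.
    def __init__(self):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(num_tokens, hidden), nn.GELU(),
                                 nn.Linear(hidden, num_tokens))
    def forward(self, x):                # x: (batch, num_tokens, dim)
        return x + self.mlp(self.norm(x).transpose(1, 2)).transpose(1, 2)

hybrid = nn.Sequential(
    TokenMixing(),                       # cheap global mixing via MLPs
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    TokenMixing(),                       # one attention sublayer in between
)
print(hybrid(torch.randn(8, num_tokens, dim)).shape)   # torch.Size([8, 64, 128])
```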


6) ConvMixer

Open In Colab

Patches Are All You Need?

Is the performance of ViTs due to the inherently more powerful Transformer architecture, or is it at least partly due to using patches as the input representation?

The paper proposes ConvMixer, an extremely simple model that is similar in many aspects to ViT and the even more basic MLP-Mixer.

Despite its simplicity, ConvMixer outperforms the ViT, MLP-Mixer, and some of their variants for similar parameter counts and data set sizes, in addition to outperforming classical vision models such as the ResNet.

While self-attention and MLPs are theoretically more flexible, allowing for large receptive fields and content-aware behavior, the inductive bias of convolution is well-suited to vision tasks and leads to high data efficiency.

ConvMixers are substantially slower at inference than the competitors!

conmixer01
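
A compact sketch of a ConvMixer following the block structure described in the paper (a patch-embedding stem, then repeated depthwise and pointwise convolutions), with hypothetical small sizes for CIFAR-10-style 32 × 32 inputs:

```python
import torch
import torch.nn as nn

dim, depth, kernel_size, patch_size = 128, 4, 5, 4   # hypothetical small config

class Residual(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

conv_mixer = nn.Sequential(
    # patch embedding as a strided convolution
    nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
    nn.GELU(), nn.BatchNorm2d(dim),
    *[nn.Sequential(
        # depthwise conv mixes spatial locations (large kernel, per channel)
        Residual(nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
            nn.GELU(), nn.BatchNorm2d(dim))),
        # pointwise conv mixes channels
        nn.Conv2d(dim, dim, kernel_size=1),
        nn.GELU(), nn.BatchNorm2d(dim))
      for _ in range(depth)],
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(dim, 10))

print(conv_mixer(torch.randn(2, 3, 32, 32)).shape)   # torch.Size([2, 10])
```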


7) Hybrid MLP-Mixer and ConvMixer

Open In Colab

Once again, we can use both the MLP-Mixer and ConvMixer in one network architecture to get the best of both worlds. Here is a simple example.

convmlpmixer
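
One way to sketch such a combination: a ConvMixer-style stem and depthwise block produce a feature map, which is then flattened into a "patches × channels" table and mixed with an MLP-Mixer-style token-mixing MLP (hypothetical sizes; not necessarily the exact hybrid used in the notebook):

```python
import torch
import torch.nn as nn

dim = 128   # hypothetical channel width

stem = nn.Sequential(nn.Conv2d(3, dim, 4, stride=4), nn.GELU(), nn.BatchNorm2d(dim))
depthwise = nn.Sequential(nn.Conv2d(dim, dim, 5, groups=dim, padding="same"),
                          nn.GELU(), nn.BatchNorm2d(dim))
token_mix = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))

x = torch.randn(2, 3, 32, 32)
f = stem(x)                                   # (2, dim, 8, 8) patch feature map
f = f + depthwise(f)                          # ConvMixer-style spatial mixing
tokens = f.flatten(2).transpose(1, 2)         # (2, 64, dim): 64 patches x dim channels
tokens = tokens + token_mix(tokens.transpose(1, 2)).transpose(1, 2)  # MLP-Mixer-style
print(tokens.shape)                           # torch.Size([2, 64, 128])
```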


References and more information
