Deep Learning Theory

Notes on the theory of deep learning, continuously updated.

Overview

  1. Recent advances in deep learning theory summarizes results along six directions of current deep learning theory research; it is an overview rather than an in-depth study (2021).

    • 1.1 complexity- and capacity-based approaches for analyzing the generalizability of deep learning;

    • 1.2 stochastic differential equations and their dynamic systems for modelling stochastic gradient descent and its variants, which characterize the optimization and generalization of deep learning, partially inspired by Bayesian inference (see the sketch after this list);

    • 1.3 the geometrical structures of the loss landscape that drive the trajectories of the dynamic systems;

    • 1.4 the roles of over-parameterization of deep neural networks from both positive and negative perspectives;

    • 1.5 theoretical foundations of several special structures in network architectures;

    • 1.6 the increasingly intensive concerns in ethics and security and their relationships with generalizability.
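
For item 1.2, the following minimal sketch (not taken from the survey; the quadratic loss, noise scale sigma, and step size eta are illustrative assumptions) simulates SGD as the Euler-Maruyama discretization of a Langevin-type SDE, which is the kind of model the survey refers to.

```python
# Viewing SGD as a discretized stochastic differential equation: for a toy
# quadratic loss, SGD with small step size eta behaves like the Euler-Maruyama
# scheme for the Langevin-type dynamics  d(theta) = -grad L dt + sigma dW,
# where the noise stands in for minibatch sampling noise.
import numpy as np

rng = np.random.default_rng(0)

def grad_loss(theta):
    # Gradient of the toy quadratic loss L(theta) = 0.5 * ||theta||^2.
    return theta

def sgd_as_sde(theta0, eta=0.01, sigma=0.5, steps=2000):
    """Euler-Maruyama discretization of d(theta) = -grad L dt + sigma dW."""
    theta = np.array(theta0, dtype=float)
    path = [theta.copy()]
    for _ in range(steps):
        noise = sigma * np.sqrt(eta) * rng.standard_normal(theta.shape)
        theta = theta - eta * grad_loss(theta) + noise
        path.append(theta.copy())
    return np.array(path)

path = sgd_as_sde(theta0=[2.0, -1.5])
print("final iterate:", path[-1])                      # fluctuates around the minimum 0
print("empirical stationary variance:", path[1000:].var(axis=0))
```

With a constant step size the iterates fluctuate around the minimum with a variance set by eta and sigma, which is the sense in which the SDE view characterizes both optimization and generalization.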

Course

  1. Theory of Deep Learning, a series of courses and lectures organized by TTIC, Northwestern University, and others. The basic courses cover the foundations of DL (formalization, simplified mathematical problems and conclusions), information theory and learning, statistics and computation, information theory, statistical learning, and reinforcement learning (2020).

  2. MathsDL-spring19, part of the MathsDL series, with editions in 2018, 2019, and 2020.

    • 2.1 Geometry of Data

      • Euclidean Geometry: transportation metrics, CNNs, scattering.
      • Non-Euclidean Geometry: Graph Neural Networks.
      • Unsupervised Learning under Geometric Priors (Implicit vs explicit models, microcanonical, transportation metrics).
      • Applications and Open Problems: adversarial examples, graph inference, inverse problems.
    • 2.2 Geometry of Optimization and Generalization

      • Stochastic Optimization (Robbins & Monro, Convergence of SGD; a hedged step-size sketch follows this list)
      • Stochastic Differential Equations (Fokker-Planck, Gradient Flow, Langevin Dynamics, links with SGD; open problems)
      • Dynamics of Neural Network Optimization (Mean Field Models using Optimal Transport, Kernel Methods)
      • Landscape of Deep Learning Optimization (Tensor/Matrix factorization, Deep Nets; open problems).
      • Generalization in Deep Learning.
    • 2.3 Open questions on Reinforcement Learning
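
As a pointer for the Stochastic Optimization bullet above, here is a minimal, hedged illustration of the Robbins-Monro conditions (step sizes whose sum diverges while the sum of squares converges); the toy objective and constants are assumptions, not course material.

```python
# Robbins-Monro step sizes: with eta_k = 1 / (k + 1) we have
# sum(eta_k) = infinity and sum(eta_k^2) < infinity, so SGD converges
# on this toy problem despite noisy gradients.
import numpy as np

rng = np.random.default_rng(1)

def noisy_grad(theta):
    # Unbiased stochastic gradient of L(theta) = 0.5 * (theta - 3)^2.
    return (theta - 3.0) + rng.standard_normal()

theta = 0.0
for k in range(100_000):
    eta_k = 1.0 / (k + 1)          # Robbins-Monro schedule
    theta -= eta_k * noisy_grad(theta)

print(f"theta after SGD: {theta:.3f}  (true minimizer is 3.0)")
```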

Architecture

  1. Partial Differential Equations is All You Need for Generating Neural Architectures -- A Theory for Physical Artificial Intelligence Systems unifies the reaction-diffusion equation from statistical physics, the Schrödinger equation from quantum mechanics, and the Helmholtz equation from paraxial optics into a neural partial differential equation (NPDE). Numerical solutions are obtained with the finite element method, and from the discretization the authors construct multilayer perceptrons, convolutional networks, and recurrent networks, together with optimization methods such as L-BFGS. The main contribution is establishing the connection between classical physics models and classical neural networks (2021). A hedged discretization sketch follows this item.
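
The following is a hedged sketch of the general idea only, not the paper's finite-element construction: an explicit finite-difference step of a 1-D reaction-diffusion equation already has the shape of a residual layer, which is one way to see how PDE discretizations give rise to network architectures. The diffusion coefficient, time step, and nonlinearity are illustrative assumptions.

```python
# An explicit finite-difference step of the 1-D reaction-diffusion equation
#   u_t = D * u_xx + f(u)
# has the form of a residual update  u^{n+1} = u^n + dt * (D * Lap(u^n) + f(u^n)),
# so stacking time steps resembles stacking network layers.
import numpy as np

def reaction_diffusion_layer(u, D=0.1, dt=0.01, dx=0.1, f=np.tanh):
    """One explicit time step; the Laplacian acts like a fixed 1-D convolution."""
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2   # periodic stencil
    return u + dt * (D * lap + f(u))                            # residual update

u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))   # toy initial state
for _ in range(10):                              # stacking steps ~ stacking layers
    u = reaction_diffusion_layer(u)
print(u[:5])
```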

Approximation

  1. NN Approximation Theory

Optimization

  1. SGD

  2. offconvex, an AI blog maintained by several academic researchers.

Geometry

  1. Optimal transport

Book

  1. Theory of Deep Learning (draft), Rong Ge et al. (2019).

  2. Spectral Learning on Matrices and Tensors, Majid Janzamin et al. (2020).

  3. Deep Learning Architectures: A Mathematical Approach (2020), available via libgen. The content matches the title and roughly covers: industrial problems, DL fundamentals (activations, architectures, optimization, etc.), function approximation, universal approximation, recent approximation results for ReLU and related activations, function representation, and the two main viewpoints, information-theoretic and geometric. It also gives mathematical treatments of the components used in practice, such as convolution, pooling, recurrence, generative and stochastic networks, with appendices on the foundations of set theory, measure theory, probability theory, functional analysis, and real analysis.

  4. The Principles of Deep Learning Theory (2021), Daniel A. Roberts and Sho Yaida (MIT). Beginning from a first-principles, component-level picture of networks, the book explains how to determine an accurate description of the outputs of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the network's predictions are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. The book explains how these effectively deep networks learn nontrivial representations from training and, more broadly, analyzes the mechanism of representation learning in nonlinear models. From a nearly-kernel-methods perspective, it shows that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, the authors develop the concept of representation group flow (RG flow) to characterize how signals propagate through the network. By tuning networks to criticality, they provide a practical solution to the exploding and vanishing gradient problems. They further explain how RG flow leads to near-universal behavior, so that networks built from different activation functions can be grouped into classes. Altogether, they show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. Using information theory, they estimate the optimal depth-to-width ratio at which models perform best and show that residual connections push the trainable depth to arbitrary depth. With these theoretical tools one can study architectural inductive biases, hyperparameters, and optimization in finer detail. A hedged schematic of the central depth-to-width claim follows this item.
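
As a hedged paraphrase of the book's central quantitative claim (coefficients and precise definitions are in the text), the leading finite-width correction to the Gaussian description of preactivations is controlled by the depth-to-width ratio:

```latex
% Schematic statement (paraphrased; constants omitted): for preactivations z
% in a network of depth L and width n, the connected four-point correlator,
% which measures the deviation from Gaussianity, scales as
\mathbb{E}\!\left[z^{4}\right]_{\text{connected}} \;=\; O\!\left(\tfrac{L}{n}\right),
% so n -> infinity at fixed L recovers the infinite-width Gaussian limit,
% while the aspect ratio L/n sets the effective model complexity of the ensemble.
```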

  5. Physics-based Deep Learning (2021), N. Thuerey, P. Holl, etc., with GitHub resources. Connections between deep learning and physics, for example physics-based loss functions, differentiable fluid simulation, solving inverse problems, forward simulation of the Navier-Stokes equations, and the relation between controlling Burgers' equation and reinforcement learning. A hedged sketch of a physics-based loss follows this item.
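
To make "physics-based loss functions" concrete, here is a hedged sketch (not code from the book's resources): a small network u(t, x) is trained on the residual of Burgers' equation at random collocation points. The network size, viscosity nu, sampling ranges, and the omission of initial/boundary-condition terms are all illustrative simplifications.

```python
# Physics-based loss: penalize the residual of Burgers' equation
#   u_t + u * u_x - nu * u_xx = 0
# at random collocation points, with derivatives taken by autograd.
import math
import torch

torch.manual_seed(0)
nu = 0.01 / math.pi   # illustrative viscosity

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def burgers_residual(t, x):
    """PDE residual computed with autograd derivatives of the network output."""
    t.requires_grad_(True)
    x.requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    u_t, u_x = torch.autograd.grad(u, (t, x), torch.ones_like(u), create_graph=True)
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):                       # short demo run
    t = torch.rand(256, 1)                    # t in [0, 1]
    x = 2.0 * torch.rand(256, 1) - 1.0        # x in [-1, 1]
    loss = burgers_residual(t, x).pow(2).mean()   # physics term only; IC/BC omitted
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final PDE residual loss:", float(loss))
```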

Session

  1. Foundations of Deep Learning (2019), a program at the Simons Institute.
  2. Deep Learning Theory 4 (2021, ICML), chaired by Claire Monteleoni; deep learning theory session 4, with papers and videos.
  3. Deep Learning Theory 5 (2021, ICML), chaired by Yi Ma; deep learning theory session 5, with papers and videos.

Others

  1. Theoretical issues in deep networks shows that exponential-type loss functions carry an implicit regularization: the result of this optimization agrees with that of optimizing general loss functions, convergence is tied to the trajectory of the gradient flow, and it cannot yet be proven which result is optimal (2020). A hedged schematic of this implicit bias is given after this list.
  2. The Dawning of a New Era in Applied Mathematics, Weinan E's guiding summary of working paradigms, drawing on history, for the new situation created by DL (2021).
  3. Mathematics of deep learning, from the Newton Institute.
  4. DEEP NETWORKS FROM THE PRINCIPLE OF RATE REDUCTION, white-box neural networks.
  5. redunet_paper, code for the white-box neural network.
  6. Theory of Deep Convolutional Neural Networks: Downsampling, a mathematical analysis of downsampling, Ding-Xuan Zhou (2020).
  7. Theory of deep convolutional neural networks II: Spherical analysis, and III: approximation by radial functions (2020). Whether these works amount to more than a mathematical reformulation, how much they contribute theoretically, and how closely they connect to practice remains to be seen.
  8. The Modern Mathematics of Deep Learning (2021), mainly a mathematical-analysis account of deep learning, covering questions such as: the generalization ability of over-parameterized networks, the central role of depth in deep architectures, deep learning's apparent ability to overcome the curse of dimensionality, the success of optimization on non-convex problems, the mathematical analysis of learned representations, why deep models perform unusually well on physics problems, and which aspects of an architecture affect which aspects of learning on different tasks, and in what way.
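
As a hedged gloss on item 1 above, paraphrasing the general implicit-bias picture rather than quoting the paper: for exponential-type losses the weight norm keeps growing under gradient flow, while the normalized direction settles toward a margin-maximizing solution (exactly in linear models, and as a KKT point of the margin problem in homogeneous networks).

```latex
% Schematic statement (paraphrased, hedged): for an exponential-type loss
% L(w) = \sum_i \exp(-y_i f_w(x_i)) and gradient flow \dot{w} = -\nabla L(w),
% the norm \|w(t)\| diverges while the direction converges,
\lim_{t \to \infty} \frac{w(t)}{\|w(t)\|}
  \;\in\; \arg\max_{\|w\| = 1} \; \min_i \, y_i f_w(x_i)
  \quad \text{(exact for linear models; a KKT point for homogeneous networks),}
% which is the implicit regularization referred to in item 1.
```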