Datasets, Transforms and Models specific to Computer Vision

Overview

flowvision: Datasets, Transforms and Models specific to Computer Vision.

Installation

  • First install the nightly version of OneFlow
python3 -m pip install oneflow -f https://staging.oneflow.info/branch/master/cu102
  • Then install the latest stable release of flowvision
pip install flowvision==0.0.4
  • Or install the nightly release of flowvision
pip install -i https://test.pypi.org/simple/ flowvision==0.0.4
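  • Verify the installation (this quick check assumes both packages expose a __version__ attribute, which current releases do; adjust if yours differs)
import oneflow as flow
import flowvision

print("oneflow version:", flow.__version__)
print("flowvision version:", flowvision.__version__)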

Supported Models

All of the supported models can be found on our model summary page here.

Usage

Quick Start
  • List supported models
from flowvision import ModelCreator
ModelCreator.model_table()
  • Search supported models by wildcard
from flowvision import ModelCreator
ModelCreator.model_table("*vit*", pretrained=True)
ModelCreator.model_table("*vit*", pretrained=False)
ModelCreator.model_table('alexnet')
  • Create a model with ModelCreator
from flowvision import ModelCreator
model = ModelCreator.create_model('alexnet', pretrained=True)
ModelCreator
  • Create a model in a simple way
from flowvision.models import ModelCreator
model = ModelCreator.create_model('alexnet', pretrained=True)

The pretrained weights will be saved to ./checkpoints.
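
  • Run a forward pass with the created model

A minimal inference sketch; it assumes the AlexNet variant accepts a standard 1x3x224x224 oneflow tensor and follows the usual eval/no_grad pattern:

import oneflow as flow
from flowvision.models import ModelCreator

model = ModelCreator.create_model('alexnet', pretrained=True)
model.eval()  # disable dropout and use running BatchNorm statistics

x = flow.randn(1, 3, 224, 224)  # dummy batch: 1 image, 3 channels, 224x224 (assumed input size)
with flow.no_grad():
    logits = model(x)
print(logits.shape)  # expected: (1, 1000) for ImageNet-pretrained models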

  • Supported model table
from flowvision.models import ModelCreator
ModelCreator.model_table()
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ alexnet      │ true       │
│ vit_b_16_224 │ false      │
│ vit_b_16_384 │ true       │
│ vit_b_32_224 │ false      │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘

Show all of the supported models in a table.

  • List models with pretrained weights
from flowvision.models import ModelCreator
ModelCreator.model_table(pretrained=True)
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ alexnet      │ true       │
│ vit_b_16_384 │ true       │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘
  • Search for models by wildcard
from flowvision.models import ModelCreator
ModelCreator.model_table('vit*')
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ vit_b_16_224 │ false      │
│ vit_b_16_384 │ true       │
│ vit_b_32_224 │ false      │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘
  • Search for models with pretrained weights by wildcard
from flowvision.models import ModelCreator
ModelCreator.model_table('vit*', pretrained=True)
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ vit_b_16_384 │ true       │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘
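
  • End-to-end inference on a real image

flowvision also ships a transforms module for preprocessing. The sketch below assumes a torchvision-style Compose/Resize/CenterCrop/ToTensor/Normalize API and a local image file (cat.jpg is a hypothetical path):

from PIL import Image
import oneflow as flow
from flowvision import transforms
from flowvision.models import ModelCreator

# assumed torchvision-style preprocessing with ImageNet statistics
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("cat.jpg").convert("RGB")  # hypothetical local image
batch = preprocess(img).unsqueeze(0)        # add the batch dimension

model = ModelCreator.create_model("alexnet", pretrained=True)
model.eval()
with flow.no_grad():
    probs = flow.softmax(model(batch), dim=-1)
pred = flow.argmax(probs, dim=-1)
print(pred.numpy())  # predicted ImageNet class index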

Model Zoo

All tests were conducted under the same settings; please refer to the model page here for more details.

Disclaimer on Datasets

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!

Comments
  • Support Poolformer

    • [x] build poolformer model
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo
    • [x] update docs
    • [x] update changelog
    • [x] pytorch speed comparison (the OneFlow version is too slow; still to be resolved)
    New Features Priority: 0 
    opened by thinksoso 16
  • delete flowvision.models._util

    1. flowvision.models contains both _utils.py and utils.py.
    2. The IntermediateLayerGetter helper is duplicated in flowvision.models._utils.py and flowvision.models.segmentation.seg_utils.py.

    So flowvision.models._utils.py is deleted, and flowvision.models.segmentation.seg_utils.py is referenced instead for now.

    Priority: 1 Improvements 
    opened by kaijieshi7 9
  • pickle module: EOFError: Ran out of input

    When I try to use the vit_tiny_patch16_224 model from the flowvision module, it raises EOFError: Ran out of input. The environment is a 3090 GPU on the OneFlow training platform: oneflow-0.7.0+torch-1.8.1-cu11.2-cudnn8.
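
    One common cause of this error is a corrupted or partially downloaded cached weight file rather than a bug in the model itself. A hedged workaround sketch, assuming the default cache locations mentioned elsewhere on this page (./checkpoints and ~/.oneflow/flowvision_cache), is to clear the cache and retry:

    import os
    import shutil

    # remove cached weights so they are re-downloaded on the next call;
    # adjust the paths if your flowvision version caches weights elsewhere
    for cache_dir in ("./checkpoints", os.path.expanduser("~/.oneflow/flowvision_cache")):
        if os.path.isdir(cache_dir):
            shutil.rmtree(cache_dir)

    from flowvision.models import ModelCreator
    model = ModelCreator.create_model("vit_tiny_patch16_224", pretrained=True)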

    opened by WanShaw 8
  • Support UniFormer

    • [x] build uniformer model
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo small_plus
    • [x] update docs
    • [x] update changelog
    • [x] pytorch speed comparison
    New Features 
    opened by thinksoso 6
  • add LeViT

    • [x] build model
    • [x] update init.py in models
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo
    • [x] update docs
    • [x] update readme
    • [x] update changelog
    • [x] pytorch speed comparison
    opened by kaijieshi7 5
  • Error when extracting the pretrained weight archive

    When using a model from models, e.g. model = vgg11(pretrained=True), the zip weight file downloads successfully, but an error occurs during extraction, which aborts it and leaves the parameter files incomplete. If I extract the downloaded zip manually, everything works fine. Several models have the same problem.

    Traceback (most recent call last):
      File "temp.py", line 77, in <module>
        model = vgg11(pretrained=True)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/vgg.py", line 182, in vgg11
        return _vgg("vgg11", "A", False, pretrained, progress, **kwargs)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/vgg.py", line 156, in _vgg
        state_dict = load_state_dict_from_url(model_urls[arch], progress=progress)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/utils.py", line 146, in load_state_dict_from_url
        return _legacy_zip_load(cached_file, model_dir, map_location, delete_file)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/utils.py", line 78, in _legacy_zip_load
        f.extractall(model_dir)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 1636, in extractall
        self._extract_member(zipinfo, path, pwd)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 1691, in _extract_member
        shutil.copyfileobj(source, target)
      File "/usr/local/miniconda3/lib/python3.7/shutil.py", line 79, in copyfileobj
        buf = fsrc.read(length)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 930, in read
        data = self._read1(n)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 1006, in _read1
        data = self._decompressor.decompress(data, n)
    zlib.error: Error -2 while decompressing data: inconsistent stream state
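
    As the report notes, extracting the downloaded zip by hand avoids the problem. A hedged sketch of that manual workaround (the archive path is hypothetical; use the zip that was actually downloaded, and note that oneflow checkpoints are directories):

    import zipfile
    import oneflow as flow
    from flowvision.models import vgg11

    archive = "./checkpoints/vgg11.zip"   # hypothetical cached archive path
    extract_dir = "./checkpoints/vgg11"
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(extract_dir)

    # build the model without triggering the broken extraction path, then load
    # the manually extracted checkpoint directory (adjust extract_dir if the
    # archive contains a nested folder)
    model = vgg11(pretrained=False)
    model.load_state_dict(flow.load(extract_dir))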
    
    opened by Alive1024 5
  • module 'flowvision.models' has no attribute 'face_recognition'

    Hello, I need a way to create the iresnet model. I saw in the documentation that flowvision has an iresnet model, but when I import it and call resnest50 = flowvision.models.face_recognition.iresnest50(pretrained=False, progress=True), Python says module 'flowvision.models' has no attribute 'face_recognition'. What could the problem be?

    good first issue Bug Fixes 
    opened by PhilippShemetov 4
  • add model: regionvit

    • [x] build model (the F.unfold operator is not supported: https://github.com/Oneflow-Inc/oneflow/issues/3785)
    • [x] update init.py in models
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo
    • [x] update docs
    • [x] update changelog
    • [x] pytorch speed comparison
    New Features 
    opened by kaijieshi7 4
  • Add speed test script

    How to run the script:

    cd ci/check
    bash run_speed_test.sh
    

    The results are written to the result file in the current directory.

    Problems found so far via the speed test script

    These crash when run with import torch as flow:

    • vit
    • conv_mixer
    • crossformer
    • cswin
    • mlp_mixer
    • pvt
    • res_mlp
    • vgg

    These also error out on their own when the input is 224x224:

    • efficientnet
    • res2net
    Priority: 0 Improvements Bug Fixes 
    opened by Ldpe2G 4
  • add useful model utils

    TODO

    Model related

    • [x] freeze_bn (an illustrative sketch follows the Test list below)
    • [ ] unfreeze_bn
    • [x] ActivationHook
    • [ ] freeze_unfreeze_fn

    Others

    • [x] random seed

    Test

    • [x] test freeze_bn
    • [ ] test activation_hook
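
    For readers wondering what such a helper could look like, a minimal illustrative sketch of freeze_bn (an assumption about the intended behaviour, not the implementation tracked in this issue):

    import oneflow.nn as nn

    def freeze_bn(model: nn.Module) -> nn.Module:
        # put every BatchNorm layer into eval mode and stop its affine
        # parameters from receiving gradients
        for module in model.modules():
            if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
                module.eval()
                for param in module.parameters():
                    param.requires_grad = False
        return model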
    New Features Priority: 2 
    opened by rentainhe 4
  • bug: module 'oneflow.nn' has no attribute 'ReLU'

    oneflow/nn/__init__.py

    from oneflow.python.ops.math_ops import fused_scale_tril
    from oneflow.python.ops.math_ops import fused_scale_tril_softmax_dropout
    from oneflow.python.ops.math_ops import relu
    from oneflow.python.ops.math_ops import tril

    Should it be imported as ReLU? Or did I install the wrong oneflow version? flowvision-0.1.0, oneflow==0.7.0+cu102

    bug 
    opened by zhanggj821 3
  • The flow.div operator is not aligned with torch.div

    import oneflow as flow
    import torch
    import numpy as np
    
    a = np.random.randn(3,3).astype(np.float32)
    
    b = 2
    
    torch_a = torch.from_numpy(a)
    flow_a = flow.from_numpy(a)
    
    print(torch.div(torch_a,b,rounding_mode='floor'))
    print(flow.div(flow_a,b).floor())
    print(flow.div(flow_a,b,rounding_mode='floor'))
    
    opened by triple-Mu 0
  • ResNet-50 training

    Reproduce ResNet-50 training and align accuracy, following the existing project under vision.

    References

    Main goals

    • [ ] 2022.05.11 - 2022.05.12: Get familiar with the classification training code under vision, configure the dataset, and get it running end to end.
    • [ ] 2022.05.12 - 2022.05.20: Reproduce the ResNet-50 training code against timm and pytorch, align the training settings, test it, and run multi-GPU training.
    • [ ] 2022.05.21 - 2022.05.27: Compare and close the accuracy gap, reproduce the reported accuracy, and finally replace the trained weights with the oneflow version.

    Project owner: 林松. Expected completion: 2022.05.27.

    Related PRs

    The corresponding PRs are listed below; since one issue may map to multiple PRs, a table is used.

    | PR | Author | Reviewer | Date |
    | --- | --- | --- | --- |
    | Initial code upload | 林松 | zzzzzzz | 20220510 |

    opened by triple-Mu 0
  • Vision effectiveness validation - improve the training project under Vision

    Vision already contains a reference project that ports the Swin-T training code for training models under Vision, but the reproduced accuracy of most models in vision cannot yet be guaranteed. This issue therefore kicks off a project to improve training: reproduce the accuracy of the models implemented in vision, and gradually replace the ported weights with weights trained by oneflow itself. This is a tentative plan and needs 2-3 interns to complete it:

    Reference projects:

    • https://github.com/rwightman/pytorch-image-models
    • https://github.com/microsoft/Swin-Transformer

    Training tasks, and the first batch of models whose accuracy needs to be reproduced:

    • Improve this project under Vision: https://github.com/Oneflow-Inc/vision/tree/main/projects/classification, and get familiar with how it is used (basically the same as Swin-T)
    • The models whose accuracy needs to be reproduced in the first stage, together with the related papers:

    | Model | Paper | Assignee | PR |
    |:----:|:----:|:----:|:----:|
    | ResNet50 | ResNet strikes back: An improved training procedure in timm | 林松 | |
    | DeiT | Training data-efficient image transformers & distillation through attention | | |
    | Swin-Transformer | Swin Transformer: Hierarchical Vision Transformer using Shifted Windows | 林德铝 | |
    | DeiT III | DeiT III: Revenge of ViT | | |

    • Hardware requirement: an 8-GPU V100 machine that can fit a per-GPU batch size of 256
    opened by rentainhe 0
Releases(v0.1.0)
  • v0.1.0(Feb 17, 2022)

    Flowvision V0.1.0 Stable Release

    New Features

    • Support trunc_normal_ in flowvision.layers.weight_init #92
    • Support DeiT model #115
    • Support PolyLRScheduler and TanhLRScheduler in flowvision.scheduler #85
    • Add resmlp_12_224_dino model and pretrained weight #128
    • Support ConvNeXt model #93
    • Add ReXNet weights #132

    Bug Fixes

    • Fix F.normalize usage in SSD #116
    • Fix bug in EfficientNet and Res2Net #122
    • Fix incorrect pretrained weight usage in vit_small_patch32_384 and res2net50_48w_2s #128

    Improvements

    • Refactor trunc_normal_ and linspace usage in the Swin-T, Cross-Former, PVT and CSWin models #100
    • Refactor the Vision Transformer model #115
    • Refine flowvision.models.ModelCreator to support the ModelCreator.model_list func #123 (a usage sketch follows this list)
    • Refactor the README #124
    • Refine load_state_dict_from_url in flowvision.models.utils to support downloading pretrained weights to cache dir ~/.oneflow/flowvision_cache #127
    • Rebuild a cleaner model zoo and test all the models with pretrained weights released in flowvision #128
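
    A small usage sketch of the new model_list entry point (the no-argument call and list-of-names return value are assumptions based on the description above):

    from flowvision.models import ModelCreator

    # assumed: model_list() returns the registered model names as a plain list,
    # in contrast to model_table(), which prints a rich table
    names = ModelCreator.model_list()
    vit_names = [n for n in names if n.startswith("vit_")]
    print(vit_names)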

    Docs Update

    • Update Vision Transformer docs #115
    • Add Getting Started docs #124
    • Add resmlp_12_224_dino docs #128
    • Fix VGG docs bug #128
    • Add ConvNeXt docs #93

    Contributors

    A total of 5 developers contributed to this release. Thanks @rentainhe, @simonJJJ, @kaijieshi7, @lixiang007666, @Ldpe2G
