shufflev2-yolov5: lighter, faster and easier to deploy

Overview

This repo performs a series of ablation experiments on yolov5 to make it lighter (smaller FLOPs, lower memory, and fewer parameters), faster (adding channel shuffle and a reduced-channel yolov5 head; it can run at 10+ FPS on the Raspberry Pi 4B with a 320×320 input), and easier to deploy (removing the Focus layer and its four slice operations, and keeping the accuracy loss from model quantization within an acceptable range).
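
The sketch below is an illustration in plain PyTorch, not the repo's exact code: it shows what the removed Focus layer does (four strided slices and a concat before a conv, a pattern that many embedded inference engines optimize and quantize poorly) and how a single strided convolution yields a feature map of the same resolution.

import torch
import torch.nn as nn

class Focus(nn.Module):
    """YOLOv5-style Focus stem: four slice ops + concat + conv."""
    def __init__(self, c1, c2, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c1 * 4, c2, k, 1, k // 2)

    def forward(self, x):
        # sample every other pixel into four sub-grids, then concatenate on channels
        return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                                    x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))

# A deployment-friendly stem with the same output resolution: one ordinary strided conv.
stem = nn.Conv2d(3, 32, kernel_size=6, stride=2, padding=2)

x = torch.randn(1, 3, 320, 320)
print(Focus(3, 32)(x).shape)  # torch.Size([1, 32, 160, 160])
print(stem(x).shape)          # torch.Size([1, 32, 160, 160])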

Comparison of ablation experiment results

ID   Model             Input_size  FLOPs   Params  Size(M)  mAP@0.5  mAP@0.5:0.95
001  yolo-faster       320×320     0.25G   0.35M   1.4      24.4     -
002  nanodet-m         320×320     0.72G   0.95M   1.8      -        20.6
003  shufflev2-yolov5  320×320     1.43G   1.62M   3.3      35.5     -
004  nanodet-m         416×416     1.2G    0.95M   1.8      -        23.5
005  shufflev2-yolov5  416×416     2.42G   1.62M   3.3      40.5     23.5
006  yolov4-tiny       416×416     5.62G   8.86M   33.7     40.2     21.7
007  yolov3-tiny       416×416     6.96G   6.06M   23.0     33.1     16.6

Comparison on different platforms

Equipment        Computing backend  System         Input    Framework  Speed (ours)  Speed (yolov5s)
Intel            @i5-10210U         Windows (x86)  640×640  torch-cpu  112ms         179ms
Nvidia           @RTX 2080Ti        Linux (x86)    640×640  torch-gpu  11ms          13ms
Raspberry Pi 4B  @ARM Cortex-A72    Linux (arm64)  320×320  ncnn       97ms          371ms

Detection effect

Pytorch{640×640}:

person

NCNN{FP16}@{640×640}:

image

NCNN{Int8}@{640×640}:

image

Based on YOLOv5

image

10+ FPS can be reached with yolov5 on the Raspberry Pi, which has only about 0.1 TOPS of computing power

Excluding the first three warm-up runs, with the device temperature stable above 45°C and ncnn as the forward inference framework, two benchmark runs are recorded:

# Run 4
pi@raspberrypi:~/Downloads/ncnn/build/benchmark $ ./benchncnn 8 4 0
loop_count = 8
num_threads = 4
powersave = 0
gpu_device = -1
cooling_down = 1
    shufflev2-yolov5  min =   90.86  max =   93.53  avg =   91.56
shufflev2-yolov5-int8  min =   83.15  max =   84.17  avg =   83.65
shufflev2-yolov5-416  min =  154.51  max =  155.59  avg =  155.09
         yolov4-tiny  min =  298.94  max =  302.47  avg =  300.69
           nanodet_m  min =   86.19  max =  142.79  avg =   99.61
          squeezenet  min =   59.89  max =   60.75  avg =   60.41
     squeezenet_int8  min =   50.26  max =   51.31  avg =   50.75
           mobilenet  min =   73.52  max =   74.75  avg =   74.05
      mobilenet_int8  min =   40.48  max =   40.73  avg =   40.63
        mobilenet_v2  min =   72.87  max =   73.95  avg =   73.31
        mobilenet_v3  min =   57.90  max =   58.74  avg =   58.34
          shufflenet  min =   40.67  max =   41.53  avg =   41.15
       shufflenet_v2  min =   30.52  max =   31.29  avg =   30.88
             mnasnet  min =   62.37  max =   62.76  avg =   62.56
     proxylessnasnet  min =   62.83  max =   64.70  avg =   63.90
     efficientnet_b0  min =   94.83  max =   95.86  avg =   95.35
   efficientnetv2_b0  min =  103.83  max =  105.30  avg =  104.74
        regnety_400m  min =   76.88  max =   78.28  avg =   77.46
           blazeface  min =   13.99  max =   21.03  avg =   15.37
           googlenet  min =  144.73  max =  145.86  avg =  145.19
      googlenet_int8  min =  123.08  max =  124.83  avg =  123.96
            resnet18  min =  181.74  max =  183.07  avg =  182.37
       resnet18_int8  min =  103.28  max =  105.02  avg =  104.17
             alexnet  min =  162.79  max =  164.04  avg =  163.29
               vgg16  min =  867.76  max =  911.79  avg =  889.88
          vgg16_int8  min =  466.74  max =  469.51  avg =  468.15
            resnet50  min =  333.28  max =  338.97  avg =  335.71
       resnet50_int8  min =  239.71  max =  243.73  avg =  242.54
      squeezenet_ssd  min =  179.55  max =  181.33  avg =  180.74
 squeezenet_ssd_int8  min =  131.71  max =  133.34  avg =  132.54
       mobilenet_ssd  min =  151.74  max =  152.67  avg =  152.32
  mobilenet_ssd_int8  min =   85.51  max =   86.19  avg =   85.77
      mobilenet_yolo  min =  327.67  max =  332.85  avg =  330.36
  mobilenetv2_yolov3  min =  221.17  max =  224.84  avg =  222.60

# Run 8
pi@raspberrypi:~/Downloads/ncnn/build/benchmark $ ./benchncnn 8 4 0
loop_count = 8
num_threads = 4
powersave = 0
gpu_device = -1
cooling_down = 1
           nanodet_m  min =   84.03  max =   87.68  avg =   86.32
       nanodet_m-416  min =  143.89  max =  145.06  avg =  144.67
    shufflev2-yolov5  min =   84.30  max =   86.34  avg =   85.79
shufflev2-yolov5-int8  min =   80.98  max =   82.80  avg =   81.25
shufflev2-yolov5-416  min =  142.75  max =  146.10  avg =  144.34
         yolov4-tiny  min =  276.09  max =  289.83  avg =  285.99
           nanodet_m  min =   81.15  max =   81.71  avg =   81.33
          squeezenet  min =   59.37  max =   61.19  avg =   60.35
     squeezenet_int8  min =   49.30  max =   49.66  avg =   49.43
           mobilenet  min =   72.40  max =   74.13  avg =   73.37
      mobilenet_int8  min =   39.92  max =   40.23  avg =   40.07
        mobilenet_v2  min =   71.57  max =   73.07  avg =   72.29
        mobilenet_v3  min =   54.75  max =   56.00  avg =   55.40
          shufflenet  min =   40.07  max =   41.13  avg =   40.58
       shufflenet_v2  min =   29.39  max =   30.25  avg =   29.86
             mnasnet  min =   59.54  max =   60.18  avg =   59.96
     proxylessnasnet  min =   61.06  max =   62.63  avg =   61.75
     efficientnet_b0  min =   91.86  max =   95.01  avg =   92.84
   efficientnetv2_b0  min =  101.03  max =  102.61  avg =  101.71
        regnety_400m  min =   76.75  max =   78.58  avg =   77.60
           blazeface  min =   13.18  max =   14.67  avg =   13.79
           googlenet  min =  136.56  max =  138.05  avg =  137.14
      googlenet_int8  min =  118.30  max =  120.17  avg =  119.23
            resnet18  min =  164.78  max =  166.80  avg =  165.70
       resnet18_int8  min =   98.58  max =   99.23  avg =   98.96
             alexnet  min =  155.06  max =  156.28  avg =  155.56
               vgg16  min =  817.64  max =  832.21  avg =  827.37
          vgg16_int8  min =  457.04  max =  465.19  avg =  460.64
            resnet50  min =  318.57  max =  323.19  avg =  320.06
       resnet50_int8  min =  237.46  max =  238.73  avg =  238.06
      squeezenet_ssd  min =  171.61  max =  173.21  avg =  172.10
 squeezenet_ssd_int8  min =  128.01  max =  129.58  avg =  128.84
       mobilenet_ssd  min =  145.60  max =  149.44  avg =  147.39
  mobilenet_ssd_int8  min =   82.86  max =   83.59  avg =   83.22
      mobilenet_yolo  min =  311.95  max =  374.33  avg =  330.15
  mobilenetv2_yolov3  min =  211.89  max =  286.28  avg =  228.01

NCNN_Android_demo

This is a Redmi phone with a Snapdragon 730G processor, running shufflev2-yolov5 for detection. The performance is as follows:


This is the quantized int8 model:


Outdoor scene example:


More detailed explanation

Detailed model link: https://zhuanlan.zhihu.com/p/400545131

image

NCNN deployment and int8 quantization: https://zhuanlan.zhihu.com/p/400975662

int8
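
As a reference for the deployment article linked above, here is a hedged sketch of the usual PyTorch-to-ONNX export step that precedes ncnn conversion; the checkpoint path, input size, and tensor names are illustrative assumptions rather than the repo's export.py interface.

import torch

# Load a YOLOv5-style checkpoint (a dict whose 'model' entry holds the nn.Module);
# unpickling needs the repo's models/ package on the import path.
ckpt = torch.load('weights/v5lite-s.pt', map_location='cpu')   # path is an assumption
model = ckpt['model'].float().eval()
dummy = torch.zeros(1, 3, 320, 320)                            # 320x320, as in the Raspberry Pi setting

# Export to ONNX; afterwards the graph is typically simplified (onnx-simplifier)
# and converted with ncnn's onnx2ncnn tool, then int8-quantized via ncnn2table/ncnn2int8.
torch.onnx.export(model, dummy, 'v5lite-s.onnx', opset_version=12,
                  input_names=['images'], output_names=['output'])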

Reference

https://github.com/Tencent/ncnn

https://github.com/ultralytics/yolov5

https://github.com/megvii-model/ShuffleNet-Series

Comments
  • Training v5lite-s directly on COCO at 416x416 without modifying any parameters gives only 35.2 mAP

    Training v5lite-s directly on COCO with the original parameters, 416x416 input, gives the following test results. Test command: python test.py --device 0 --conf-thres 0.1 --iou-thres 0.5

    Class Images Labels P R mAP@.5 mAP@.5:.95: 100%|█| 79/79 [00:45<00:00 all 5000 36335 0.537 0.363 0.352 0.203

    Using the v5lite-s model provided by the author, 416x416 input, gives the following test results. Test command: python test.py --device 0 --conf-thres 0.1 --iou-thres 0.5

    Class Images Labels P R mAP@.5 mAP@.5:.95: 100%|█| 79/79 [00:48<00:00 all 5000 36335 0.542 0.388 0.373 0.225

    The mAP differs by about 2 points. What could be causing this? Looking forward to your reply, thanks.

    documentation 
    opened by Broad-sky 12
  • speed problem

    With the original yolov5s6 at 640 input, the speed is 12 ms; with the repvgg_block config below at 640 input it is 13 ms (GPU: V100).

    # YOLOv5 🚀 by Ultralytics, GPL-3.0 license

    # Parameters
    nc: 80  # number of classes
    depth_multiple: 0.33  # model depth multiple
    width_multiple: 0.50  # layer channel multiple
    anchors:
      - [19,27, 44,40, 38,94]  # P3/8
      - [96,68, 86,152, 180,137]  # P4/16
      - [140,301, 303,264, 238,542]  # P5/32
      - [436,615, 739,380, 925,792]  # P6/64

    # YOLOv5 v6.0 backbone
    backbone:
      # [from, number, module, args]
      [[-1, 1, Conv, [32, 6, 2, 2]],   # 0-P1/2
       [-1, 1, Conv, [64, 3, 2]],      # 1-P2/4
       [-1, 1, C3, [64]],
       [-1, 1, RepVGGBlock, [128, 3, 2]],  # 3-P3/8
       [-1, 3, C3, [128]],
       [-1, 1, RepVGGBlock, [256, 3, 2]],  # 5-P4/16
       [-1, 3, C3, [256]],
       [-1, 1, RepVGGBlock, [512, 3, 2]],  # 7-P5/32
       [-1, 3, C3, [512]],
       [-1, 1, RepVGGBlock, [768, 3, 2]],  # 9-P6/64
       [-1, 3, C3, [768]],
       [-1, 1, SPPF, [768, 5]],  # 11
      ]

    # YOLOv5 v6.0 head
    head:
      [[-1, 1, Conv, [512, 1, 1]],
       [-1, 1, nn.Upsample, [None, 2, 'nearest']],
       [[-1, 8], 1, Concat, [1]],  # cat backbone P5
       [-1, 3, C3, [512, False]],  # 15

       [-1, 1, Conv, [256, 1, 1]],
       [-1, 1, nn.Upsample, [None, 2, 'nearest']],
       [[-1, 6], 1, Concat, [1]],  # cat backbone P4
       [-1, 3, C3, [256, False]],  # 19

       [-1, 1, Conv, [128, 1, 1]],
       [-1, 1, nn.Upsample, [None, 2, 'nearest']],
       [[-1, 4], 1, Concat, [1]],  # cat backbone P3
       [-1, 3, C3, [128, False]],  # 23 (P3/8-small)

       [-1, 1, Conv, [128, 3, 2]],
       [[-1, 20], 1, Concat, [1]],  # cat head P4
       [-1, 3, C3, [256, False]],  # 26 (P4/16-medium)

       [-1, 1, Conv, [256, 3, 2]],
       [[-1, 16], 1, Concat, [1]],  # cat head P5
       [-1, 3, C3, [512, False]],  # 29 (P5/32-large)

       [-1, 1, Conv, [512, 3, 2]],
       [[-1, 12], 1, Concat, [1]],  # cat head P6
       [-1, 3, C3, [768, False]],  # 32 (P6/64-xlarge)

       [[23, 26, 29, 32], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5, P6)
      ]

    opened by Hiwyl 10
  • A question about deploying with C++

    Hello, I would like to ask whether the C++ API can be used with trt or tensorrt acceleration, because the other parts of my project are currently in C++, and I already accelerate yolov5 with tensorrt in C++. Mainly, I see that your work combining pplcnet with yolov5s brings a big speed improvement, so can I run inference with OpenVINO's C++ API?

    opened by Hezhexi2002 10
  • Error when running OpenVINO inference on the xml file

    In the project root directory I ran python openvino/openvino.py -m v5lite-c.xml -i openvino/bike.jpg; I had already run /opt/intel/openvino_2021/bin/setupvars.sh beforehand. The error is:

    Traceback (most recent call last):
      File "openvino/openvino.py", line 23, in <module>
        import ngraph
      File "/opt/intel/openvino_2021/python/python3.8/ngraph/__init__.py", line 16, in <module>
        from ngraph.helpers import function_from_cnn
      File "/opt/intel/openvino_2021/python/python3.8/ngraph/helpers.py", line 7, in <module>
        from openvino.inference_engine import IENetwork
      File "/home/fraunhofer/Software/YOLOv5-Lite/openvino/openvino.py", line 25, in <module>
        from openvino.inference_engine import IENetwork, IECore
    ModuleNotFoundError: No module named 'openvino.inference_engine'; 'openvino' is not a package
    
    opened by ghost 8
  • Attention improved yolov5 performance

    Hi, I am interested in this project. Maybe I can contribute to this good job.

    | Model | mAP0.5:0.95 | mAP0.5 | Speed CPU b1 | Speed 2080ti b1 | Speed 2080ti b32 | params(M) | FLOPS |
    | ---------- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
    | 5s official | 37.2 | 56.0 | 98 | 6.4 (V100) | 0.9 (V100) | 7.2 | 16.5 |
    | 5s | 37.2 | 56.8 | 47.7 | 7.3 | 1.0 | 7.2 | 16.5 |
    | 5s backbone + C3GC | 37.7 | 57.3 | 63.6 | 8.4 | 1.4 | 7.5 | 16.8 |
    | 5s C3GC + FPN_conv | 40.5 | 59.6 | 92.7 | 12.0 | 1.5 | 11.0 | 22.7 |
    | 5m | 45.2 | 63.9 | 224 | 8.2 (V100) | 1.7 (V100) | 21.2 | 49.0 |

    opened by 315386775 7
  • MNN performance much less than NCNN

    Hi, has anybody met the problem that MNN performs much worse than NCNN, for example in confidence and mAP? I printed the outputs of NCNN and MNN for the same image, and they differ a lot: the MNN confidences are almost all lower than NCNN's, and the bboxes are worse as well. Another question: my input image size is 320 * 240, but the model input size is 320 * 256. Is that reasonable?
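
    Regarding the 320 * 240 vs 320 * 256 question above, a minimal sketch (assuming the common letterbox convention rather than this repo's exact preprocessing) of how each image side is rounded up to a multiple of the network stride, which is why a 240-pixel side becomes 256:

    import math

    def pad_to_stride(h, w, stride=32):
        # round each side up to the nearest multiple of the detector stride
        return math.ceil(h / stride) * stride, math.ceil(w / stride) * stride

    print(pad_to_stride(240, 320))  # -> (256, 320)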

    opened by bl6g6 7
  • The model file does not run under the master version of yolo

    python3 train.py --data data/data.yaml --cfg models/v5Lite-g.yaml --batch-size 8

    RuntimeError: Given groups=1, weight of size [3, 64, 2, 2], expected input[1, 32, 128, 128] to have 64 channels, but got 32 channels instead

    opened by deep-practice 7
  • Frame-rate testing problem

    Hello, comparing the reproduced shufflev2-yolov5 code against the official yolov5s code, why is yolov5s actually a bit faster when testing the same video in the same experimental environment (tested twice)?

    | shufflev2-yolov5 | yolov5s |
    | ------------ | ------------- |
    | 96.353s | 92.874s |
    | 95.978s | 90.501s |

    opened by luckywangchenxi 7
  • Question about the shufflenetv2 pretrained model

    Did you train with a shufflenetv2 pretrained model? When I try to use the 'shufflenetv2_x1.0' model from the official PyTorch site, the layer shapes do not line up. The left column is the official model, the right is yours:
    10 : stage2.0.branch1.2.weight torch.Size([58, 24, 1, 1]) __ model.1.branch1.1.running_var torch.Size([24])
    11 : stage2.0.branch1.3.weight torch.Size([58]) __ model.1.branch1.1.num_batches_tracked torch.Size([])
    12 : stage2.0.branch1.3.bias torch.Size([58]) __ model.1.branch1.2.weight torch.Size([60, 24, 1, 1])

    Your model has an extra layer 'model.1.branch1.1.num_batches_tracked' whose shape is an empty [], which is said online to be a quirk of newer PyTorch versions. The main problem, though, is that the official stage2.0.branch1.2 ([58, 24, 1, 1]) and your branch1.2.weight torch.Size([60, 24, 1, 1]) already mismatch in shape, and from there on more and more layer shapes differ.

    Did something go wrong when you built the network?

    opened by chenweifu2008 7
  • mAP is very low when training the v5Lite-g model

    I am working on the same idea as this project; I think embedding repvgg into yolov5 is a good idea. Based on the model structure provided in v5Lite-g.yaml, I rebuilt the structure with nn.Module and converted the provided weights into my rebuilt model; in my tests the two models give exactly the same detection results, so I assume that part of my work is correct. However, when I train with my rebuilt structure, the resulting mAP (Terminal screenshot) is much lower than the mAP trained with lite-g in this project (log screenshot), even though I used exactly the same training configuration. image image The YOLOv5-Lite project is developed on top of v5 version 5.0. Compared with the ultralytics code, does this project add any extra operations or methods when testing the lite-g model?

    opened by Materx 6
  • unexpected result

    Hello

    Sorry if this is more a question than an issue; I'm not sure if I'm doing something wrong.

    I trained a new model from the v5lite-s weights with only the "person" class, using the COCO + VOC datasets, the 1k background images as negative samples, and a 320 input. I did not change any hyperparameters and I am using the ncnn framework.

    I get: mAP 0.5:0.95 of 0.498, mAP 0.5 of 0.795, Precision 0.84, Recall 0.68.

    I was expecting a better result than the provided model trained on all COCO classes... but as you can see there are at least two persons not detected.

    Maybe it is a problem in the model conversion?

    Thanks

    Standard coco model result-coco

    New coco_voc model result-coco-voc

    opened by natxopedreira 6
  • TypeError: argument of type 'int' is not iterable

    On both Linux and Windows the same error occurs:

    Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.45, device='', exist_ok=False, img_size=640, iou_thres=0.5, name='exp', nosave=False, project='runs/detect', save_conf=False, save_txt=False, source='0', update=False, view_img=False, weights='weights/v5lite-s.pt')
    YOLOv5  v1.4-51-gca7ed7c torch 1.13.0+cpu CPU
    
    Fusing layers...
    Model Summary: 230 layers, 1640661 parameters, 0 gradients
    1/1: 0... Traceback (most recent call last):
      File "detect.py", line 178, in <module>
        detect()
      File "detect.py", line 51, in detect
        dataset = LoadStreams(source, img_size=imgsz, stride=stride)
      File "D:\er_code\yolov5-lite\utils\datasets.py", line 279, in __init__
        if 'youtube.com/' in url or 'youtu.be/' in url:  # if source is YouTube video
    TypeError: argument of type 'int' is not iterable
    

    screenshot

    Another question: is it possible for me to load the model via hub or something else?

    opened by danyow-cheung 0
  • Axera's AX620A now supports YOLOv5-Lite

    repo

    • https://github.com/AXERA-TECH/ax-samples

    source code

    • https://github.com/AXERA-TECH/ax-samples/blob/main/examples/ax_yolov5_lite_steps.cc

    model

    • https://github.com/AXERA-TECH/ax-models/blob/main/ax620/v5Lite-g-sim-640.joint

    result on AXera-Pi

    • https://github.com/AXERA-TECH/ax-samples/tree/main/examples#yolov5-lite
    root@AXERA:~/samples# ./ax_yolov5_lite -i cengiz-sari-X4spr8Kuwxc-unsplash.jpg -m ./models/v5Lite-g-sim-640.joint
    --------------------------------------
    model file : ./models/v5Lite-g-sim-640.joint
    image file : cengiz-sari-X4spr8Kuwxc-unsplash.jpg
    img_h, img_w : 640 640
    [AX_SYS_LOG] AX_SYS_Log2ConsoleThread_Start
    Run-Joint Runtime version: 0.5.10
    --------------------------------------
    [INFO]: Virtual npu mode is 1_1
    
    Tools version: 0.6.1.20
    07305a6
    run over: output len 3
    --------------------------------------
    Create handle took 492.77 ms (neu 34.07 ms, axe 0.00 ms, overhead 458.70 ms)
    --------------------------------------
    Repeat 10 times, avg time 22.56 ms, max_time 22.97 ms, min_time 22.48 ms
    --------------------------------------
    detection num: 18
     0:  94%, [1866, 1142, 2485, 2806], person
     0:  92%, [2417, 1240, 2971, 2807], person
     0:  89%, [1356, 1234, 1762, 2432], person
     2:  88%, [2827, 1334, 3797, 2230], car
     2:  85%, [3385, 1416, 4031, 2852], car
     0:  84%, [ 895, 1276, 1281, 2424], person
     0:  78%, [ 747, 1278,  926, 1729], person
     0:  77%, [  25, 1254,  213, 1809], person
     0:  73%, [ 419, 1325,  585, 1780], person
     0:  71%, [ 247, 1316,  423, 1801], person
    28:  64%, [ 729, 1812,  998, 2319], suitcase
     0:  61%, [ 610, 1421,  744, 1729], person
     2:  53%, [3808, 1353, 4031, 1502], car
     2:  50%, [2782, 1353, 2954, 1519], car
     0:  42%, [1167, 1204, 1325, 1572], person
     0:  39%, [1318, 1261, 1459, 1632], person
    12:  38%, [1861, 1370, 1949, 1530], parking meter
     0:  35%, [ 171, 1305,  284, 1788], person
    
    opened by BUG1989 1
  • The docs say the 4B can run at 10+ FPS (around 80 ms), but running on a CM4 I only get about 2 FPS. Where is the problem?

    Here is my printed output:

    pi@raspberrypi:~/YOLOv5-Lite $ python detect.py --source 0 --img-size 320
    /usr/local/lib/python3.9/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: warn(f"Failed to load image Python extension: {e}")
    Namespace(weights='weights/v5lite-s.pt', source='0', img_size=320, conf_thres=0.45, iou_thres=0.5, device='', view_img=False, save_txt=False, save_conf=False, nosave=False, classes=None, agnostic_nms=False, augment=False, update=False, project='runs/detect', name='exp', exist_ok=False)
    YOLOv5 🚀 v1.4-46-gcdf42dd torch 1.11.0 CPU

    Fusing layers...
    /usr/local/lib/python3.9/dist-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /root/pytorch/aten/src/ATen/native/TensorShape.cpp:2227.) return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
    Model Summary: 230 layers, 1640661 parameters, 0 gradients, 3.9 GFLOPS
    1/1: 0... success (320x240 at 30.00 FPS).

    0: 256x320 Done. (0.336s)
    0: 256x320 Done. (0.542s)
    0: 256x320 Done. (0.508s)
    0: 256x320 Done. (0.561s)
    0: 256x320 Done. (0.494s)
    0: 256x320 Done. (0.601s)

    I used the v5lite-s model and changed img-size to 320. Is there anything else I have not set correctly? Why is the frame rate so different?

    opened by jd3096-mpy 2
Releases(v1.4)
  • v1.4(Mar 5, 2022)

    • update export.py to extract v5lite onnx model with concat head. @ppogg
    • add tensorrt inference sdk, thanks to @ChaucerG
    • add onnxruntime inference sdk, thanks to @hpc203
    • add gcnet model, thanks to @315386775
    • update yolo.py @ChaucerG @Alexsdfdfs @315386775
    • update model.py @ppogg @Alexsdfdfs @315386775

    Now YOLOv5-Lite supports android, ncnn, mnn, tnn, onnxruntime, tensorrt, openvino, and tflite. Maybe the repo will support more in the future~ Thanks to all the contributors of YOLOv5-Lite!
    Source code(tar.gz)
    Source code(zip)
    YOLOv5-Lite-1.4.zip(3.49 MB)
  • v1.3(Oct 14, 2021)

  • v1.2(Oct 14, 2021)

  • v1.1(Aug 25, 2021)

    • Remove some redundant code
    • Add the example of Android development
    • Release the first version of Android apk
    • Add lighter baseline
    • Add eval.py
    • Update baseline
    # evaluate in 320×320:
    Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.208
     Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.362
     Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.206
     Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.049
     Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.197
     Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.373
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.216
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.339
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.368
     Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.122
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.403
     Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.597
    
    # evaluate in 416×416:
    Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.244
     Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.413
     Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.246
     Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.076
     Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.244
     Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.401
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.238
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.380
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.412
     Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.181
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.448
     Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.626
    
    # evaluate in 640×640:
     Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.271
     Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.457
     Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.274
     Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.125
     Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.297
     Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.364
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.254
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.422
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.460
     Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.272
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.497
     Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.616
    
    Source code(tar.gz)
    Source code(zip)
    YOLOv5-Lite.zip(79.21 MB)
  • v1.0(Aug 24, 2021)

    About shufflev2-yolov5: lighter, faster and easier to deploy. Evolved from yolov5, the model size is only 1.7M (int8) and 3.3M (fp16). It can reach 10+ FPS on the Raspberry Pi 4B when the input size is 320×320~

    Source code(tar.gz)
    Source code(zip)
Owner
pogg
Hello everyone, I'm Pingtouge (平头哥). I will record very interesting experiments here.