A tf2-keras implementation of YOLOv5

Overview

YOLOv5 in TensorFlow 2.x / Keras

Model testing

  • Trained on COCO2017 (val 5k)

  • Detection results

  • Precision / recall

Requirements

pip3 install -r requirements.txt

Getting started

  1. Train
python3 train.py
  2. Launch TensorBoard
tensorboard --host 0.0.0.0 --logdir ./logs/ --port 8053 --samples_per_plugin=images=40
  3. View the dashboard at
http://127.0.0.1:8053
  4. Test: edit input_image and model_path in detect.py (a hedged sketch of these variables follows this list), then run
python3 detect.py
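
A minimal sketch of the two settings step 4 refers to; the variable names come from the step above, but their exact form, location, and default values inside detect.py are assumptions:

    # Hypothetical example values near the top of detect.py; adjust both paths to your setup.
    input_image = "./data/images/demo.jpg"   # image to run detection on (placeholder path)
    model_path = "./logs/yolov5s-best.h5"    # trained Keras weights to load (placeholder path)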

Training on your own data

  1. Label your data with labelme
  2. Open the data/labelme2coco.py script and edit the following:
input_dir = 'the directory where labelme saved the JSON annotation files'
output_dir = 'the directory to write the COCO-format output to; an empty directory is recommended'
labels = "a plain-text file listing every class name used during labeling, one class per line, no quotes needed"
  3. Run the data/labelme2coco.py script; it generates the corresponding JSON file and images in output_dir
  4. Edit coco_annotation_file and num_class in train.py. Note that class names can be obtained via CoCoDataGenrator(*).coco.cats[label_id]['name']; because category IDs in COCO are not contiguous, looking up coco.cats by positional array index may return the wrong class (see the sketch after this list).
  5. Start training: python3 train.py
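
A minimal sketch (referred to in step 4) for inspecting the generated annotation file before filling in num_class; it uses pycocotools directly rather than the repo's CoCoDataGenrator, and the annotation path is a placeholder:

    from pycocotools.coco import COCO

    # Path to the JSON produced by data/labelme2coco.py (placeholder).
    coco = COCO("output_dir/annotations.json")

    # coco.cats is a dict keyed by category id; ids are not guaranteed to be
    # contiguous, so always look classes up by id, never by list position.
    for cat_id, cat in sorted(coco.cats.items()):
        print(cat_id, cat['name'])

    print("num_class =", len(coco.cats))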
Comments
  • A question about how the class loss is computed

    Hi, I don't quite understand this part of the loss: https://github.com/yyccR/yolov5_in_tf2_keras/blob/3e6645cbf94d2a1e11c33663e80113daa4590321/loss.py#L142-L152 Should the last two entries of targets be the confidence (1) and the index of the best-matching anchor? https://github.com/yyccR/yolov5_in_tf2_keras/blob/3e6645cbf94d2a1e11c33663e80113daa4590321/loss.py#L288-L293 Then the true_obj and true_cls split out here should correspond to that confidence (1) and best-anchor index. So isn't the class loss at https://github.com/yyccR/yolov5_in_tf2_keras/blob/3e6645cbf94d2a1e11c33663e80113daa4590321/loss.py#L356 computed against the best-anchor index, and how does obj_mask come into it?

    opened by whalefa1I 5
  • NaN during training with sparse_categorical_crossentropy

    Some data produces NaN at this line: https://github.com/yyccR/yolov5_in_tf2_keras/blob/033a1156c1481f4258bf24a4a8215af39682da94/loss.py#L357 I checked the inputs with is_nan and they are all normal, and replacing sparse_categorical_crossentropy with binary_crossentropy fixes it. Is there a difference between the two in this computation, and can they be swapped here?

    opened by whalefa1I 3
  • The labelme2coco processing logic is wrong

    When training on my own dataset with your code, I found that labelme2coco.py seems to be missing the handling for shape_type == "rectangle", so the annotations field in the JSON file I end up generating is empty. Below is the code from lines 100 to 124 of labelme2coco.py:

        if shape_type == "polygon":
            mask = labelme.utils.shape_to_mask(
                img.shape[:2], points, shape_type
            )
            # cv2.imshow("", np.array(mask, dtype=np.uint8) * 255)
            # cv2.waitKey(0)

            if group_id is None:
                group_id = uuid.uuid1()

            instance = (label, group_id)
            # print(instance)

            if instance in masks:
                masks[instance] = masks[instance] | mask
            else:
                masks[instance] = mask
            # print(masks[instance].shape)

            if shape_type == "rectangle":
                (x1, y1), (x2, y2) = points
                x1, x2 = sorted([x1, x2])
                y1, y2 = sorted([y1, y2])
                points = [x1, y1, x2, y1, x2, y2, x1, y2]
            if shape_type == "circle":
                ....

    The code can never reach shape_type == "rectangle" or shape_type == "circle", because those checks are nested inside the shape_type == "polygon" branch.

    opened by aijialin 2
  • layers.py

    According to ultralytics/yolov5:

    https://github.com/ultralytics/yolov5/blob/63ddb6f0d06f6309aa42bababd08c859197a27af/models/common.py#L70-L73

    shouldn't this piece of code:

    https://github.com/yyccR/yolov5_in_tf2_keras/blob/46298d7c98073750176d64896ee9dc01b55c5aca/layers.py#L127-L132

    be rewritten as:

        def call(self, inputs, *args, **kwargs):
            # attention block with residual connection
            y = self.multiheadAttention(self.q(inputs), self.v(inputs), self.k(inputs)) + inputs
            # feed-forward block, then add the attention output back as a residual
            x = self.fc1(y)
            x = self.fc2(x)
            x = x + y
            return x
    
    opened by AugustusHsu 1
  • What is the mAP on COCO17 val?

    Hi @yyccR, thanks for your repo. I want to know whether you can reach the same mAP as the original YOLOv5 (train on COCO17 train and test on COCO17 val). And do you have plans to release some pretrained checkpoints?

    opened by Tyler-D 1
Releases (v1.1)
  • v1.1 (Jun 24, 2022)

    v1.1 summary:

    • [1]. Set training=True in the __call__ of tf.keras.layers.BatchNormalization
    • [2]. Added TFLite/ONNX export and validation; see /data/h5_to_tflite.py and /data/h5_to_onnx.py (a generic conversion sketch follows this list)
    • [3]. Changed how batch_size is handled in the backbone network; it must be specified for both training and testing to avoid FlexOps issues when exporting to TFLite
    • [4]. YoloHead no longer applies softmax to the class scores and uses sigmoid directly, supporting multi-class output
    • [5]. The yolov5s-best.h5 in this release contains retrained weights on the Kaggle cat/dog face dataset, with an 8:2 train:test split; val accuracy is roughly as follows:

    | class | mAP@0.5 | mAP@0.5:0.95 | precision | recall |
    | :-: | :-: | :-: | :-: | :-: |
    | cat | 0.962680 | 0.672483 | 0.721003 | 0.958333 |
    | dog | 0.934285 | 0.546893 | 0.770701 | 0.923664 |
    | total | 0.948482 | 0.609688 | 0.745852 | 0.940999 |

    • [6]. The yolov5s-best.tflite in this release is the quantized TFLite model converted from the above yolov5s-best.h5; Netron is recommended for inspecting its inputs and outputs
    • [7]. The yolov5s-best.onnx in this release is the ONNX model converted from the above yolov5s-best.h5; Netron is recommended for inspecting its inputs and outputs
    • [8]. Android model test results (screenshot not reproduced here)
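
    For item [2], a minimal generic sketch of a Keras-to-TFLite conversion; it is not the repo's /data/h5_to_tflite.py, and the file names (and any custom_objects needed to load the model) are assumptions:

        import tensorflow as tf

        # Load the trained Keras model (custom layers may require custom_objects).
        model = tf.keras.models.load_model("yolov5s-best.h5", compile=False)

        # Convert to TFLite with default optimizations (dynamic-range quantization).
        converter = tf.lite.TFLiteConverter.from_keras_model(model)
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        tflite_model = converter.convert()

        with open("yolov5s-best.tflite", "wb") as f:
            f.write(tflite_model)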

    That's it. Keep it up! 💪🏻💪🏻💪🏻

    Source code (tar.gz)
    Source code (zip)
    yolov5s-best.h5 (27.51 MB)
    yolov5s-best.onnx (27.25 MB)
    yolov5s-best.tflite (6.95 MB)
  • v1.0 (Jun 21, 2022)

    v1.0 summary:

    • [1]. The overall model structure is consistent with ultralytics/yolov5 v6.0
    • [2]. The swish activation in the Conv layers is replaced with ReLU
    • [3]. The data augmentation is overall consistent with ultralytics/yolov5
    • [4]. The training dataset used in the README is the public Kaggle cat/dog face detection dataset, which has been added to the release assets
    • [5]. Why not train on the COCO dataset? Lack of resources: a full COCO run takes a long time and the server always has jobs running, so there has been no chance to run it - . -
    • [6]. The yolov5s-best.h5 in this release contains the weights trained on the above Kaggle cat/dog face dataset, with an 8:2 train:test split; val accuracy is roughly as follows:

    | class | mAP@0.5 | mAP@0.5:0.95 | precision | recall |
    | :-: | :-: | :-: | :-: | :-: |
    | cat | 0.905156 | 0.584378 | 0.682848 | 0.886555 |
    | dog | 0.940633 | 0.513005 | 0.724036 | 0.934866 |
    | total | 0.922895 | 0.548692 | 0.703442 | 0.910710 |

    That's it. Keep it up! 💪🏻💪🏻💪🏻

    Source code (tar.gz)
    Source code (zip)
    JPEGImages.zip (260.17 MB)
    yolov5s-best.h5 (27.51 MB)
Owner
yangcheng