🏅 Top 5% in the 2nd Research & Development Special Zone AI Competition (AI SPARK Challenge)

Overview

AI_SPARK_CHALLENG_Object_Detection

The 2nd Research & Development Special Zone AI Competition: AI SPARK Challenge

🏅 Top 5% in mAP(0.75) (13th out of 443 participants, mAP: 0.98116)

Competition Description

  • Livestock object detection (pig, cow) in an edge environment.
  • The goal is to develop a lightweight model that can run on edge devices usable in real-world settings (e.g., a Jetson Nano board).
  • The weight file size is limited to 100MB.
  • Submissions whose weight file is 100MB or smaller are ranked by mAP at IoU 0.75.
  • The entire pipeline is run and reproduced in a Colab Pro environment.

Hardware

  • Colab Pro (P100 or T4)

Data

  • Livestock behavior video dataset provided by AI Hub (download link).
  • [원천]소_bbox.zip: cow image files
  • [라벨]소_bbox.zip: cow annotation files
  • [원천]돼지_bbox.zip: pig image files
  • [라벨]돼지_bbox.zip: pig annotation files
  • Note that the "categories" entries in the annotation files and the "category_id" of each annotation are unrelated to the cow/pig classes, so using them as class labels will produce wrong results (see the sketch below).
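
A minimal sketch of reading one annotation file while ignoring category_id, assuming the usual COCO-style layout ("images" and "annotations" arrays); the paths and the class-id mapping are illustrative assumptions, not the project's actual code:

import json

CLASS_IDS = {"cow": 0, "pig": 1}  # assumed class mapping for this sketch

def load_boxes(json_path, species):
    """Return {file_name: [(class_id, x, y, w, h), ...]} for one annotation file."""
    with open(json_path, encoding="utf-8") as f:
        coco = json.load(f)
    id_to_name = {img["id"]: img["file_name"] for img in coco["images"]}
    boxes = {name: [] for name in id_to_name.values()}
    for ann in coco["annotations"]:
        x, y, w, h = ann["bbox"]
        # the class comes from which archive the file belongs to, NOT from ann["category_id"]
        boxes[id_to_name[ann["image_id"]]].append((CLASS_IDS[species], x, y, w, h))
    return boxes

# cow_boxes = load_boxes("data/cow_annotations/example.json", "cow")  # hypothetical path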

Code

+- data (.gitignore) => only the zip files are placed here initially (downloaded from AI Hub); the other archives are generated by the notebooks in the EDA folder
|   +- [라벨]돼지_bbox.zip
|   +- [라벨]소_bbox.zip
|   +- [원천]돼지_bbox.zip
|   +- [원천]소_bbox.zip
|   +- Train_Dataset.tar (generated by EDA - Make_Dataset_Multilabel.ipynb)
|   +- Valid_Dataset.tar (generated by EDA - Make_Dataset_Multilabel.ipynb)
|   +- Train_Dataset_Full.tar (generated by EDA - Make_Dataset_Full.ipynb)
|   +- Train_Dataset_mini.tar (generated by EDA - Make_Dataset_Mini.ipynb)
|   +- Valid_Dataset_mini.tar (generated by EDA - Make_Dataset_Mini.ipynb)
|   +- plus_image.tar (generated by EDA - Data_Augmentation.ipynb)
|   +- plus_lable.tar (generated by EDA - Data_Augmentation.ipynb)
+- data_test (.gitignore) => test data used at inference time (downloaded from AI Hub)
|   +- [원천]돼지_bbox.zip
|   +- [원천]소_bbox.zip
+- trained_model (.gitignore) => trained weights
|   +- m6_pretrained_full_b10_e20_hyp_tuning_v1_linear.pt
+- EDA
|   +- Data_Augmentation.ipynb (builds the Plus/silver dataset)
|   +- Data_Checking.ipynb (error analysis)
|   +- EDA.ipynb
|   +- Make_Dataset_Multilabel.ipynb (builds the Train / Valid datasets)
|   +- Make_Dataset_Full.ipynb (builds the Train + Valid dataset)
|   +- Make_Dataset_Mini.ipynb (builds the Train mini / Valid mini datasets)
+- hyp
|   +- experiment_hyp_v1.yaml (final hyperparameters)
+- exp
|   +- hyp_train.py (modified as shown below to run the individual experiments)
|   +- YOLOv5_hp_search_lr_momentum.ipynb (hyperparameter tuning with the mini dataset)
+- train
|   +- YOLOv5_ExpandDataset_hp_tune.ipynb (training with the Plus dataset)
|   +- YOLOv5_FullDataset_hp_tune.ipynb (produces the final model)
|   +- YOLOv5_MultiLabelSplit.ipynb (initial training code)
+- YOLOv5_inference.ipynb
+- answer.csv (final submission csv)

Core Strategy

  • Uses the YOLOv5m6 pretrained model (68.3MB)
  • MultiLabelStratified KFold (Box count, Class, Box Ratio, Box Size)
  • HyperParameter Tuning (with GA Algorithm)
  • Data Augmentation with Error Analysis
  • Inference Tuning (IoU Threshold, Confidence Threshold)

EDA

Details

Cow Dataset vs Pig Dataset

              PIG    COW
Image count   4303   12152

  • The data split between the classes is roughly Cow : Pig = 3 : 1.
  • The train/valid split is therefore done so that both classes stay evenly distributed.

Image size distribution

Image size   Pig    Cow
1920x1080    3131   12152
1280x960     1172   0

  • Most images are 1920x1080.
  • Some pig images are 1280x960.
  • When converting box coordinates, the actual size of each image must be taken into account (see the sketch below).
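
A minimal sketch of that conversion, turning a COCO-style pixel box [x, y, w, h] into a normalized YOLO label line using each image's own width and height:

def to_yolo_line(class_id, box, img_w, img_h):
    x, y, w, h = box
    xc = (x + w / 2) / img_w   # normalized box center
    yc = (y + h / 2) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w / img_w:.6f} {h / img_h:.6f}"

print(to_yolo_line(0, [100, 200, 300, 400], 1920, 1080))
print(to_yolo_line(1, [100, 200, 300, 400], 1280, 960))  # same pixel box, different normalization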

Distribution by box count

(figure: distribution of box counts per image)

  • The number of boxes per image is distributed quite differently in the pig and cow data.
  • The train/valid split is therefore stratified so that images are evenly distributed by box count.

Distribution by box ratio

(figure: distribution of box ratios)

  • Box ratios are similar in the pig and cow data.
  • The train/valid split is also stratified so that images are evenly distributed by box ratio.

Distribution by box size

(figure: distribution of box sizes)

  • In both the pig and cow data, small bounding boxes (area 1,000 to 10,000) are relatively rare.
  • Whether to drop these small boxes is a design choice; they are kept in this project.

A closer look at small bounding boxes

(figure: distribution of small bounding boxes)

Boxes with area ≤ 4000    PIG     COW
Count                     137     71
Fraction of all boxes     0.003   0.0018

  • There are 137 such boxes in the pig data and 71 in the cow data.
  • As a fraction of all boxes this is 0.003 (0.3%) for pig and 0.0018 (0.18%) for cow.
  • Whether to drop boxes with area ≤ 4000 is again a design choice; they are kept here (an optional filter is sketched below).
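
For reference, had the small boxes been dropped, a filter as simple as the sketch below would have been enough (not applied in this project; the (class_id, x, y, w, h) tuple format is the same hypothetical one used in the earlier sketches):

MIN_AREA = 4000  # pixel-area cutoff discussed above

def filter_small_boxes(boxes):
    """Keep only boxes whose pixel area exceeds MIN_AREA; boxes are (class_id, x, y, w, h)."""
    return [b for b in boxes if b[3] * b[4] > MIN_AREA]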

Images without boxes

Images without boxes   PIG   COW
Count                  0     3

  • Three such images exist in the cow data.
  • They were judged to be white-noise cases and were not removed.

Model

  • Uses a YOLOv5m6 pretrained model.
  • Only YOLOv5-family pretrained models under 100MB were considered (a size check is sketched below).

                YOLOv5l Pretrained   YOLOv5m6 w/o Pretrained   YOLOv5m6 Pretrained
mAP@0.5         0.9806               0.9756                    0.9838
mAP@0.5:0.95    0.9002               0.8695                    0.9156

  • The YOLOv5m6 pretrained model was selected as the final model.
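
Because the 100MB limit applies to the submitted weight file, a quick size check can be run on any candidate checkpoint. The path below is the final weight file from the repository tree above; the yolov5m6.pt checkpoint itself is about 68.3MB:

import os

weights = "trained_model/m6_pretrained_full_b10_e20_hyp_tuning_v1_linear.pt"
size_mb = os.path.getsize(weights) / 1e6
assert size_mb <= 100, f"weight file exceeds the competition limit: {size_mb:.1f} MB"
print(f"{weights}: {size_mb:.1f} MB (limit: 100 MB)")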

MultiLabelStratified KFold

  • The pig and cow subsets differ substantially in image count.
  • Images also differ in how many boxes they contain.
  • The train/valid split is stratified on these two labels (a split sketch follows the table).

         Cow-Many   Cow-Medium   Cow-Little   Pig-Many   Pig-Medium   Pig-Little
Train    2739       1097         5886         2190       827          425
Valid    674        259          1497         559        221          81
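
One way to realize such a split is to stratify on a combined species and box-count-bin label. The sketch below uses scikit-learn with assumed bin edges; the actual split is produced in EDA - Make_Dataset_Multilabel.ipynb and may differ in detail:

from sklearn.model_selection import train_test_split

def count_bin(n_boxes):
    if n_boxes >= 15:   # assumed cutoff for "Many"
        return "Many"
    if n_boxes >= 8:    # assumed cutoff for "Medium"
        return "Medium"
    return "Little"

def split_images(images, valid_ratio=0.2, seed=42):
    """images: list of (file_name, species, n_boxes) rows collected during EDA."""
    strat = [f"{species}-{count_bin(n)}" for _, species, n in images]
    return train_test_split(images, test_size=valid_ratio, stratify=strat, random_state=seed)

# train_imgs, valid_imgs = split_images(images)  # `images` is built by the EDA notebooks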

HyperParameter Tuning

  • Hyperparameter tuning with the genetic algorithm that YOLOv5 provides out of the box (its --evolve mode).
  • Because of the Colab Pro runtime limits, a mini dataset (50% of the data) was built and the hyperparameter search was split into separate individual runs.

Core Code Modifications

Details
meta = {'lr0': (1, 1e-5, 1e-1),  # initial learning rate (SGD=1E-2, Adam=1E-3)
        'lrf': (1, 0.01, 1.0),  # final OneCycleLR learning rate (lr0 * lrf)
        'momentum': (0.3, 0.6, 0.98),  # SGD momentum/Adam beta1
        }

        with open(opt.hyp, errors='ignore') as f:
            hyp = yaml.safe_load(f)  # load hyps dict
            if 'anchors' not in hyp:  # anchors commented in hyp.yaml
                hyp['anchors'] = 3

        # keep only the hyperparameters to be tuned in new_hyp
        new_hyp = {}
        for k, v in hyp.items():
            if k in meta.keys():
                new_hyp[k] = v
        
        opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir)  # only val/save final epoch
        # ei = [isinstance(x, (int, float)) for x in hyp.values()]  # evolvable indices
        evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv'
        if opt.bucket:
            os.system(f'gsutil cp gs://{opt.bucket}/evolve.csv {save_dir}')  # download evolve.csv if exists

        for _ in range(opt.evolve):  # generations to evolve
            if evolve_csv.exists():  # if evolve.csv exists: select best hyps and mutate
                # Select parent(s)
                parent = 'single'  # parent selection method: 'single' or 'weighted'
                x = np.loadtxt(evolve_csv, ndmin=2, delimiter=',', skiprows=1)
                n = min(5, len(x))  # number of previous results to consider
                x = x[np.argsort(-fitness(x))][:n]  # top n mutations
                w = fitness(x) - fitness(x).min() + 1E-6  # weights (sum > 0)
                if parent == 'single' or len(x) == 1:
                    # x = x[random.randint(0, n - 1)]  # random selection
                    x = x[random.choices(range(n), weights=w)[0]]  # weighted selection
                elif parent == 'weighted':
                    x = (x * w.reshape(n, 1)).sum(0) / w.sum()  # weighted combination

                # Mutate
                mp, s = 0.8, 0.2  # mutation probability, sigma
                npr = np.random
                npr.seed(int(time.time()))
                # read the meta gains only for the hyperparameters kept in new_hyp
                g = np.array([meta[k][0] for k in new_hyp.keys()])  # gains 0-1
                ng = len(meta)
                v = np.ones(ng)
                while all(v == 1):  # mutate until a change occurs (prevent duplicates)
                    v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
                for i, k in enumerate(hyp.keys()):  # plt.hist(v.ravel(), 300)
                    if k in new_hyp.keys():  # update only the hyperparameters present in new_hyp
                        hyp[k] = float(x[i + 7] * v[i])  # mutate

            # Constrain to limits
            for k, v in meta.items():
                hyp[k] = max(hyp[k], v[1])  # lower limit
                hyp[k] = min(hyp[k], v[2])  # upper limit
                hyp[k] = round(hyp[k], 5)  # significant digits

            # Train mutation
            results = train(hyp.copy(), opt, device, callbacks)
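
The mutation step above can also be seen in isolation in the following self-contained toy version, restricted to the three tuned parameters. It mirrors the gain, probability, and clipping logic of the excerpt but is only an illustration, not the code that was actually run:

import numpy as np

meta = {'lr0': (1, 1e-5, 1e-1), 'lrf': (1, 0.01, 1.0), 'momentum': (0.3, 0.6, 0.98)}
hyp = {'lr0': 0.01, 'lrf': 0.1, 'momentum': 0.937}  # illustrative starting values

mp, s = 0.8, 0.2                         # mutation probability and sigma, as in the excerpt
npr = np.random.default_rng(0)
g = np.array([meta[k][0] for k in hyp])  # per-parameter gains
v = np.ones(len(hyp))
while (v == 1).all():                    # resample until at least one value actually changes
    v = (g * (npr.random(len(hyp)) < mp) * npr.standard_normal(len(hyp))
         * npr.random() * s + 1).clip(0.3, 3.0)

for (k, (_gain, lo, hi)), vi in zip(meta.items(), v):
    hyp[k] = round(min(max(hyp[k] * vi, lo), hi), 5)  # mutate, then constrain to limits
print(hyp)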

Default HyperParameter vs Tuned HyperParameter

  • The obj, box, and cls loss gains were the hyperparameters with the largest effect on performance. (NOTE: because of training-environment constraints, the number of epochs differs between the comparison tables below, so absolute scores differ; use them for relative comparison only.)

            Default   Tuned
obj_loss    0.023     0.003
box_loss    0.0095    0.0038
cls_loss    0.00003   0.00001

              Default   Tuned
mAP@0.5       0.9826    0.9824
mAP@0.5:0.95  0.8924    0.9016

  • Optimizer comparison

              Adam     AdamW    SGD
mAP@0.5       0.9635   0.9804   0.9848
mAP@0.5:0.95  0.8302   0.8994   0.914

Final HyperParameters

Hyperparameter    Value
optimizer         SGD
lr_scheduler      linear
lr0               0.009
lrf               0.08
momentum          0.94
weight_decay      0.001
warmup_epochs     0.11
warmup_momentum   0.77
warmup_bias_lr    0.0004
box               0.02
cls               0.2
cls_pw            0.95
obj               0.2
obj_pw            0.5
iou_t             0.2
anchor_t          4.0
fl_gamma          0.0
hsv_h             0.009
hsv_s             0.1
hsv_v             0.9
degrees           0.0
translate         0.1
scale             0.5
shear             0.0
perspective       0.0
flipud            0.0095
fliplr            0.1
mosaic            1.0
mixup             0.0
copy_paste        0.0
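
For reference, these values map directly onto the standard YOLOv5 hyperparameter YAML keys, so hyp/experiment_hyp_v1.yaml can be reproduced roughly as below (a sketch: whether the committed file was generated exactly this way is an assumption, and the optimizer and linear LR schedule are selected through train.py flags rather than YAML keys):

import yaml

tuned_hyp = {
    'lr0': 0.009, 'lrf': 0.08, 'momentum': 0.94, 'weight_decay': 0.001,
    'warmup_epochs': 0.11, 'warmup_momentum': 0.77, 'warmup_bias_lr': 0.0004,
    'box': 0.02, 'cls': 0.2, 'cls_pw': 0.95, 'obj': 0.2, 'obj_pw': 0.5,
    'iou_t': 0.2, 'anchor_t': 4.0, 'fl_gamma': 0.0,
    'hsv_h': 0.009, 'hsv_s': 0.1, 'hsv_v': 0.9,
    'degrees': 0.0, 'translate': 0.1, 'scale': 0.5, 'shear': 0.0, 'perspective': 0.0,
    'flipud': 0.0095, 'fliplr': 0.1, 'mosaic': 1.0, 'mixup': 0.0, 'copy_paste': 0.0,
}

with open('hyp/experiment_hyp_v1.yaml', 'w') as f:
    yaml.safe_dump(tuned_hyp, f, sort_keys=False)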

Error Analysis

Training Results

Data volume   Train   Valid
PIG           3442    881
COW           9722    2430

Prediction results   # labels   Precision   Recall   mAP@0.5   mAP@0.5:0.95
PIG                  3291       0.984       0.991    0.993     0.928
COW                  3291       0.929       0.911    0.974     0.889

  • As the tables show, there is considerably more cow data than pig data.
  • The YOLOv5 pretrained weights have also already seen cows through the COCO dataset.
  • Despite these two advantages, the model still has more difficulty detecting cows.

Box Counts and Plots

Box counts

(figure: bounding box counts)

Train - Bounding Box Plotting

(figure: train bounding boxes plotted on images)

Valid - Bounding Box Plotting

(figure: valid bounding boxes plotted on images)

Error Analysis Findings

  • Overall, the cow dataset has relatively few bounding boxes per image.
  • Plotting the images shows that the cow dataset is not labeled thoroughly.
    • This can inflate false positives: the model predicts a cow where no ground-truth label exists.
  • Based on this, a silver dataset is built and used for retraining:
    • Predict bounding boxes on the cow images with the already-trained model.
    • Use those predictions as additional training data.

Data Augmentation with Silver Dataset

  • Uses the YOLOv5m6 model trained on the full dataset (train + valid), i.e., the model already trained on the original data.
  • Detection was run on all 12,151 cow images (IoU threshold 0.7, confidence threshold 0.05).

Bounding box count visualization

(figure: predicted bounding box counts on the cow images)

  • Based on the visualization above, only images with four or more predicted boxes were kept, an arbitrary cutoff chosen by the author (see the sketch below).
  • In total, 6,628 cow images were added as a silver dataset.
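
A hedged sketch of this silver-labeling step; the image folder, label output path, and use of torch.hub are assumptions for illustration (the actual code lives in EDA - Data_Augmentation.ipynb):

import glob
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='trained_model/m6_pretrained_full_b10_e20_hyp_tuning_v1_linear.pt')
model.conf = 0.05  # confidence threshold used when generating silver labels
model.iou = 0.7    # NMS IoU threshold used when generating silver labels

for img_path in glob.glob('data/cow_images/*.jpg'):           # hypothetical image folder
    det = model(img_path).xywhn[0]                            # rows: xc, yc, w, h, conf, cls (normalized)
    if len(det) < 4:                                          # keep only images with 4+ predicted boxes
        continue
    with open(img_path.replace('.jpg', '.txt'), 'w') as f:    # YOLO-format silver label file
        for xc, yc, w, h, conf, cls in det.tolist():
            f.write(f"{int(cls)} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}\n")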

Results

Final Model

  • Dataset: trained on the combined train + valid data.
  • Model: YOLOv5m6 pretrained.
  • Hyperparameter tuning: see the table in the HyperParameter Tuning section above.
  • Inference tuning: IoU threshold 0.68, confidence threshold 0.001 (a threshold-sweep sketch follows the tables below).

Silver dataset comparison            mAP(0.75)
Final model (w/o silver dataset)     0.98116
Plus model (w/ silver dataset)       0.97965

Full vs split comparison   mAP@0.5   mAP@0.5:0.95
Full (train + valid)       0.9858    0.9271
Split (train only)         0.9845    0.9215
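
A hedged sketch of how the inference-threshold search can be run with YOLOv5's val.py; the dataset YAML path is an assumption and the grid is only illustrative:

import itertools
import subprocess

weights = 'trained_model/m6_pretrained_full_b10_e20_hyp_tuning_v1_linear.pt'
for conf, iou in itertools.product([0.001, 0.01], [0.6, 0.65, 0.68, 0.7]):
    subprocess.run([
        'python', 'val.py',
        '--weights', weights,
        '--data', 'data/livestock.yaml',   # hypothetical dataset definition
        '--conf-thres', str(conf),
        '--iou-thres', str(iou),
    ], check=True)
# the pair used for the final submission was confidence 0.001 and IoU 0.68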

Attempts That Fell Short

Knowledge Distillation

  • One-stage model to one-stage model distillation.
  • A stronger one-stage teacher was sought, but YOLOv5x6 reached only mAP@0.5 0.9821 / mAP@0.5:0.95 0.939, not a large improvement.
  • In other words, there was little to be gained from using it as a teacher model.

Retrospective

  • Pretrained model
    • What do the cow images in the COCO dataset actually look like?
    • Since the pig class (absent from COCO) was detected well anyway, might training from scratch for more epochs, without the pretrained weights, have given better results?
  • Silver dataset
    • Could optimizing the IoU and confidence thresholds used to build the silver dataset have improved performance?
    • If the test dataset itself is poorly labeled, could that alone make an improvement from the silver dataset impossible?
  • MultiLabelStratified split
    • What if the stratification also took the boxes' ratio and size into account?
    • Moreover, box attributes differ from box to box within a single image; how could such per-box attributes be folded into a multilabel split?
  • As a surer approach, would manually re-labeling the cow images in the original training dataset have improved performance?

Future Work

  • Study how to stratify the multilabel split by the ratio and size of each image's bounding boxes.
  • Add background images: adding images with nothing to detect is reported to reduce false positives.
  • Apply more advanced hyperparameter tuning methods (e.g., Bayesian optimization).
  • Check whether building a silver dataset from the training data and training on it additionally (train gold + train silver) improves performance.
  • Find out whether SGD beating AdamW for object detection is only an empirical observation or is backed by published research.
  • Try pruning and tensor decomposition.
  • For object detection knowledge distillation, look into two-stage-to-one-stage approaches.