YOLOv5 in Action: QR Code Detection

Posted by haoliuhust on 2021-10-02

1. Introduction

I previously wrote up using yolov5 for Pikachu detection as a way to get familiar with the framework, but that was really just a demo with little practical value. Later, a project needed to detect QR codes under poor imaging conditions. The traditional approach locates the three finder patterns of a QR code via corner detection, which easily fails when image quality is bad. A deep-learning detector is far more robust, and once the QR code has been located we can refocus or apply image enhancement to help the subsequent decoding step.
Environment setup is the same as in "YOLOv5 in Action: Pikachu Detection".
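As a quick reminder, a minimal setup looks roughly like this (see the Pikachu article for details):

git clone https://github.com/ultralytics/yolov5.git
cd yolov5
pip install -r requirements.txt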

2. QR Code Data

The first step, naturally, is preparing data. Quite a few QR code images can be found on the web, and after annotating them we have a first batch of data. Web images alone are not enough, though: their backgrounds do not necessarily match the actual deployment scene, and a model trained only on them will detect QR codes but also produce many false positives. So we also need to synthesize data, by cropping QR codes out and pasting them onto a variety of background images to augment the dataset.
The QR code images I collected can be downloaded at the end of this article; you can paste them onto your own backgrounds to generate more data. The labels follow the yolov5 format; see the Pikachu article for details, or the official repo: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data
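Each label file contains one line per box in the yolov5 format: class index followed by the normalized center coordinates and box size. A made-up example line for a single QR code:

0 0.512 0.431 0.210 0.198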
The data generation code, roughly:

import copy
import os
import random

import cv2
import numpy as np

# random_perspective is YOLOv5's augmentation helper (utils/datasets.py in older
# releases, utils/augmentations.py in newer ones); the borderValue keyword below
# assumes a locally modified copy that exposes the border fill color.
from utils.datasets import random_perspective


def synthetise_image(background_image, front_image, scale=0.1, degree=10, borderValue=(114, 114, 114)):
    # Paste a randomly rotated/scaled QR code (front_image) onto a background image.
    background_image_cp = copy.deepcopy(background_image)
    bg_h, bg_w = background_image_cp.shape[0:2]
    qr_h, qr_w = front_image.shape[0:2]
    # Warp the QR code; the label passed in/out is [class, x1, y1, x2, y2] in pixels.
    roate_image, rotate_label = random_perspective(front_image, np.array([0, 0, 0, qr_w, qr_h]).reshape((-1, 5)),
                                                   translate=0, scale=scale, degrees=degree, shear=0,
                                                   border=(qr_w // 2, qr_w // 2), borderValue=borderValue)

    crop_rotate = roate_image[rotate_label[0][2]:rotate_label[0][4], rotate_label[0][1]:rotate_label[0][3]]

    if bg_w<crop_rotate.shape[1] or bg_h<crop_rotate.shape[0]:
        return None,None

    random_x = random.randint(0, bg_w - crop_rotate.shape[1])
    random_y = random.randint(0, bg_h - crop_rotate.shape[0])

    if random_y + crop_rotate.shape[0]>bg_h or random_x + crop_rotate.shape[1]>bg_w:
        return None,None

    # Pixels that differ from the border fill color belong to the warped QR code.
    mask = (crop_rotate != np.array(list(borderValue)))
    mask = (mask[:, :, 0] | mask[:, :, 1] | mask[:, :, 2])
    mask_inv = (~mask)

    # Blend the QR code into the chosen region of the background.
    roi = background_image_cp[random_y:random_y + crop_rotate.shape[0], random_x:random_x + crop_rotate.shape[1]]
    roi_bg = cv2.bitwise_and(roi, roi, mask=mask_inv.astype(np.uint8))

    roi_fg = cv2.bitwise_and(crop_rotate, crop_rotate, mask=mask.astype(np.uint8))

    dst = cv2.add(roi_bg, roi_fg)
    roi[:, :] = dst[:, :]

    # Return the composited image and the pasted box as [x, y, w, h] in pixels.
    return background_image_cp, [random_x, random_y, crop_rotate.shape[1], crop_rotate.shape[0]]
    
...    

# inside a loop over background images and QR crops (full loop omitted)
background_image = cv2.imread(os.path.join(background_dir, background_lst[index]))
background_h, background_w = background_image.shape[0:2]

mixup_image, box = synthetise_image(background_image, qr_image, scale=0.2, degree=45)

# skip failed composites and QR codes that ended up too small
if mixup_image is None or (box[2] < 60 or box[3] < 60):
    continue

cnt += 1

# convert the [x, y, w, h] pixel box into the normalized yolov5 label:
# class, x_center, y_center, width, height (all relative to the image size)
center_x = box[0] + box[2] / 2
center_y = box[1] + box[3] / 2
label = [0, center_x / background_w, center_y / background_h, box[2] / background_w, box[3] / background_h]

#save image
cv2.imwrite(os.path.join(save_dir,"{:0>8d}_sync_{}.jpg".format(thread_start_cnt,sub_dir)),mixup_image)
#save label
with open(os.path.join(save_label,"{:0>8d}_sync_{}.txt".format(thread_start_cnt,sub_dir)),'w') as f:
    f.write("{0} {1} {2} {3} {4}\n".format(label[0],label[1],label[2],label[3],label[4]))

3. Training Configuration

3.1 Dataset Configuration

Create qrcode_dataset.yml and set the dataset paths:

train: ./data/qrcode/images/train/  # train
val: ./data/qrcode/images/val/  # val

# number of classes
nc: 1

# class names
names: ['qrcode']
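yolov5 finds the label files by replacing "images" with "labels" in each image path, so the dataset is assumed to be laid out roughly like this (directory names are only an example matching the paths above):

data/qrcode/
├── images/
│   ├── train/   # training images
│   └── val/
└── labels/
    ├── train/   # one .txt label file per image, same base filename
    └── val/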

3.2 Training Hyperparameters

Adjust the data augmentation ratios and other hyperparameters to suit your own task:

# Hyperparameters for COCO training from scratch
# python train.py --batch 40 --cfg yolov5m.yaml --weights '' --data coco.yaml --img 640 --epochs 300
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials


lr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.001  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.5  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 1.0  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.  # image translation (+/- fraction)
scale: 0.0  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 0.2  # image mosaic (probability)
mixup: 0.0  # image mixup (probability)
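Assuming the file above is saved as data/hyp.qrcode.yaml (the name is just an example), it is passed to training via --hyp:

python train.py --hyp data/hyp.qrcode.yaml --data qrcode_dataset.yml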

3.3 Network Architecture

Adjust the network size as needed, and set the anchors according to the size of the QR codes you want to detect.
Note that when not finetuning, anchors are by default recomputed from the dataset (autoanchor); to turn this off, pass the "--noautoanchor" option when training.

# parameters
nc: 1  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple

# anchors
anchors:
  - [38,38,  53,53,  68,67]  # P3/8
  - [83,83,  101,100,  121,121]  # P4/16
  - [146,145,  176,175,  218,218]  # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [#[-1, 1, Focus, [64, 3]],  # original Focus layer, replaced by the two Convs below
   [-1, 1, Conv, [32, 3, 2]],   # 0-P1/2
   [-1, 1, Conv, [64, 3, 1]],   # 1
   [-1, 1, Conv, [128, 3, 2]],  # 2-P2/4
   [-1, 3, BottleneckCSP, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 4-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 6-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 8-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, BottleneckCSP, [1024, False]],  # 10
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [ -1, 1, DeConv, [ 512, 4, 2 ] ],
   [[-1, 7], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, BottleneckCSP, [512, False]],  # 14

   [-1, 1, Conv, [256, 1, 1]],
   [ -1, 1, DeConv, [ 256, 4, 2 ] ],
   [[-1, 5], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, BottleneckCSP, [256, False]],  # 18 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 15], 1, Concat, [1]],  # cat head P4
   [-1, 3, BottleneckCSP, [512, False]],  # 21 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 11], 1, Concat, [1]],  # cat head P5
   [-1, 3, BottleneckCSP, [1024, False]],  # 24 (P5/32-large)

   [[18, 21, 24], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
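If you prefer to compute anchors for your own dataset up front instead of relying on autoanchor at training time, YOLOv5 ships a k-means helper. A minimal sketch, assuming a recent version where it lives in utils/autoanchor.py (parameter names may differ slightly between versions):

# run from the yolov5 repo root; thr corresponds to anchor_t in the hyperparameter file
from utils.autoanchor import kmean_anchors

anchors = kmean_anchors(path='qrcode_dataset.yml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True)
print(anchors)  # round the values and paste them into the model yaml, three pairs per detection layer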

3.4 Training

With all of the above configured, training can start. Since I modified the network structure, I first trained from scratch and then ran a finetuning pass. The arguments that can be passed to training:

    parser.add_argument('--weights', type=str, default='yolov5s.pt', help='initial weights path')  # weights to start from when finetuning
    parser.add_argument('--cfg', type=str, default='', help='model.yaml path')  # model architecture yaml
    parser.add_argument('--data', type=str, default='data/coco128.yaml', help='data.yaml path')  # dataset config
    parser.add_argument('--hyp', type=str, default='data/hyp.scratch.yaml', help='hyperparameters path')  # training hyperparameters
    parser.add_argument('--epochs', type=int, default=300)
    parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')
    parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
    parser.add_argument('--notest', action='store_true', help='only test final epoch')
    parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')  # if set, anchors are not recomputed from the dataset
    parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
    parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
    parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')  # GPU device(s)
    parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
    parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
    parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
    parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
    parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
    parser.add_argument('--log-imgs', type=int, default=16, help='number of images for W&B logging, max 100')
    parser.add_argument('--log-artifacts', action='store_true', help='log artifacts, i.e. final trained model')
    parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')  # number of dataloader worker threads
    parser.add_argument('--project', default='runs/train', help='save to project/name')
    parser.add_argument('--name', default='exp', help='save to project/name')
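As an illustration of the two runs described above, training from scratch with the modified architecture and then finetuning from its best checkpoint might look like the following (the yaml file names are the ones assumed in this article; adjust them to your own paths):

# train from scratch: an empty --weights forces random initialization
python train.py --cfg models/yolov5s_qrcode.yaml --data qrcode_dataset.yml --hyp data/hyp.qrcode.yaml --weights '' --img-size 640 --batch-size 16 --epochs 300

# finetune from the best checkpoint of the previous run (yolov5's default save location)
python train.py --cfg models/yolov5s_qrcode.yaml --data qrcode_dataset.yml --hyp data/hyp.qrcode.yaml --weights runs/train/exp/weights/best.pt --epochs 100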

As before, the training results can be viewed in the console or through wandb (see the Pikachu article).

3.5 Example Results

[Image: example detection results]
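For reference, running the trained model on new images uses the standard yolov5 detect script, e.g. (paths are assumptions):

python detect.py --weights runs/train/exp/weights/best.pt --source path/to/test_images --img-size 640 --conf-thres 0.4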

Appendix: Dataset Download

Link: https://pan.baidu.com/s/1LFN2Kqip-hf_6V1S8sw-3g
Extraction code: 6666
