This article walks you step by step through training YOLOv8 on your own dataset and running object detection with it.
Environment:
OS: Windows 10
Python: 3.9
PyTorch: 2.2.2+cu121
Environment setup
- Install CUDA and cuDNN
See the blog post "Installing CUDA 12.1 and cuDNN on Windows" (https://www.cnblogs.com/RiverRiver/p/18103991)
- Install matching versions of torch and torchvision (download them first, then install directly):
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
- Install the YOLO package with pip
pip3 install ultralytics
- Install the data annotation tool with pip
pip install labelimg
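Before moving on, it can be worth confirming that the packages above actually installed. A minimal stdlib sketch (only the package names come from this tutorial; the check itself is generic):

```python
import importlib.util

# Packages installed in the steps above
packages = ("torch", "torchvision", "ultralytics")
status = {p: importlib.util.find_spec(p) is not None for p in packages}
for pkg, ok in status.items():
    print(f"{pkg}: {'installed' if ok else 'MISSING'}")

# Only meaningful if torch is present: confirm the GPU build is usable
if status["torch"]:
    import torch
    print("CUDA available:", torch.cuda.is_available())
```

If `CUDA available: False` is printed on a machine with an NVIDIA GPU, the CPU-only wheel was likely installed instead of the cu121 build.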
Data preparation
Prepare the images you want to train on ahead of time (the more, the better). Here I use CAPTCHA shapes as an example, as shown below:
- Run labelimg from the command line to open the annotation tool, switch the dataset type to YOLO, then annotate the images one by one.
Click Create RectBox to start annotating and draw a box around each object you want recognized; after drawing a box you will be prompted for a label (note: objects of the same type must share a single label). As shown below:
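In YOLO mode, labelimg writes one txt file per image, with one line per box in the form `class x_center y_center width height`, all normalized to [0, 1]. To make that format concrete, here is a small stdlib sketch (the helper name is my own, not part of any library) that converts one such line back to pixel coordinates:

```python
def yolo_to_pixels(line, img_w, img_h):
    """Convert one YOLO label line ('class cx cy w h', all normalized)
    to (class_id, x1, y1, x2, y2) in pixel coordinates."""
    cls, cx, cy, w, h = line.split()
    cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return int(cls), x1, y1, x2, y2

# Example: a box centred in a 640x480 image, half the image's width and height
print(yolo_to_pixels("0 0.5 0.5 0.5 0.5", 640, 480))
# → (0, 160.0, 120.0, 480.0, 360.0)
```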
- Splitting the annotated data
Once annotation is done, use the script below to split the data into training, validation, and test sets; make sure the image and txt paths are set correctly:
import os
import random
import shutil

# Paths and split ratios
root_path = "D:\\dataset"          # dataset root
image_dir = "D:\\dataset\\images"  # annotated images
label_dir = "D:\\dataset\\labels"  # txt files generated by labelimg
train_ratio = 0.7
val_ratio = 0.2
test_ratio = 0.1

# Create train/val/test directories under the dataset root
for split in ("train", "val", "test"):
    os.makedirs(os.path.join(root_path, "images", split), exist_ok=True)
    os.makedirs(os.path.join(root_path, "labels", split), exist_ok=True)

# Collect and shuffle all image file names
image_files = os.listdir(image_dir)
total_images = len(image_files)
random.shuffle(image_files)

# Compute split sizes
train_count = int(total_images * train_ratio)
val_count = int(total_images * val_ratio)
test_count = total_images - train_count - val_count

train_images = image_files[:train_count]
val_images = image_files[train_count:train_count + val_count]
test_images = image_files[train_count + val_count:]

def copy_split(files, split):
    """Copy images and their matching label files into the given split."""
    for image_file in files:
        label_file = os.path.splitext(image_file)[0] + ".txt"
        shutil.copy(os.path.join(image_dir, image_file),
                    os.path.join(root_path, "images", split))
        shutil.copy(os.path.join(label_dir, label_file),
                    os.path.join(root_path, "labels", split))

copy_split(train_images, "train")
copy_split(val_images, "val")
copy_split(test_images, "test")

# Write the image-path list file for each split
for split, files in (("train", train_images), ("val", val_images), ("test", test_images)):
    with open(split + ".txt", "w") as f:
        f.write("\n".join(os.path.join(root_path, "images", split, name) for name in files))

print("Dataset split complete!")
After running it, the split dataset is generated as follows:
Training and prediction
- Start training
The training script is as follows:
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.yaml')
results = model.train(data='shield.yml', epochs=1300, imgsz=640, device=[0],
                      workers=0, lr0=0.001, batch=128, amp=False)
The contents of shield.yml are below; just change the paths to point at your own dataset:
# Ultralytics YOLO 🚀, AGPL-3.0 license
# COCO8 dataset (first 8 images from COCO train2017) by Ultralytics
# Documentation: https://docs.ultralytics.com/datasets/detect/coco8/
# Example usage: yolo train data=coco8.yaml
# parent
# ├── ultralytics
# └── datasets
# └── coco8 ← downloads here (1 MB)
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: E:\Code\Python\yolov8 # dataset root dir
train: E:\Code\Python\yolov8/images/train # train images (relative to 'path') 4 images
val: E:\Code\Python\yolov8/images/val # val images (relative to 'path') 4 images
test: # test images (optional)
# Classes: fill in the labels used during annotation; for multiple classes, add them in order as 0, 1, 2, 3, ...
names:
0: shield
# Download script/URL (optional)
# download: https://ultralytics.com/assets/coco8.zip
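A mismatch between images/ and labels/ is a common cause of training failures. Before launching training, one could sanity-check the layout with a stdlib sketch like this (the directory structure matches the split script above, but `check_split` is my own helper, not part of ultralytics):

```python
import os
import tempfile

def check_split(root, split):
    """Return image files in images/<split> that have no matching
    txt file in labels/<split>."""
    img_dir = os.path.join(root, "images", split)
    lbl_dir = os.path.join(root, "labels", split)
    missing = []
    for name in sorted(os.listdir(img_dir)):
        stem = os.path.splitext(name)[0]
        if not os.path.exists(os.path.join(lbl_dir, stem + ".txt")):
            missing.append(name)
    return missing

# Demo on a throwaway layout (your real root would be the 'path' from shield.yml)
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "images", "train"))
os.makedirs(os.path.join(root, "labels", "train"))
for name in ("a.jpg", "b.jpg"):
    open(os.path.join(root, "images", "train", name), "w").close()
open(os.path.join(root, "labels", "train", "a.txt"), "w").close()  # b.jpg left unlabeled
missing = check_split(root, "train")
print(missing)  # → ['b.jpg']
```

Any file names it reports either need annotating or should be removed from the split.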
After training completes, the following content is generated under the runs/detect/train folder:
Two model files are generated in the weights folder; just use best.pt.
Prediction and inference
- The prediction script is as follows
from ultralytics import YOLO

# Load the trained model
model = YOLO('E:\\Code\\Python\\yolov8\\runs\\detect\\train\\weights\\best.pt')

# Run batched inference on a list of images; returns a list of Results objects
results = model(['.\\images\\test\\Screenshot_20230118_210923_com.tencent.mobileqq.jpg',
                 '.\\images\\test\\Screenshot_20230118_210936_com.tencent.mobileqq.jpg',
                 './images/test/ax1.png'])

# Process the results list
for result in results:
    boxes = result.boxes          # Boxes object for bounding-box outputs
    masks = result.masks          # Masks object for segmentation outputs
    keypoints = result.keypoints  # Keypoints object for pose outputs
    probs = result.probs          # Probs object for classification outputs
    result.show()                 # display to screen
    result.save(filename='result.jpg')  # save to disk
Prediction results: