
YOLO Model Fine-Tuning and Deployment

v20260421
model-training
This skill provides complete lifecycle management for computer vision models, covering the full workflow from custom dataset annotation to model deployment. Using an Agent-driven workflow, users can fine-tune YOLO models on custom COCO-format datasets. It supports hardware-aware training, automatically exports trained models to optimized formats such as TensorRT, CoreML, and OpenVINO, and can deploy the result as the active detection skill with one click.
Overview

Model Training

Agent-driven custom model training powered by Aegis's Training Agent. Closes the annotation-to-deployment loop: take a COCO dataset from dataset-annotation, fine-tune a YOLO model, auto-export to the optimal format for your hardware, and optionally deploy it as your active detection skill.

What You Get

  • Fine-tune YOLO26 — start from nano/small/medium/large pre-trained weights
  • COCO dataset input — uses standard format from dataset-annotation skill
  • Hardware-aware training — auto-detects CUDA, MPS, ROCm, or CPU
  • Auto-export — converts trained model to TensorRT / CoreML / OpenVINO / ONNX via env_config.py
  • One-click deploy — replace the active detection model with your fine-tuned version
  • Training telemetry — real-time loss, mAP, and epoch progress streamed to Aegis UI
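The hardware-aware training and auto-export bullets above pair naturally: the detected device decides the export target. This is an illustrative sketch, not the skill's actual `env_config.py`; `detect_device` and `pick_export_format` are assumed names, and the device-to-format mapping simply mirrors the exporters listed above:

```python
def detect_device() -> str:
    """Probe for the best available accelerator, falling back to CPU."""
    try:
        import torch
        if torch.cuda.is_available():      # covers both CUDA and ROCm builds
            return "cuda"
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            return "mps"                   # Apple Silicon
    except ImportError:
        pass
    return "cpu"

# Map the detected device to a preferred export format from the list above.
EXPORT_FORMAT = {
    "cuda": "tensorrt",   # NVIDIA GPUs
    "mps": "coreml",      # Apple Silicon
    "cpu": "openvino",    # Intel CPUs
}

def pick_export_format(device: str) -> str:
    return EXPORT_FORMAT.get(device, "onnx")  # ONNX as portable fallback
```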

Training Loop (Aegis Training Agent)

dataset-annotation          model-training              yolo-detection-2026
┌─────────────┐        ┌──────────────────┐        ┌──────────────────┐
│ Annotate    │───────▶│ Fine-tune YOLO   │───────▶│ Deploy custom    │
│ Review      │  COCO  │ Auto-export      │ .pt    │ model as active  │
│ Export      │  JSON  │ Validate mAP     │ .engine│ detection skill  │
└─────────────┘        └──────────────────┘        └──────────────────┘
       ▲                                                    │
       └────────────────────────────────────────────────────┘
                    Feedback loop: better detection → better annotation

Protocol

Aegis → Skill (stdin)

{"event": "train", "dataset_path": "~/datasets/front_door_people/", "base_model": "yolo26n", "epochs": 50, "batch_size": 16}
{"event": "export", "model_path": "runs/train/best.pt", "formats": ["coreml", "tensorrt"]}
{"event": "validate", "model_path": "runs/train/best.pt", "dataset_path": "~/datasets/front_door_people/"}
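Since each stdin message is one self-contained JSON object per line, the skill side reduces to a read-and-dispatch loop over newline-delimited JSON. A minimal sketch, with handler bodies stubbed out (the handler names and return values are illustrative, not part of the skill):

```python
import json
import sys

HANDLERS = {}

def handler(name):
    """Register a function for one protocol event type."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@handler("train")
def on_train(msg):
    return ("train", msg["dataset_path"], msg.get("base_model", "yolo26n"))

@handler("export")
def on_export(msg):
    return ("export", msg["model_path"], msg.get("formats", ["onnx"]))

@handler("validate")
def on_validate(msg):
    return ("validate", msg["model_path"], msg["dataset_path"])

def dispatch(line: str):
    """Parse one protocol line and route it to the registered handler."""
    msg = json.loads(line)
    fn = HANDLERS.get(msg.get("event"))
    if fn is None:
        raise ValueError(f"unknown event: {msg.get('event')!r}")
    return fn(msg)

if __name__ == "__main__":
    for line in sys.stdin:          # one JSON object per line
        if line.strip():
            dispatch(line)
```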

Skill → Aegis (stdout)

{"event": "ready", "gpu": "mps", "base_models": ["yolo26n", "yolo26s", "yolo26m", "yolo26l"]}
{"event": "progress", "epoch": 12, "total_epochs": 50, "loss": 0.043, "mAP50": 0.87, "mAP50_95": 0.72}
{"event": "training_complete", "model_path": "runs/train/best.pt", "metrics": {"mAP50": 0.91, "mAP50_95": 0.78, "params": "2.6M"}}
{"event": "export_complete", "format": "coreml", "path": "runs/train/best.mlpackage", "speedup": "2.1x vs PyTorch"}
{"event": "validation", "mAP50": 0.91, "per_class": [{"class": "person", "ap": 0.95}, {"class": "car", "ap": 0.88}]}

Setup

python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
Info
Category: AI
Name: model-training
Version: v20260421
Size: 1.87KB
Updated: 2026-04-28