Trackio is an experiment tracking library for logging and visualizing ML training metrics. It syncs to Hugging Face Spaces for real-time monitoring dashboards.
| Task | Interface | Reference |
|---|---|---|
| Logging metrics during training | Python API | references/logging_metrics.md |
| Firing alerts for training diagnostics | Python API | references/alerts.md |
| Retrieving metrics & alerts after/during training | CLI | references/retrieving_metrics.md |
Use `import trackio` in your training scripts to log metrics:

```python
trackio.init()
trackio.log()  # or use TRL's report_to="trackio"
trackio.finish()
```
Key concept: For remote/cloud training, pass space_id — metrics sync to a Space dashboard so they persist after the instance terminates.
→ See references/logging_metrics.md for setup, TRL integration, and configuration options.
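As mentioned above, TRL trainers can route their metric logging through the standard `report_to` argument. A minimal fragment, assuming TRL is installed (`output_dir` is arbitrary; see references/logging_metrics.md for the exact setup):

```python
from trl import SFTConfig

# report_to="trackio" forwards the trainer's metric logging to trackio,
# so no explicit trackio.log() calls are needed in the training loop.
args = SFTConfig(output_dir="out", report_to="trackio")
```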
Insert `trackio.alert()` calls in training code to flag important events — like inserting print statements for debugging, but structured and queryable:

- `trackio.alert(title="...", level=trackio.AlertLevel.WARN)` — fire an alert
- Levels: `INFO`, `WARN`, `ERROR`
Key concept for LLM agents: Alerts are the primary mechanism for autonomous experiment iteration. An agent should insert alerts into training code for diagnostic conditions (loss spikes, NaN gradients, low accuracy, training stalls). Since alerts are printed to the terminal, an agent that is watching the training script's output will see them automatically. For background or detached runs, the agent can poll via CLI instead.
→ See references/alerts.md for the full alerts API, webhook setup, and autonomous agent workflows.
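The diagnostic conditions mentioned above can be factored into small predicates so the alert logic is testable on its own. A sketch; the helper names below are illustrative, not part of the trackio API:

```python
import math

def should_alert_divergence(step, loss, warmup=100, threshold=5.0):
    """True when loss is still above threshold after the warm-up period."""
    return step > warmup and loss > threshold

def should_alert_nan(loss):
    """True when loss is NaN, e.g. after a gradient blow-up."""
    return math.isnan(loss)

# In the training loop (assuming trackio.init() has been called):
# if should_alert_divergence(step, loss):
#     trackio.alert(title="Loss divergence", level=trackio.AlertLevel.ERROR)
```

Keeping the conditions as plain functions lets an agent unit-test them before launching a long run.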
Use the `trackio` command to query logged metrics and alerts:

- `trackio list projects/runs/metrics` — discover what's available
- `trackio get project/run/metric` — retrieve summaries and values
- `trackio list alerts --project <name> --json` — retrieve alerts
- `trackio show` — launch the dashboard
- `trackio sync` — sync to HF Space

Key concept: Add `--json` for programmatic output suitable for automation and LLM agents.
→ See references/retrieving_metrics.md for all commands, workflows, and JSON output formats.
A minimal end-to-end example:

```python
import trackio

trackio.init(project="my-project", space_id="username/trackio")
trackio.log({"loss": 0.1, "accuracy": 0.9})
trackio.log({"loss": 0.09, "accuracy": 0.91})
trackio.finish()
```
Then query the results from the CLI:

```shell
trackio list projects --json
trackio get metric --project my-project --run my-run --metric loss --json
```
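For programmatic consumption, the `--json` output can be captured and parsed with the standard library. A minimal wrapper, assuming the `trackio` CLI is installed and on `PATH` (the wrapper name is our own, not part of trackio):

```python
import json
import subprocess

def run_trackio_json(*args):
    """Run a trackio CLI command with --json and return the parsed result.

    Assumes the trackio CLI is installed and on PATH; raises
    CalledProcessError if the command fails.
    """
    result = subprocess.run(
        ["trackio", *args, "--json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

# Example (requires a live trackio installation):
# projects = run_trackio_json("list", "projects")
```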
When running experiments autonomously as an LLM agent, the recommended workflow is:

1. Insert `trackio.alert()` calls for diagnostic conditions
2. Poll `trackio list alerts --project <name> --json --since <timestamp>` to check for new alerts
3. Run `trackio get metric ...` to inspect specific values

```python
import trackio

trackio.init(project="my-project", config={"lr": 1e-4})

for step in range(num_steps):  # num_steps and train_step() come from your training setup
    loss = train_step()
    trackio.log({"loss": loss, "step": step})

    if step > 100 and loss > 5.0:
        trackio.alert(
            title="Loss divergence",
            text=f"Loss {loss:.4f} still high after {step} steps",
            level=trackio.AlertLevel.ERROR,
        )
    if step > 0 and abs(loss) < 1e-8:
        trackio.alert(
            title="Vanishing loss",
            text="Loss near zero — possible gradient collapse",
            level=trackio.AlertLevel.WARN,
        )

trackio.finish()
```
Then poll from a separate terminal/process:
```shell
trackio list alerts --project my-project --json --since "2025-01-01T00:00:00"
```
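Because ISO 8601 timestamps sort lexicographically, a polling agent can also remember the last timestamp it has seen and filter alerts locally. A sketch; the `timestamp` field name is an assumption about the JSON payload shape:

```python
def new_alerts(alerts, since_iso):
    """Return alert records strictly newer than since_iso.

    ISO 8601 timestamps compare correctly as plain strings, so no date
    parsing is needed. The 'timestamp' key is an assumption about the
    CLI's JSON output, not a documented schema.
    """
    return [a for a in alerts if a.get("timestamp", "") > since_iso]

sample = [
    {"title": "Loss divergence", "timestamp": "2025-01-02T10:00:00"},
    {"title": "Vanishing loss", "timestamp": "2024-12-31T23:00:00"},
]
new_alerts(sample, "2025-01-01T00:00:00")  # keeps only the "Loss divergence" record
```

After each poll, the agent stores the newest timestamp it saw and passes it as the next `--since` value.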