Produce one finished image asset per turn unless the user asks for variations. Image generation rewards a tight, structured prompt — your job is to assemble that prompt from the user's brief, then dispatch.
```
image-poster/
├── SKILL.md       ← you're reading this
└── example.html   ← what the resulting card looks like in Examples
```
The active project carries imageModel, imageAspect, and (optional)
imageStyle notes. Use them as the upstream model + canvas + style
anchor; only ask the user to fill them in if they're marked (unknown — ask).
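For orientation, the fields might look roughly like the sketch below. The storage format and the imageStyle value are assumptions for illustration only; the model and aspect simply echo the example used later in this skill.

```
imageModel:  gpt-image-2                 # upstream model to dispatch to
imageAspect: 1:1                         # canvas ratio, passed to --aspect as-is
imageStyle:  flat vector, warm palette   # optional style anchor folded into the prompt
```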
Plan in this exact order before calling any tool:

1. Assemble the prompt from the user's brief plus the project's imageStyle anchor.
2. Dispatch it with the unified dispatcher, passing the project's imageModel and imageAspect.
3. Reply with a one-paragraph summary and the filename the dispatcher returns.
Use the unified dispatcher — do not call upstream provider APIs by hand. Run from your shell tool:
node "$OD_BIN" media generate \
--project "$OD_PROJECT_ID" \
--surface image \
--model "<imageModel from metadata>" \
--aspect "<imageAspect from metadata>" \
--output "<short-descriptive-name>.png" \
--prompt "<the full assembled prompt from Step 1>"
The command prints one line of JSON: `{"file": {"name": "...", ...}}`. The daemon writes the bytes into the project folder; the FileViewer picks it up automatically.
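If you want to read the returned name programmatically rather than by eye, a small jq pass over that single JSON line is enough. This is a sketch that assumes jq is available in the shell environment; the dispatcher itself does not require it.

```bash
# Assumes jq; reads the filename out of the dispatcher's one-line JSON output.
file_name="$(node "$OD_BIN" media generate \
  --project "$OD_PROJECT_ID" --surface image \
  --model "gpt-image-2" --aspect "1:1" \
  --output "hero-poster.png" --prompt "<assembled prompt>" \
  | jq -r '.file.name')"
echo "$file_name"   # e.g. hero-poster.png
```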
Reply with a one-paragraph summary of the prompt you used and the filename returned by the dispatcher (e.g. "I generated hero-poster.png with gpt-image-2 at 1:1."). Do not emit an `<artifact>` tag.
Match the project's imageAspect exactly: the upstream cost is the same either way, and matching the aspect avoids a re-render.