video-prompting

Draft and refine prompts for video generation models (text-to-video and image-to-video). Use when a user asks for a "video prompt" or a model-specific prompt such as Ovi, Sora, Veo 3, Wan 2.2, LTX-2, or LTX-2.3, including requests like "text-to-video prompt", "image-to-video prompt", or "write a prompt for [model]".


Video Prompting

Overview

Turn a user’s intent into a strong, model-compliant video prompt by routing to the correct model guide and applying its formatting/tokens.

Model-specific guidance lives in references/models/. This file is the entry point: pick the model, ask the minimum clarifying questions, then draft the prompt in that model’s expected format.

Model Index

  • Ovi: references/models/ovi/prompting.md
  • Sora (Sora 2): references/models/sora/prompting.md
  • Veo 3 / 3.1: references/models/veo3/prompting.md
  • Wan 2.2: references/models/wan22/prompting.md
  • LTX-2: references/models/ltx2/prompting.md
  • LTX-2.3: references/models/ltx2-3/prompting.md

To add a new model later: create references/models/<model>/prompting.md, then add it to this index.
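The routing described above can be sketched as a simple lookup table. This is a hypothetical helper (the function name and error handling are illustrative); the paths mirror the Model Index:

```python
# Hypothetical routing table mirroring the Model Index above.
MODEL_GUIDES = {
    "ovi": "references/models/ovi/prompting.md",
    "sora": "references/models/sora/prompting.md",
    "veo3": "references/models/veo3/prompting.md",
    "wan22": "references/models/wan22/prompting.md",
    "ltx2": "references/models/ltx2/prompting.md",
    "ltx2-3": "references/models/ltx2-3/prompting.md",
}

def guide_for(model: str) -> str:
    """Return the prompting guide path for a model, or raise for unsupported ones."""
    key = model.lower().strip()
    if key not in MODEL_GUIDES:
        supported = ", ".join(sorted(MODEL_GUIDES))
        raise ValueError(f"Unsupported model {model!r}; supported: {supported}")
    return MODEL_GUIDES[key]
```

Adding a new model then only requires one new dictionary entry alongside its `prompting.md`.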

Workflow

Step 1 — Identify the model and input mode

If the user did not name a model, ask which model they are using (or offer supported options from the Model Index).

Then confirm the input mode:

  • Text-to-video (t2v), or
  • Image-to-video (i2v)

If i2v: ask the user to share the image (optional, but it helps you write a better prompt). Use the image as an anchor according to the chosen model’s guidance (e.g., keep identity/wardrobe/composition stable; focus your text on motion, camera, and what changes).

If the chosen model has versions, duration constraints, or required parameters, ask the minimum questions needed to select the right format (see the model guide). For LTX-2.3 specifically: default to a 10-second clip when duration is missing, ask if the user wants shorter or longer, and scale motion complexity to match that duration.

Step 2 — Load the model reference and follow its format

Open the model’s prompting.md from the Model Index and follow its rules strictly (tokens, audio formatting, parameter constraints, recommended structure).

Step 3 — Draft the prompt as a coherent clip

Default structure (adapt to the model’s style and required sections):

  1. Subject(s): who/what, distinctive details
  2. Setting: where/when, lighting, mood
  3. Action progression: what changes over time (start → beat → beat → end)
  4. Camera: framing/movement only if it matters
  5. Dialogue/audio: only if the model supports it, using the model’s exact format

Avoid keyword soup. Prefer a single, well-described shot unless the user explicitly wants multiple cuts/shots.
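A minimal sketch of assembling the five sections into one coherent clip description (a hypothetical helper; the parameter names are illustrative, and optional sections are dropped when empty, matching the "only if it matters" guidance):

```python
def assemble_prompt(subject: str, setting: str, action: str,
                    camera: str = "", audio: str = "") -> str:
    """Join the default structure's sections into one description,
    skipping the optional camera/audio sections when left empty."""
    sections = [subject, setting, action, camera, audio]
    sentences = [s.strip().rstrip(".") + "." for s in sections if s.strip()]
    return " ".join(sentences)
```

Because the output is continuous prose rather than comma-separated tags, it naturally avoids keyword soup.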

Step 4 — Output

Default: output only the final prompt text, formatted as a single line with no line breaks unless the user explicitly requests multiline formatting.
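The single-line default can be enforced mechanically. A sketch (the function name is an assumption):

```python
def finalize_prompt(draft: str, multiline: bool = False) -> str:
    """Return the draft as-is when multiline output was requested;
    otherwise collapse all line breaks and whitespace runs to single spaces."""
    if multiline:
        return draft.strip()
    return " ".join(draft.split())
```

This keeps multi-paragraph drafts readable while editing, then flattens them only at output time.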

If the user asks for options: provide 2–3 distinct prompt variants, each fully self-contained and compliant with the model’s formatting.

If the model uses required API parameters (e.g., duration/size), include a short “Recommended parameters” line only when the user has specified them or explicitly asks for them.

