varg-video-generation

Video Generation with varg React Engine


Install skill "varg-video-generation" with this command:

```shell
npx skills add varghq/skills/varghq-skills-varg-video-generation
```


Generate AI videos using declarative JSX syntax with automatic caching and parallel generation.

Quick Setup

Run the setup script to initialize a project:

```shell
bun scripts/setup.ts
```

Or manually check for API keys:

```shell
cat .env 2>/dev/null | grep -E "^(FAL_API_KEY|ELEVENLABS_API_KEY)=" || echo "No API keys found"
```
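A minimal `.env` in the project root might look like the following. The key values here are placeholders, not real formats; only FAL_API_KEY is strictly required.

```shell
# .env (placeholder values; replace with your own keys)
FAL_API_KEY=fal_xxxxxxxxxxxx
ELEVENLABS_API_KEY=sk_xxxxxxxxxxxx   # optional: music and speech
REPLICATE_API_TOKEN=r8_xxxxxxxxxxxx  # optional: lipsync
GROQ_API_KEY=gsk_xxxxxxxxxxxx        # optional: transcription
```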

Required API Keys

FAL_API_KEY (Required)

| Detail | Value |
| --- | --- |
| Provider | Fal.ai |
| Get it | https://fal.ai/dashboard/keys |
| Free tier | Yes (limited credits) |
| Used for | Image generation (Flux), video generation (Wan 2.5, Kling) |

If the user doesn't have a FAL_API_KEY, direct them to https://fal.ai/dashboard/keys to create one; nothing can be generated without it.

Optional Keys

| Feature | Key | Provider | URL |
| --- | --- | --- | --- |
| Music/Voice | ELEVENLABS_API_KEY | ElevenLabs | https://elevenlabs.io/app/settings/api-keys |
| Lipsync | REPLICATE_API_TOKEN | Replicate | https://replicate.com/account/api-tokens |
| Transcription | GROQ_API_KEY | Groq | https://console.groq.com/keys |

Warn user about missing optional keys but continue with available features.
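A sketch of that check in POSIX shell (key names come from the tables above; the script itself is illustrative, not part of the skill):

```shell
# Warn about missing optional keys; continue with whatever is available.
missing=""
for key in ELEVENLABS_API_KEY REPLICATE_API_TOKEN GROQ_API_KEY; do
  eval "val=\${$key:-}"
  if [ -z "$val" ]; then
    missing="$missing $key"
    echo "warning: $key not set; related features disabled"
  fi
done
```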

Available Features by API Key

FAL_API_KEY only:

  • Image generation (Flux models)

  • Image-to-video animation (Wan 2.5, Kling)

  • Text-to-video generation

  • Slideshows with transitions

  • Ken Burns zoom effects

FAL + ELEVENLABS:

  • All above, plus:

  • AI-generated background music

  • Text-to-speech voiceovers

  • Talking character videos

All keys:

  • Full production pipeline with lipsync and auto-captions

Quick Templates

Simple Slideshow (FAL only)

```tsx
import { render, Render, Clip, Image } from "vargai/react";

const SCENES = ["sunset over ocean", "mountain peaks", "city at night"];

await render(
  <Render width={1080} height={1920}>
    {SCENES.map((prompt, i) => (
      <Clip key={i} duration={3} transition={{ name: "fade", duration: 0.5 }}>
        <Image prompt={prompt} zoom="in" />
      </Clip>
    ))}
  </Render>,
  { output: "output/slideshow.mp4" }
);
```

Animated Video (FAL + ElevenLabs)

```tsx
import { render, Render, Clip, Image, Animate, Music } from "vargai/react";
import { fal, elevenlabs } from "vargai/ai";

await render(
  <Render width={1080} height={1920}>
    <Music prompt="upbeat electronic" model={elevenlabs.musicModel()} duration={10} />
    <Clip duration={5}>
      <Animate
        image={Image({ prompt: "cute cat on windowsill" })}
        motion="cat turns head, blinks slowly"
        model={fal.videoModel("wan-2.5")}
        duration={5}
      />
    </Clip>
  </Render>,
  { output: "output/video.mp4" }
);
```

Talking Character

```tsx
import { render, Render, Clip, Image, Animate, Speech } from "vargai/react";
import { fal, elevenlabs } from "vargai/ai";

await render(
  <Render width={1080} height={1920}>
    <Clip duration="auto">
      <Animate
        image={Image({ prompt: "friendly robot, blue metallic", aspectRatio: "9:16" })}
        motion="robot talking, subtle head movements"
        model={fal.videoModel("wan-2.5")}
      />
      <Speech voice="adam" model={elevenlabs.speechModel("turbo")}>
        Hello! I'm your AI assistant. Let's create something amazing!
      </Speech>
    </Clip>
  </Render>,
  { output: "output/talking-robot.mp4" }
);
```

See references/templates.md for more templates.

Running Videos

```shell
bun run your-video.tsx
```

Key Components

| Component | Purpose | Required Key |
| --- | --- | --- |
| `<Render>` | Root container | None |
| `<Clip>` | Sequential segment | None |
| `<Image>` | AI image | FAL |
| `<Animate>` | Image-to-video | FAL |
| `<Music>` | Background music | ElevenLabs |
| `<Speech>` | Text-to-speech | ElevenLabs |

Common Patterns

Character Consistency

```tsx
// Reusing the same reference yields the same generated image
const character = Image({ prompt: "blue robot" });

<Animate image={character} motion="waving" />
<Animate image={character} motion="dancing" />
```

Transitions

```tsx
// Options: fade, crossfade, wipeleft, cube, slideup, etc.
<Clip transition={{ name: "fade", duration: 0.5 }}>
```
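To avoid typos in transition names, one pattern is to keep the documented names in a checked list. This is plain TypeScript, not a vargai API, and the list covers only the names mentioned above; the full upstream set may differ.

```typescript
// Transition names mentioned in these docs; the real library may support more.
const TRANSITIONS = ["fade", "crossfade", "wipeleft", "cube", "slideup"] as const;
type TransitionName = (typeof TRANSITIONS)[number];

// Narrow an arbitrary string to a known transition name.
function isTransition(name: string): name is TransitionName {
  return (TRANSITIONS as readonly string[]).includes(name);
}

console.log(isTransition("fade"));   // true
console.log(isTransition("zoomin")); // false
```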

Aspect Ratios

  • 9:16 - TikTok, Reels, Shorts (vertical)

  • 16:9 - YouTube (horizontal)

  • 1:1 - Instagram (square)
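As a concrete sketch, the common ratios map to 1080p-class render dimensions as below. The `dims` helper is hypothetical, not part of vargai; it only makes the `width`/`height` choice for `<Render>` explicit.

```typescript
// Hypothetical helper mapping an aspect ratio to render dimensions.
const DIMENSIONS: Record<string, { width: number; height: number }> = {
  "9:16": { width: 1080, height: 1920 }, // vertical: TikTok, Reels, Shorts
  "16:9": { width: 1920, height: 1080 }, // horizontal: YouTube
  "1:1": { width: 1080, height: 1080 },  // square: Instagram
};

function dims(ratio: string): { width: number; height: number } {
  const d = DIMENSIONS[ratio];
  if (!d) throw new Error(`Unknown aspect ratio: ${ratio}`);
  return d;
}

console.log(dims("9:16")); // { width: 1080, height: 1920 }
```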

Zoom Effects

```tsx
<Image prompt="landscape" zoom="in" />   // Zoom in
<Image prompt="landscape" zoom="out" />  // Zoom out
<Image prompt="landscape" zoom="left" /> // Pan left
```

Troubleshooting

"FAL_API_KEY not found"

  • Check .env file exists in project root

  • Ensure no spaces around = sign

  • Restart terminal after adding keys

"Rate limit exceeded"

  • Free tier has limited credits

  • Wait or upgrade plan

  • Use caching to avoid regenerating

"Video generation failed"

  • Check prompt doesn't violate content policy

  • Try simpler motion descriptions

  • Reduce video duration

Next Steps

  • Run `bun scripts/setup.ts` to initialize the project

  • Add your FAL_API_KEY to `.env`

  • Run `bun run examples/my-first-video.tsx`

  • Or ask the agent: "create a 10 second TikTok video about cats"

