8
fal-workflow/.skillshare-meta.json
Normal file
@@ -0,0 +1,8 @@
{
  "source": "github.com/fal-ai-community/skills/tree/main/skills/claude.ai/fal-workflow",
  "type": "github-subdir",
  "installed_at": "2026-01-30T02:27:24.734185123Z",
  "repo_url": "https://github.com/fal-ai-community/skills.git",
  "subdir": "skills/claude.ai/fal-workflow",
  "version": "69efe6e"
}
330
fal-workflow/SKILL.md
Normal file
330
fal-workflow/SKILL.md
Normal file
@@ -0,0 +1,330 @@
---
name: fal-workflow
description: Generate production-ready fal.ai workflow JSON files. Use when user requests "create workflow", "chain models", "multi-step generation", "image to video pipeline", or complex AI generation pipelines.
metadata:
  author: fal-ai
  version: "3.0.0"
---

# fal.ai Workflow Generator

Generate **100% working, production-ready fal.ai workflow JSON files**. Workflows chain multiple AI models together for complex generation pipelines.

**References:**
- [Model Reference](references/MODELS.md) - Detailed model configurations
- [Common Patterns](references/PATTERNS.md) - Reusable workflow patterns
- [Code Examples](references/EXAMPLES.md) - Code snippets and partial examples

**Troubleshooting Reference:**
- [Complete Workflows](references/WORKFLOWS.md) - Working JSON examples for debugging (use ONLY when user reports errors)

---

## Core Architecture

### Valid Node Types

⚠️ **ONLY TWO VALID NODE TYPES EXIST:**

| Type | Purpose |
|------|---------|
| `"run"` | Execute a model/app |
| `"display"` | Output results to user |

**❌ INVALID:** `type: "input"` - This does NOT exist! Input is defined ONLY in `schema.input`.

### Minimal Working Example

```json
{
  "name": "my-workflow",
  "title": "My Workflow",
  "contents": {
    "name": "workflow",
    "nodes": {
      "output": {
        "type": "display",
        "id": "output",
        "depends": ["node-image"],
        "input": {},
        "fields": { "image": "$node-image.images.0.url" }
      },
      "node-image": {
        "type": "run",
        "id": "node-image",
        "depends": ["input"],
        "app": "fal-ai/flux/dev",
        "input": { "prompt": "$input.prompt" }
      }
    },
    "output": { "image": "$node-image.images.0.url" },
    "schema": {
      "input": {
        "prompt": {
          "name": "prompt",
          "label": "Prompt",
          "type": "string",
          "required": true,
          "modelId": "node-image"
        }
      },
      "output": {
        "image": { "name": "image", "label": "Generated Image", "type": "string" }
      }
    },
    "version": "1",
    "metadata": {
      "input": { "position": { "x": 0, "y": 0 } },
      "description": "Simple text to image workflow"
    }
  },
  "is_public": true,
  "user_id": "",
  "user_nickname": "",
  "created_at": ""
}
```

### Reference Syntax

| Reference | Use Case | Example |
|-----------|----------|---------|
| `$input.field` | Input value | `$input.prompt` |
| `$node.output` | LLM text output | `$node-llm.output` |
| `$node.images.0.url` | First image URL | `$node-img.images.0.url` |
| `$node.image.url` | Single image URL | `$node-upscale.image.url` |
| `$node.video.url` | Video URL | `$node-vid.video.url` |
| `$node.audio_file.url` | Audio URL | `$node-music.audio_file.url` |
| `$node.frame.url` | Extracted frame | `$node-extract.frame.url` |

### CRITICAL: No String Interpolation

**⚠️ NEVER mix text with variables! The variable MUST be the ENTIRE value.**

```json
// ❌ WRONG - WILL BREAK
"prompt": "Create image of $input.subject in $input.style"

// ✅ CORRECT - Variable is the ENTIRE value
"prompt": "$input.prompt"
"prompt": "$node-llm.output"
```

**To combine values:** Use `fal-ai/text-concat` or `fal-ai/workflow-utilities/merge-text`. See [Model Reference](references/MODELS.md#text-utilities-critical-for-combining-values).

---

## Critical Rules

### C1: Dependencies Must Match References

```json
// ❌ WRONG
"node-b": {
  "depends": [],
  "input": { "data": "$node-a.output" }
}

// ✅ CORRECT
"node-b": {
  "depends": ["node-a"],
  "input": { "data": "$node-a.output" }
}
```

### C2: ID Must Match Object Key

```json
// ❌ WRONG
"my-node": { "id": "different-id" }

// ✅ CORRECT
"my-node": { "id": "my-node" }
```

### C3: Use Correct LLM Type

- `openrouter/router` → Text only, no image_urls
- `openrouter/router/vision` → ONLY when analyzing images

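A minimal sketch of the distinction, following the document's ❌/✅ convention (the prompt text is illustrative):

```json
// ❌ WRONG - the text-only router cannot take image_urls
{
  "app": "openrouter/router",
  "input": {
    "prompt": "Describe this image",
    "image_urls": ["$node-image.images.0.url"]
  }
}

// ✅ CORRECT - use the vision router when analyzing images
{
  "app": "openrouter/router/vision",
  "input": {
    "prompt": "Describe this image",
    "image_urls": ["$node-image.images.0.url"]
  }
}
```
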
### C4: Schema modelId Required

```json
"schema": {
  "input": {
    "field": { "modelId": "first-consuming-node" }
  }
}
```

### C5: Output Depends on All Referenced Nodes

```json
"output": {
  "depends": ["node-a", "node-b", "node-c"],
  "fields": {
    "a": "$node-a.video",
    "b": "$node-b.images.0.url"
  }
}
```

---

## Default Models

| Task | Default Model |
|------|---------------|
| Image generation | `fal-ai/nano-banana-pro` |
| Image editing | `fal-ai/nano-banana-pro/edit` |
| Video (I2V) | `fal-ai/bytedance/seedance/v1.5/pro/image-to-video` |
| Text LLM | `openrouter/router` with `google/gemini-2.5-flash` |
| Vision LLM | `openrouter/router/vision` with `google/gemini-3-pro-preview` |
| Music | `fal-ai/elevenlabs/music` |
| Upscale | `fal-ai/seedvr/upscale/image` |
| Text concat (2 texts) | `fal-ai/text-concat` |
| Text merge (array) | `fal-ai/workflow-utilities/merge-text` |
| Video merge | `fal-ai/ffmpeg-api/merge-videos` |
| Audio+Video merge | `fal-ai/ffmpeg-api/merge-audio-video` |
| Frame extract | `fal-ai/ffmpeg-api/extract-frame` |

---

## Quick Reference Card

### Output References

| Model Type | Output Reference |
|------------|------------------|
| LLM | `$node.output` |
| Text Concat | `$node.results` |
| Merge Text | `$node.text` |
| Image Gen (array) | `$node.images.0.url` |
| Image Process (single) | `$node.image.url` |
| Video | `$node.video.url` |
| Music | `$node.audio_file.url` |
| Frame Extract | `$node.frame.url` |

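For example, a single display node can mix several of these reference shapes (node names are illustrative):

```json
"output": {
  "type": "display",
  "id": "output",
  "depends": ["node-img", "node-vid", "node-music"],
  "input": {},
  "fields": {
    "image": "$node-img.images.0.url",
    "video": "$node-vid.video.url",
    "audio": "$node-music.audio_file.url"
  }
}
```
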
### Common App IDs

```
fal-ai/nano-banana-pro
fal-ai/nano-banana-pro/edit
fal-ai/text-concat
fal-ai/workflow-utilities/merge-text
fal-ai/bytedance/seedance/v1.5/pro/image-to-video
fal-ai/kling-video/o1/image-to-video
fal-ai/kling-video/v2.6/pro/image-to-video
fal-ai/elevenlabs/music
fal-ai/ffmpeg-api/merge-videos
fal-ai/ffmpeg-api/merge-audio-video
fal-ai/ffmpeg-api/extract-frame
fal-ai/seedvr/upscale/image
openrouter/router
openrouter/router/vision
```

---

## Input Schema

```json
"schema": {
  "input": {
    "text_field": {
      "name": "text_field",
      "label": "Display Label",
      "type": "string",
      "description": "Help text",
      "required": true,
      "modelId": "consuming-node"
    },
    "image_urls": {
      "name": "image_urls",
      "type": { "kind": "list", "elementType": "string" },
      "required": true,
      "modelId": "node-id"
    }
  }
}
```

---

## Pre-Output Checklist

Before outputting any workflow, verify:

- [ ] **⚠️ All nodes have `type: "run"` or `type: "display"` ONLY (NO `type: "input"`!)**
- [ ] **⚠️ No string interpolation - the variable MUST be the ENTIRE value**
- [ ] Every `$node.xxx` reference has a matching `depends` entry
- [ ] Every node `id` matches its object key
- [ ] Input schema has a `modelId` for each field
- [ ] Output depends on ALL referenced nodes
- [ ] Correct LLM type (router vs router/vision)

---

## Usage

### Using Script

```bash
bash /mnt/skills/user/fal-workflow/scripts/create-workflow.sh \
  --name "my-workflow" \
  --title "My Workflow Title" \
  --nodes '[...]' \
  --outputs '{...}'
```

### Using MCP Tool

```javascript
mcp__fal-ai__create-workflow({
  smartMode: true,
  intent: "Generate a story with LLM, create an illustration, then animate it"
})
```

---

## Troubleshooting

### Invalid Node Type Error (MOST COMMON)

```
Error: unexpected value; permitted: 'run', 'display', field required
```

**Cause:** You created a node with `type: "input"`, which does NOT exist.
**Solution:** Remove ANY node with `type: "input"`. Define input fields ONLY in `schema.input`.

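A typical fix, shown with the document's ❌/✅ convention (field and node names are illustrative):

```json
// ❌ WRONG - an "input" node inside nodes
"nodes": {
  "user-input": { "type": "input", "id": "user-input" }
}

// ✅ CORRECT - input lives only in schema.input
"schema": {
  "input": {
    "prompt": { "name": "prompt", "type": "string", "required": true, "modelId": "node-image" }
  }
}
```
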
### Dependency Error

```
Error: Node references $node-x but doesn't depend on it
```

**Solution:** Add the referenced node to the `depends` array.

### ID Mismatch Error

```
Error: Node key "my-node" doesn't match id "different-id"
```

**Solution:** Ensure the object key matches the `id` field exactly.

### LLM Vision Error

```
Error: image_urls provided but using text-only router
```

**Solution:** Switch to `openrouter/router/vision` when analyzing images.

---

## Finding Model Schemas

Every model's input/output schema is available at:

```
https://fal.ai/api/openapi/queue/openapi.json?endpoint_id=[endpoint_id]
```

Example:

```
https://fal.ai/api/openapi/queue/openapi.json?endpoint_id=fal-ai/nano-banana-pro
```
145
fal-workflow/references/EXAMPLES.md
Normal file
@@ -0,0 +1,145 @@
# Real-World Workflow Examples

Complete workflow examples for reference.

## Multi-Destination Marketing Campaign Workflow

This workflow creates personalized video content for multiple locations:

1. Takes multiple destinations as input
2. Uses Vision LLM to analyze the template and generate edit prompts
3. Creates destination-specific images
4. Removes backgrounds
5. Upscales all images
6. Generates videos with 360° camera tours
7. Merges all videos into the final output

### Key Pattern: Edit → Remove Text → Upscale → Video

```json
// Step 1: Vision LLM analyzes template, generates edit prompt
"vision-prompt-dest2": {
  "id": "vision-prompt-dest2",
  "type": "run",
  "depends": ["input"],
  "app": "openrouter/router/vision",
  "input": {
    "image_urls": ["https://example.com/template.jpg"],
    "prompt": "$input.destination_2",
    "system_prompt": "Generate prompt to change background to destination, keep layout...",
    "model": "google/gemini-3-pro-preview",
    "reasoning": true
  }
},

// Step 2: Edit image with new destination
"edit-dest2": {
  "id": "edit-dest2",
  "type": "run",
  "depends": ["vision-prompt-dest2"],
  "app": "fal-ai/nano-banana-pro/edit",
  "input": {
    "image_urls": ["https://example.com/template.jpg"],
    "prompt": "$vision-prompt-dest2.output",
    "aspect_ratio": "16:9"
  }
},

// Step 3: Create text-free version for video background
"edit-notext-dest2": {
  "id": "edit-notext-dest2",
  "type": "run",
  "depends": ["edit-dest2"],
  "app": "fal-ai/nano-banana-pro/edit",
  "input": {
    "image_urls": ["$edit-dest2.images.0.url"],
    "prompt": "Remove all text and logo, leave only background scene"
  }
},

// Step 4: Upscale both versions
"upscale-dest2": {
  "id": "upscale-dest2",
  "type": "run",
  "depends": ["edit-dest2"],
  "app": "fal-ai/seedvr/upscale/image",
  "input": {
    "image_url": "$edit-dest2.images.0.url"
  }
},
"upscale-notext-dest2": {
  "id": "upscale-notext-dest2",
  "type": "run",
  "depends": ["edit-notext-dest2"],
  "app": "fal-ai/seedvr/upscale/image",
  "input": {
    "image_url": "$edit-notext-dest2.images.0.url"
  }
},

// Step 5: Vision LLM creates video prompt from both images
"video-prompt-dest2": {
  "id": "video-prompt-dest2",
  "type": "run",
  "depends": ["upscale-notext-dest2", "upscale-dest2", "input"],
  "app": "openrouter/router/vision",
  "input": {
    "image_urls": [
      "$upscale-notext-dest2.image.url",
      "$upscale-dest2.image.url"
    ],
    "prompt": "$input.destination_2",
    "system_prompt": "Create video prompt: camera tours 360° then transitions to tail image..."
  }
},

// Step 6: Generate video with first/last frame
"video-dest2": {
  "id": "video-dest2",
  "type": "run",
  "depends": ["video-prompt-dest2", "upscale-notext-dest2", "upscale-dest2"],
  "app": "fal-ai/kling-video/o1/image-to-video",
  "input": {
    "prompt": "$video-prompt-dest2.output",
    "image_url": "$upscale-notext-dest2.image.url",
    "tail_image_url": "$upscale-dest2.image.url"
  }
},

// Step 7: Merge all destination videos
"merge-all-videos": {
  "id": "merge-all-videos",
  "type": "run",
  "depends": ["video-dest1", "video-dest2", "video-dest3"],
  "app": "fal-ai/ffmpeg-api/merge-videos",
  "input": {
    "video_urls": [
      "$video-dest1.video.url",
      "$video-dest2.video.url",
      "$video-dest3.video.url"
    ]
  }
}
```

### Input Schema for This Workflow

```json
"schema": {
  "input": {
    "destination_1": {
      "name": "destination_1",
      "label": "Destination 1",
      "type": "string",
      "description": "First destination name (e.g., Paris, Tokyo)",
      "required": true,
      "modelId": "vision-prompt-dest1"
    },
    "destination_2": {
      "name": "destination_2",
      "label": "Destination 2",
      "type": "string",
      "required": true,
      "modelId": "vision-prompt-dest2"
    },
    "destination_3": {
      "name": "destination_3",
      "label": "Destination 3",
      "type": "string",
      "required": true,
      "modelId": "vision-prompt-dest3"
    }
  }
}
```
527
fal-workflow/references/MODELS.md
Normal file
@@ -0,0 +1,527 @@
# Model Reference

Detailed configuration and usage for all supported models in fal.ai workflows.

## Image Generation

### Nano Banana Pro (DEFAULT)

```json
{
  "app": "fal-ai/nano-banana-pro",
  "input": {
    "prompt": "$node-prompt.output",
    "aspect_ratio": "16:9",
    "num_images": 1
  }
}
```

**Output:** `$node.images.0.url`

### Nano Banana Pro Edit (DEFAULT for editing)

```json
{
  "app": "fal-ai/nano-banana-pro/edit",
  "input": {
    "prompt": "$node-prompt.output",
    "image_urls": ["$input.source_image"],
    "aspect_ratio": "16:9",
    "resolution": "4K"
  }
}
```

**Output:** `$node.images.0.url`

### Other Image Models

| Model | App ID |
|-------|--------|
| FLUX.1 Dev | `fal-ai/flux/dev` |
| FLUX.1 Schnell | `fal-ai/flux/schnell` |
| FLUX.1 Pro | `fal-ai/flux-pro` |
| Ideogram v3 | `fal-ai/ideogram/v3` |
| Recraft v3 | `fal-ai/recraft-v3` |

---

## Video Generation

### Seedance 1.5 Pro (DEFAULT)

```json
{
  "app": "fal-ai/bytedance/seedance/v1.5/pro/image-to-video",
  "input": {
    "prompt": "$node-video-prompt.output",
    "image_url": "$node-image.images.0.url",
    "aspect_ratio": "16:9",
    "resolution": "720p",
    "duration": "5",
    "generate_audio": true
  }
}
```

**Output:** `$node.video.url`

### Kling Video O1 (Image-to-Video with First/Last Frame)

```json
{
  "app": "fal-ai/kling-video/o1/image-to-video",
  "input": {
    "prompt": "$node-prompt.output",
    "image_url": "$node-start-frame.images.0.url",
    "tail_image_url": "$node-end-frame.images.0.url",
    "duration": "5",
    "aspect_ratio": "16:9"
  }
}
```

**Output:** `$node.video.url`

### Kling Video 2.6 Pro (Best I2V)

```json
{
  "app": "fal-ai/kling-video/v2.6/pro/image-to-video",
  "input": {
    "prompt": "$node-prompt.output",
    "start_image_url": "$node-image.images.0.url",
    "duration": "5",
    "negative_prompt": "blur, distort, and low quality",
    "generate_audio": true
  }
}
```

**Output:** `$node.video.url`

**Parameters:**
- `prompt` - Video description (can include speech for lip-sync)
- `start_image_url` - Starting frame image URL
- `duration` - Video length: `"5"` or `"10"` seconds
- `negative_prompt` - What to avoid in generation
- `generate_audio` - Enable audio generation from prompt

**Best for:** High-quality image-to-video with optional audio generation and lip-sync support.

### Other Video Models

| Model | App ID | Notes |
|-------|--------|-------|
| Veo 3.1 Fast | `fal-ai/veo3.1/fast/image-to-video` | High quality |
| Kling 2.6 Pro | `fal-ai/kling-video/v2.6/pro/image-to-video` | **Best I2V** |

---

## LLM Models

### Text LLM (DEFAULT - No images)

```json
{
  "app": "openrouter/router",
  "input": {
    "prompt": "$input.user_input",
    "system_prompt": "Your instructions here...",
    "model": "google/gemini-2.5-flash",
    "temperature": 0.7
  }
}
```

**Output:** `$node.output`

### Vision LLM (When image analysis needed)

```json
{
  "app": "openrouter/router/vision",
  "input": {
    "prompt": "$input.user_request",
    "system_prompt": "Analyze the image and...",
    "image_urls": ["$node-image.images.0.url"],
    "model": "google/gemini-3-pro-preview",
    "reasoning": true
  }
}
```

**Output:** `$node.output`

**Available LLM Models:**
- `google/gemini-2.5-flash` - Fast, good quality
- `google/gemini-3-pro-preview` - Best reasoning
- `anthropic/claude-sonnet-4.5` - Best for complex tasks

---

## Audio/Music Generation

### ElevenLabs Music

```json
{
  "app": "fal-ai/elevenlabs/music",
  "input": {
    "prompt": "Mysterious soundtrack, jungle themes, tribal percussion",
    "respect_sections_durations": true,
    "output_format": "mp3_44100_128"
  }
}
```

**Output:** `$node.audio_file.url`

### MMAudio (Video to Audio)

```json
{
  "app": "fal-ai/mmaudio",
  "input": {
    "video_url": "$node-video.video.url",
    "prompt": "Ambient nature sounds"
  }
}
```

### Stable Audio

```json
{
  "app": "fal-ai/stable-audio",
  "input": {
    "prompt": "Cinematic orchestral music"
  }
}
```

### Other Music Models

| Model | App ID |
|-------|--------|
| MiniMax Music v2 | `fal-ai/minimax-music/v2` |

---

## Text-to-Speech

### ElevenLabs TTS v3

```json
{
  "app": "fal-ai/elevenlabs/tts/eleven-v3",
  "input": {
    "text": "$node-llm.output",
    "voice": "Aria",
    "stability": 0.5,
    "similarity_boost": 0.75,
    "speed": 1
  }
}
```

**Output:** `$node.audio.url`

**Parameters:**
- `text` - Text to convert to speech
- `voice` - Voice name (e.g., "Aria", "Roger", "Sarah")
- `stability` - Voice stability (0-1)
- `similarity_boost` - Voice clarity (0-1)
- `speed` - Speech speed multiplier

### MiniMax Speech 2.6 HD (Best Quality)

```json
{
  "app": "fal-ai/minimax/speech-2.6-hd",
  "input": {
    "prompt": "$node-llm.output",
    "voice_setting": {
      "voice_id": "Wise_Woman",
      "speed": 1,
      "vol": 1,
      "pitch": 0
    },
    "output_format": "mp3"
  }
}
```

**Output:** `$node.audio.url`

**Parameters:**
- `prompt` - Text to convert to speech
- `voice_setting.voice_id` - Voice ID (e.g., "Wise_Woman", "Young_Man")
- `voice_setting.speed` - Speech speed (0.5-2)
- `voice_setting.vol` - Volume (0-1)
- `voice_setting.pitch` - Pitch adjustment (-12 to 12)
- `output_format` - Output format: `"mp3"`, `"wav"`, `"hex"`

### MiniMax Voice Clone

Clone a voice from an audio sample, then use the cloned voice ID in MiniMax Speech.

```json
{
  "app": "fal-ai/minimax/voice-clone",
  "input": {
    "audio_url": "$input.voice_sample",
    "text": "Preview text for the cloned voice",
    "model": "speech-02-hd"
  }
}
```

**Output:** `$node.audio.url`, `$node.voice_id`

**Use cloned voice in Speech 2.6 HD:**

```json
{
  "app": "fal-ai/minimax/speech-2.6-hd",
  "input": {
    "prompt": "$node-llm.output",
    "voice_setting": {
      "voice_id": "$node-voice-clone.voice_id"
    }
  }
}
```

### Other TTS Models

| Model | App ID | Notes |
|-------|--------|-------|
| MiniMax Speech 2.6 Turbo | `fal-ai/minimax/speech-2.6-turbo` | Fast |
| Chatterbox | `fal-ai/chatterbox/multilingual` | Multi-language |

---

## Text Utilities (CRITICAL for combining values)

**⚠️ These are the ONLY ways to combine text values - string interpolation is NOT supported!**

### Text Concat (2 texts)

Concatenates exactly TWO text values. `text1` can be static text!

```json
{
  "app": "fal-ai/text-concat",
  "input": {
    "text1": "Brand expert response:",
    "text2": "$node-llm.output"
  }
}
```

**Output:** `$node.results`

**Use Cases:**
- Add a label/prefix to a variable: `"text1": "Scene 1:", "text2": "$node.output"`
- Combine a static instruction with dynamic content

### Merge Text (Multiple texts)

Merges an ARRAY of text values with a separator.

```json
{
  "app": "fal-ai/workflow-utilities/merge-text",
  "input": {
    "texts": [
      "$node-a.results",
      "$node-b.results",
      "$node-c.results"
    ],
    "separator": "------"
  }
}
```

**Output:** `$node.text`

**Use Cases:**
- Combine 3+ LLM outputs before passing to the next node
- Merge multiple expert responses into a single context

### Pattern: Label + Merge

Common pattern for combining multiple labeled outputs:

```json
// Step 1: Add labels with text-concat
"label-brand": {
  "app": "fal-ai/text-concat",
  "input": {
    "text1": "Brand expert:",
    "text2": "$brand-llm.output"
  }
},
"label-visual": {
  "app": "fal-ai/text-concat",
  "input": {
    "text1": "Visual director:",
    "text2": "$visual-llm.output"
  }
},

// Step 2: Merge labeled outputs
"merged-context": {
  "depends": ["label-brand", "label-visual"],
  "app": "fal-ai/workflow-utilities/merge-text",
  "input": {
    "texts": ["$label-brand.results", "$label-visual.results"],
    "separator": "\n\n---\n\n"
  }
},

// Step 3: Use merged context
"final-llm": {
  "depends": ["merged-context"],
  "input": {
    "prompt": "$merged-context.text"
  }
}
```

---

## FFmpeg Utilities (CRITICAL)

### Extract Frame from Video

```json
{
  "app": "fal-ai/ffmpeg-api/extract-frame",
  "input": {
    "video_url": "$node-video.video.url",
    "frame_type": "first"
  }
}
```

**Output:** `$node.frame.url`

**frame_type options:** `"first"` or `"last"`

**Use Cases:**
- Get the last frame for video extension
- Get the first frame for transitions
- Extract a frame for first/last frame video generation

### Merge Multiple Videos

```json
{
  "app": "fal-ai/ffmpeg-api/merge-videos",
  "input": {
    "video_urls": [
      "$node-video-1.video.url",
      "$node-video-2.video.url",
      "$node-video-3.video.url"
    ]
  }
}
```

**Output:** `$node.video.url`

### Merge Audio and Video

```json
{
  "app": "fal-ai/ffmpeg-api/merge-audio-video",
  "input": {
    "video_url": "$node-video.video.url",
    "audio_url": "$node-music.audio_file.url"
  }
}
```

**Output:** `$node.video.url`

---

## Image Utilities

### Crop Image

Crops a portion of an image using percentage-based coordinates.

```json
{
  "app": "fal-ai/workflow-utilities/crop-image",
  "input": {
    "image_url": "$node-image.images.0.url",
    "x_percent": 0,
    "y_percent": 0,
    "width_percent": 33.333333,
    "height_percent": 33.333333
  }
}
```

**Output:** `$node.image.url`

**Parameters:**
- `x_percent`: Starting X position (0-100)
- `y_percent`: Starting Y position (0-100)
- `width_percent`: Width of crop area (0-100)
- `height_percent`: Height of crop area (0-100)

**Use Cases:**
- Split an image into grid tiles (3x3, 2x2, etc.)
- Extract a specific region from a generated image
- Create multiple crops for parallel processing

**Example: 3x3 Grid Split**

```json
// Top-left tile
"crop-1": { "input": { "x_percent": 0, "y_percent": 0, "width_percent": 33.33, "height_percent": 33.33 } }
// Top-center tile
"crop-2": { "input": { "x_percent": 33.33, "y_percent": 0, "width_percent": 33.33, "height_percent": 33.33 } }
// Top-right tile
"crop-3": { "input": { "x_percent": 66.67, "y_percent": 0, "width_percent": 33.33, "height_percent": 33.33 } }
// ... and so on for all 9 tiles
```

---

## Image Processing

### Upscale Image

```json
{
  "app": "fal-ai/seedvr/upscale/image",
  "input": {
    "image_url": "$node-image.images.0.url"
  }
}
```

**Output:** `$node.image.url`

### Remove Background

```json
{
  "app": "fal-ai/bria/background/remove",
  "input": {
    "image_url": "$node-image.images.0.url"
  }
}
```

**Output:** `$node.image.url`

---

## 3D Generation (Image to 3D)

### Hunyuan3D v3 (Recommended)

```json
{
  "app": "fal-ai/hunyuan3d-v3/image-to-3d",
  "input": {
    "input_image_url": "$node-image.images.0.url",
    "face_count": 500000,
    "generate_type": "Normal",
    "polygon_type": "triangle"
  }
}
```

**Output:** `$node.model_mesh.url`

**Parameters:**
- `input_image_url` - Source image URL
- `face_count` - Mesh detail level (default: 500000)
- `generate_type` - Generation mode: `"Normal"`, `"Fast"`
- `polygon_type` - Mesh type: `"triangle"`, `"quad"`

### Rodin v2 (Multi-view)

```json
{
  "app": "fal-ai/hyper3d/rodin/v2",
  "input": {
    "input_image_urls": [
      "$node-front.images.0.url",
      "$node-left.images.0.url",
      "$node-right.images.0.url",
      "$node-back.images.0.url"
    ],
    "quality_mesh_option": "500K Triangle",
    "material": "All"
  }
}
```

**Output:** `$node.model_mesh.url`

**Best for:** Multi-view 3D generation (provide multiple angles for better results)

116
fal-workflow/references/PATTERNS.md
Normal file
@@ -0,0 +1,116 @@
# Common Workflow Patterns

Reusable patterns for building fal.ai workflows.

## Pattern 1: LLM Prompt → Image → Video

```
[Input] → [LLM: Image Prompt] → [Image Gen]
                                     ↓
            [LLM: Video Prompt] → [Video Gen] → [Output]
```

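A minimal node sketch of this pattern using the document's default apps (the `$input.idea` field and the system prompts are illustrative):

```json
"node-image-prompt": {
  "id": "node-image-prompt",
  "type": "run",
  "depends": ["input"],
  "app": "openrouter/router",
  "input": {
    "prompt": "$input.idea",
    "system_prompt": "Write a detailed image generation prompt."
  }
},
"node-image": {
  "id": "node-image",
  "type": "run",
  "depends": ["node-image-prompt"],
  "app": "fal-ai/nano-banana-pro",
  "input": { "prompt": "$node-image-prompt.output" }
},
"node-video-prompt": {
  "id": "node-video-prompt",
  "type": "run",
  "depends": ["node-image-prompt"],
  "app": "openrouter/router",
  "input": {
    "prompt": "$node-image-prompt.output",
    "system_prompt": "Write a camera-motion prompt for animating this scene."
  }
},
"node-video": {
  "id": "node-video",
  "type": "run",
  "depends": ["node-video-prompt", "node-image"],
  "app": "fal-ai/bytedance/seedance/v1.5/pro/image-to-video",
  "input": {
    "prompt": "$node-video-prompt.output",
    "image_url": "$node-image.images.0.url"
  }
}
```
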
---

## Pattern 2: Parallel Processing (Fan-Out)

```
           → [Process A] →
[Hub Node] → [Process B] → [Merge] → [Output]
           → [Process C] →
```

All parallel nodes depend on the hub, NOT on each other.

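A sketch of the dependency shape with two LLM branches merged via merge-text (node names and system prompts are illustrative):

```json
"expert-a": {
  "id": "expert-a",
  "type": "run",
  "depends": ["hub"],
  "app": "openrouter/router",
  "input": {
    "prompt": "$hub.output",
    "system_prompt": "Respond as expert A.",
    "model": "google/gemini-2.5-flash"
  }
},
"expert-b": {
  "id": "expert-b",
  "type": "run",
  "depends": ["hub"],
  "app": "openrouter/router",
  "input": {
    "prompt": "$hub.output",
    "system_prompt": "Respond as expert B.",
    "model": "google/gemini-2.5-flash"
  }
},
"merge": {
  "id": "merge",
  "type": "run",
  "depends": ["expert-a", "expert-b"],
  "app": "fal-ai/workflow-utilities/merge-text",
  "input": {
    "texts": ["$expert-a.output", "$expert-b.output"],
    "separator": "\n\n"
  }
}
```

Note that `expert-a` and `expert-b` each depend only on `hub`, so they run in parallel; only `merge` waits for both.
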
---

## Pattern 3: Video Extension with Extract Frame

```
[Video 1] → [Extract Last Frame] → [Video 2 with Start Frame] → [Merge] → [Output]
```

```json
"node-extract": {
  "depends": ["node-video-1"],
  "app": "fal-ai/ffmpeg-api/extract-frame",
  "input": {
    "video_url": "$node-video-1.video.url",
    "frame_type": "last"
  }
},
"node-video-2": {
  "depends": ["node-extract", "node-prompt-2"],
  "app": "fal-ai/kling-video/o1/image-to-video",
  "input": {
    "prompt": "$node-prompt-2.output",
    "image_url": "$node-extract.frame.url"
  }
}
```

---

## Pattern 4: First/Last Frame Video (Kling O1)

```
[Start Image] →
                → [Kling O1 Video] → [Output]
[End Image]   →
```

```json
"node-video": {
  "depends": ["node-start-frame", "node-end-frame", "node-prompt"],
  "app": "fal-ai/kling-video/o1/image-to-video",
  "input": {
    "prompt": "$node-prompt.output",
    "image_url": "$node-start-frame.images.0.url",
    "tail_image_url": "$node-end-frame.images.0.url"
  }
}
```

---

## Pattern 5: Video with Custom Music

```
[Video Gen] →                    → [Merge Audio/Video] → [Output]
[Music Gen] → [audio_file.url] →
```

```json
"node-music": {
  "depends": ["input"],
  "app": "fal-ai/elevenlabs/music",
  "input": {
    "prompt": "$input.music_style"
  }
},
"node-merge": {
  "depends": ["node-video", "node-music"],
  "app": "fal-ai/ffmpeg-api/merge-audio-video",
  "input": {
    "video_url": "$node-video.video.url",
    "audio_url": "$node-music.audio_file.url"
  }
}
```

---

## Pattern 6: Multi-Destination Campaign (Complex)

Pattern for multi-destination marketing videos:

```
[Input: dest_1] → [Vision LLM: Prompt] → [Edit Image]         → [Upscale] →
                                                                           → [Vision LLM: Video Prompt] → [Video Gen]
                                         [Edit: Remove Text]  → [Upscale] →

[Input: dest_2] → [Same pattern...]
[Input: dest_3] → [Same pattern...]

All Videos → [Merge Videos] → [Output]
```
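The final fan-in step can be a single merge node that collects every branch's video. A sketch, assuming an ffmpeg-api merge endpoint in the same family as Patterns 3 and 5 (node ids and the `video_urls` parameter are illustrative):

```json
"node-merge-videos": {
  "depends": ["node-video-dest-1", "node-video-dest-2", "node-video-dest-3"],
  "app": "fal-ai/ffmpeg-api/merge-videos",
  "input": {
    "video_urls": [
      "$node-video-dest-1.video.url",
      "$node-video-dest-2.video.url",
      "$node-video-dest-3.video.url"
    ]
  }
}
```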
49
fal-workflow/references/WORKFLOWS.md
Normal file
File diff suppressed because one or more lines are too long
138
fal-workflow/scripts/create-workflow.sh
Executable file
@@ -0,0 +1,138 @@
#!/bin/bash

# fal.ai Workflow Creation Script
# Usage: ./create-workflow.sh --name NAME --title TITLE --description DESC --nodes JSON --outputs JSON
# Returns: Workflow JSON definition

set -e

# Default values
NAME=""
TITLE=""
DESCRIPTION=""
NODES=""
OUTPUTS=""

# Parse arguments
while [[ $# -gt 0 ]]; do
  case $1 in
    --name)
      NAME="$2"
      shift 2
      ;;
    --title)
      TITLE="$2"
      shift 2
      ;;
    --description)
      DESCRIPTION="$2"
      shift 2
      ;;
    --nodes)
      NODES="$2"
      shift 2
      ;;
    --outputs)
      OUTPUTS="$2"
      shift 2
      ;;
    *)
      echo "Unknown option: $1" >&2
      exit 1
      ;;
  esac
done

# Validate required inputs
if [ -z "$NAME" ]; then
  echo "Error: --name is required" >&2
  exit 1
fi

if [ -z "$TITLE" ]; then
  TITLE="$NAME"
fi

if [ -z "$NODES" ]; then
  echo "Error: --nodes is required (JSON array)" >&2
  exit 1
fi

if [ -z "$OUTPUTS" ]; then
  echo "Error: --outputs is required (JSON object)" >&2
  exit 1
fi

# Create temp directory
TEMP_DIR=$(mktemp -d)
trap 'rm -rf "$TEMP_DIR"' EXIT

echo "Creating workflow: $TITLE..." >&2

# Parse nodes array and build workflow structure
# This is a simplified version - the MCP tool handles complex validation
WORKFLOW_FILE="$TEMP_DIR/workflow.json"

# Build the workflow JSON
cat > "$WORKFLOW_FILE" << EOF
{
  "_type": "ComfyApp",
  "version": "0.1.0",
  "name": "$NAME",
  "title": "$TITLE",
  "description": "$DESCRIPTION",
  "nodes": {},
  "outputs": $OUTPUTS
}
EOF

# Process nodes and add to workflow
# Note: This is a basic implementation. For full validation, use the MCP tool.
echo "Processing nodes..." >&2

# Use Python/jq if available for proper JSON manipulation
if command -v python3 &> /dev/null; then
  python3 << PYTHON_EOF
import json
import sys

# Read workflow
with open("$WORKFLOW_FILE", "r") as f:
    workflow = json.load(f)

# Parse nodes
nodes = json.loads('''$NODES''')

# Build nodes object
for node in nodes:
    node_id = node.get("nodeId", "")
    model_id = node.get("modelId", "")
    node_input = node.get("input", {})
    depends_on = node.get("dependsOn", [])

    # Detect dependencies from input references
    for key, value in node_input.items():
        if isinstance(value, str) and value.startswith("\$") and not value.startswith("\$input"):
            ref_node = value.split(".")[0][1:]  # Remove \$ and get node name
            if ref_node not in depends_on:
                depends_on.append(ref_node)

    workflow["nodes"][node_id] = {
        "app": model_id,
        "input": node_input
    }

    if depends_on:
        workflow["nodes"][node_id]["dependsOn"] = depends_on

# Write result
print(json.dumps(workflow, indent=2))
PYTHON_EOF
else
  # Fallback: output the basic structure
  echo "Warning: Python not available, outputting basic structure" >&2
  cat "$WORKFLOW_FILE"
fi

echo "" >&2
echo "Workflow created successfully!" >&2