
Video Usage Guide for Self-Prompting

At the moment, Video supports only Self-Prompting, meaning you’ll create your own text prompts to define how the generated animation should look and move.
In the near future, we’ll also introduce ready-to-use preset prompts that work well for most input images — making it even easier to achieve great results right out of the box.

Below, you’ll find proven examples, templates, and tips to help you create high-quality and dynamic video outputs using Self-Prompting.

Access Video Endpoint


Tools for Self-Prompting

When working with Self-Prompting, you can choose between two integration options depending on your technical setup and desired level of control.

Direct API Integration

  • Connect directly to our Self-Prompting API endpoints to build a fully custom solution (a rough request sketch for this option follows below).
  • This option gives you complete flexibility, but it requires you to manage your own database, store your filters, and maintain your own infrastructure.

Filter Creator

  • For most users, we recommend starting with the Filter Creator instead.
  • This tool provides all the features you need to create, manage, and deploy your own MultiSwap filters without any additional backend setup.
  • Once your filters are ready, you can use them instantly through our API, making this the fastest and easiest way to get started with Self-Prompting.
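
If you choose the direct-integration route, the sketch below shows roughly what a request could look like. It is a minimal illustration only: the endpoint URL, the request field names (image_url, prompt), and the bearer-token header are placeholder assumptions, so check the Video endpoint reference above for the actual request format.

    import requests

    API_KEY = "YOUR_API_KEY"
    # Placeholder URL for illustration only - use the real Video endpoint from the API reference.
    VIDEO_ENDPOINT = "https://api.example.com/v1/video"

    # A self-prompt following the templates later in this guide.
    prompt = "person looking at the camera, motion, movement, dynamic, camera zooms out"

    response = requests.post(
        VIDEO_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},  # auth scheme assumed for this sketch
        json={
            "image_url": "https://example.com/portrait.jpg",  # field name assumed
            "prompt": prompt,                                  # field name assumed
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.json())  # usually a job id or a URL for the finished video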

Below, you’ll find a link to the Filter Creator Guide, where you can learn how to create your first custom filters and start using them right away.

Filter Creator


How Do I Get Started with Self-Prompting for MultiSwap?


How to Self-Prompt for Video



Same Prompt – Different Results

The quality and composition of your input image have a major impact on the final video.
A well-structured prompt combined with a clear, high-quality image produces much better results.

Good Example:
person looking at the camera, money falling, flames motion, movement, dynamic, camera zooms out

Bad Example:
person looking at the camera, rain and thunder, motion, movement, dynamic, camera zooms out


Proven Prompt Templates

(Text in brackets [ ] is optional or can be replaced with similar terms.)

For Solo Subjects

  • person looking at the camera, motion, movement, dynamic, [camera orbits left/right]
  • person [smiling], motion, movement, dynamic, [camera zooms out]
  • a person moving in slow motion, movement, dynamic, [camera zooms in]

For Groups

  • people standing, motion, movement, dynamic, [camera orbits right]
  • people [smiling], motion, movement, dynamic, [camera zooms out]

Keyword Suggestions

  • Emotions: smiling, focused, angry
  • Camera Movements: camera zooms in / zooms out / orbits left / orbits right / pans left / pans right
  • Motion: movement, steady, slow motion, fast motion, static
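
If you build prompts in code, the small helper below shows one way to combine a subject, an optional emotion, the action keywords, and a single camera movement into a prompt string. It is an illustrative sketch, not part of the API.

    # Illustrative helper, not part of the API: assembles a self-prompt from the
    # template pieces above, keeping exactly one camera movement as recommended.
    def build_prompt(subject, emotion=None, camera=None):
        parts = [subject]
        if emotion:
            parts.append(emotion)                    # e.g. "smiling", "focused", "angry"
        parts += ["motion", "movement", "dynamic"]   # action keywords
        if camera:
            parts.append(camera)                     # e.g. "camera zooms out"
        return ", ".join(parts)

    print(build_prompt("person looking at the camera", camera="camera orbits left"))
    # person looking at the camera, motion, movement, dynamic, camera orbits left

    print(build_prompt("people standing", camera="camera orbits right"))
    # people standing, motion, movement, dynamic, camera orbits right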

Examples of Effective Camera Movements


Prompting Do’s

  • Use high-resolution, well-lit images
  • Keep the subject centered and in focus
  • Include action keywords like “motion”, “movement”, or “dynamic”
  • Stick to one camera movement per prompt (e.g., camera zooms in)
  • Optionally, add emotions such as smiling, focused, or angry
  • Match the prompt action to the input image

  • Example: a person walking [in slow motion], movement, dynamic, [camera zooms out]


What to Avoid

  • Vague instructions like “make it move”
  • Describing multiple people separately (use “people” instead)
  • Combining several camera movements in one prompt
  • Adding unrealistic elements not present in the image
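
Most of these points can be checked automatically before you send a request. The snippet below is a rough, illustrative sanity check based on the lists above; the keyword lists are taken from this guide and the function itself is not part of the API.

    # Rough pre-flight check for a self-prompt, based on the do's and don'ts above.
    # Purely illustrative - not part of the API.
    CAMERA_MOVES = ["zooms in", "zooms out", "orbits left", "orbits right",
                    "pans left", "pans right"]
    ACTION_WORDS = ["motion", "movement", "dynamic"]

    def check_prompt(prompt):
        warnings = []
        p = prompt.lower()
        if not any(word in p for word in ACTION_WORDS):
            warnings.append("Add an action keyword such as 'motion', 'movement' or 'dynamic'.")
        if sum(move in p for move in CAMERA_MOVES) > 1:
            warnings.append("Stick to one camera movement per prompt.")
        if "make it move" in p:
            warnings.append("Avoid vague instructions like 'make it move'.")
        return warnings

    print(check_prompt("person smiling, camera zooms in, camera orbits left"))
    # -> two warnings: missing action keyword, more than one camera movement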

Final Tips

  • Start with a base prompt template and refine it step by step.
  • If a result fails, simplify the description or reduce the number of actions.
  • Experimentation is key — small wording changes can make a big difference in the animation outcome.