Last Updated: April 2026
Luma Dream Machine is a creative video generation platform for text-to-video and image-to-video work, popular for fast ideation, strong motion, and storytelling-focused workflows.
RunwayML is a cloud-based platform offering a range of AI tools tailored for creative applications such as video editing, image synthesis, and text-to-video generation. It enables users to integrate machine learning into artistic and design workflows, streamlining processes like background removal, motion tracking, and media creation. Widely used in fields like filmmaking, advertising, and digital art, RunwayML bridges the gap between traditional creative techniques and generative AI technologies, making advanced tools accessible for professional and experimental projects.
Viggle is powered by JST-1, a video-3D foundation model built with physics understanding. It gives users control over character movements and interactions, letting them mix, animate, or ideate videos starting from any static image or text prompt.
Sora, developed by OpenAI, is an AI model that generates realistic and imaginative videos from text prompts. It is designed to simulate the physical world in motion and can produce videos up to one minute long that adhere closely to user inputs. Before its broader release, Sora was tested with red teamers and creative professionals to refine its utility and safety.
Veo is Google DeepMind's flagship video generation model built for high-fidelity text-to-video, image-to-video, and native audio-video generation with strong physics and cinematic control.
Seedance 2.0 is ByteDance Seed's multimodal video model that supports text, image, audio, and video inputs for highly controllable cinematic generation.
Seedream 5.0 Lite is ByteDance Seed's multimodal image generation model, widely used alongside Seedance for storyboards, reference frames, visual development, and video pre-production workflows.
Kling AI is one of the leading cinematic video generators in 2026, known for strong motion consistency, multimodal control, longer shots, and native audio in its latest model line.
Pika is a creator-friendly AI video platform known for playful effects, fast generation, and easy prompt-to-video workflows that work well for social and short-form content.
PixVerse is a widely used AI video generator for text prompts, photos, character transformations, and viral social templates, with frequent model upgrades and strong creator adoption.
Hailuo AI is MiniMax's text-to-video and image-to-video platform, known for affordable generation, cinematic clips, and broad creator usage.
Found a useful AI tool? Save this directory or share it with your network to help others discover the future of AI.