Gen-3 Alpha can create surreal and novel transitions between all kinds of objects, animals and characters. To learn more, read the prompt guide: https://lnkd.in/epW-P4S9
Where's the data from?
wow! visual diarrhea! you guys did it! useless video! pixel puke!
It’s cool, but without image guidance it’s hard to make it a part of the pipeline. Results are too unpredictable.
Really appreciate the documentation: https://help.runwayml.com/hc/en-us/articles/30586818553107-Gen-3-Alpha-Prompting-Guide The visual aids have been useful for mapping keywords to the intended visuals.
Once again, proof that ideas can transition through AI to become realities, and not the other way around. That's why creativity has no boundaries but those defined by the ability to execute… whatever the tools.
Ooooooh, you posting this today is so TIMELY. I had an idea for a video that needs this capability. 🤔 I am SUPER excited to get hands-on with Runway Gen-3 Alpha now!!! 🤩
What a time to be alive with technology!
Can we use source video as a start or end point? Would love to turn my cat into a dragon, or vice versa.
Okay, I have to say that trippy things from AI are quite cool. Just how trippy can you get?
Croak the man-frog who likes ripping people to pieces 🐸🌊😱💔
This ought to solve the Internet garbage shortage.