Neubert & Midjourney give a new Runway to Genesis! 1️⃣ The making of the "GENESIS" movie trailer by Nicolas Neubert has gained attention for its high production value and creative use of AI-generated imagery. Neubert used two tools, Midjourney and Runway, to generate the images and assemble them into the trailer. 2️⃣ In response to growing interest in his process, Neubert shared a detailed guide on Twitter. It covers his creative and technical workflow, including how he places key scenes first while Midjourney and Runway generate their outputs; in the case of "GENESIS", he knew the exterior location shot would be the opening scene. 3️⃣ AI in video production is still a relatively unexplored area, offering new opportunities for amateur filmmakers and digital artists. Source: www.maginative.com #artificialintelligence #midjourneyai #videoai https://lnkd.in/epDvj4D4
Nagesh Nama’s Post
Inspired by @Elvis Deane's experiment at Hyperbolic Films, I've been pondering the potential of AI tools in filmmaking. Specifically, how can we harness these innovations to bring scripts to life on screen? Could AI-driven workflows evolve to the point where filmmaking becomes as solo an art form as painting, singing, or writing? If so, we might see a shift in the film industry's exclusivity. Traditionally, the nature of the business, often by design but largely as a side effect, has kept many talented individuals out of its lucrative, influential spheres due to the demands of showmanship and networking. I'd love to hear your thoughts: how would you leverage these tools to bring your storytelling visions to life in movies or shows? Share your ideas in the comments! https://lnkd.in/gT773Veh #AIfilms #moviemaking #filmmaking #storytelling #visual #narrative #texttovideo #imagetovideo #texttoimage #masteringAI #genAI
“Two Days” - A Midjourney, Runway, and Kling dialogue test
Short cinematic video made in RunwayML Gen-2 using a text-to-video prompt. #aigenerated #aigeneratedart #runwayml #steampunk #shortfilm
AI Video Guide: Runway Gen-3 Alpha

There's been a tsunami of AI video model announcements and promos in the past week, and it looks like the summer of super AI isn't going to be confined to LLMs. A couple of months back, OpenAI got a huge amount of attention with the promo for its AI video model, Sora, which at the time looked years ahead of anyone else in the AI video space. But after the past week, OpenAI may find that by the time it releases, its competitors are way ahead. There have been so many different announcements, so here's a comprehensive guide.

Runway has been in the video AI space for a while. They just announced their new base model, Gen-3 Alpha, which they say "excels at generating expressive human characters with a wide range of actions, gestures and emotions … [and] was designed to interpret a wide range of styles and cinematic terminology [and enable] imaginative transitions and precise key-framing of elements in the scene."

Cool factor: This model is very refined. Judging from the promos, the attention to detail and quality of the video output mean we can use very detailed prompts, a level of detail no tool has supported so far.
1. Photorealistic humans: the model is especially good at generating expressive characters, and you have more choice over the actions and emotions conveyed in the video.
2. Style preference: a huge range of cinematographic styles, plus custom options for companies that want more stylistically consistent video output.

How does it work? From Runway's blog: "Trained jointly on videos and images, Gen-3 Alpha will power Runway's Text to Video, Image to Video and Text to Image tools, existing control modes such as Motion Brush, Advanced Camera Controls, Director Mode as well as upcoming tools for more fine-grained control over structure, style, and motion."
1. Fine-grained detail: Gen-3 Alpha was trained on highly descriptive, temporally dense captions, allowing for creative transitions and accurate key-framing of scene elements.
2. Collaboration: Runway worked with a diverse team of artists and engineers to get the best possible quality and the widest range of artistic styles.

Prompt: Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city. You can watch the result here: https://lnkd.in/dVjfbS6Y

There are limits: video output is capped at 10 seconds. Then again, we're at the bottom of the ladder with this tech; this is the first of several models, each to be built on improved infrastructure.

Where and when can I get it:
- Very soon. Runway said "Gen-3 Alpha will be available for everyone over the coming days."
- There is also a form for organizations interested in Gen-3 Alpha to contact Runway directly.
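Runway hasn't published an API in this announcement, so purely as an illustration of what "detailed prompts with cinematic terminology" could look like in practice, here is a hypothetical sketch. The function, field names, and model identifier are all assumptions for illustration, not Runway's actual interface:

```python
# Hypothetical sketch: composing a cinematography-aware text-to-video
# request. None of these field names come from Runway's documentation;
# they only illustrate the kind of structured detail that Gen-3-style
# prompting encourages (subject, camera language, visual style).

def build_video_request(subject: str, camera: str, style: str,
                        duration_s: int = 10) -> dict:
    """Assemble a request payload; durations above 10 s are rejected,
    mirroring the 10-second cap mentioned in the post."""
    if duration_s > 10:
        raise ValueError("Gen-3 Alpha output is capped at 10 seconds")
    prompt = f"{camera}: {subject}. Style: {style}."
    return {"model": "gen3a", "prompt": prompt, "duration": duration_s}

# Example using the prompt quoted in the post, split into structured parts.
request = build_video_request(
    subject="subtle reflections of a woman on the window of a train "
            "moving at hyper-speed in a Japanese city",
    camera="Locked-off medium close-up",
    style="anamorphic, neon-lit, shallow depth of field",
)
print(request["prompt"])
```

Splitting the prompt into subject, camera, and style fields makes it easier to keep a consistent look across shots while varying only one element at a time.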
Remember when AI was just a buzzword and chatbots were as smart as a toaster? Well, times have changed. Now we've got AI everywhere, even in drones. It's like we're living in a sci-fi movie. Check out this video I found a while back: https://lnkd.in/ddyEbKrq. It's a short sci-fi film about mini drones that can recognize faces and follow targets. You may say they target only the bad guys, but what if, thanks to AI's non-deterministic nature, a drone 'thinks' its target is one of the kids playing in its 'mission' area? Or it may target you specifically because your 'data halo' happens to overlap with the intended target's, or because you liked a non-politically-correct post on social media... So here's the thing: with all this cool tech, we've got to remember to keep things safe. It's easy to get carried away with the shiny new toys and forget about the risks.
Sci-Fi Short Film “Slaughterbots” | DUST
Have your eyes been mesmerized by Sora's magical video generation capabilities? It is indeed impressive, a significant technological leap, especially in its ability to maintain subject consistency throughout a video. However, we should not overstate AI's current capability: what is showcased online is not always a true representation of reality. These videos are carefully crafted and may not reflect what a real product or service would look like.

Recently, several production teams, including Shy Kids, who created the Sora short film "Air Head," have gained limited access to Sora. This offers a glimpse into Sora's current workflow and pain points, a more realistic buyer's perspective.

The sobering reality:
300:1: the ratio of raw generated footage to final footage used. To get one second of final footage, the team may need to generate and review 300 seconds of raw material.
Render time: each render takes 10-20 minutes and produces 3-20 seconds of video.
Production time: creating a 60-second video would require roughly 60 * 300 minutes of footage generation (the figures imply about one minute of generation per second of raw footage), translating to about 12 days of work.
Multimodal input limitations: multi-camera consistency is challenging because multimodal input is not supported.
Extensive post-processing: significant post-processing is required, including grading, stabilization, upsampling, and removing unwanted elements.
Documentary-style editing: weaving a story from a vast amount of footage suits Sora better than following a strict script.
Cinematography control: filmmakers critically need camera control, including tracking, panning, tilting, and zooming. Initially Sora didn't support this, as OpenAI's researchers hadn't considered that creators would need to control the camera for storytelling.

Sora's potential is undeniable, but it's crucial to manage expectations and acknowledge its limitations. It's not a magic wand that produces flawless videos from scripts; it's a tool that requires significant human input and refinement to achieve truly impactful results. #SORA #AI #VideoGeneration #RealityCheck #Filmmaking

There was a lengthy article detailing the whole process; check it out: https://lnkd.in/ggrcCARA https://lnkd.in/gHiypc9Z
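The production-time figures above can be sanity-checked with a quick back-of-envelope script. The one-minute-per-raw-second render cost is an approximation implied by the post's numbers (10-20 minute renders yielding 3-20 seconds of video), not an official figure:

```python
# Back-of-envelope check of the Sora production-time estimate:
# a 300:1 raw-to-final footage ratio and an assumed render cost of
# roughly one minute of generation per second of raw footage.

RAW_TO_FINAL = 300          # seconds of raw footage per second kept
FINAL_LENGTH_S = 60         # target final video length, in seconds
RENDER_MIN_PER_RAW_S = 1.0  # assumed render cost, minutes per raw second

raw_footage_s = FINAL_LENGTH_S * RAW_TO_FINAL          # 18,000 s of raw video
render_minutes = raw_footage_s * RENDER_MIN_PER_RAW_S  # 18,000 minutes
render_days = render_minutes / 60 / 24                 # about 12.5 days

print(f"{raw_footage_s} s raw, ~{render_days:.1f} days of rendering")
```

At 12.5 days of pure generation time, before any review or post-processing, the "12 days of work" claim in the post is roughly consistent with its own ratio and render figures.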
air head 🎈 behind-the-scenes
https://www.youtube.com/
The new Motion Brush on RunwayML looks promising. Try all the parameters to get a better result; my first attempt on a bird made it look weird when it got close to the camera. I'm trying every slider separately to see the outcome until my credits run out; here are some early tests. #midjourneyart #runwayml #artdirector
Well done, with great subliminal content.
The Grinch by rUv. Produced by RunwayML & ElevenLabs with help from my kids.
🌟 Elevating Ad Films with Stunning Drone Cinematography 🚁

In the dynamic world of ad film production, drones have transformed the game, ushering in creative possibilities previously considered unattainable or cost-prohibitive. 📽️✨ Here are three remarkable examples that showcase the immense potential of drone cinematography:

DJI "See the Bigger Picture" (2018): DJI, a renowned drone manufacturer, unveiled the capabilities of its drones in this breathtaking ad. 🪂🏞️ It takes viewers on an enchanting journey through picturesque landscapes, capturing stunning aerial shots of mountains, forests, and waterfalls. Watch here: https://lnkd.in/dV7rZWGz

Red Bull: Known for its adrenaline-pumping content, Red Bull takes it to new heights in this ad. 🚀 World-class athletes perform incredible stunts and acrobatics captured from unique drone angles, creating a visually captivating blend of extreme sports and drone cinematography. Watch here: https://lnkd.in/d8yHsKxc

Mercedes-Benz: In this compelling ad, Mercedes-Benz uses drone cinematography to narrate a heartwarming story. 🚗🌄 Follow a father and son on an epic road trip through stunning landscapes, captured from a mesmerizing aerial perspective. The result? An ad that evokes a sense of adventure and freedom. Watch here: https://lnkd.in/d5pBYpJU

These ads not only harness drone cinematography for breathtaking visuals but also elevate the storytelling and amplify the brand's message. 📣🎥 Experience the magic of drones in filmmaking, where innovation knows no bounds. 🌐🪂 #DroneCinematography #AdFilms #Innovation #AerialArtistry #VisualStorytelling #VibeFilms #CMO #marketingdigital #societyandculture
DJI – See the Bigger Picture – Aug 23, 2018
Video AI is happening
Introducing Gen-3 Alpha: Runway’s new base model for video generation. Gen-3 Alpha can create highly detailed videos with complex scene changes, a wide range of cinematic choices and detailed art directions. Gen-3 Alpha will be available for everyone over the coming days. Learn more at runwayml.com/gen-3-alpha
Award-Winning Creative Director | Co-founder at Helveticans Creative Agency | Branding, UX, Creative Strategist & Marketing Specialist
While we've seen many AI video generators being used for visual masturbation rather than meaningful content creation and storytelling, the introduction of Gen-3 Alpha by Runway is another leap forward in the field of AI video generation. As someone who has been closely following the advancements in this space, I'm excited to see how this new base model pushes the boundaries of what's possible, and hopefully, steers the conversation towards more purposeful applications. Gen-3 Alpha promises to create highly detailed videos with complex scene changes, diverse cinematic choices, and intricate art directions. This development aligns with my belief that AI has the potential to enhance creative expression and storytelling when used thoughtfully. However, it's crucial for creators to approach these tools with discernment. While AI can generate visually impressive content, the emotional resonance and narrative depth often found in human-crafted works should not be overlooked. The story must always take precedence, with AI serving as a means to bring that vision to life. As we navigate this new frontier, let's remember to prioritise the human element that breathes life into the stories we tell. 🎥✨ #Helveticans #AIVideoGeneration #CreativeStorytelling #RunwayML