Oliver Cameron’s Post

Oliver Cameron

CEO at Odyssey

I'm excited to share Odyssey, my new thing! 🍿 We're building Hollywood-grade visual AI, where beautiful scenery, characters, lighting, and motion can be both generated and directed. Our mission is to deliver a better way to create movies, TV shows, and video games.

We are inspired by the pioneering computer graphics research of the '80s and '90s, and by the founding story of Pixar. Those technical achievements led to movies that inspired millions of people and grossed billions of dollars. We believe visual AI can be a new frontier for storytelling.

But we must hold it to high standards. Today, we're surrounded by low-quality AI-generated text and imagery. For AI to work for Hollywood, it must be capable of producing glitch-free, mind-blowing visuals. Most importantly, visual AI must enable humans to tell the exact story they imagined, and to put their signature on it. Zack Snyder said it best: “The best movies, my favorite movies, are where you can feel the hand of the filmmaker.”

Text-to-video is interesting, but we believe the paradigm is limiting for Hollywood. As a storyteller, you have little ability to direct your environment or characters, or to iterate on the finer details of your shot until it's just right. More powerful models are required!

To solve this, we're going layers deeper. We're training four generative models that enable full control of the core layers of visual storytelling. Specifically, models to generate:

🧊 High-quality geometry
🏔️ Photorealistic materials
🌄 Stunning lighting
🏃‍♀️ Controllable motion

Individually, each model will enable you to precisely configure the minutiae of your scene. Combined, these models will generate video or scenes exactly as you wanted. This approach is harder to build than text-to-video, but we believe it will result in higher-quality movies, TV shows, and video games. Ultimately, that's how each of these technologies should be judged.
Hollywood has always adopted new technologies to create new masterpieces: 2001: A Space Odyssey, Star Wars, Jurassic Park, Toy Story, Avatar, Interstellar. We plan to work alongside Hollywood to shape this technology together, and to put it to the test. That's why we're starting to share our story today: to begin those conversations.

To bring this to life, we've recruited an incredible team of AI researchers and Hollywood artists. Our researchers come from Cruise, Wayve, Waymo, Tesla, Meta, and more, and our artists were behind Dune, Godzilla, Avengers, and more. The combination has been so powerful!

We've also closed $9M from world-class investors to fuel this. We're lucky to be supported by GV, DCVC, Air Street Capital, Elad Gil, Garry Tan, Soleio, Jeff Dean, Kyle Vogt, Qasar Younis, Guillermo Rauch, Soumith Chintala, and many others.

I couldn't be more excited to be working on a breakthrough technology like this, and to be collaborating with the most creative industry on the planet. Hollywood-grade visual AI is on the way 🫡
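The layered design described above — four models whose outputs compose into one scene — can be sketched in miniature. This is purely illustrative: the class and function names are hypothetical stand-ins, not Odyssey's actual models or API. The point it demonstrates is the claimed advantage over text-to-video: one layer can be re-directed without regenerating the other three.

```python
# Hypothetical sketch of the four-layer decomposition (geometry, materials,
# lighting, motion). All names here are illustrative assumptions.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Scene:
    geometry: str   # stand-in for a geometry model's output
    materials: str  # stand-in for a materials model's output
    lighting: str   # stand-in for a lighting model's output
    motion: str     # stand-in for a motion model's output

def generate_scene(prompt: str) -> Scene:
    # In the described design each layer would come from a separate
    # generative model; here they are stubbed to keep the sketch runnable.
    return Scene(
        geometry=f"mesh: {prompt}",
        materials=f"pbr: {prompt}",
        lighting=f"hdr rig: {prompt}",
        motion=f"anim curves: {prompt}",
    )

def redirect_lighting(scene: Scene, note: str) -> Scene:
    # The payoff of layering: iterate on lighting alone, leaving the
    # other three layers untouched.
    return replace(scene, lighting=f"hdr rig: {note}")

draft = generate_scene("coastal village at dusk")
final = redirect_lighting(draft, "warmer key light, longer shadows")
print(final.geometry == draft.geometry)  # True: geometry is untouched
print(final.lighting)                    # hdr rig: warmer key light, longer shadows
```

In a monolithic text-to-video model there is no equivalent of `redirect_lighting`; changing one aspect of the prompt regenerates everything.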

Mirko Jankovic

Freelance senior animator

1mo

It's amusing when people who have had nothing to do with the VFX industry claim to have the solution to take all of Hollywood to the next level. It appears more like another startup trying to capitalize on the current hype before moving on to the next one.

Ryan Grobins

VFX Supervisor | Head of 3D | Filmmaker

1mo

Side-stepping the obvious ethical issues of using publicly trained models and the associated copyright issues, there is one huge technical hurdle that will stop "Hollywood" (and I use that term loosely) from using tools like this: all the content current models have been trained on is 8-bit data. None of it is HDR 10-bit. So unless you are sourcing 10-bit (or higher) data by ripping HDR films and TV from the studios (obvious copyright issues, of course), "Hollywood" will be unable to use this.
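The bit-depth gap behind this objection is simple arithmetic: a 10-bit channel carries four times as many code values as an 8-bit one, so a model that only ever saw 8-bit data quantizes a smooth gradient into far fewer distinct steps — the banding that HDR grading exposes. A minimal illustration:

```python
# 8-bit vs 10-bit per-channel code values, and the effect on a smooth ramp.
def distinct_levels(bit_depth: int) -> int:
    """Number of distinct code values a channel of this bit depth can hold."""
    return 2 ** bit_depth

def quantize(value: float, bit_depth: int) -> int:
    """Map a normalized [0, 1] value to its nearest integer code value."""
    levels = distinct_levels(bit_depth) - 1
    return round(value * levels)

print(distinct_levels(8))   # 256
print(distinct_levels(10))  # 1024

# A smooth ramp sampled at 2048 points collapses to 256 unique values
# in 8-bit, but keeps 1024 in 10-bit.
ramp = [i / 2047 for i in range(2048)]
print(len({quantize(v, 8) for v in ramp}))   # 256
print(len({quantize(v, 10) for v in ramp}))  # 1024
```

(This sketches only the precision argument; real HDR pipelines also involve wider color gamuts and transfer functions such as PQ, which 8-bit training data likewise lacks.)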

Kennon Fleisher

Creative Director at Kinesis | Iconic visuals for Original Series, Documentaries, and Films

1mo

This is just an uninspiring and dated-looking manifesto in the style of an ad agency ripomatic from 2012, one promising a “solution” for a problem that doesn’t even exist — and no one in the professional world even wants. One that appears to barely make landscapes on par with Unreal Engine and calls it “storytelling”. The one thing these tech bros continually fail to recognize is that people want to see authentic human experiences, real places, real people, real situations — even if it’s a fictional story. That cannot be amalgamated out of stolen data or IP by these tech grifters. Just look at all the people commenting who are excited — none of them are creative people, just “AI gurus” and “data” people salivating over the fact that they might soon be able to appear creative. Hilarious.

Looks like you still have the same problem with hands 🤣

Joe Clinton

MEng Computer Science Graduate | Aspiring Offline RL Researcher

1mo

Tackling text-to-video using underlying 3D geometry has the advantages you've mentioned: high quality, consistency (aka glitch-free), and controllability. However, there are major drawbacks:

1. Video data is abundant but geometry data is in short supply, which means scenes will be less diverse and getting the model to generalise will be more difficult.
2. Four models instead of one increases complexity, which limits the models' ability to be scaled up.
3. Higher computational costs: generating 1 trillion triangles for a large-scale scene, instead of just the pixels seen by the camera, will be slower.

I'd be interested to learn more about how Odyssey plans to tackle these challenges, and I wish you the best of luck with this exciting endeavour!
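The scale of drawback 3 is easy to put numbers on. Taking the commenter's "1 trillion triangles" figure at face value and comparing it with the pixel count of a single 4K frame (a rough proxy for what a pixel-space model must produce per step) gives a gap of roughly five orders of magnitude:

```python
# Back-of-envelope comparison: primitives a full-scene geometry model might
# emit vs pixels a camera-view model must emit for one 4K frame.
ultra_hd_pixels = 3840 * 2160          # one UHD/4K frame: 8,294,400 pixels
scene_triangles = 1_000_000_000_000    # the "1 trillion triangles" figure above

ratio = scene_triangles / ultra_hd_pixels
print(f"{ultra_hd_pixels:,} pixels per 4K frame")
print(f"triangles-to-pixels ratio: {ratio:,.0f}x")  # ~120,563x
```

The comparison is deliberately crude — real renderers cull and level-of-detail most of those triangles, and video models generate many frames — but it conveys why generating a whole scene is expected to cost more than generating only what the camera sees.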

Jordan Brown

Enterprise Account Executive - Dell EMC NY

1mo

What sources will the training data be derived from? I am curious to see how high-quality content can be leveraged to train the models without infringing on the copyright of existing productions. Ingesting either public or personalized creations seems to open a can of worms around ownership and rights of a final production, and if done legally, it seems expensive. Very cool concept; interested to see how this progresses, and what policy may stem from it.

Nick Dunmur

photographer | producer | business & legal adviser for AOP | chair of BPC | also on Bluesky (@nickdunmur)

1mo

“Text-to-video is interesting, but we believe the paradigm is limiting for Hollywood. As a storyteller, you have little ability to direct your environment or characters, or to iterate on the finer details of your shot until it's just right.” … or, you could do it using humans, and get what you want as a result of collaboration and emotional investment. Seems obvious to me. Maybe you, and others from the ‘tech-bro’ frat, should read some Orson Welles? He said, “The enemy of art is the absence of limitation.” I think that’s apposite.

Visual AI significantly enhances storytelling by enabling the creation of detailed, immersive environments and characters. It allows for precise control over visual elements, ensuring that the narrative vision is fully realized. This technology fosters innovation, expedites production processes, and maintains high-quality output, revolutionizing how stories are told in movies, TV shows, and video games.

Ben Baker

Virtual Production Line Producer & Consultant | Unreal Fellow | Virtual Production EP on 'FATHEAD' at the ETC.

1mo

So this can be rights-cleared worldwide in perpetuity, as required by any broadcaster?

All such solutions should be offered on a creativity platform, with different versions offered to different creators, just like platforms for apps. That way even individuals could reap the benefits of the technology. Who knows, the next masterpiece might come from such an individual. Anyway, it could help creators develop their fan base, and maybe even earn from paid views of their work. Content creation on social media is already full-time work for many.
