An AI Film Festival And The Multiverse Engine

By Craig S. Smith, Contributor


In the glassy confines of Alice Tully Hall on Thursday, the third annual Runway AI Film Festival celebrated an entirely new art form.

The winning film, Total Pixel Space, was not made in the traditional sense. It was conjured by Jacob Adler, a composer and educator from Arizona State University, stitched together from image generators, synthetic voices, and video animation tools — most notably Runway’s Gen-3, the company’s text-to-video model (Runway Gen-4 was released in March).

Video generation technology reached the public in 2022 with Meta’s Make-A-Video, whose demos included a crude clip of a flying corgi wearing a red cape and sunglasses. Since then, it has transformed filmmaking, dramatically lowering barriers to entry and enabling new forms of creative expression. Independent creators and established filmmakers alike now have access to AI tools such as Runway that can generate realistic video scenes, animate storyboards, and even produce entire short films from simple text prompts or reference images.

As a result, production costs and timelines are shrinking, making it possible for filmmakers with limited resources to achieve professional-quality results and bring ambitious visions to life. The democratization of content creation is expanding far beyond traditional studio constraints, empowering anyone with patience and a rich imagination.

Adler’s inspiration came from Jorge Luis Borges’ celebrated short story The Library of Babel, which imagines a universe where every conceivable book exists in an endless repository. Adler found a parallel in the capabilities of modern generative machine learning models, which can produce an unfathomable variety of images from noise (random variations in pixel values, much like the “snow” on an old television set) and text prompts.
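
To picture that raw material, here is a toy sketch, assuming a standard 8-bit RGB image and using NumPy purely for illustration; it is my own example, not Runway’s pipeline:

```python
# A toy illustration (not Runway's code): the "noise" these models start
# from is just an array of random pixel values, like static on an old TV.
import numpy as np

rng = np.random.default_rng(seed=0)
static = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # 64x64 RGB "snow"

# A diffusion-style generator begins with an array like this and iteratively
# refines it, steered by a text prompt, until a coherent image remains.
print(static.shape, static.dtype)  # (64, 64, 3) uint8
```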

“How many images can possibly exist?” the dreamy narrator begins as fantastical AI-generated video plays on the screen: a floating, exploding building; a human-sized housecat curled on a woman’s lap. “What lies in the space between order and chaos?”

Adler’s brilliant script is a fascinating thought experiment that attempts to calculate the total number of possible images, unfurling the endless possibilities of the AI-aided human imagination.

“Pixels are the building blocks of digital images, tiny tiles forming a mosaic,” continues the voice, which was generated using ElevenLabs.

“Each pixel is defined by numbers representing color and position. Therefore, any digital image can be represented as a sequence of numbers,” the narration continues, the voice itself a sequence of numbers that describe air pressure changes over time. “Therefore, every photograph that could ever be taken exists as coordinates. Every frame of every possible film exists as coordinates.”
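
The arithmetic behind that claim is easy to sketch. Assuming standard 8-bit RGB color (an illustrative choice on my part, not a detail taken from the film), each pixel can take one of 256³ values, so the number of distinct images at a given resolution is that figure raised to the number of pixels:

```python
# Back-of-the-envelope arithmetic for the narration's claim (illustrative
# figures, not taken from the film). With 8-bit RGB color, each pixel takes
# one of 256**3 values, so a width x height image has
# (256**3) ** (width * height) possible configurations.
def possible_images(width: int, height: int, bits_per_channel: int = 8) -> int:
    """Count of distinct RGB images at a given resolution and color depth."""
    colors_per_pixel = (2 ** bits_per_channel) ** 3  # 16,777,216 for 8-bit RGB
    return colors_per_pixel ** (width * height)

# Even a 32x32 thumbnail yields a number that dwarfs the count of atoms in
# the observable universe (~10**80), so we print its digit length instead.
digits = len(str(possible_images(32, 32)))
print(f"Distinct 32x32 RGB images: a {digits:,}-digit number.")  # 7,399 digits
```

Every photograph and every frame of every possible film, as the narration puts it, is a single coordinate somewhere in that unimaginably vast space.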

Runway was founded in 2018 by Cristóbal Valenzuela, Alejandro Matamala, and Anastasis Germanidis after they met at New York University’s Tisch School of the Arts. Valenzuela, who serves as CEO, says he fell in love with neural networks in 2015 and couldn’t stop thinking about how they might be used by people who create.

Today, Runway is a multi-million-user platform serving filmmakers, musicians, advertisers, and artists, and it has been joined by rivals including OpenAI’s Sora and Google’s Veo 3.

What separates Runway from many of its competitors is that it builds its models from scratch. Its research team, which makes up most of the company, develops proprietary models that can now generate up to about 20 seconds of video.

The result, as seen in the works submitted to the AI Film Festival, is what Valenzuela calls “a new kind of media.” The word film may soon no longer apply. Nor, perhaps, will filmmaker. “The Tisches of tomorrow will teach something that doesn’t yet have a name,” he said during opening remarks at the festival.

Indeed, Adler is not a filmmaker by training, but a classically trained composer, a pipe organist, and a theorist of microtonality. “The process of composing music and editing film,” he told me, “are both about orchestrating change through time.”

He used the image generation platform Midjourney to create thousands of still images, then animated them with Runway and synthesized the narrator’s voice with ElevenLabs. The script he wrote himself, drawing on Borges, combinatorics, and the sheer mind-bending number of possible images that can exist at a given resolution. He edited it all together in DaVinci Resolve.

The result? A ten-minute film that feels as philosophical as it is visual.

It’s tempting to frame all this as the next step in a long evolution: from the Lumière brothers to CGI, from Technicolor to TikTok. But what we’re witnessing isn’t a continuation. It’s a rupture.

“Artists used to be gatekept by cameras, studios, budgets,” Valenzuela said. “Now, a kid with a thought can press a button and generate a dream.”

At the Runway AI Film Festival, the lights dimmed, and the films came in waves of animated hallucinations, synthetic voices, and impossible perspectives. Some were rough. Some were polished. All were unlike anything seen before. This isn’t about replacing filmmakers. It’s about unleashing them.

“When photography first came around — actually, when daguerreotypes were first invented — people just didn’t have the word to describe it,” Valenzuela said during his opening remarks at the festival. “They used this idea of a mirror with a memory because they’d never seen anything like that. … I think that’s pretty close to where we are right now.”

Valenzuela was invoking Oliver Wendell Holmes Sr.’s phrase to convey how photography could capture and preserve images of reality, allowing those images to be revisited and remembered long after the moment had passed. Just as photography once astonished and unsettled, generative media now invites a similar rethinking of what creativity means.

When you see it — when you watch Jacob Adler’s film unfold — it’s hard not to feel that the mirror is starting to show us something deeper. AI video generation is a kind of multiverse engine, enabling creators to explore and visualize an endless spectrum of alternate realities, all within the digital realm.

“Evolution itself becomes not a process of creation, but of discovery,” his film concludes. “Each possible path of life’s development … is but one thread in a colossal tapestry of possibility.”




