Whenever Madonna sings the 1980s hit "La Isla Bonita" on her concert tour, moving images of swirling, sunset-tinted clouds play on the giant arena screens behind her.
To get that ethereal look, the pop legend embraced a still-uncharted branch of generative artificial intelligence: the text-to-video tool. Type some words, say, "surreal cloud sunset" or "waterfall in the jungle at dawn," and an instant video is made.
Following in the footsteps of AI chatbots and still-image generators, some AI video enthusiasts say the emerging technology could one day upend entertainment, enabling you to choose your own movie with customizable story lines and endings. But there's a long way to go before it can do that, and plenty of ethical pitfalls along the way.
For early adopters like Madonna, who's long pushed art's boundaries, it was more of an experiment. She nixed an earlier version of "La Isla Bonita" concert visuals that used more conventional computer graphics to evoke a tropical mood.
"We tried CGI. It looked pretty bland and cheesy and she didn't like it," said Sasha Kasiuha, content director for Madonna's Celebration Tour that continues through late April. "And then we decided to try AI."
ChatGPT-maker OpenAI gave a glimpse of what sophisticated text-to-video technology might look like when the company recently showed off Sora, a new tool that's not yet publicly available. Madonna's team tried a different product from New York-based startup Runway, which helped pioneer the technology by releasing its first public text-to-video model last March. The company released a more advanced "Gen-2" version in June.
Runway CEO Cristóbal Valenzuela said that while some see these tools as a "magical device that you type a word and somehow it conjures exactly what you had in your head," the most effective users are creative professionals looking for an upgrade to the decades-old digital editing software they're already using.
He said Runway can't yet make a full-length documentary. But it could help fill in some background video, or b-roll, the supporting shots and scenes that help tell the story.
"That saves you perhaps like a week of work," Valenzuela said. "The common thread of a lot of use cases is people use it as a way of augmenting or speeding up something they could have done before."
Runway's target customers are "large streaming companies, production companies, post-production companies, visual effects companies, marketing teams, advertising companies. A lot of folks that make content for a living," Valenzuela said.
Dangers await. Without effective safeguards, AI video-generators could threaten democracies with convincing "deepfake" videos of things that never happened, or, as is already the case with AI image generators, flood the internet with fake pornographic scenes depicting what appear to be real people with recognizable faces. Under pressure from regulators, major tech companies have promised to watermark AI-generated outputs to help identify what's real.
There also are copyright disputes brewing over the video and image collections the AI systems are being trained upon (neither Runway nor OpenAI discloses its data sources) and to what extent they are unfairly replicating copyrighted works. And there are fears that, at some point, video-making machines could replace human jobs and artistry.
For now, the longest AI-generated video clips are still measured in seconds, and can feature jerky movements and telltale glitches such as distorted hands and fingers. Fixing that is "just a question of more data and more training," and the computing power on which that training depends, said Alexander Waibel, a computer science professor at Carnegie Mellon University who's been researching AI since the 1970s.
"Now I can say, 'Make me a video of a rabbit dressed as Napoleon walking through New York City,'" Waibel said. "It knows what New York City looks like, what a rabbit looks like, what Napoleon looks like."
Which is impressive, he said, but still far from crafting a compelling storyline.
Before it released its first-generation model last year, Runway's claim to AI fame was as a co-developer of the image-generator Stable Diffusion. Another company, London-based Stability AI, has since taken over Stable Diffusion's development.
The underlying "diffusion model" technology behind most leading AI generators of images and video works by mapping noise, or random data, onto images, effectively destroying an original image and then predicting what a new one should look like. It borrows an idea from physics that can be used to describe, for instance, how gas diffuses outward.
"What diffusion models do is they reverse that process," said Phillip Isola, an associate professor of computer science at the Massachusetts Institute of Technology. "They kind of take the randomness and they congeal it back into the volume. That's the way of going from randomness to content. And that's how you can make random videos."
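That forward, destructive half of the process can be sketched in toy form: repeatedly blend a signal with random noise until nothing of the original survives, which is exactly the state a trained model learns to reverse step by step. The sketch below is a minimal illustration only; the function name, noise schedule, and four-value "image" are all assumptions for demonstration, not any real model's training code.

```python
import math
import random

def forward_diffuse(pixels, steps=1000, beta=0.02, seed=0):
    """Toy forward diffusion: repeatedly shrink the signal slightly
    and mix in fresh Gaussian noise, destroying the original values."""
    rng = random.Random(seed)
    x = list(pixels)
    for _ in range(steps):
        x = [math.sqrt(1 - beta) * v + math.sqrt(beta) * rng.gauss(0, 1)
             for v in x]
    return x

original = [0.9, 0.1, 0.5, 0.7]   # a tiny stand-in for an image
noised = forward_diffuse(original)

# After many steps the output is statistically indistinguishable from
# pure noise; a diffusion model is trained to undo this, one small
# denoising step at a time, turning randomness back into content.
```

The generative direction, which Isola describes, runs the loop in reverse: start from pure noise and let a trained network predict and subtract the noise at each step.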
Generating video is more complicated than still images because it needs to take into account temporal dynamics, or how elements within the video change over time and across sequences of frames, said Daniela Rus, another MIT professor who directs its Computer Science and Artificial Intelligence Laboratory.
Rus said the computing resources required are "significantly higher than for still image generation" because "it involves processing and generating multiple frames for each second of video."
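Rus's point can be made concrete with back-of-the-envelope arithmetic; the frame rate, resolution, and clip length below are illustrative assumptions, not any particular model's settings.

```python
# Why video costs more than a still image: the model must synthesize
# (and keep temporally consistent) many frames per second of output.
WIDTH, HEIGHT = 1280, 720   # pixels per frame (assumed resolution)
FPS = 24                    # frames per second (assumed frame rate)
SECONDS = 4                 # assumed clip length

pixels_per_image = WIDTH * HEIGHT                   # one still image
pixels_per_clip = pixels_per_image * FPS * SECONDS  # whole clip

# A 4-second clip at 24 fps is 96 frames, so roughly 96x the raw
# output of a single image, before the extra cost of coherence.
ratio = pixels_per_clip // pixels_per_image
```

And that multiplier only counts raw output; enforcing consistency across frames adds further work on top.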
That's not stopping some well-heeled tech companies from trying to outdo one another in showing off higher-quality AI video generation at longer durations. Requiring written descriptions to make an image was just the start. Google recently demonstrated a new project called Genie that can be prompted to transform a photograph or even a sketch into "an endless variety" of explorable video game worlds.
In the near term, AI-generated videos will likely show up in marketing and educational content, providing a cheaper alternative to producing original footage or obtaining stock videos, said Aditi Singh, a researcher at Cleveland State University who has surveyed the text-to-video market.
When Madonna first talked to her team about AI, the "main intention wasn't, 'Oh, look, it's an AI video,'" said Kasiuha, the creative director.
"She asked me, 'Can you just use one of those AI tools to make the picture more crisp, to make sure it looks current and looks high resolution?'" Kasiuha said. "She loves when you bring in new technology and new kinds of visual elements."
Longer AI-generated movies are already being made. Runway hosts an annual AI film festival to showcase such works. But whether that's what human audiences will choose to watch remains to be seen.
"I still believe in humans," said Waibel, the CMU professor. "I still believe that it will end up being a symbiosis where you get some AI proposing something and a human improves or guides it. Or the humans will do it and the AI will fix it up."
___
Associated Press journalists Joseph B. Frederick and Rodrique Ngowi contributed to this report.