Artificial intelligence tools that can conjure whimsical artwork or realistic-looking images from written commands started wowing the public last year. But most people don't actually use them at work or home.
That could change as leading tech companies are competing to take text-to-image generators mainstream by integrating them into Adobe Photoshop, YouTube and other familiar tools.
But first, they're trying to convince users and regulators that they've tamed some of the Wild West nature of early AI image-generators with stronger safeguards against copyright theft and troubling content.
A year ago, a relatively small group of early adopters and hobbyists began playing with cutting-edge image generators such as Stable Diffusion, Midjourney and OpenAI's DALL-E.
"The previous ones were an interesting curiosity," but businesses were wary, said David Truog, an analyst at market research group Forrester.
A backlash followed, including copyright lawsuits from artists and photo stock company Getty, and calls for new laws to rein in generative AI technology's misuse to create deceptive political ads or abusive sexual imagery.
Those problems aren't yet solved. But a new crop of image generators claims to be business-ready this time.
"Alexa, create an image of cherry blossoms in the snow," is the kind of prompt that Amazon says U.S. customers will be able to speak later this year to generate a personalized display on their Fire TV screen.
Adobe, known for the Photoshop graphics editor it introduced more than three decades ago, was the first this year to release an AI generator designed to avoid legal and ethical problems created by competitors who trained their AI models on huge troves of images pulled off the internet.
"When we talk to customers about generative technology, mostly what we hear is a lot of the technology is really cool, but they don't feel like they can use it because of these questions," said Adobe's chief technology officer for its digital media business, Ely Greenfield.
That's why Adobe's product, called Firefly, was built on its own Adobe Stock image collection, as well as content it has licensed. Stock contributors also are getting some compensation out of the arrangement, Greenfield said.
"Adobe Firefly is clean legally, whereas the others are not," said Forrester's Truog. "You don't really care about that if you're just some dude having fun with generative AI."
But if you're a business or a creative professional thinking about using images on your website, apps, or in print layouts, advertising or email marketing campaigns, "it's kind of a big deal," Truog said. "You don't want to be getting into trouble."
Some competitors are taking note. ChatGPT-maker OpenAI unveiled its third-generation image generator DALL-E 3 on Wednesday, emphasizing its impressive capabilities and future integration with ChatGPT along with new safeguards to decline requests that ask for an image in the style of a living artist. Creators can also opt to exclude their images from training future models, though Truog notes that OpenAI hasn't said anything "about compensating authors whose work they use for training, even with permission."
In separate New York City showcase events Thursday, both Microsoft and Google-owned YouTube also unveiled new products infused with AI image generation.
Microsoft, a major investor in OpenAI, showed how it is already starting to bake DALL-E 3 into its graphics design tools, mostly for background editing, as well as its Bing search engine and chatbot. YouTube revealed a new Dream Screen for short YouTube videos that enables creators to compose a new background of their choosing.
Earlier this month, both Adobe and Stability AI, maker of Stable Diffusion, joined a larger group of major AI providers including Amazon, Google, Microsoft and OpenAI that agreed to voluntary safeguards set by President Joe Biden's administration.
One safeguard requires companies to develop methods such as digital watermarking to help people know if images and other content were AI-generated.
Microsoft executives said the company has built filters to determine what kinds of imagery can be generated from text prompts in Bing, citing those made with top political figures as content to monitor.
The goal is "to make sure it's not producing types of content we would never want to produce, like hateful content," said Sarah Bird, Microsoft's global head for responsible AI.
In a demonstration to an Associated Press reporter, a prompt that asked Microsoft's new tool for an image of "Hillary Clinton rock climbing" was met with rejection Thursday.
"Oops! Try another prompt," was the response. "Looks like there are some words that may be automatically blocked at this time."
___
AP business writers Cora Lewis and Haleluya Hadero contributed to this report.