In the award-winning animated film “The Wild Robot,” the robot Roz realizes she must activate learning mode in order to understand and communicate with the world around her. Her system not only had to learn visual elements, but also to incorporate and interpret sounds as language.
Recently, Adobe’s generative AI platform, Firefly, added sound generation via prompting, and that capability expands today with a new announcement.
Generate Sound Effects (beta) was previously available in the Firefly system. Users can use their own voice to control the style and pacing of the performance, so to speak (pun intended). The sounds are generated quickly, in a fraction of the time spent creating visual assets with generative AI.
This morning, I prompted Firefly to generate a robot welcoming you to this website… and the resulting audio was creepy enough to prompt nightmares (pun intended). So, word recognition doesn’t appear to be part of Generate Sound Effects via that route. The Text-to-Avatar system was more appropriate for that prompt anyway. Perhaps we’re not yet as sophisticated as Wild Robot Roz, who transitions in Chapter 20 of the book: “…she no longer heard animal noises. Now she heard animal words.”
As of this morning (7/17/25), for me, “Text to Sounds (beta)” and “Voice to Sound Effects (beta)” are now listed as options for sound generation in the Firefly web application. Generate Sound Effects used to be listed as an option on the main landing page, but the revamped Generate Sound Effects is where you actually land after you click through either Text to Sounds or Voice to Sounds.
Moooooving along… the system also promises smoother motion and transitions in the Firefly video model.
Also launching this morning is the ability to generate images and videos through a number of integrations of other AI video models via Firefly. Of particular note to video editors and photographers, Topaz Labs’ popular image and video upscalers are headed to Firefly Boards (Adobe’s Firefly-powered mood board app that launched in June). Do we have a future with Topaz upscalers further integrated into Firefly or… dream of dreams… directly in Premiere?? “Cannot compute!”
Google Veo 3 with audio is a new integration in Text to Video in the Firefly app, and Runway’s Gen-4 video is a new integration into Firefly Boards. Also coming soon to Boards is Moonvalley’s Marey, and Pika 2.2 is headed to the app.
No matter which model is used, Adobe has made clear that content made through Firefly will not be used to train AI models. All of the integrations can be accessed directly through Firefly. Meanwhile, Content Credentials are applied to content created in Adobe Firefly, which appears to still be the only commercially safe AI video generator on the list (and Adobe makes this abundantly clear when you select your generation option).
Non-Adobe models can also be turned off by an organization, as seen here in my higher education setup.
Also available in Firefly video this A.M. are composition reference, keyframe cropping, and style presets within the Firefly app. Text to Avatar (beta) comes to the app as well.
More information is available via Adobe’s announcement.