We are going to explain what Adobe Firefly is and how this artificial intelligence that helps you generate images from scratch works. It is a direct competitor to other tools like DALL-E, Stable Diffusion, Midjourney or Bing Image Creator.
It is a tool that will initially work independently on a website, like other image generators, but in the future it will be integrated directly into Adobe's suite of applications, such as Photoshop, Illustrator, Adobe Creative Cloud, Document Cloud, Experience Cloud and Adobe Express.
What is Firefly and what does it do
Firefly is an artificial intelligence system created by Adobe, the same company behind image editing tools like Photoshop. Specifically, it is an artificial intelligence system trained to generate images from text, so that you only have to describe what you want to see and Firefly will draw it for you, generating the image from nothing.
The main difference of this tool is that it will be integrated into the Adobe suite of applications, such as Photoshop, Illustrator and others. However, you will also be able to use it independently in a web application once you receive access to it.
This means that, in addition to being an image generator, it is also a "co-pilot" AI: you can interact with the options of Adobe's own tools to edit the results.
Yes, you will be able to ask it to draw something for you from scratch on a blank canvas, which is the same thing other similar tools do. But it can also be used to add content to the images you are editing. For example, if you are making a composition, you can have it add a continuation below it: new imagery generated from nothing, but that understands the context of what you have already done.
You will also be able to generate other elements such as vectors or brushes, plus textures and more. All personalized and based on a few words or even a sketch.
In addition to this, since it is integrated into Adobe tools, you can also use those tools to edit the result you have generated from text. For example, you can create a vector image and then go to Illustrator to edit it directly.
You will also be able to create videos from text, starting from an image. For example, you tell it to draw you a landscape, and then with the Adobe tools you can add snow to it and tell it to make it snow, so you get an animation.
Other options it offers are using reference images to create content from one or more of them, or generating photorealistic three-dimensional images from a 3D model that you provide. And when you create a normal image, you will be able to select elements within it and ask it to change them, but only those elements.
How Firefly works
Adobe Firefly is a kind of diffusion model with advanced options. In other words, it is not only used to create images or vectors from scratch by understanding text, but it is also capable of interacting with the creation you have made to modify it, or of modifying and playing with images or models that you upload.
Its main difference is that it is not a stand-alone tool, but rather an option integrated within the main Adobe tools. So it is not a matter of going into Firefly and doing things; it is a matter of going into Photoshop or Illustrator and, within those, asking Firefly to do something and then continuing to interact with it.
Thus, we have an AI that consists of two parts. First of all, it is able to understand what you write to it, like other simpler diffusion AIs: it understands the structure of your text, the words and their order, translates it into data, and then generates an image based on what you have asked for. It is a new image, made from scratch, but based on your request.
And then the artificial intelligence also understands the context. It does not just spit out the image and forget about it; it also allows you to select areas of the image and ask it, for example, to change the design or color of a jacket. Also, if you upload one or more images, it will analyze them, understand your style and context, and generate content from them.
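The two-part flow described above (encode the prompt, generate an image, then regenerate only a selected area) can be sketched conceptually. This is a toy illustration, not Firefly's actual API: `encode_prompt`, `generate_image` and `inpaint` are hypothetical stand-ins for the real text encoder, diffusion sampler and region-editing step.

```python
import hashlib
import random

def encode_prompt(prompt):
    """Stand-in for a text encoder: maps the prompt's words and their
    order to a fixed-length numeric vector. Real systems use a learned
    transformer encoder instead of a hash."""
    digest = hashlib.sha256(prompt.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:8]]

def generate_image(embedding, size=8):
    """Stand-in for the diffusion sampler: deterministically produces
    a new 'image' (a grid of pixel values) conditioned on the embedding."""
    rng = random.Random(sum(embedding))
    return [[rng.random() for _ in range(size)] for _ in range(size)]

def inpaint(image, mask, embedding):
    """Regenerate only the pixels selected by the mask, leaving the rest
    of the image untouched -- the 'edit only those elements' idea."""
    rng = random.Random(sum(embedding) + 1)
    return [
        [rng.random() if mask[y][x] else image[y][x]
         for x in range(len(image[0]))]
        for y in range(len(image))
    ]

emb = encode_prompt("a landscape with mountains")
img = generate_image(emb)
# Select the top-left quadrant and ask for a change there only.
mask = [[y < 4 and x < 4 for x in range(8)] for y in range(8)]
edited = inpaint(img, mask, encode_prompt("add snow"))
# Pixels outside the mask are preserved exactly.
assert edited[7][7] == img[7][7]
```

The key property the sketch shows is that an edit request touches only the masked region; everything outside it is carried over unchanged, which is what lets Firefly change "only those elements" you select.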
There are also differences in how this AI has been trained. Instead of simply using photos from the Internet, many of which are copyrighted, Adobe has partnered with Nvidia, and they say it has been trained with openly licensed datasets and Adobe Stock. Also, Stock authors can receive compensation if they contribute their images to the training, or opt out of having their work used for training by marking it with a label for that purpose.
The cornerstone of this artificial intelligence is what Adobe has called a "style engine". This is an engine that lets you apply the style, color, lighting or composition you want to the images you create, simply by telling it in the prompt.
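In practice, driving a style engine from the prompt amounts to combining a subject with style descriptors. A minimal sketch of that idea, assuming a hypothetical `build_prompt` helper (Firefly's real interface exposes these as UI options rather than a function):

```python
def build_prompt(subject, style=None, lighting=None, composition=None):
    """Hypothetical helper: combine a subject with optional style,
    lighting and composition descriptors into a single prompt string."""
    parts = [subject]
    for descriptor in (style, lighting, composition):
        if descriptor:
            parts.append(descriptor)
    return ", ".join(parts)

prompt = build_prompt("a lighthouse on a cliff",
                      style="watercolor",
                      lighting="golden hour",
                      composition="wide angle")
# → "a lighthouse on a cliff, watercolor, golden hour, wide angle"
```

The same subject with different descriptors yields a different prompt, which is how one request can be re-rendered in many styles without redescribing the scene.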
How to use Adobe Firefly
At the moment, Adobe Firefly is still in beta, and to access the artificial intelligence you need to sign up for a waiting list through this website. When you are on the waiting list and it is your turn, you will receive an access link to use Firefly.
Initially you will be able to use Firefly independently in a web application. However, Adobe's idea is to integrate it into its applications in the future, although it is not yet clear when it will arrive in its ecosystem.
Beyond that, how you use Firefly will depend on where you use it. The web version will be similar to other systems for generating images from scratch, with a text field where you write the prompt and some extra options. In native Adobe applications, however, it will work with more of the options each application has.