Ask Adobe Design: How are you using Adobe Firefly?

Adobe Design
Published in Thinking Design · Jun 21, 2023

How our team is using our generative AI model in their personal and professional projects

Digital illustration created in Firefly

With generative workflows in Adobe Photoshop (Generative Fill), Adobe Illustrator (Generative Recolor), and Adobe Express (text-to-image and text effects), Firefly has been changing the way our designers brainstorm, create concept art, and complete repetitive tasks. Adobe Design helped shape the Firefly experience, and our team members are also using the technology in a range of projects, both professional and personal. We spoke with a handful of Firefly power users from Adobe Design to hear how they’re using generative AI in their workflows:

“The ability to create unique Photoshop textures (and generate multiple versions) using Firefly seems exponential.”

Lee Brimelow, Software Development Engineer, Design Engineering

“I’ve spent countless hours since Firefly was released in beta creating a wide variety of content, ranging from cute and funny images to compelling and beautiful illustrations (creative minds can run wild with it). But I’ve also been trying to create more useful assets that designers and photographers commonly use to augment their work: Photoshop overlay textures, which usually live in a layer above the main content and are blended onto it using a blend mode. These textures can be used to adjust a photo’s lighting, to add grunge or grain, and to create a whole host of other effects. In the example below, I used a rainbow light-leak texture to recolor an existing photograph.

“Finding just the right overlay texture can be time-consuming, so the ability to generate them in Firefly is great, and the ability to create unique textures and generate multiple versions seems exponential. The work I’ve been doing only scratches the surface of what’s possible, but I still don’t envision that generative AI will ever replace creative professionals; I see it as empowering them to bring their designs and photographs more easily to the level that’s in their mind’s eye.”

The original image of the abandoned car was something I’d generated previously in Firefly using the prompt, “shot of an abandoned car that crashed into the woods, cloudy, foggy, and rainy.”
The next step was to try to generate a colorful light-leak texture which, when applied to the image, would change its color and feel. I started with the prompt “colorful light leak overlay,” but prompts can be tricky: sometimes you get what you envisioned on the first attempt, but it often takes time and patience to get what you’re looking for. I was after a colorful blurred texture, but initially nothing I generated worked.
I usually start with short prompts and add to them as needed. I added “on a black background,” since it would make things much easier when trying to overlay it onto the image, and after several failed attempts I finally got what I was looking for.
The last step was to bring both images into Photoshop to create the final piece: I placed the texture onto a layer directly above the photo of the abandoned car, applied an “overlay” blend mode to the texture, and set its opacity to 60%. (When applying an overlay texture, it’s worth cycling through all the blend modes to find the one that works best for that texture.)
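For readers curious about what the “overlay” blend mode actually computes, here’s a minimal NumPy sketch of the standard overlay formula combined with layer opacity. It’s an approximation for illustration; Photoshop’s internal implementation and color management may differ, and values here are normalized to [0, 1].

```python
import numpy as np

def overlay_blend(base, texture, opacity=0.6):
    """Approximate the 'overlay' blend mode with layer opacity.

    base, texture: float arrays with values in [0, 1].
    opacity: how strongly the blended result replaces the base (0.6 = 60%).
    """
    # Overlay darkens where the base is dark and lightens where it is light.
    blended = np.where(
        base < 0.5,
        2.0 * base * texture,
        1.0 - 2.0 * (1.0 - base) * (1.0 - texture),
    )
    # Mix the blended result back with the base at the given layer opacity.
    return base * (1.0 - opacity) + blended * opacity

# Example: a mid-gray photo pixel lit by a bright light-leak texture pixel.
result = overlay_blend(np.array([0.4]), np.array([0.9]))
```

This is why a light-leak texture on a black background works so well: black texture pixels leave dark areas of the photo nearly untouched, while the colorful bright regions push the photo’s tones toward their hue.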

“The ability for Firefly to produce convincing portraiture from scratch is extremely practical, especially when I need unique assets for demos.”

Davis Brown, Experience Designer, Digital Imaging

“Working with Firefly to generate photorealistic portraits is of particular interest to me in my creative process. The ability for it to produce convincing, lifelike images from scratch is extremely practical, especially when I need unique assets for demos. The reactions of people when they realize that these convincing images are entirely AI-generated never cease to amaze me. It’s a powerful testament to the advancements in this technology and its potential in the world of art. My process is driven by the ongoing progression of AI, so it’s constantly evolving. Every day I’m learning and exploring new creative territories; it’s an exhilarating part of my journey as an artist.”

Creating photorealistic portraits with Firefly involves a precise setup of the prompt. I start by outlining the camera view and the shot’s positioning. Then, I detail the subject’s appearance and surroundings, including their clothing and the setting (including specifics about the environment and lighting). An example of a prompt I might use is, “a medium-shot portrait of a person in a patterned chore jacket, in an art studio, with monstera plants, sunlight streaming through windows during the golden hour DSLR telephoto HD photo backlit.” This detailed approach helps guide the AI model to craft a realistic portrait that aligns with my vision.
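That layered prompt structure (shot, subject, setting, lighting, style cues) can be treated like a template. Here’s a purely illustrative sketch of assembling a prompt from those parts; the function and field names are hypothetical, not part of any Firefly API, and the joining commas are a simplification of the free-form prompt quoted above.

```python
def build_portrait_prompt(shot, subject, setting, lighting, style):
    """Join prompt components in the order described above:
    camera view/shot, subject, setting, lighting, then style cues."""
    return ", ".join([shot, subject, setting, lighting, style])

prompt = build_portrait_prompt(
    shot="a medium-shot portrait",
    subject="a person in a patterned chore jacket",
    setting="in an art studio, with monstera plants",
    lighting="sunlight streaming through windows during the golden hour",
    style="DSLR telephoto HD photo backlit",
)
```

Keeping the components separate makes it easy to swap one element (say, the lighting) while holding the rest of the portrait description constant.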
The next phase is organization: curating my favorite creations in Adobe Lightroom. I import all my favorite photos, compare them side by side, rate them, make basic color adjustments, and upscale the image resolution for quality.
The refinement process occurs in Photoshop. Using Generative Fill I expand the images, tweak the backgrounds, regenerate parts of clothing, and generate new objects or scenery to tell a new story.
All these portraits were generated entirely with Adobe Firefly and edited in Photoshop with Generative Fill.

“Firefly has a style engine that’s extremely useful for achieving a consistent aesthetic throughout a collection of images.”

Veronica Peitong Chen, Experience Designer, Machine Intelligence & New Technologies

“Imagine you’re asked to paint a portrait of your life story. How would you captivate the audience and make every moment come alive? That was the challenge I faced when preparing for Pivotal Moments, one of Adobe Design’s internal speaker series that gives people an opportunity to share transformative moments from their lives and careers.

“I’d crafted my speech around three pivotal moments that shaped my academic and career journey. As part of the core team building and testing Firefly, it occurred to me that it would be the perfect way to create a visual theme to thread through my story. Firefly empowers storytellers to experiment, refine, and enhance their visuals until they capture the essence of the story they want to tell. With each iteration, I had the opportunity to fine-tune the results, adjust the composition, and experiment with visual elements, a process that ensured the results closely matched my memory while effectively illustrating the story.”

Using my written narrative as a starting point, I fed it into Firefly as prompts to generate highly customized visuals for a personalized story. One of the prompts was based on a story where my mom took me to a class on a bicycle on a rainy day. With just a few adjustments, Firefly conjured an image that so authentically captured the environment, the mood, and the essence of the story itself that even I felt transported back in time.
Firefly has a style engine that was extremely useful for achieving a consistent aesthetic throughout the collection of images in my presentation. With “palette knife” and “oil painting” styles selected across all generations, I was easily able to illustrate different subjects and moments with a consistent visual language.
I generated 83 images in total and ended up using 24.
Firefly-generated images helped me thread a visual theme through my Pivotal Moments presentation deck.

“The ability to create any scene I can dream up has really opened possibilities for the types of stories and analogies I use in internal demos.”

Kelsey Mah Smith, Experience Designer, Machine Intelligence & New Technologies

“Part of my job is to shed light on the unknown, to tell stories rooted in research, trends, and an understanding of user needs. I often create decks and designs to share abstract and broad concepts with other teams, which requires visual analogies to set the stage and strengthen my narrative. Standard deck templates felt stale and didn’t do much to support the variety of stories and concepts I wanted to convey. Whenever I needed to customize a deck, I would search stock sites or create illustrations on my own, and it was time-consuming to get the right images, textures, and illustrations.

“I’ve been using Firefly for almost all my projects since the beta launched. It’s where I start ideating concepts for what I want my story or visual theme to be. I’ll generate everything from textures and icons to full-on visual scenes to help support my stories. From there I download the assets and bring them directly into a design tool where I’ll collage, mask, and layer them. Being able to create any scene I can dream up without having to search across stock sites or public-domain images has really opened possibilities for the types of stories and analogies I can use. In addition, the time it takes to produce custom assets has dropped while the variety of assets and themes has increased. Essentially, it’s faster to be even more creative than I was before, which ultimately helps me get back to designing strategies to make our products and features easy for our customers to use.

“For this particular deck I wanted to use a deep-space analogy, to evoke the vastness and excitement of the unknown in space exploration.”

I started with a simple prompt, “planets in space,” but it generated images that were much too stylized and fantastical.
From there, I continued to refine the prompt by adding the words “high resolution” and “black space” until I got the look I had in mind.
I still needed other images with textures like smoke and dust that I could layer onto the planets to make the whole composition a bit more abstract — without detracting from the planets and the emptiness of space. The prompt “floating mist over black background” generated exactly what I needed.
I ultimately generated three images. None of them required editing so I dropped them in a design tool where I collaged and layered, using different effects and masks, to create my final title screen.

“I can quickly generate images on a specific theme or concept which not only sparks ideas but helps me explore visuals I hadn’t even considered.”

Tomasz Opasinski, Creative Technologist, Machine Intelligence & New Technologies

“One of the most exciting possibilities of Firefly is its ability to aid in the ideation process. As a creative (prior to Adobe I was a poster designer and was part of more than 560 theatrical, streaming, TV, and video game campaigns) I always feel a need to be generating fresh ideas, so I experiment a lot. Firefly quickly generates images on a specific theme or concept, which not only sparks ideas but helps me explore visuals I hadn’t even considered. It’s like having a design assistant tirelessly generating concepts and visual references, freeing up my time and mental energy to focus on refining and executing my vision.

“I don’t think I’ll ever see generative AI as a replacement for human creativity and ability; for me it’s a tool for exploring ideas, speeding up workflows, and generating high-quality assets. I recently used Firefly to assist in the creation of a poster for a Halloween party, with themed characters, pumpkins, haunted houses, and appropriately scary background images.”

I started with a blank page for ideas and words that I associate with Halloween, then used a second blank page to block placement for characters and content as I gathered assets.
For each prompt, Firefly generates four candidates, each with slightly different characteristics. Choosing one is an iterative process: I select the image closest to what I have in mind, then alter the prompt as necessary to get closer to my final asset. It’s a classic process of narrowing down, starting broadly with a search for a particular “look” — here, a colorful, Halloween-esque graffiti style. The center image shows my final selection and the masking in preparation for final compositing in Photoshop.
My final poster consisted of 44 images. On the right are the thumbnails with the associated prompts: The more you tell the computer, the more precise the output will be; simple two- and three-word prompts may not work as well when you have a particular concept in mind.
With so many images in one project, not having to create them myself freed up a lot of time to focus on the poster design.

“Before Firefly, if my content required variations, I would have to manually modify them — each pattern, color, alteration was a separate task.”

Heather Waroff, Senior Experience Designer, Extensibility

“As a designer on the Extensibility team (where we explore how to add new functionality to our applications without changing core functionality and how to embed Adobe tools and assets into third-party applications) I’m often asked to envision how experience interoperability works across Adobe. Doing this requires telling a workflow story that encompasses a user’s entire journey with our products or services, from first interaction to accomplishing a goal.

“By telling a visual story of a user’s end-to-end journey, we can help others visualize what an experience will look like before building it. My workflow for these experience stories includes the creation of supporting content so that wireframes and digital prototypes feel real enough that they capture the essence of what a user will be doing. When figuring out what content is needed, I always ask myself the same set of questions:

  1. What’s the result/outcome they’re looking to get to?
  2. How will they shape their content to get to the result?
  3. What are the steps to getting there?
  4. What content is the user starting with?

“Prior to Firefly, I was illustrating stories through template manipulation and manual content creation using Express, and my process started with looking for a template that could communicate what a user would be doing. For example, if I was exploring what a small business owner would be creating, I would create a fake company to illustrate that. If the content required variations, I would have to modify them manually; each pattern, color, and alteration was a separate task. Since the beta launch of Firefly, I’ve been using it to generate content to show user journeys and options, and it’s dramatically sped up my work.

“This example workflow tells the story of how the new Dropbox, Google Drive, and OneDrive add-ons would work in Express: The fictional user is a freelance book cover illustrator using Express to create book covers, and the process shows how they would pull in images from various cloud storage services. It required a variety of images to make the experience feel realistic.”

My first step was to create the book cover art for my persona. That began by creating multiple variations of each cover illustration — a process that takes minutes with Firefly — to find the one I wanted to use for the final output.
Next, I selected a single illustration from each grouping to create the cover art. Because my persona would be working in Express, I imported the Firefly-generated images into the app to create the covers (this workflow has since been built into the application to make it even easier).
For this presentation, I created six covers. The different outputs help show the progression of an imagined workflow and what a user is doing with content.
Next, using a design tool, I filled in wireframes showing the folders of content in the cloud storage services. I started with greyed-out boxes, then filled them with the Firefly-generated images to show how the add-ons feature would work in Express.
I needed four cloud storage examples — each showing how an illustration would scale — so I used a different image for each.
Since my persona would need to find an image file, I showed how the user would get to nested folders using a different series of images. Firefly was helpful with creating these variations: I began with the prompt “modern cliff buildings” and reused that base prompt to capture the same scenes at different times of day by generating nighttime and sunset variations.
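Generating such time-of-day variations amounts to reusing a base prompt with different suffixes. A trivial sketch of the idea (purely illustrative; Firefly itself is driven through its web interface, not through this code):

```python
def prompt_variations(base, variants):
    """Combine a base prompt with each variant phrase,
    keeping the subject fixed while the conditions change."""
    return [f"{base}, {v}" for v in variants]

prompts = prompt_variations(
    "modern cliff buildings",
    ["at night", "at sunset"],
)
```

Holding the base prompt constant is what keeps the generated scenes recognizably the same subject across the variations.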
In the end I showed the progression of a user’s workflow to create content in Express by uploading their work through a storage cloud add-on.

The Firefly beta is open to everyone. Visit the site and experiment with how to use it in your work.

Ask Adobe Design is a recurring series that shows the range of perspectives and paths across our global design organization. In it we ask the talented, versatile members of our team about their craft and their careers.

Originally published at https://adobe.design.


Stories from the team designing Creative Cloud, Document Cloud, and Experience Cloud. Visit our site for more stories and job postings: adobe.design