Have you been taking the same images with slight variations over the last few years? Have you seen new technologies shake up the photography world, yet largely kept your distance from them? Today, the esteemed Kristina Sherk from Shark Pixel shares with us one such tool that can revamp the work you’ve been putting out. Following her tutorial written for us below, I decided to try it out myself. I encourage you to break out of your comfort zone, give it a try, and share your photos with us in the comments.
One great thing COVID gave me was the time to finally start seeing what I could actually do with Photoshop composites. I mean… I had been teaching Photoshop for years through my business SharkPixel.com, but I was always petrified of even the word composite. Having all my DC headshot and portrait work disappear turned out to be somewhat of a blessing as it started me on a path (which I am still on) of creating interesting composites.
Within the past year, since Adobe announced Firefly and integrated Generative Fill into Photoshop, my composites have taken a little less time to complete thanks to this amazing (but still growing) tool. Designer Mark Heaps comically refers to Firefly as a “genius toddler.” In this article, I’ll show you a few tips on using Adobe Firefly and Photoshop Generative Fill to speed up the time it takes to create composite images.

Why Firefly Has a One-up on Generative Fill
A lesser-known fact about the new Generative Fill features within Photoshop is that their big brother is an AI engine on the web called Adobe Firefly, located at firefly.adobe.com. This platform is more similar to other AI engines out there since you create the entire image from a written prompt. This model is newer than the one in Photoshop proper, and Adobe is constantly updating Firefly's AI engine. To do this, they use feedback from Firefly and Generative Fill users to improve the next AI model that gets rolled out inside Photoshop.
I often use Firefly to create elements for my composites. For example, in “H is for Hijack,” I used Firefly to create the reindeer, the background, and Santa. Now, I did have to use multiple body parts from multiple Santas and reindeer to build the final characters because, as noted earlier, AI is like a genius toddler. It gets some things right, but it can also go off the rails, and there’s plenty it doesn’t get quite right.
One thing that I love about Firefly is that as you’re writing your prompt, you can check a little checkbox that allows Firefly to autofill additional context at the end of your prompt. That’s how I learned I can add “on a black background for Photoshop clipping mask” to my prompt, and it will create a solid background that makes masking out your objects a breeze. One significant thing I hope to see in the next version of Firefly is the ability to export as a PNG. This would make the process infinitely easier, as I, and many others, often get sucked into the rabbit hole of meticulously perfecting mask edges at 400% zoom. Not good for productivity or my mental health.
Another thing that I love about Firefly is all the easy-to-use buttons that help you determine style, orientation, depth of field, focal length, and so much more. These buttons make the act of writing literary prompts much easier for those of us who are more visually minded than word-driven.
If you want to start your composite from an image instead of a written prompt, Firefly provides the option to upload a reference image to base your generation on. I used the image on the left to draw a horribly rudimentary sketch in Photoshop of what I wanted my image to look like. I then uploaded it to the reference image spot in Firefly within the Style section, which helped show Firefly the kind of image I was looking for. Because my initial reference image was so rough, it took a few revisions to get there. Each time, I got a more realistic version of the scene I wanted, and I would replace the previous reference image with the new result. After the fifth round of regenerations (and updated reference photos), I achieved what I wanted in my final realistic shot. You can see the process here.
Another way Firefly differs from Generative Fill inside Photoshop is that it’s running its second-generation AI model. Here’s a great quote from Andrew Kavanagh: “Though I tend to use Generative Fill in Photoshop to add extra elements, it is limited to using version one of the Firefly engine. Adobe Firefly is using version two, which generates higher-quality images and better human renderings with more realistic features. The images created with version two of Adobe Firefly have better details, richer colors, and improved dynamic range.” Andrew runs the wildly popular Photoshop and Lightroom Facebook group and recently started the AI Art and Digital Art Facebook group too.

Firefly Can Create Directional Lighting
Adobe Firefly can also be helpful when trying to create items that need to fit into an image you already have. For example, when I generated the reindeer for “H is for Hijack,” I explicitly told the AI to light the reindeer from the bottom. I also incorporated the vantage point into the prompt. I knew the reindeer would be flying up and away into the night sky, so the perspective and lighting needed to match for the final image to look right. Knowing the physics and qualities of how light reacts is a really important skill when compositing images together.
There’s no easier way to screw up a composite than to try to merge images without consistent lighting. That means the direction of the light and the light qualities (i.e., harsh/soft/warm/cool) need to match for the composite to look realistic. Firefly even has a dropdown menu just for this!
But you also need to know the rules of photographic lighting to be a good compositor. For example, in “B is for Burnt Out,” the match in the before image did not cast any light onto my face, so I had to relight the skin closest to the match manually using layers and blending modes. It’s important to think about which areas of my face the match would illuminate naturally if its flame were bright enough. Whenever I’m not sure how to make the lighting look realistic, I either take a reference image or search stock for examples. In this case, that meant taking a frame of the lit match in my mouth without the key light firing, so I could see where the flame actually illuminated, or searching stock for “lit match, fire, face” to get a sense of what light from a real match looks like.
Ultimately, since this was made pre-AI, I had to use the brush tool and play with layer blend modes to help create the illusion of the match casting light on my chin and cheek. I created the drips from scratch too. This is why learning how to replicate realistic lighting is such an important skill to know when working on composite photos. Here’s a close-up look.
Thankfully, Generative Fill in Photoshop now does this proverbial spellchecking for you! When you use Generative Fill, it actually sends a larger portion of your image up to the cloud. There, it references the lighting quality and direction, the vantage point the photo was taken from, and other important attributes so it can best replicate them in the generation it gives you.
I often get asked when Firefly version 2 will be integrated into Photoshop. While I’m not sure of an exact date, I would speculate that it will be released sometime in the first half of this year (2024).
Will Firefly Replace Us?
The more I use Photoshop’s Generative Fill and Adobe Firefly, the more I am assured these tools will not replace true creatives. Impressive as the images AI can produce may seem, I view these programs as generative tools. I know the image in my head, and while these tools can help me get there faster, they aren’t able to envision what I conceptualize. Additionally, in Photoshop, all the Generative Fill layers that are created include elements of the background to seamlessly integrate the produced items into the picture. But if you are anywhere near as wishy-washy as I am regarding item placement, color, lighting, and perspective within the image, you’ll still need a hefty toolbelt full of Photoshop tools to appropriately bring your images to life.
In “D is for Dreams,” I created all of the items within the head using original compositing techniques but added the birds, clouds, and bottom beach rocks on either shoulder area with AI.
As a project for a masterclass I created on AI, I attempted to create an entire image using only AI in Photoshop. I suggest you try the same. It was really fun and insightful, and it opened my eyes to all the ways I could use Generative Fill to my advantage. This is how I created “O is for Overlensed.” Everything is generated using Photoshop AI, except my son, of course.
There’s a lot of bad press about AI currently, but I want to take a moment to reflect on a quote from one of my headshot clients a few months back. He said, “We could be on the verge of the next new art movement. AI is creating so many new tools for creatives to use, and the future could be very, very bright for all artists who choose to adapt to this new art form.”
In the end, it’s imperative to learn both traditional Photoshop tools as well as AI in order to take your composites to the next level. Learning just AI isn’t going to help you when you need to change the color of something you generated, or you need to slightly tweak the perspective of the object. The way to be the most proficient in this new era of photo retouching is to make sure you have extensive knowledge in both fields. Because if you do, the opportunities are absolutely limitless.
A Challenge to You!
Having worked regularly with Generative Fill and being pretty handy in Photoshop, I expected Firefly to be a breeze when I gave it a test run. It proved to require a little more patience than I had anticipated. I challenge anyone who says that using such technologies is cheating to give it a try. Actually. Come up with a concept, visit Firefly, and try it. While using it, I kept coming back to Sherk’s description of the technology as a genius toddler. Indeed, sometimes it listened; sometimes it was stubborn and decided to absolutely ignore my prompts. What I did like was its ability to generate photorealistic elements I could use in my work. At times, stock platforms don’t have the exact element I’m looking for. Firefly allowed me to tweak the result over and over again until it was close enough to my vision.
Here is our challenge to you: try it, just once. Step outside of what you know, maybe even into something you despise, fear, or feel overwhelmed by. Why? Because you’re an artist and it’s an interesting challenge. You can make a piece like Sherk’s from scratch using prompts. Maybe you create a fantastical landscape you would love to photograph but don’t have the miles to fly to. Or perhaps you create elements to export and composite into a base image you already have.
We challenge you! Give it a shot and share your creation results below in the comments. It just may end up being the biggest tool you haven’t been using in your work.
Images provided by and used with permission of Kristina Sherk.
